Tuesday, 15 January 2013

VMware - RDM


About Raw Device Mapping

An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains metadata
for managing and redirecting disk access to the physical device.

The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.

RDMs are described in several equivalent ways: mapping a raw device into a datastore, mapping a system LUN, or mapping a disk file to a physical disk volume. All of these refer to the same mapping-file mechanism.

Physical Compatibility Mode

In physical compatibility mode the VMkernel passes almost all SCSI commands straight through to the device, so the hardware characteristics of the underlying LUN are exposed to the VM.
VMFS-5 supports physical-mode RDMs of up to 64TB in size.


  • You cannot convert a physical-mode RDM larger than 2TB to a virtual disk, or clone a VM that uses one; neither operation is supported.
  • Snapshots cannot be taken of physical compatibility mode (PCM) disks.
  • MS Clustering is supported with PCM disks.


Virtual Compatibility Mode


  • In virtual compatibility mode the VMDK can be at most 2TB in size.
  • Virtual machine snapshots are available for RDMs in virtual compatibility mode.
  • You can Storage vMotion virtual compatibility mode (VCM) disks to other datastores.
  • MS Clustering is not supported with VCM disks.
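For reference, both kinds of mapping file are created on the ESXi host with vmkfstools: -r creates a virtual compatibility mode RDM and -z a physical compatibility mode one. The device ID and datastore paths below are placeholders; substitute your own.

```shell
# Virtual compatibility mode (-r): snapshots allowed, 2TB VMDK limit
vmkfstools -r /vmfs/devices/disks/naa.600508b1001c0000 \
    /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk

# Physical compatibility mode (-z): full SCSI pass-through, no snapshots
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0000 \
    /vmfs/volumes/datastore1/myvm/myvm_rdm_p.vmdk
```

The mapping file then gets attached to the VM like any other virtual disk.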

VMware - Roles and Permissions

I came across a strange issue yesterday, 14th Jan 2013, whereby I modified some AD groups but forgot to remove those groups from the vCenter "Hosts and Clusters" and "Networking" inventories beforehand.
The results were very strange: every time I modified the permissions in vCenter using the roles I had created, they kept changing back to the read-only role. I was scratching my head until I remembered that the permissions set on the "Networking" inventory had not been removed; once I removed them, the strange behaviour stopped. Phew, it did not create any problems for the end users.

So if you ever do this please remember to remove the permissions from "Hosts and Clusters" and "Networking" and then modify your users and groups in AD. 

VMware - Building your own lab

If you want to build your own VMware lab environment, I've put together a list of kit that you may be interested in.
I can't guarantee that the built-in NICs are fully compatible with ESXi 5.1, but I've seen a post where a user said they worked without any problems.

My lab includes four ESXi hosts, to simulate a site-to-site scenario with two hosts in site A and two in site B. The sites will be connected by a router and switches. The kit below uses a small form factor case and is chosen to reduce the amount of power the lab will draw.

Specification


1: Komputerbay 16GB (2x 8GB) DDR3 PC3-12800 1600MHz DIMM x £39.99 from Amazon

2: Gigabyte GA-Z77N-WIFI, Intel Z77, S 1155 x £88.52 from Scan

3: Intel Core i3 3220 Ivy Bridge Dual Core Processor x £88.96 from Scan

4: CIT MTX001B, Black, mini-ITX Case with 300W PSU x £27.47 from Scan

VMware - Snapshot explained


In VMware, a snapshot delta file grows in 16MB increments. There is no formula you can use to predetermine the size a snapshot will reach beforehand.
VMware's best practice is to keep a snapshot for no more than 24-72 hours and to keep no more than 1-2 snapshots per VM. A snapshot delta file can grow up to the size of the original disk, so to avoid any problems, delete or consolidate the snapshot once you are happy that your VM is running as it should be.
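While you can't predict the final size, the 16MB grain does let you sketch a lower bound for how much delta space a given amount of changed data will consume. This is my own rough helper, not a VMware formula, and the function name is mine:

```python
GRAIN = 16 * 1024**2  # snapshot delta files grow in 16 MB increments

def delta_file_size(bytes_written: int) -> int:
    """Round the amount of changed guest data up to the next 16 MB grain.

    This is only a lower bound: the delta also carries block-mapping
    overhead, and the same guest block rewritten twice counts once.
    """
    if bytes_written <= 0:
        return 0
    grains = -(-bytes_written // GRAIN)  # ceiling division
    return grains * GRAIN

# e.g. 100 MB of changed blocks consumes at least 112 MB of delta file
print(delta_file_size(100 * 1024**2) // 1024**2)  # -> 112
```

So even a modest amount of churn on the guest rounds up quickly, which is one more reason not to leave snapshots lying around.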

Now I’ve taken some screenshots of a virtual machine with the memory and quiesce options to show the size of the snapshots.

1: What the snapshot options look like.

2: This shows the state of the files before any snapshots. See that the vswp file is 4GB in size, that is the memory that was allocated to the server during creation.


3: The 1st snapshot I took was with the memory option, which by the way is the default. As you can see, the delta.vmdk file is 16MB and the vmsn file is 4GB, which is the size of the memory in the virtual server.

4: The 2nd snapshot was taken with only the "quiesce" option ticked, and as you can see the delta.vmdk file is still 16MB but the vmsn file is only 31K. The reason is that there was no real disk I/O happening on the server at the time of the snapshot.

5: The 3rd snapshot was taken with both the "memory & quiesce" options ticked. The delta.vmdk is still 16MB, but the vmsn file holds a combination of the memory and disk state.
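The same snapshots can also be taken from the ESXi command line with vim-cmd instead of the GUI options shown above. The VM ID (42 here) is a placeholder you look up first:

```shell
# List VM IDs registered on this host
vim-cmd vmsvc/getallvms

# Arguments: vmid  name  description  includeMemory  quiesce
# This matches the 1st screenshot: memory on (1), quiesce off (0)
vim-cmd vmsvc/snapshot.create 42 "pre-change" "before patching" 1 0
```

The includeMemory and quiesce flags map directly onto the two tick boxes in the snapshot dialog.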

Tuesday, 28 February 2012

VMware - Recommended disk or LUN sizes for VMware ESX/ESXi installations


VMware ESX 3.0.3 and 3.5

VMware ESX 3.x requires a minimum of approximately 8GB.

System Disk Partitions

100MB boot partition
5GB system root partition
VMFS partition, if defined, spans the remainder of the disk
Extended partition
System Disk Logical Partitions

1GB swap partition
2GB system log partition
110MB VMkernel core dump partition
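As a quick sanity check, the partition sizes listed above can be tallied to see where the ~8GB minimum comes from. The dictionary below just restates the figures from the layout; nothing here is official VMware tooling:

```python
# ESX 3.x system partitions from the layout above (sizes in MB)
partitions = {
    "boot": 100,
    "root": 5 * 1024,
    "swap": 1 * 1024,
    "log": 2 * 1024,
    "vmkcore": 110,
}

total_mb = sum(partitions.values())
print(f"{total_mb} MB (~{total_mb / 1024:.1f} GB)")  # -> 8402 MB (~8.2 GB)
```

So the fixed partitions alone account for roughly 8.2GB before any VMFS space is carved out, which is why an 8GB disk is the practical floor.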

VMware ESX 4.0 and 4.1

For VMware ESX 4.x, the ESX Console OS is instead situated within a virtual machine disk (.vmdk) file on a VMFS file system. The size of this disk file varies between deployments, based on the size of the logical unit used. A minimum requirement is approximately 8GB.

Notes:
The stored Console OS virtual machine disk files may increase in size over the course of a deployment, accommodating for additional log data and files.
The Console OS VMDK can be stored on a SAN LUN or a different block device than the system disk, as long as that device has been partitioned and formatted as VMFS.
The Console OS disk, as a best practice, should not be situated on a shared SAN LUN.

System Disk Partitions

1100MB boot partition
110MB VMkernel core dump partition
Extended partition spans the remainder of the disk
Logical VMFS partition spanning the remainder of the extended partition
VMFS Partition / Console OS VMDK Partitions

These are partitions that reside within the Console OS VMDK, stored on the formatted VMFS volume:
600MB swap partition
2GB system log partition
Extended partition spanning remainder of Console OS .vmdk file
5GB system (root) partition
VMware ESXi 3.5, 4.0, and 4.1 (Installable)

VMware ESXi installations benefit from reduced space and memory requirements due to the omission of the Console OS. It requires approximately 6GB of space without a defined VMFS partition or datastore.

When additional block devices are provided, they may be formatted and utilized as VMFS, or in some cases as additional scratch storage space.

System Disk Partitions

4.2MB FAT boot partition
Extended partition
4.3GB FAT partition for scratch storage and swap
Remainder of device may be formatted as VMFS

Note: The minimum size for a VMFS datastore is approximately 1GB.
System Disk Logical Partitions

250MB FAT partition for a hypervisor bootbank
250MB FAT partition for a second hypervisor bootbank
110MB diagnostic partition for VMkernel core dumps
286MB FAT partition for the store partition (VMware Tools, VMware vSphere/Infrastructure Client, core)

VMware ESXi 3.5, 4.0, and 4.1 (Embedded / USB)

Embedded VMware ESXi installations typically utilize approximately 1GB of non-volatile flash media via USB. An additional 4-5GB of space may be defined on local storage for scratch storage and swap.

Persistent logging is not included with embedded ESXi. VMware recommends configuring remote syslog services for troubleshooting or when anticipating technical issues. For additional information, see Enabling syslog on ESXi (1016621).
USB Device Primary Partitions

4.2MB FAT boot partition
Extended partition
USB Device Logical Partitions

250MB FAT partition for a hypervisor bootbank
250MB FAT partition for a second hypervisor bootbank
110MB diagnostic partition for VMkernel core dumps
286MB FAT partition for the store partition (VMware Tools, VMware vSphere/Infrastructure Client, core).
Local Disk Partitions (If Present)

4.3GB FAT partition for scratch storage and swap
110MB diagnostic partition for VMkernel core dumps
Remainder of device may be formatted as VMFS

Note: The minimum size for a VMFS datastore is approximately 1GB.
For additional information on these installation requirements, see the respective installation guide for your chosen product in the VMware Documentation pages.

VMware ESXi 5.0 (Installable) 

For fresh installations, several new partitions are created for the boot banks, the scratch partition, and the locker.

Fresh ESXi installations use GUID Partition Tables (GPT) instead of MSDOS-based partitioning. The partition table itself is fixed as part of the binary image, and is written to the disk at the time the system is installed. The ESXi installer leaves the scratch and VMFS partitions blank and ESXi creates them when the host is rebooted for the first time after installation or upgrade.

One 4GB VFAT scratch partition is created for system swap. See "About the Scratch Partition" in the vSphere Installation and Setup Guide.
The VFAT scratch partition is created only on the disk from which the ESXi host is booting. On the other disks, the software creates a VMFS5 partition on each disk, using the whole disk.

During ESXi installation, the installer creates a 110MB diagnostic partition for core dumps.
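If you want to see the resulting GPT layout on an installed ESXi 5.0 host, you can list the partition table with partedUtil. The device ID below is a placeholder for your boot device:

```shell
# List the GPT partition table of the boot device (placeholder device ID)
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c0000
```

The output shows the partition type (gpt), disk geometry, and one line per partition with its start/end sectors, which you can match against the layout described above.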

VMware ESXi 5.0 (Embedded/USB)

One 110MB diagnostic partition for core dumps, if this partition is not present on another disk. The VFAT scratch and diagnostic partitions are created only on the disk from which the ESXi host is booting. On other disks, the software creates one VMFS5 partition per blank disk, using the whole disk. Only blank disks are formatted.

In ESXi Embedded, all visible blank internal disks are also formatted with VMFS by default.

vSphere 5.0 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media.

ESXi can boot from a disk larger than 2TB provided that the system firmware and the firmware on any add-in card that you are using support it. See the vendor documentation.

USB key size for ESXi 5.0 embedded is vendor dependent.