Link to Part 1
Having run VSAN in my lab for a few weeks I have had a chance to try out some of the features. The following is my experience. Will I continue to use it? I'm not sure… mainly because of the memory overhead and SSD usage. As my lab is nested within a workstation I need as much memory as I can get, so until my lab can be upgraded to 64GB it may be OpenFiler I use for the storage; plus I don't need any fault tolerance as the underlying hardware is the same. Nice to get some hands-on though!
After the initial setup I noticed that under any sort of load on the vmnics my hosts would disconnect from vCenter and also become unresponsive to any network requests. To counter this I first updated the virtual adapters of the VMs to e1000e adapters. To do this, change the config file as below.
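The change is made in each nested ESXi VM's .vmx file while the VM is powered off. The ethernetN numbering below is just an example and will depend on how many adapters your VM has:

```
ethernet0.virtualDev = "e1000e"
ethernet1.virtualDev = "e1000e"
```

Repeat for every ethernetN entry, then power the VM back on.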
I then separated out the network adapters: the ESXi management and VM production networks I marked as Bridged, and the VMkernel ports I changed to VMnet2 / VMnet3 / VMnet4.
You can set up storage policies and assign them to VMs. These policies can have rules based on data services, such as the number of failures to tolerate per VM. Storage policies can be assigned by tags or by rules based on data services; I will show both but use the latter. First you need to create tags. To create a new tag you first have to create a new category: open the Web Client – Home – Tags – New Category. Call the category Production and select Datastore.
Now create the tag: Home – Storage – VSAN datastore – Manage – Tags – New Tag. I called it Production VMs and chose the category created above.
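As a side note, the same category, tag and assignment can also be scripted with PowerCLI. This is just a rough sketch, assuming you are already connected to vCenter with Connect-VIServer and that your VSAN datastore is named vsanDatastore:

```powershell
# Create the category (scoped to datastores), then the tag, then assign it
New-TagCategory -Name "Production" -EntityType Datastore
New-Tag -Name "Production VMs" -Category "Production"
$tag = Get-Tag -Name "Production VMs"
New-TagAssignment -Tag $tag -Entity (Get-Datastore "vsanDatastore")
```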
Now we can create storage policies. Go to Home – VM Storage Policies – Create a new VM storage policy. Give the policy a name, Production Servers, and pick the correct vCenter. Read the next page about Rule Sets to understand what you are going to set up.
Now specify the rule set: you can choose to use the tag previously created or to go by data services. The only data service I have is VSAN, available in the drop-down box. This gives you options such as the number of failures tolerated. To use tag-based assignment, select the tag Production VMs.
Points to consider:
- Number of Disk Stripes per Object – should be left at 1 unless the VM's IOPS requirement is not met by the flash cache.
- Flash Read Cache Reservation – should be left at 0 unless there is a specific performance requirement. When left at 0 the VSAN scheduler will take care of fair cache allocation. Not supported in an all-flash configuration.
- Proportional Capacity – should be left at 0 unless thick provisioning is required.
- Force Provisioning – the VM will still be provisioned even if the policy cannot currently be satisfied, and will be brought into compliance when resources become available.
I chose the rules based on data services and set the number of failures to tolerate to 1. Finish the wizard and open Hosts and Clusters – New Virtual Machine. Run through the wizard until Select Storage, where you can now select the storage policy Production Servers. This will list the compatible storage available given the policy created. In my case the compatible datastore will create two copies of the data and one witness.
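As a back-of-the-envelope sketch (my own maths, not a VMware formula): with the RAID 1 mirroring VSAN uses here, a policy tolerating n failures keeps n + 1 full copies of the data, so the raw capacity consumed is roughly n + 1 times the VMDK size, with the witness components being negligibly small:

```python
def raw_capacity_gb(vmdk_gb, failures_to_tolerate):
    """Approximate raw VSAN capacity consumed by a RAID 1 mirrored object.

    A policy tolerating n failures keeps n + 1 full copies of the data;
    the tiny witness components are ignored here.
    """
    return vmdk_gb * (failures_to_tolerate + 1)

# A 40GB VMDK with failures to tolerate = 1 (two copies + witness):
print(raw_capacity_gb(40, 1))  # 80
```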
Once the VM is created go to Home – Hosts & Clusters – VM – Monitor – Policies – Physical Disk Placement. Notice I have two copies of the data and a witness component split between three hosts in a RAID 1 set.
Is the data now stuck on VSAN? No, you can still storage vMotion to a standard VMFS datastore. Go to Migrate the VM and choose Change Storage Only. On the Select Storage screen change the storage policy back to Datastore Default and pick a standard VMFS datastore.
To move the VM back, or other VMs onto VSAN, use the same storage vMotion method but this time select the Production Servers policy. Note the compatibility box indicates how much space will be taken up by this policy to allow for one failure to tolerate. Once it's complete, notice the Physical Disk Placement is spread across three hosts again in a RAID 1 set.
Adding a new host to VSAN
A question I thought of was: it's nice to have three hosts set up, but what if I want to add more hosts and give further tolerance to the VMs? To add a new host to VSAN, first add the host to the cluster, add the required VMkernel adapter and assign it for VSAN. Once the VMkernel adapter is in place it is ready to go; if you have the cluster set to automatic the disks will be assigned. I have it set to manual so have to then add the disks. These steps are covered in Part 1, found here. I now have five hosts in the VSAN cluster with no extra requirements other than a VMkernel adapter.
Now I want to change the storage policy to a fault tolerance of 2. Create a new storage policy as above but set the number of failures tolerated to 2, called Production Servers – FT2.
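The reason the extra hosts matter: a RAID 1 policy tolerating n failures needs n + 1 data copies plus n witnesses, each on a separate host, i.e. 2n + 1 hosts – which is why a fault tolerance of 2 only became possible once the cluster grew to five hosts. A quick sketch of that rule of thumb:

```python
def min_hosts(failures_to_tolerate):
    """Minimum VSAN hosts for a RAID 1 policy tolerating n failures:
    n + 1 data copies plus n witness components = 2n + 1 hosts."""
    return 2 * failures_to_tolerate + 1

print(min_hosts(1))  # 3 hosts for the original policy
print(min_hosts(2))  # 5 hosts for Production Servers - FT2
```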
Now, to change the storage policy for a running VM, go to Home – Hosts and Clusters – VM – right-click – VM Policies – Edit VM Storage Policies. Drop the list down and pick the new policy, Production Servers – FT2. Note the change in the space allocated to the storage.
Notice the components and witness disks are now spread across all five hosts.
With the data spread across all hosts, what happens with Maintenance Mode? You now have options for maintenance mode: Ensure Accessibility, Full Data Migration and No Data Migration.
I want Ensure Accessibility as I don't have any more hosts to do a full data migration. The host will enter maintenance mode and mark one of the disks as Absent.
Once the host is back the disks appear as Active again. Now what happens if I pull the plug on a host running an active component disk? The component on the offline host becomes absent, but the VM is still accessible. Warnings on each host report that a member of the cluster is offline.
That's about as far as I am going to go for my lab. At the minute, while my lab has only 32GB of RAM, I'm going to leave this disabled, but I will definitely fire it back up once I get some more resources. If you're doing this yourself and want to enable monitoring, you can enable VSAN Observer – a VMware white paper is available here on how to enable it and how to use it.