VCAP6-DCV Deployment – Objective 2.2 – Manage Complex Storage Solutions


Main Study Page

Objective 2.2 is broken down as follows

  • Identify and tag (mark) SSD and local devices
  • Administer hardware acceleration for VAAI
  • Configure, administer, and apply storage policies
  • Prepare storage for maintenance
  • Apply space utilization data to manage storage resources
  • Provision and manage storage resources according to Virtual Machine requirements
  • Configure datastore alarms, including Virtual SAN alarms
  • Expand (Scale up / Scale Out) Virtual SAN hosts and diskgroups

Identify and tag (mark) SSD and local devices

ESXi doesn’t always recognise certain devices, such as flash devices, so you may have to mark them manually.  Web Client – Host – Manage – Storage Devices – select the disk and choose Mark as Flash Disk (or Mark as HDD Disk)

vcap2.2-01

To change this via the CLI, first connect to the host and run the following

>esxcli storage nmp device list

vcap2.2-03

Make a note of the ID; in my case it’s “eui.b7f3695b58afcb166c9ce9008a6551a3”.  I then run the following, including the ID and SATP type

>esxcli storage nmp satp rule add -s VMW_SATP_ALUA -d eui.b7f3695b58afcb166c9ce9008a6551a3 -o enable_ssd

>esxcli storage core claiming reclaim -d eui.b7f3695b58afcb166c9ce9008a6551a3

vcap2.2-05

The disk is now displayed as SSD

vcap2.2-06

To mark a disk as local, from the same page right click a remote device and select Mark as Local
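
The same change can be made from the CLI.  A sketch, assuming the device should be claimed by the local SATP and reusing the device ID from earlier (substitute your own):

```shell
# Add a claim rule that tags the device as local, then reclaim it
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d eui.b7f3695b58afcb166c9ce9008a6551a3 -o enable_local
esxcli storage core claiming reclaim -d eui.b7f3695b58afcb166c9ce9008a6551a3
```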

vcap2.2-02


Administer hardware acceleration for VAAI

Storage devices can support hardware acceleration features for both block and file storage types.  Block features are as follows

  • Full Clone/ Copy Offload – providing faster VM cloning and migrations
  • Block Zeroing – zeroes out a large number of blocks to speed up provisioning of newly allocated storage
  • Hardware Assisted Locking – allows disk locking per sector instead of locking an entire LUN as with SCSI reservations

File features

  • Full Clone / Copy Offload – providing faster VM cloning and migrations
  • Reserve Space – enables storage arrays to allocate space for thick provisioned disks
  • Native Snapshots – VM snapshots to be offloaded to the device

Hardware acceleration for block storage is enabled by default; for file storage a vendor VIB needs to be installed.  To disable hardware acceleration, go to Web Client – Host – Manage – Settings – Advanced System Settings and change the following

VMFS3.HardwareAcceleratedLocking=0

vcap2.2-07

DataMover.HardwareAcceleratedInit=0

vcap2.2-08

DataMover.HardwareAcceleratedMove=0

vcap2.2-09
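
The same three settings can also be toggled from the CLI with esxcli; a sketch (set the value back to 1 to re-enable):

```shell
# Disable the three block-storage hardware acceleration primitives
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0

# Confirm the current value of a setting
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
```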

Block storage integrates with vSphere 5.x and later releases through ESXi extensions called Storage APIs – Array Integration, formerly known as VAAI.  These extensions are implemented as T10 SCSI-based commands; for devices that support the T10 SCSI standard, ESXi can communicate directly with the device and does not require the VAAI plug-ins.  If the device doesn’t support the T10 SCSI standard, ESXi reverts to the VAAI plug-ins.  File services always require the VAAI plug-ins.

To display plug ins run the following

>esxcli storage core plugin list --plugin-class=VAAI

To verify the hardware acceleration support run the following to find the device ID

>esxcli storage core device list

Make a note of the device ID then run

>esxcli storage core device vaai status get -d=deviceID

vcap2.2-10

To configure hardware acceleration for a new storage device, two claim rules need to be created – one for the VAAI filter and another for the VAAI plug-in.  This procedure is for block storage devices that do not support T10 SCSI commands and instead use VAAI plug-ins.  The two claim rules need to be added, then loaded, then run.  An example taken from the VMware documentation is as follows

>esxcli storage core claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=IBM --autoassign

>esxcli storage core claimrule add --claimrule-class=VAAI --plugin=VMW_VAAIP_T10 --type=vendor --vendor=IBM --autoassign

>esxcli storage core claimrule load --claimrule-class=Filter

>esxcli storage core claimrule load --claimrule-class=VAAI

>esxcli storage core claimrule run --claimrule-class=Filter

vcap2.2-12
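
To confirm the new rules are in place after loading and running them, the claim rule list can be checked per class:

```shell
# List the filter and VAAI claim rules; the vendor rules should appear in both classes
esxcli storage core claimrule list --claimrule-class=Filter
esxcli storage core claimrule list --claimrule-class=VAAI
```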

 


Configure, administer, and apply storage policies

Storage policies let you specify storage requirements for applications that run on VMs and are used with VSAN and Virtual Volumes to determine how VMs are provisioned.  Storage policies are made up of rules that can be based on data services advertised by providers such as VSAN and VVols storage providers (VASA providers), or can be based on tags.

When rules are created for a VM storage policy using storage providers, the policy references data services advertised for a specific datastore; the datastore can then provide the VM with a specific set of characteristics for capacity, performance, availability and redundancy.  Tags can be used for storage that is not represented by storage providers.  The following illustrates creating a new policy using tags.

First I need to create a new tag – Web Client – Home – Tags – New Tag.  Add a name and description; I also create a new category and link it to Datastore and Datastore Cluster

vcap2.2-13

I now need to assign the tag to the datastore – Web Client – Datastore – Manage – Tags – Assign Tag

vcap2.2-17

vcap2.2-18

Once the tag has been assigned I can create a new Storage Policy.  Web Client – Policies and Profiles – VM Storage Policies.  Add a name and description along with the relevant vCenter

vcap2.2-14

For the rule set I add Rules based on tags and pick the previously created tag

vcap2.2-15  vcap2.2-16

The datastore with the assigned tag should now show as compatible; click Finish

vcap2.2-19

Once complete I can create a new VM and assign the storage based on the policy

vcap2.2-20

I have a previous blog post on creating Storage Policies based on data services (in this case VSAN) – see here.


Prepare storage for maintenance

Storage in a datastore cluster can be prepared for maintenance in a similar way to an ESXi host.  To enter a datastore into maintenance mode: Web Client – Storage – Datastore – Maintenance Mode – Enter Maintenance Mode

vcap2.2-21


Apply space utilization data to manage storage resources

This objective is a bit ambiguous, so I will cover it the best I can.  The virtual disk provisioning policy can be specified when performing certain operations, such as creating virtual disks or cloning and migrating VMs.  Disk formats are as follows

  • Thick Provision Lazy Zeroed – space is allocated when the disk is created and is zeroed out on demand at a later time, on first write from the VM
  • Thick Provision Eager Zeroed – space is allocated on creation and zeroed out on the physical storage device
  • Thin Provision – starts small and grows as required until it reaches its maximum configured capacity
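
The three formats above map directly to vmkfstools disk types, which is a quick way to see the difference from the CLI.  A sketch, using hypothetical paths on an existing datastore:

```shell
# Create a 10 GB disk in each format (paths are examples only)
vmkfstools -c 10g -d zeroedthick /vmfs/volumes/datastore1/test/lazy.vmdk
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/test/eager.vmdk
vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/test/thin.vmdk
```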

Thin provisioned disks can be over-subscribed and will need to be managed by migrating VMs or extending datastores; alarms can be configured to monitor for this, which I will show in a later section.  To inflate a thin provisioned disk to a thick provisioned disk, either perform a Storage vMotion and apply the change, or manually inflate the disk: Web Client – Storage – Datastore – VM Folder, then right click the thin provisioned .vmdk file and select Inflate

vcap2.2-22
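
The inflate operation is also available from the CLI via vmkfstools; a sketch, assuming a hypothetical disk path (the VM should be powered off):

```shell
# Inflate a thin provisioned disk to eager zeroed thick
vmkfstools -j /vmfs/volumes/datastore1/myvm/myvm.vmdk
```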

When using thin provisioned LUNs on a storage device that uses Storage APIs – Array Integration, the host can integrate with the storage device and become aware of the underlying thin provisioned LUN and its space usage.  To identify a thin provisioned storage device, run the following

>esxcli storage core device list

Once I have found the volume I want to check, I add the device ID as below

>esxcli storage core device list -d eui.b12d6bd0ec1665d46c9ce90054614622

vcap2.2-23

When a datastore resides on a thin provisioned LUN it is possible to use esxcli to reclaim unused storage blocks.  Run the following command

>esxcli storage vmfs unmap -l device_name

or

>esxcli storage vmfs unmap -u device_ID
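
The number of VMFS blocks reclaimed per iteration can optionally be set with the -n / --reclaim-unit flag; for example, against a hypothetical datastore name:

```shell
# Reclaim unused blocks in chunks of 200 VMFS blocks
esxcli storage vmfs unmap -l datastore1 -n 200
```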

The VMware KB article relating to unmap can be found here


Provision and manage storage resources according to Virtual Machine requirements

I believe a lot of this is covered above, in terms of storage policies and thin / thick provisioned disks.


Configure datastore alarms, including Virtual SAN alarms

To view configurable datastore alarms: Web Client – vCenter Inventory Lists – vCenter Servers – vCenter – Manage – Alarm Definitions.  Find the alarms relating to datastores

vcap2.2-28

vSphere 6 now includes VSAN health alarms; VSAN 5.5 does not have these enabled by default and they must be created based on VMkernel Observations (VOBs)

vcap2.2-24

I can edit these to view or change the trigger of the alarm and add an action, such as sending notification emails

vcap2.2-25

To add a notification email I need to enter an email address and choose which alerts to send and how often

vcap2.2-26  vcap2.2-27


Expand (Scale up / Scale Out) Virtual SAN hosts and diskgroups

I have done a previous blog post on scaling out VSAN – see here

