VCAP6-DCV Deployment – Objective 2.1 Implement Complex Storage Solutions

Main Study Page

Objective 2.1 is broken down as follows

  • Determine use cases for Raw Device Mapping
  • Apply storage presentation characteristics according to a deployment plan:
    • VMFS re-signaturing
    • LUN masking using PSA-related commands
  • Create / Configure multiple VMkernels for use with iSCSI port binding
  • Configure / Manage vSphere Flash Read Cache
  • Create / Configure Datastore Clusters
  • Upgrade VMware storage infrastructure
  • Deploy virtual volumes
  • Deploy and configure VMware Virtual SAN
  • Configure / View VMFS locking mechanisms
    • ATS-Only mechanism
    • ATS+SCSI mechanism
  • Configure Storage I/O Control to allow I/O prioritization
  • Configure Storage Multi-pathing according to a deployment plan

Determine use cases for Raw Device Mapping

An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a physical storage device, allowing direct access to that device.  The RDM is a special mapping file in a VMFS volume that manages metadata and redirects access to the physical device.  VMware recommends using VMFS datastores for most virtual disk storage, but there are occasions when you might need to use an RDM.  One example of using raw LUNs with RDMs is any MSCS clustering scenario: cluster data and quorum disks should be configured as RDMs rather than virtual disks.

RDMs can be configured in two different compatibility modes – Virtual or Physical.  Virtual compatibility mode allows the RDM to act like a virtual disk including the use of snapshots.  Physical compatibility mode allows direct access to the SCSI device for lower level control and providing the greatest flexibility for SAN management software.

vMotion is supported for VMs using RDMs; the mapping file acts as a proxy to allow vCenter to migrate the VM using the same mechanism that exists for migrating virtual disk files.  For this to work the LUN IDs must be consistent across all ESXi hosts.

VMFS5 supports RDMs larger than 2TB in both virtual and physical mode; you cannot relocate RDMs larger than 2TB to datastores other than VMFS5.
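As a CLI illustration, an RDM mapping file can also be created with vmkfstools; this is a minimal sketch where the device ID, datastore and file names are placeholders for your own environment:

```shell
# Physical compatibility mode RDM (-z); the naa. ID and paths are example values
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore1/MyVM/MyVM_rdmp.vmdk

# Virtual compatibility mode RDM (-r), which supports snapshots
vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/Datastore1/MyVM/MyVM_rdmv.vmdk
```

The resulting .vmdk mapping file is then attached to the VM as a disk in the usual way.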

VMware documentation provides a good table that lists the available features and when to use an RDM or a virtual disk; the documentation can be found here.


Apply storage presentation characteristics according to a deployment plan

VMFS resignaturing

Each VMFS datastore created on a storage disk has a unique signature, also known as a UUID.  When the storage disk is replicated, or a snapshot is taken on the storage, the resulting disk copy is identical, including the disk signature.  You may need to bring this copy online, which may require the datastore to be resignatured, depending on whether both copies can be accessed by the host.

To assign a new signature you must first clone an existing volume.  I will show the process using a Nimble array.  First I take a snapshot of an existing Nimble volume that has already been presented to my ESXi hosts and formatted with VMFS.


At this stage the snapshot is offline; if I do a rescan on my ESXi hosts the volume is not available.


I now bring the snapshot online and do another rescan on my ESXi hosts.  This time I can see the volume, but the host will detect that it is already a VMFS volume and I will be prompted to Keep the Existing Signature or Assign a New Signature.


To see the command line references to do this see here
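For reference, the same workflow can be driven from esxcli; a sketch, assuming the snapshot volume carries the label Lab-NIM-VMFS-01:

```shell
# List unresolved VMFS snapshot/replica volumes the host can see
esxcli storage vmfs snapshot list

# Mount the copy keeping its existing signature (only valid if the original is not visible to the host)
esxcli storage vmfs snapshot mount -l "Lab-NIM-VMFS-01"

# Or write a new signature to the copy instead
esxcli storage vmfs snapshot resignature -l "Lab-NIM-VMFS-01"
```

After a resignature the datastore is mounted under a new name (a snap-xxxxxxxx- prefix) and can be renamed afterwards.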

LUN masking using PSA-related commands

It is possible to prevent a host from accessing LUNs, or from using individual paths to a particular LUN, by using esxcli commands to mask the paths.  To mask paths you create claim rules that assign the MASK_PATH plug-in to the specified paths.

To check what rule IDs already exist run the following; the new claim rule should have a rule ID in the range of 101–200

>esxcli storage core claimrule list

Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in

>esxcli storage core claimrule add -P MASK_PATH

Load the MASK_PATH claim rule

>esxcli storage core claimrule load

Run the list command again to verify the rule has been added.  Next, unclaim any existing claims on the masked paths

>esxcli storage core claiming unclaim 

Finally run the path claiming rules

>esxcli storage core claimrule run
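Putting the steps above together, here is an end-to-end sketch; the rule ID (500) and the path location (vmhba1, channel 0, target 0, LUN 20) are example values only:

```shell
# Create a claim rule assigning MASK_PATH to the chosen path
esxcli storage core claimrule add -r 500 -t location -A vmhba1 -C 0 -T 0 -L 20 -P MASK_PATH

# Load the new rule into the VMkernel
esxcli storage core claimrule load

# Unclaim the path so the masking rule can take effect
esxcli storage core claiming unclaim -t location -A vmhba1 -C 0 -T 0 -L 20

# Run the claim rules and verify the MASK_PATH rule is listed
esxcli storage core claimrule run
esxcli storage core claimrule list
```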

Create / Configure multiple VMkernels for use with iSCSI port binding

To use iSCSI on a host you must bind a VMkernel port.  You can bind one or more VMkernel ports to the software iSCSI adapter, but for a dependent hardware iSCSI adapter only the single VMkernel port associated with the correct physical NIC is available.

To bind a single VMKernel adapter I first create a new VMKernel adapter and give it a relevant name and IP address on the iSCSI subnet.  Web Client – Host – Manage – Networking – VMKernel Adapters


I now need to add the new interface to the iSCSI adapter's network port bindings; I am using the iSCSI Software Adapter.  Web Client – Host – Manage – Storage – Storage Adapters – iSCSI Software Adapter – Network Port Binding – Add


You can use multiple VMkernel ports bound to iSCSI.  To have multiple paths to an iSCSI array that broadcasts a single IP address you must follow these guidelines

  • iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the VMkernel adapters
  • All VMkernel adapters used for iSCSI connectivity must reside in the same virtual switch
  • Port binding does not support network routing

To have multiple VMkernel adapters with multiple uplinks on standard vSwitches, you can create two separate vSwitches, each with a single VMkernel port and a single uplink port, or you can have multiple VMkernel ports with multiple uplink ports under one vSwitch.  The following diagrams are taken from the VMware documentation

1:1 mapping using separate vSwitches


1:1 mapping using single vSwitch


To use a single switch the network policy must be configured so that only one physical adapter is active for each VMkernel port.  By default on a standard vSwitch all adapters appear as active, so you must override this.  Web Client – Host – Manage – Networking – Virtual Switches – vSwitch – VMKernel Port Group – Edit Settings – Teaming and Failover.  Select Override and move one physical adapter to Unused and one to Active


Once changed the VMKernel adapter will be available for iSCSI port binding
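The binding can also be done from the command line; a sketch, assuming the software iSCSI adapter is vmhba33 and the bound VMkernel ports are vmk1 and vmk2 (check your adapter name with esxcli iscsi adapter list):

```shell
# Bind two VMkernel adapters to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Verify the port bindings
esxcli iscsi networkportal list -A vmhba33
```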


Configure / Manage vSphere Flash Read Cache

It is possible to create a single virtualised caching layer on an ESXi host by aggregating local flash devices.  The devices must be local, must not already be in use for features such as vSAN, and can be made up of a mixture of device types such as SAS, SATA or PCI Express connectivity.

When you set up a virtual flash resource it creates a new file system known as the Virtual Flash File System (VFFS), which is optimised for flash devices and is non-persistent.  Web Client – Host – Manage – Settings – Virtual Flash – Virtual Flash Resource Management.  Add the available device; in my example I don't have any available
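If you do have eligible devices, the flash resource and any configured caches can be inspected from esxcli; a minimal sketch:

```shell
# List flash devices and whether they are eligible for virtual flash use
esxcli storage vflash device list

# List configured Flash Read Cache instances
esxcli storage vflash cache list
```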


Create / Configure Datastore Clusters

A datastore cluster is a collection of datastores with shared resources and a shared management interface; you can use Storage DRS to manage its storage resources.  The following resource management capabilities are available per configured datastore cluster.

  • Space utilization load balancing – when a configured space-use threshold is exceeded, Storage DRS recommends Storage vMotion migrations to balance space across the cluster.
  • I/O latency load balancing – you can set an I/O latency threshold for bottleneck avoidance; once exceeded, Storage DRS recommends migrations.
  • Anti-affinity rules – virtual disks for certain VMs can be kept on different datastores.

To create a datastore cluster open Web Client – Datacenters – right click datacenter – New Datastore Cluster


Give the datastore cluster a name and choose whether to enable Storage DRS.


When enabling Storage DRS choose whether automation is automatic or manual, and set the I/O threshold for migrations


Choose the compute resources and datastores that will be part of the cluster.  The datastores can be new ones or datastores that are already in use


Once complete the datastore cluster will be displayed as an object in vCenter


Upgrade VMware storage infrastructure

Upgrading a VMFS datastore is a fairly simple process but you must consider certain guidelines.  vSphere 6 uses VMFS5; anything below VMFS3 is not supported by vSphere 6 and must be upgraded using a previous version of ESXi.

You can upgrade VMFS3 to VMFS5 whilst a VM is running, and all the files are preserved on the datastore.  The upgrade is a one-way process that cannot be reverted.  An upgraded VMFS3 datastore will retain its limits, file block size (1, 2, 4 or 8MB) and sub-block size (64KB).  Newly formatted VMFS5 datastores have a 1MB file block size and 8KB sub-block size.

The datastores in my lab are already the latest version, but to run the upgrade go to Web Client – vCenter Inventory Lists – Datastore – Manage – Settings – Upgrade to VMFS5
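The upgrade can also be run from the host shell with vmkfstools; a sketch, where the volume name is a placeholder:

```shell
# One-way upgrade of a VMFS3 datastore to VMFS5
vmkfstools -T /vmfs/volumes/My-VMFS3-Datastore
```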

This objective may also cover extending a VMFS datastore.  To extend a datastore I must first extend the volume on the storage device; once extended I perform a rescan on the host.  Once the rescan is complete I go to Web Client – Storage – datastore – Manage – Increase.  A dialogue box will appear with the extended volume; it will display the full size of the volume, not just the extra capacity that has been added


Select the disk and hit Next, then choose the size to extend the volume by


Deploy virtual volumes

Virtual Volumes create a fundamental change in vSphere storage management: with Virtual Volumes an individual VM, rather than a datastore, becomes the unit of storage management.  This helps to improve granularity, arranging storage around the needs of individual VMs and making storage VM-centric.  Creating a volume per virtual disk allows policies to be set per disk.

Operations such as snapshots, cloning and replication can be handled by the storage array, as Virtual Volumes map directly to the storage system.  Virtual Volumes are made up of the following concepts.

  • Virtual Volumes – virtual volumes are made up of VM files, virtual disks and their derivatives.
  • Storage Providers – also known as VASA providers, these are software components that act as a storage awareness service for vSphere, managing communication between vCenter / hosts and the storage system.
  • Storage Containers – a pool of raw storage capacity that a storage system provides for virtual volumes.
  • Protocol Endpoints – ESXi hosts use these as a logical I/O proxy to establish a data path on demand from VMs to their virtual volumes.
  • Virtual Datastores – represent a storage container in vCenter.

VMware documentation explains this and the surrounding architecture further; see here.

Sadly I don't have any tech in my lab to run Virtual Volumes.  To configure Virtual Volumes see VMware's documentation; there is a whitepaper on configuring Virtual Volumes using an HP 3PAR that can be found here.

Deploy and configure VMware Virtual SAN

I have done a previous blog post on how to set up a VSAN cluster see here.

Configure / View VMFS locking mechanisms

Locking mechanisms are used when multiple hosts access the same VMFS datastore in a shared storage environment.  Two different locking mechanisms exist – ATS-only and ATS+SCSI.  ATS-only is used on all newly formatted VMFS5 datastores if the underlying storage supports it, and supports discrete locking per disk sector.  ATS+SCSI means ATS is attempted first and, if that fails, the host reverts to SCSI reservations; a SCSI reservation locks an entire storage device while an operation requires metadata protection.  Datastores that have been upgraded from VMFS3 use the ATS+SCSI mechanism.

To display current locking information connect to the host and run

>esxcli storage vmfs lockmode list


To change the lockmode run the following; change "--ats" to "--scsi" to switch to ATS+SCSI mode (note the double dashes).

>esxcli storage vmfs lockmode set --ats --volume-label=Lab-NIM-VMFS-01


Configure Storage I/O Control to allow I/O prioritization

Storage I/O Control extends the familiar shares and limits constructs to storage I/O resources, controlling the amount of I/O that is allocated to a VM during periods of I/O congestion.  You can set shares per VM and adjust them as needed; by default every VM's shares are set to Normal (1000) with unlimited IOPS.  Storage I/O Control can only be enabled on datastores that are managed by a single vCenter Server, and is not supported on RDMs or datastores with multiple extents.

To enable Storage I/O Control go to Web Client – Datastore – Manage – Settings – General – Edit


To change the settings for a VM edit the settings – Hard disk – Shares


Configure Storage Multi-pathing according to a deployment plan

Storage multipathing maintains a constant connection between hosts and storage, preventing any single point of failure.  Multipathing provides path failover and load balancing by distributing I/O loads.  When a host starts up it discovers all physical paths to storage devices; based on a set of claim rules the host determines which multipathing plug-in (MPP) should claim the paths and be responsible for managing multipath support to the device.

For paths managed by the Native Multipathing Plug-in (NMP), a second set of claim rules is applied which determines which Storage Array Type Plug-in (SATP) should manage the paths for a specific array type, and which Path Selection Plug-in (PSP) is to be used for each storage device.  The PSP can be changed in the Web Client, but to change the SATP the claim rule must be modified using the CLI.

To view the multipathing details for a datastore go to Web Client – Datastore – Connectivity and Multipathing.  Select Edit to change the PSP


To change the SATP claim rule you must use the CLI.  To list the loaded NMP SATPs run the following

>esxcli storage nmp satp list


The following is an example of changing the NMP SATP rules; see the VMware documentation for the options here.

>esxcli storage nmp satp rule add
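As a fuller sketch based on the patterns in the VMware documentation (the vendor/model strings and the device ID are example values):

```shell
# Claim all LUNs from a given vendor/model with a specific SATP
esxcli storage nmp satp rule add -V NewVend -M NewMod -s VMW_SATP_INV

# Change the default PSP for an SATP, e.g. to Round Robin
esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR

# Or set the PSP per storage device
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
```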



6 thoughts on “VCAP6-DCV Deployment – Objective 2.1 Implement Complex Storage Solutions”

  • Chris Lewis

    Hi Kyle,

    Another great article, just a small point that the commands for ATS and ATS-SCSI are not quite correct:

    esxcli storage vmfs lockmode set –ats –volume-label=Lab-NIM-VMFS-01

    should be

    esxcli storage vmfs lockmode set --ats --volume-label=Lab-NIM-VMFS-01

    (note the double dash); this may just be a font/typeface issue in your article

  • takeshi

    For ATS+SCSI mode, would it be this?

    > To change the lockmode run the following, change “ats” to “scsi” to change to ATS+SCSI mode.
    > esxcli storage vmfs lockmode set --ats --volume-label=Lab-NIM-VMFS-01

    esxcli storage vmfs lockmode set --scsi --volume-label=Lab-NIM-VMFS-01

  • takeshi

    In the exam there is a task about re-presenting a masked LUN and creating a VMFS datastore on it.

    In the “esxcli storage core claimrule list” output, the MASK_PATH rule was set.
    I deleted this MASK_PATH rule and ran “esxcli storage core claimrule load”,
    but the LUN was not recognised again.

    In the end I restarted the ESXi host to make it recognise the LUN again.
    Is there any way to bring it back online without a restart?

    • takeshi

      I worked out how to get the LUN recognised again without restarting.

      1. Delete MASK_PATH rule
      “esxcli storage core claimrule remove -r xxx”
      2. Check rule
      “esxcli storage core claimrule list”
      3. Load rule
      “esxcli storage core claimrule load”
      4. Check rule
      “esxcli storage core claimrule list”
      5.Unclaim all paths to a device
      “esxcli storage core claiming unclaim -t location”
      6.Execute path claiming rules.
      “esxcli storage core claimrule run”

      For the masked LUN you cannot use the -d option because there is no device name
      (it shows “Device: No associated device”).