VCAP6-DCV Deployment – Objective 3.2 – Implement and Manage vSphere 6.x Distributed Switch (vDS) Networks


Objective 3.2 is broken down as follows:

  • Deploy a LAG and migrate to LACP
  • Migrate a vSS network to a hybrid or full vDS solution
  • Analyze vDS settings using command line tools
  • Configure Advanced vDS settings (NetFlow, QOS, etc.)
  • Determine which appropriate discovery protocol to use for specific hardware vendors
  • Configure VLANs/PVLANs according to a deployment plan
  • Create / Apply traffic marking and filtering rules

Deploy a LAG and migrate to LACP

To aggregate the bandwidth of multiple physical NICs on a host, you can create a link aggregation group (LAG) on a vSphere Distributed Switch (vDS) and use it to handle the traffic of distributed port groups.  Once the LAG is created, you must migrate from standalone uplinks to the LAG.  A host can support up to 32 LAGs; however, the number of LAGs you can actually use depends on the capabilities of the underlying physical switch.  For example, if the physical switch supports up to four ports in a LACP port channel, you can connect up to four physical NICs per host to a LAG.

For each host that will use LACP, a separate LACP port channel is needed on the physical switch.  The following must be considered:

  • The number of ports in the LACP port channel must be equal to the number of physical NICs that will be grouped on the host.
  • The hashing algorithm of the LACP port channel on the physical switch must match what is configured on the vDS LAG.
  • All physical NICs that are connected to the LACP port channel must be configured with the same speed and duplex settings.
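As a sketch, the matching physical switch side for a two-NIC LAG might look like the following, assuming a Cisco IOS switch – the interface names and channel-group number are hypothetical examples, not taken from this post:

```
! Hypothetical Cisco IOS sketch for a two-port LACP port channel
interface range GigabitEthernet1/0/1 - 2
 description ESXi host uplinks to vDS LAG
 channel-group 10 mode active   ! LACP; pair with the vDS LAG Active/Passive mode
 speed 1000
 duplex full
!
! The hashing algorithm must match the vDS LAG load balancing mode
port-channel load-balance src-dst-ip
```

Other switch vendors use different syntax, but the same three points apply: port count, hash algorithm, and identical speed/duplex on every member port.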

LACP also has the following limitations on a vDS

  • LACP support is not compatible with software iSCSI multipathing.
  • LACP support settings are not available in Host Profiles.
  • LACP support is not possible between nested ESXi hosts.
  • LACP support does not work with the ESXi dump collector.
  • LACP support does not work with port mirroring.
  • Teaming and failover health check does not work for LAG ports.

To create a new LAG – Web Client – Networking – vDS – Manage – LACP – New Link Aggregation Group.  Give the LAG a name and select the number of ports.

Select the mode as either Active or Passive

  • Active – All LAG ports are in an active negotiation mode, the LAG ports initiate negotiations with the LACP port channel.
  • Passive – All LAG ports are in passive negotiation mode, they respond to LACP packets they receive but do not initiate LACP negotiations.


Select the Load Balancing Mode – this must match the LACP port channel hashing algorithm on the physical switch.



The next step is to add the physical NICs to the newly created LAG.  Web Client – Networking – vDS – Actions – Add and Manage Hosts.  Choose Manage Host Networking – Next.  Select Attached Hosts and add the relevant hosts.  To duplicate the settings to multiple hosts, select Configure identical network settings on multiple hosts (template mode).


Choose which host to use as a template


In this example I use Manage physical adapters (template mode)


I want to change my current standalone uplink ports to the LAG, so I select the vmnic and choose Assign Uplink



To replicate the changes to the other host I choose Apply to all


Analyze the impact and finish the wizard.  Now if I check the networking config for my hosts, I see the newly created LAG as an uplink


To add this uplink to a port group I need to change the Teaming and Failover.  Web Client – Networking – vDS – Port Group – Edit Settings – Teaming and Failover.  Move the LAG into the active uplinks column



It is possible to use the wizard to migrate network traffic to the newly created LAG, by assigning the LAG as an active uplink across multiple hosts.  Web Client – Networking – vDS – Manage – LACP – Migrate network traffic to LAGs


In this example I choose Manage Distributed Port Groups – Teaming and Failover


I click Select distributed port groups to select the relevant port groups that will use the LAG as an uplink


Move the LAG to the Active uplink column and complete the wizard


Migrate a vSS network to a hybrid or full vDS solution

To migrate virtual machine networks to a vDS – Web Client – Networking – vDS – Actions – Migrate VMs to Another Network.  I choose the vSS port group as the source and the vDS port group as the destination.


Then select the VMs to migrate and complete.

To migrate VMkernel adapters – Web Client – Host – Manage – Networking – Virtual Switches – Migrate physical or virtual adapters to this distributed switch.  I then choose Manage VMkernel adapters – Next


Select the VMkernel adapter and choose Assign port group


Choose the relevant vDS port group and complete the wizard


Analyze vDS settings using command line tools

To view current vDS settings I can run the following command

>esxcli network vswitch dvs vmware list
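A few related commands help dig deeper into the vDS state from the host; the lacp sub-commands below assume ESXi 5.5 or later, and net-dvs is an unsupported diagnostic tool:

```shell
# Check LACP configuration and negotiation status for LAGs on the host
esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get

# Low-level (unsupported) dump of the full vDS state cached on the host
net-dvs -l
```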


Configure Advanced vDS settings (NetFlow, QOS, etc.)

NetFlow is used to analyze the IP traffic of VMs flowing through a vDS by sending reports to a NetFlow collector.  To edit NetFlow settings – Web Client – Networking – vDS – Settings – Edit NetFlow


Add the relevant collector IP address and port.  Set the Switch IP address to present the vDS to the NetFlow collector as a single network device, instead of a separate device for each host


To enable NetFlow to monitor IP packets that are passing the ports of a distributed port group – Web Client – Networking – vDS – Actions – Distributed Port Group – Manage Distributed Port Groups


I choose Monitoring to configure NetFlow


Then the relevant port group


Then Enable


Priority tagging is a mechanism for marking traffic that has higher QoS demands.  Assign priority tags to traffic, such as VoIP and streaming video, that has higher networking requirements for bandwidth, low latency, and so on.  You can mark the traffic with a CoS tag in Layer 2 of the network protocol stack or with a DSCP tag in Layer 3.  To enable this on a port group I go to Web Client – Network – vDS – Related Objects – Distributed Port Groups – Port Group – Edit Settings – Traffic filtering and marking.  Change the status to Enabled

Once enabled, a rule needs to be created by selecting the Add button.  In my example I have added the settings for an example VoIP configuration from the VMware documentation


Determine which appropriate discovery protocol to use for specific hardware vendors

Switch discovery protocols help admins determine which physical switch port a host uplink is connected to.  vSphere 6 supports two protocols – Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP).  As well as which port is connected, these protocols also list information such as software versions and device IDs.  To edit this – Web Client – Networking – vDS – Edit Settings – Advanced


Configure VLANs/PVLANs according to a deployment plan

To change the VLAN information on a vDS port group I go to Web Client – Network – vDS – Port Group – Edit Settings – VLAN.  The options here are:

  • None (EST) –  The physical switch performs the VLAN tagging.  The host network adapters are connected to access ports on the physical switch.
  • VLAN (VST) – The virtual switch performs the VLAN tagging before the packets leave the host.  Set a VLAN ID between 1 and 4094.  The host network adapters must be connected to trunk ports on the physical switch.
  • VLAN Trunking (VGT) – The virtual machine performs the VLAN tagging.  The virtual switch preserves the VLAN tags when it forwards the packets between the virtual machine networking stack and external switch.  For security reasons, you can configure a distributed switch to pass only packets that belong to particular VLANs.
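For VST and VGT, the host uplink ports must be configured as trunks on the physical switch.  A hypothetical Cisco IOS sketch – the interface name and VLAN IDs are examples only:

```
interface GigabitEthernet1/0/1
 description ESXi host uplink
 switchport mode trunk
 ! Restrict the trunk to the VLANs the vDS port groups actually use
 switchport trunk allowed vlan 10,20,30
```

For EST, by contrast, the same ports would be plain access ports in a single VLAN, with no tagging done by the host.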



Private VLANs (PVLANs) are used to overcome VLAN ID limitations by further segmenting a logical broadcast domain into smaller broadcast sub-domains.  A PVLAN is identified by its primary VLAN ID, and a primary VLAN ID can have multiple secondary VLAN IDs.  Primary VLANs are Promiscuous, so ports on a private VLAN can communicate with ports configured on the primary VLAN.  Ports on a secondary VLAN can be either Isolated, communicating only with promiscuous ports, or Community, communicating with both promiscuous ports and other ports on the same secondary VLAN.

Before you can change a port group to use a PVLAN, it must first be enabled on the vDS.  Web Client – Networking – vDS – Manage – Private VLAN – Edit.  Here set the primary VLAN ID, then set the secondary VLAN IDs with their VLAN type


Now PVLAN can be set at the port group


Create / Apply traffic marking and filtering rules

In vSphere 6 the vDS supports Basic multicast filtering and Multicast snooping for filtering multicast packets that are related to individual multicast groups.

  • Basic – vDS forwards multicast traffic for VMs according to the destination MAC address for the multicast group.
  • IGMP/MLD Snooping – the vDS forwards multicast traffic in a precise manner, according to the Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) membership information that virtual machines send to subscribe to multicast traffic.

Use IGMP/MLD snooping when workloads on the vDS subscribe to more than 32 multicast groups or must receive traffic from a specific source.

Web Client – Networking – vDS – Edit Settings – Advanced – Multicast Filtering Mode.  Select Basic or IGMP/MLD Snooping


When IGMP/MLD snooping is enabled and no snooping querier is configured on the physical switch, the vDS sends general queries about VM memberships.  The default interval for sending snooping queries is 125 seconds.  To change it – Web Client – Host – Manage – Settings – Advanced System Settings – Net.IGMPQueryInterval – Edit
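The same setting can also be viewed and changed from the command line; this sketch assumes the standard advanced-option path for the setting, and the 60-second value is just an example:

```shell
# View the current IGMP query interval (default 125 seconds)
esxcli system settings advanced list --option=/Net/IGMPQueryInterval

# Change the snooping query interval to 60 seconds (example value)
esxcli system settings advanced set --option=/Net/IGMPQueryInterval --int-value=60
```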

