VMFS Recovery™
Recover data from damaged or formatted VMFS disks or VMDK files
Last updated: Apr 08, 2024

VMware Distributed Switch – The Complete Guide

The VMware vSphere Distributed Switch (VDS) is a centralized platform for configuring, monitoring, and managing virtual machine networking across multiple ESXi hosts. In essence, the VDS acts as a virtual switch that provides network connectivity within the vSphere environment.

VMware offers two types of virtual switches, commonly referred to as vSwitches: the vSphere Standard Switch (VSS) and the vSphere Distributed Switch (VDS). The VSS is the original version used in older vSphere releases, while the VDS is the newer version with numerous improvements and features to offer.

This article looks into the functions of the vSphere Distributed Switch (VDS), while also comparing it to the predecessor version – the vSphere Standard Switch (VSS).

What is a VDS (vSphere Distributed Switch)?

VMware vSphere Distributed Switch (VDS) is simply a centralized interface for virtual machine network management. It lets you configure, monitor, and administer virtual machine network access across an entire data center. In short, the VDS is a critical component of VMware vSphere.

Commonly referred to as the vSwitch, the VDS facilitates communication between ESXi servers and virtual machines, allowing them to interact with the physical networking layer. This upgraded vSwitch version was introduced in vSphere version 4.x to meet the demanding virtualization needs of VMware customers, which the vSphere Standard Switch (VSS) could not handle.

The VDS is a perfect fit for large-scale environments where consistent virtual networking across many hosts is required. To clearly understand the advantages of the VDS, however, it is important to learn about its predecessor, the vSphere Standard Switch (VSS), which was the default network connectivity provider for vSphere ESXi hosts and VMs.

VMware Distributed Switch Configuration

This is a guide on how to configure a vDS in a vSphere environment. Be sure to follow each step carefully.

1 – Creating a VDS on vSphere 6.7

Here, we will set up a VDS in a vSphere 6.7 environment that has two ESXi hosts running vSphere 6.7, managed by vCenter:

  • ESXiA – 192.168.102.208 (the management interface IP address)
  • ESXiB – 192.168.102.209
  • vCenter – 192.168.102.104

  • Step One: Open vSphere Client and head on to the Hosts and Clusters tab to access your ESXi in the datacenter.
  • Step Two: Right-click on the datacenter that stores your ESXi hosts and select Distributed Switch 🡺 New Distributed Switch.
  • Step Three: When the New Distributed Switch wizard opens, choose a name for the new switch, select a datacenter for it, choose the ESXi compatibility version, and configure the VDS settings, such as the number of uplink ports, Network I/O Control, and the default port group (including its name).

Step Four: Review the settings and configurations you made and click the “Finish” button. Once the wizard closes, your VDS has been created. You can access it by navigating to Networking 🡺 the datacenter where you stored it 🡺 VM Network. If you need to edit the VDS further, click on it from here, go to the Configure tab, and make your changes.
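The wizard's inputs can be sketched as a simple configuration structure. This is purely illustrative: the field names below are invented for the example, not part of any VMware API.

```python
from dataclasses import dataclass

# Illustrative model of the New Distributed Switch wizard inputs.
# Field names are hypothetical, not vSphere API properties.
@dataclass
class VDSConfig:
    name: str
    compat_version: str      # ESXi compatibility version, e.g. "6.7.0"
    uplink_ports: int        # number of uplink ports per host
    nioc_enabled: bool       # Network I/O Control on/off
    default_port_group: str

    def validate(self) -> None:
        if not self.name:
            raise ValueError("switch name is required")
        if self.uplink_ports < 1:
            raise ValueError("at least one uplink port is required")

cfg = VDSConfig("DSwitch-Lab", "6.7.0", 4, True, "DPortGroup-VMs")
cfg.validate()
print(cfg.name, cfg.uplink_ports)
```

Reviewing such a structure before clicking "Finish" mirrors the wizard's final review screen: every field the wizard asks for is captured in one place.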

2 – Adding ESXi Hosts to a VDS

After you have created a VDS, the next step is to add your ESXi hosts to it; this enables distributed switching on your hosts. Adding hosts to a new VDS is quite a straightforward process:

  • Step One: In vSphere Client, go to the Networking section, right-click on the newly created VDS, and select Add and Manage Hosts.
  • Step Two: Click on “New Hosts” and select the ESXi hosts you want to add to the VDS.
  • Step Three: Add or remove physical adapters attached to the VDS. These physical adapters are Network Interface Controllers (NICs). You can also assign uplinks here.
  • Step Four: After the physical adapters, you can add or remove VMkernel adapters; you may also leave the default settings and proceed.
  • Step Five: You can also migrate your VMs to the VDS; this tab allows you to migrate your virtual machines or other network adapters to the vSphere Distributed Virtual Switch.

Step Six: Review the settings and options you selected, then click the “Finish” button.

3 – Adding VMkernel Adapters

With your hosts and VMs added to the VDS, you can add VMkernel network adapters to a port group of the vSwitch to enable different services and connectivity features, such as vMotion. Here is how to achieve this:

  • Step One: On the vCenter interface, go to Network and select the VDS to add VMkernel Adapters. Right-click on the VDS and select Add VMkernel Adapters.
  • Step Two: Attach the ESXi hosts to which you want the service to add the adapters.
  • Step Three: Adjust any settings that need to be changed; for example, you can raise a port group's MTU from the default 1500 bytes to 9000 bytes to enable jumbo frames. Also, be sure to tick any Available Services option you want to enable, for example, vMotion.
  • Step Four: Here, you need to set the IPv4 addresses for your VMkernel adapters; you can use static IP addresses for your ESXi servers or obtain them automatically via DHCP.

Step Five: Review all settings you made and finish the process.
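Steps Three and Four above can be sketched as a small pre-flight check: before applying static IPv4 settings, confirm each address belongs to the intended subnet and the MTU is in a sensible range. The helper below is a hypothetical sketch using Python's standard `ipaddress` module, not a VMware API.

```python
import ipaddress

# Hypothetical sketch: validate static IPv4 settings for VMkernel
# adapters before applying them in the wizard. Not a VMware API.
def plan_vmkernel_ipv4(host_ips, network_cidr, mtu=1500):
    net = ipaddress.ip_network(network_cidr)
    for ip in host_ips:
        if ipaddress.ip_address(ip) not in net:
            raise ValueError(f"{ip} is outside {network_cidr}")
    # 1500 is the common default MTU; 9000 enables jumbo frames
    if not 1280 <= mtu <= 9000:
        raise ValueError("MTU outside the typical 1280-9000 range")
    return {ip: {"network": network_cidr, "mtu": mtu} for ip in host_ips}

plan = plan_vmkernel_ipv4(
    ["192.168.102.208", "192.168.102.209"], "192.168.102.0/24", mtu=9000
)
print(plan["192.168.102.208"]["mtu"])  # 9000
```

The example reuses the lab addresses from the setup section; catching a wrong subnet or MTU here is far cheaper than debugging a dead VMkernel interface afterwards.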

4 – Cross-Checking VDS Configuration

For clarity's sake, after you’re done configuring your new vSwitch, you should recheck all the settings you chose to ensure they are accurate and implemented as required. To do this, from the Networking tab, click on the vSphere Distributed Switch and go to Configure 🡺 Settings 🡺 Topology. This displays a topological view of your VDS elements; from here, you can see the attached hosts and VMs and view the distributed switch settings for each element.

If you wish, you can export the VDS configuration so you don’t have to repeat the whole process the next time you need the same configuration, perhaps in another vSphere environment. To export your settings, go to the distributed virtual switch tab and click on Actions 🡺 Settings 🡺 Export Configuration.
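Conceptually, exporting the configuration is just serializing the switch's settings so they can be restored later. The sketch below illustrates that round-trip idea with JSON; the real vSphere export is performed through the client UI and produces its own backup file format, so this is an analogy, not the actual mechanism.

```python
import json
import os
import tempfile

# Illustrative export/import round-trip; the switch settings here
# are made-up sample values, not read from a real VDS.
switch_config = {
    "name": "DSwitch-Lab",
    "uplinks": 4,
    "port_groups": ["DPortGroup-VMs", "DPortGroup-vMotion"],
}

path = os.path.join(tempfile.gettempdir(), "vds-backup.json")
with open(path, "w") as f:
    json.dump(switch_config, f)           # "Export Configuration"

with open(path) as f:
    restored = json.load(f)               # "Import Configuration"

assert restored == switch_config
print("restored", restored["name"])
```

The point of the analogy: a lossless export means the restored settings are identical to the originals, which is exactly what makes the backup usable in another environment.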

What is a vSphere Standard Switch (vSS)?

Before the VDS was introduced, the VSS performed the core functions that the VDS was later designed to scale up. The vSphere Standard Switch (VSS) facilitates network connectivity for ESXi hosts and virtual machines; it also handles VMkernel traffic. But being a “Standard” switch, the VSS works with only a single host; if you have multiple hosts, you must set up a VSS on each host.

vSphere standard switches bridge internal traffic between the VMs in a VLAN. One of their core benefits is that you do not need to purchase a vSphere Enterprise Plus license to use them. Because a VSS is created at the host level, it supports only outbound traffic shaping, and features such as vMotion require identically configured port groups on every host.

The functionality of the VSS is similar to that of a physical Ethernet switch; it allows the VM network adapters and physical NICs on the ESXi host to use its (the VSS switch’s) logical ports. A VSS is created by default the moment you install VMware ESXi. Also, when you set a management IP address for your ESXi host, a VMkernel port is automatically created for it as the first port on the host’s default vSwitch0.

Typical of virtual switches, the VSS comprises a management plane and a data plane, so each vSphere Standard Switch can be managed individually.

Challenges of the vSphere Standard Switch (VSS)

The VDS exists to address the limitations of the VSS. Standard switches are not very efficient in dynamic environments or enterprise-level deployments, because the VLANs associated with VSS port groups must be managed and maintained on each specific ESXi host; two ESXi hosts in the same cluster cannot read or access each other’s VSS switches – each host can only use the switches residing on it.

In other words, if you create 50 VSS port groups on one ESXi host, you must repeat the same configuration on every other ESXi host that needs those port groups. Using a VSS in a dynamic environment therefore requires creating port groups and VLAN IDs with the same names and details on all ESXi hosts in a vSphere cluster.
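The duplication cost is easy to quantify: with a VSS, every port group must be recreated on every host, while with a VDS each port group is defined once in vCenter. A toy calculation, using the 50-port-group example above and an invented host count:

```python
# VSS: each port group must be configured on every ESXi host.
def vss_operations(port_groups: int, hosts: int) -> int:
    return port_groups * hosts

# VDS: each port group is configured once, centrally in vCenter,
# so the host count does not matter.
def vds_operations(port_groups: int, hosts: int) -> int:
    return port_groups

hosts, port_groups = 10, 50
print(vss_operations(port_groups, hosts))  # 500 configuration steps
print(vds_operations(port_groups, hosts))  # 50 configuration steps
```

The gap widens linearly with every host you add, which is why the VSS approach stops scaling in large clusters.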

Note: with VSS, any VM migrated via vMotion from one host to another in a cluster must find the same virtual networks on the destination ESXi host, or the VM’s network connectivity will be lost after migration.

VSS looks easy and simple when you have only a few ESXi hosts. But if you run and manage hundreds of ESXi hosts that require the same vSwitch, or you need to make a switch adjustment that should apply to all your hosts, the limitations of VSS become very apparent, and you start to look for a more flexible alternative – the VDS.

Differences Between Distributed and Standard Switch

  • A VDS provides a centralized management platform spanning multiple hosts, while a VSS is managed individually within each ESXi host.
  • You can mirror network traffic from one virtual switch port to another using a distributed switch.
  • VDS provides network I/O control to minimize network congestion and prioritize traffic when a network is overwhelmed.
  • Multiple advanced features are available on VDS, including Link Aggregation Control Protocol (LACP), NetFlow, Private VLANs (PVLANs), and Link Layer Discovery Protocol (LLDP) – you won’t find these on a standard switch.
  • The VDS supports vNetwork Switch API for connecting with third-party applications.
  • You don’t need vCenter to use a standard switch for ESXi network access, but vCenter is required for distributed switch functionality.
  • The VDS allows you to export and import vSwitch configurations, which can serve as a backup; you won’t find such a feature on a standard switch.

There are many features available with VDS that are not available with VSS. Even outside dynamic environments, VDS offers a lot of flexibility compared with VSS.

Migrating From vSphere Standard Switch (vSS) to vSphere Distributed Switch (vDS)

If you are using VSS and want to migrate to VDS, first, you need a vSphere Enterprise Plus license, and then you can carry out the migration. Interestingly, VMware has “safeguards” to prevent network connectivity loss during the migration. However, any misconfiguration would still lead to network loss. To get started, here are some important points to keep in mind:

  • Know the total number of VMkernel Network Adapters existing on your vSphere Standard Switches
  • Pinpoint the ESXi host that backs the VSS
  • Decide whether you want to reuse the same physical ESXi adapters of the VSS for the vDS migration.
  • Ascertain the virtual machines connected to the existing standard switches
  • It is much safer to use the VSS-to-VDS migration wizard so you don’t miss any step in the migration procedure.

The Guide:

  • Step One: Right-click on the VDS you want to migrate to and select Add and Manage Hosts.
  • Step Two: Choose Manage host networking and click on “Attached hosts…” to select the ESXi hosts you want to move.
  • Step Three: Select the hosts you wish to migrate (you can select multiple hosts). On the “Manage Physical Adapters” tab, select the existing physical adapters backing the VSS; when you select the adapter(s), also ensure to “Assign uplink” to link the VDS to an uplink port group.
  • Step Four: To mitigate network connectivity issues, you need to migrate the management VMkernel adapter. So, click on Assign Port Group and select the VDS port group to which you want to reassign the management VMkernel adapter.
  • Step Five: Here is where it gets a bit tricky: you will lose connections to any VM previously connected to the VSS, since its physical network adapters have been reassigned. Click on “Assign Port Group” and reassign each VM to a port group on the vDS.

Step Six: Review your edits and configurations, then click on “Finish” to complete the migration.
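The riskiest part of the procedure above is Step Five: any VM whose port group is never reassigned ends up disconnected. The sketch below models that bookkeeping with plain dictionaries; the VM and port group names are invented for illustration.

```python
# Toy model of VSS-to-VDS migration bookkeeping: every VM attached
# to a VSS port group must be mapped to a VDS port group, otherwise
# it loses connectivity once the physical NICs are reassigned.
def find_disconnected(vm_to_vss_pg, vss_to_vds_pg):
    return [vm for vm, pg in vm_to_vss_pg.items()
            if pg not in vss_to_vds_pg]

vm_to_vss_pg = {"web01": "VM Network", "db01": "VM Network",
                "app01": "Backup"}
vss_to_vds_pg = {"VM Network": "DPortGroup-VMs"}  # "Backup" unmapped

stranded = find_disconnected(vm_to_vss_pg, vss_to_vds_pg)
print(stranded)  # ['app01']
```

An empty result from such a check before clicking "Finish" is the property you want: every VM has a destination port group on the vDS.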

How to Recover VMware Data from Disk?

If you discover that crucial files went missing after these advanced switch management and migration operations, you can recover them using DiskInternals VMFS Recovery. This software lets you recover VMFS files and VM data for VMs and ESXi hosts in VMware vSphere environments.

The app connects remotely via SSH to ESXi hosts and can mount virtual drives as physical drives. It also supports all common Windows and Linux file systems and features a built-in Recovery Wizard to guide you through the recovery process. You can install the software on any Windows system.

Of course, DiskInternals VMFS Recovery can recover corrupt and damaged VM files and VMDK images. It allows you to preview the files after recovery to ensure they are the actual ones you need to recover.

Conclusion

While the VSS is still fine for smaller environments today, it is not ideal for enterprise-level deployments. Migrating from VSS to VDS is possible, but it requires taking each migration step seriously and assigning the right parameters and port groups. And if your files do get lost, you can recover them using DiskInternals VMFS Recovery.
