I am putting a link to the official VMware documentation on this because I Googled it just to confirm to myself that I am not doing anything wrong! What I need to do is migrate the physical NICs and the Management/VM Network VMkernel NIC from a standard switch to a distributed switch. The process is simple and straightforward, and one I have done numerous times; yet it fails for me now!
Here’s a copy-paste from the documentation:
- Navigate to Home > Inventory > Networking.
- Right-click the dvSwitch.
- If the host is already added to the dvSwitch, click Manage Hosts; else click Add Host.
- Select the host(s), click Next.
- Select the physical adapters (vmnic) to use for the vmkernel, click Next.
- Select the virtual adapter (vmk) to migrate and click the Destination port group field. For each adapter, select the correct port group from the dropdown, click Next.
- Click Next to omit virtual machine networking migration.
- Click Finish after reviewing the new vmkernel and uplink assignments.
- The wizard and the job complete, moving both the vmk interface and the vmnic to the dvSwitch.
Basically add physical NICs to the distributed switch & migrate vmk NICs as part of the process. For good measure I usually migrate only one physical NIC from the standard switch to the distributed switch, and then separately migrate the vmk NICs.
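For reference, moving a single physical NIC can also be done from the host CLI instead of the wizard. This is only a sketch using esxcfg-vswitch; the switch names, uplink, and dvPort ID here (vSwitch0, dvSwitch0, vmnic1, port 100) are placeholders, not values from my environment:

```shell
# Unlink the physical NIC from the standard switch (placeholder names).
esxcfg-vswitch -U vmnic1 vSwitch0

# Attach it as an uplink to the distributed switch, on a free dvPort ID.
esxcfg-vswitch -P vmnic1 -V 100 dvSwitch0

# List switches to confirm where the uplinks now live.
esxcfg-vswitch -l
```

These commands run on the ESXi host itself (SSH or console), so they keep working even when vCenter loses sight of the host.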
Here’s what happens when I do the above now. (Note the “now”: I never had an issue with this earlier. I am guessing it must be some bug in a newer 5.5 update, or something wrong in the underlying network at my firm. I don’t think it’s the networking, though, because I got my network admins to take a look, and I tested that all NICs on the host have connectivity to the outside world — by making each NIC the active one in turn and disabling the others.)
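That one-NIC-at-a-time connectivity test can be scripted rather than clicked through. A sketch with esxcli — the switch and uplink names (vSwitch0, vmnic0) and the gateway address are placeholders:

```shell
# Make vmnic0 the only active uplink on the standard switch;
# uplinks not listed become unused (placeholder names).
esxcli network vswitch standard policy failover set -v vSwitch0 --active-uplinks vmnic0

# Test outside connectivity from a vmkernel interface.
vmkping 10.xxx.xx.1   # placeholder: the default gateway

# Repeat for vmnic1, vmnic2, ... then restore the original uplink order.
```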
First it’s stuck in progress:
And then vCenter cannot see the host any more:
Oddly, I can still ping the host on the vmk NIC IP address. However, I can’t SSH into it, so the Management bits are what seem to be down. The host has connectivity to the outside world because it passes the Management network tests from the DCUI (which I can connect to via iLO). I restarted the Management agents too, but no luck — I still cannot SSH in or get vCenter to see the host. Something in the migration step breaks things. The only solution is to reboot, after which vCenter can see the host again.
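By “restarted the Management agents” I mean the usual ESXi 5.x commands from the DCUI or a console session:

```shell
# Restart the host agent and the vCenter agent individually...
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# ...or restart all management agents in one go.
services.sh restart
```

(The DCUI’s “Restart Management Agents” option does essentially the latter.)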
Here’s what I did to work around it anyway.
First I moved one physical NIC to the distributed switch.
Then I created a new management portgroup on the distributed switch, and a VMkernel NIC on it for management traffic. I assigned the new NIC a temporary IP.
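I did this in the GUI, but the same temporary vmk can be created from the host CLI. A sketch — the dvSwitch name and dvPort ID (dvSwitch0, port 101) are placeholders, and the IP is the temporary one I used:

```shell
# Create a new vmkernel interface bound to a free port on the distributed switch
# (placeholder dvSwitch name and dvPort ID).
esxcli network ip interface add --interface-name=vmk4 --dvs-name=dvSwitch0 --dvport-id=101

# Give it the temporary static IP.
esxcli network ip interface ipv4 set -i vmk4 -I 10.xxx.xx.23 -N 255.255.255.0 -t static
```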
Next I opened a console to the host. Here’s the current config on the host:
~ # esxcli network ip interface ipv4 get
Name  IPv4 Address   IPv4 Netmask   IPv4 Broadcast  Address Type  DHCP DNS
----  ------------   -------------  --------------  ------------  --------
vmk0  10.xxx.xx.30   255.255.255.0  10.xxx.xx.255   STATIC        false
vmk1  10.xxx.xx.24   255.255.255.0  10.xxx.xx.255   STATIC        false
vmk2  10.xxx.xx.25   255.255.255.0  10.xxx.xx.255   STATIC        false
vmk3  188.8.131.52   255.255.255.0  184.108.40.206  STATIC        false
vmk4  10.xxx.xx.23   255.255.255.0  10.xxx.xx.255   STATIC        false
The interface vmk0 (or rather its IPv4 address) is what I wanted to migrate. The interface vmk4 is the one I created temporarily.
I now removed the IPv4 address of the existing vmk NIC and assigned it to the new one, and confirmed the changes just to be sure. As soon as I did so, vCenter picked up the changes. I then tried to move the remaining physical NIC over to the distributed switch, but that failed with an error that the existing connection was forcibly closed by the host. So I rebooted the host. Post-reboot I found that the host now thought it had no IP, even though it was responding to the old IP via the new vmk. So this approach was a no-go (but I am still leaving it here as a reminder to myself that it does not work).
I now migrated vmk0 from the standard switch to the distributed switch. As before, this fails — vCenter loses connectivity to the ESX host. But that’s why I have a console open. As expected, the output of esxcli network ip interface list shows me that vmk0 hasn’t moved to the distributed switch:
So now I go ahead and remove the IPv4 address of vmk0, assign it to vmk4 (the new one), and confirm the changes:
~ # esxcli network ip interface ipv4 set -i vmk0 -t none
~ # esxcli network ip interface ipv4 set -i vmk4 -I 10.xxx.xx.30 -N 255.255.255.0 -t static
~ # esxcli network ip interface ipv4 get
Next I rebooted the host (via reboot), and via the CLI I removed vmk0 (for some reason the GUI showed both vmk0 and vmk4 with the same IP I assigned above).
~ # esxcli network ip interface remove --interface-name vmk0
Post-reboot I can go back to the GUI and move the remaining physical NIC over to the distributed switch. :) Yay!
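To double-check the end state after a dance like this, a couple of read-only commands on the host are enough (nothing here changes configuration):

```shell
# Confirm which switch and portgroup each vmkernel interface is bound to,
# and that vmk4 now carries the management IP.
esxcli network ip interface list
esxcli network ip interface ipv4 get

# Confirm both physical NICs now show as uplinks on the distributed switch.
esxcfg-vswitch -l
```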