Removing Datastores from an ESX host

Datastores in ESX hosts are made up of extents. An extent can be thought of as an underlying physical disk/ LUN that goes into making up the datastore.

A datastore is usually made up of a single extent, but can span multiple extents too. So removing a datastore from an ESX host means you unmount the datastore and then detach its extents.

Datastores have friendly names that you assign when creating them. Extents have names that usually start with naa or eui.

In the vSphere client, select a host, go to its Configuration tab, then Storage, and select the Datastores view – the “Identification” column shows the datastore name and the “Device” column shows the extent name.

In PowerCLI the same information can be seen using Get-View or the ExtensionData property of a datastore object (as in my previous post).
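
For example, something along these lines lists each datastore and the device names of its extents (a minimal sketch; the ExtensionData.Info.Vmfs.Extent path applies to VMFS datastores):

  Get-Datastore | Select-Object Name,
      @{N="Extents"; E={ ($_.ExtensionData.Info.Vmfs.Extent | ForEach-Object { $_.DiskName }) -join ", " }}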

Anyway, to remove a datastore from an ESX host you first go to the Datastores screen as above, select the datastore, right-click and select “Unmount”. This will do a bunch of checks (such as whether any VMs running on that host have their disks on this datastore) and then let you unmount it. This only removes the datastore name from the ESX host though; the host can still see and mount the datastore. So the next step is to also detach the extent from the host – i.e. unpresent the underlying disk/ LUN from the host.
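
The unmount can also be scripted via the host’s storage system object. A minimal sketch, with placeholder host and datastore names (note this skips the safety checks the GUI performs):

  # Placeholder host and datastore names
  $vmhost = Get-VMHost "esx01"
  $ds = Get-Datastore "Datastore1"

  # Get the host's storage system managed object
  $storSys = Get-View $vmhost.ExtensionData.ConfigManager.StorageSystem

  # Unmount the VMFS volume by its UUID
  $storSys.UnmountVmfsVolume($ds.ExtensionData.Info.Vmfs.Uuid)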

For this you need the extent names. Get these as above (by expanding the “Device” column to see the name; or use PowerCLI). Then go to the Devices view (instead of the Datastores view that you are currently on). Expand the “Identifier” column and find the extents you want to detach. Once you find them, right-click and select “Detach”. This too does some checks and then lets you detach the extent if it’s not in use.
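
The detach step can be scripted too. A minimal sketch continuing from the snippet above, with a placeholder canonical name (DetachScsiLun expects the LUN’s UUID rather than its naa name):

  # Canonical name of the extent to detach (placeholder value)
  $canonicalName = "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

  # Look up the SCSI LUN on the host and detach it by its UUID
  $lun = Get-ScsiLun -VmHost $vmhost -CanonicalName $canonicalName
  $storSys.DetachScsiLun($lun.ExtensionData.Uuid)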

That’s it.

p.s. Too lazy to take screenshots. Sorry about that. :)

It is possible to vMotion VMs across ESX hosts without shared storage

Today (well actually, a few days ago; but today is when I read more about it) I learnt that you can vMotion VMs across hosts without shared storage.

This is only for vSphere 5.1 and above. That’s a pretty cool feature, especially because at work we are migrating all our VMs to new hosts & storage and one of the things we were wondering about was how to move the VMs across. The new hosts have 3Par storage while the old hosts have StoreVirtual storage, so the thinking was that we’d probably have to give the new hosts access to the StoreVirtual storage and then do a vMotion. Now we won’t have to!

There’s no separate name for this sort of vMotion and it seems to be a feature that hasn’t been hyped much. For anyone interested, here are some screenshots of how to do such a vMotion.

For starters here’s my testlab setup:

One datacenter. Two clusters. Cluster one has two hosts with shared storage. Cluster two has a single host with no shared storage. UBUNTU1 is a VM I would like to migrate over.

Note that host esx03 has no connectivity to the shared storage either. I have removed the iSCSI VMkernel mappings from it so there’s no confusion.

ESX01 and ESX02 have access to shared storage.

Migration is quite simple. Right-click the VM and select Migrate. Choose the option to migrate both host and datastore. If the VM is powered on (which it would be, as we are doing a vMotion instead of a cold migration) you will see the option is grayed out in the older/ C# vSphere client.

That’s because the newer features of vSphere 5.1 are only available in the web client, so you’ll have to use that instead (thanks to this blog post for pointing me to that).

Select the destination host. Note that vMotion only works within a datacenter, so you can only choose a host in the same datacenter (as opposed to cold migration, which can happen between datacenters).

(Screenshots: Select Datacenter, Select Host, Select Datastore.)

Notice that any datastore accessible from the destination host can be selected.

And that’s it. vMotion begins and I have easily live migrated a VM from one host to another without any shared storage. Cool! :)
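
For what it’s worth, the same shared-nothing migration should also be possible from PowerCLI with Move-VM – a sketch with placeholder names, assuming a PowerCLI release that supports combining the host and datastore move for a powered-on VM (older releases may insist on the web client or the underlying RelocateVM_Task API):

  # Placeholder names for the VM, destination host and destination datastore
  $vm       = Get-VM "UBUNTU1"
  $destHost = Get-VMHost "esx03*"
  $destDS   = Get-Datastore "DatastoreOnESX03"

  # Live migrate both compute and storage in one operation
  Move-VM -VM $vm -Destination $destHost -Datastore $destDS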

Notes on vSphere High Availability (HA)

Just some notes on vSphere HA as I read along on that. Nothing new here …

Starting with vSphere 5.0 HA has a Master/ Slave model. One ESXi host is elected as a Master, the rest are Slaves. The Master is the host with the greatest number of datastores connected to it; if all ESXi hosts have the same number of datastores connected, the Master is the one with the largest Managed Object ID (MOID). Note that the MOID is interpreted lexically – so an MOID of 99 is larger than one of 100. PowerCLI can be used to view the MOIDs.
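
A minimal sketch of one way to do that – the MoRef value exposed via ExtensionData is the MOID (the Id property shows it too, prefixed with the object type):

  Get-VMHost | Select-Object Name, @{N="MOID"; E={ $_.ExtensionData.MoRef.Value }}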

Also, the MOID is a vCenter-specific construct. Whenever a host, VM, datastore, etc. is added to vCenter it is assigned an MOID. For instance, the MOIDs of my datastores can be listed the same way.
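
Again just a sketch:

  Get-Datastore | Select-Object Name, @{N="MOID"; E={ $_.ExtensionData.MoRef.Value }}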

Although I haven’t used this, it’s also possible to find MOIDs via the vSphere Managed Object Browser. See this KB article for more info.

Back to the topic – the above is how a Master is elected. There’s only one Master per cluster. When it comes to HA, the Fault Domain Manager (FDM) on this Master is responsible for most of the tasks (which is why even if vCenter is down for a while HA can continue working). vCenter checks with the Master and the Master communicates with vCenter to keep each other abreast of the cluster situation.

  • FDM is installed at /opt/vmware/fdm/fdm/
  • FDM config files are at /etc/opt/vmware/fdm/

The Master monitors the Slave hosts and if a Slave goes down/ is unreachable the Master is responsible for restarting its Protected VMs elsewhere. The Master is also responsible for keeping the Slaves abreast of the cluster configuration.

Slaves are limited to monitoring the VMs running on them. Slaves monitor VM health and if a Protected VM powers down they inform the Master so it can be restarted. (Note on Protected VMs: once you enable VM monitoring on a cluster or set a VM as Protected, the VM must be powered off and powered on to be protected). Slaves also keep in touch with each other and if they find the Master is down they conduct an election to select a new Master.

The only time vCenter communicates with Slaves is when a new Master needs to be elected or when the Master reports a Slave as missing and so vCenter tries to contact it.

Slaves send network heartbeats to the Master every second. When a Master stops receiving heartbeats from a Slave it knows it is offline or partitioned/ isolated. Similarly when a Slave stops receiving heartbeats from a Master it knows the Master is offline or partitioned/ isolated.

  • If a Slave is cut off from all other hosts (Master and Slaves) it is considered isolated (caveat: you can also specify up to 10 isolation IP addresses to ping – if these are reachable but the Master and Slaves are not, the Slave does not consider itself isolated, only partitioned).
  • If a Slave is cut off from the Master but still has contact with some of the other Slaves, then it is considered partitioned.

In the past if a Slave were isolated/ partitioned the Master would consider it offline and restart its Protected VMs elsewhere. Starting with vSphere 5.0 the Master also sends a ping (ICMP packet) to the Slave to see if it responds, and uses datastore heartbeats to verify the Slave is really down. It could be that the Management network is down but the VM and storage networks are up, so the VMs are still functioning as expected.

Datastore heartbeats work thus (and remember they are only used in case of isolation/ partition scenarios):

  • When enabling HA for a cluster, a datastore is automatically selected (or can be selected manually by the user) to be used for datastore heartbeats (the PowerCLI sketch after this list shows one way to check which datastore was picked).
  • On this datastore a folder called .vSphere-HA is created, within which a sub-folder named FDM-<Fault Domain ID>-<vCenter Server Name> is created. (Such a name allows the same datastore to be used by multiple clusters).
  • Each host creates a file with its MOID in the name in this sub-folder.
  • One such file is host-X-hb, created by each host (you can check the /var/log/fdm.log file on each host to see it creating this file). When a Slave does not get network heartbeats from the Master it updates its own file (and also checks the timestamp of the Master’s file – if that is being updated it means the Master is alive). Similarly, when a Master does not hear from a Slave it checks the Slave’s file for updates. This is how datastore heartbeats work.
  • If a Slave is network partitioned – i.e. it cannot contact the Master – but can see some of the other Slaves, the Master and the Slave can each conclude from the datastore heartbeats that the other is still alive.
    • If the Master is down – i.e. the Slaves think they are partitioned because actually the Master is down – they can now elect a new Master since there are no datastore heartbeats from the Master.
    • If the Slave is down – i.e. the Master is not getting any datastore heartbeats from the Slave – then it restarts the Protected VMs on other hosts. (If the Slave were actually up but had lost network access to the datastore and so cannot update heartbeats, it is as good as down because the VMs have probably crashed by now).
  • If a Slave is network isolated – i.e. it cannot contact the Master or any other Slave (nor can it ping the isolation addresses) – then the Slave sets a special bit in its host-X-poweron file in the same sub-folder. This tells the Master that the Slave is network isolated.
    • The Master then locks the file called protectedlist. This is a list of all Protected VMs. Once the Master has locked this file, the Slave knows the Master has taken responsibility for the Protected VMs and the Slave can leave these powered on, shut down, or power off (depending on which of these is selected as the host isolation response when setting up HA).
    • The protectedlist file thus ensures that unless another host has taken over these VMs the current host will not shut down/ power off these.
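
As referenced in the list above, here is one way to check which heartbeat datastore(s) a cluster is actually using – a sketch with a placeholder cluster name; RetrieveDasAdvancedRuntimeInfo is a method on the cluster’s API object:

  # Placeholder cluster name
  $cluster = Get-Cluster "Cluster1"

  # Ask the cluster's API object for the HA runtime info
  $dasInfo = $cluster.ExtensionData.RetrieveDasAdvancedRuntimeInfo()

  # Resolve the heartbeat datastore MoRefs to friendly names
  $dasInfo.HeartbeatDatastoreInfo | ForEach-Object { (Get-View $_.Datastore).Name }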

Two advanced options to keep in mind:

  • I mentioned this earlier: das.isolationAddress[0-9] allow one to specify up to 10 isolation IP addresses to check before a host considers itself isolated.
  • And das.allowNetwork[0-9] allow one to specify up to 10 port groups to use for HA. See this KB article for examples, and the PowerCLI sketch below for one way to set such options.
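
Setting these from PowerCLI looks something like this – a sketch with a placeholder cluster name and example values for the isolation address and port group:

  # Placeholder cluster; example isolation address and port group values
  $cluster = Get-Cluster "Cluster1"

  New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.isolationaddress0" -Value "192.168.1.1" -Confirm:$false
  New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.allownetwork0" -Value "Management Network" -Confirm:$false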

Lastly, I haven’t read it fully but this HA Deepdive is a great resource.