
HP DL360 Gen9 with HP FlexFabric 534 adapter and HP Ethernet 530 adapter and ESXi

That’s a very vague subject line, I know, but I couldn’t think of anything concise. Just wanted to put some keywords so that if anyone else comes across the same problem and types something similar into Google hopefully they stumble upon this post.

At work we got some HP DL360 Gen9s to use as ESXi hosts. To these servers we added additional network cards –

  • HP FlexFabric 10Gb 2-port 534FLR-SFP+ Adapter; and
  • HP Ethernet 10Gb 2-port 530SFP+ Adapter.

Each of these adapters has two NICs. Here’s a picture of the adapters in the server and the vmnic numbers ESXi assigns to them.

server

In this picture –

  • vmnic5 & vmnic4 are the HP FlexFabric 10Gb 2-port 534FLR-SFP+ Adapter;
  • vmnic6 & vmnic7 are the HP Ethernet 10Gb 2-port 530SFP+ Adapter; and
  • vmnic0 – vmnic3 are HP Ethernet 1Gb 4-port 331i Adapter (which come in-built into the server);
  • iLO is the iLO port (which I’ll ignore for now).

We didn’t want to use vmnic0 – vmnic3 as they are only 1Gb. So the idea was to use vmnic4 – vmnic7. Two NICs would be for Management+vMotion (connecting to two different switches); two NICs would be for iSCSI (again connecting to different switches).

We came across two issues. First, the FlexFabric NICs didn’t seem to support iSCSI. ESXi showed two iSCSI adapters, but the NICs mapped to them were the regular Ethernet 10Gb ones, not the FlexFabric 10Gb ones. Second, we wanted to use vmnic4 and vmnic6 for Management+vMotion and vmnic5 and vmnic7 for iSCSI – basically a NIC from each adapter, so that even if one adapter were to fail there’s still a NIC from the other adapter for resiliency. This didn’t work. The Ethernet 10Gb NICs weren’t “connecting” to the network switch. They would connect in the sense that the link status appeared as connected and the LEDs on the switch and NICs blinked, but something was missing. There was no real connectivity.

Here’s what we did to fix these.

But first, for both these fixes you have to reboot the server and go into the System Utilities menu.

f9 system utils

Change 1: Enable iSCSI on the FlexFabric adapter (vmnic4 and vmnic5)

Once in the System Utilities menu select “System Configuration”.

system configuration

Select the first FlexFabric NIC (port 1).

select flexfabric

Then select the Device Hardware Configuration menu.

select device hardware

You will see that the storage personality is FCoE.

current flex personality

That’s the problem. This is why the FlexFabric adapters don’t show up as iSCSI adapters. Select the FCoE entry and change it to iSCSI.

new flex personality

Now press Esc to go back to the previous menus (you will be prompted to save the changes – do so). Then repeat the above steps for the second FlexFabric NIC (port 2).

With this change the FlexFabric NICs will appear as iSCSI adapters. Now for the second change.

Change 2: Enable DCB for the Ethernet adapters

From the System Configuration menu now select the first Ethernet NIC (port 1).

select ethernet

Then select its Device Hardware Configuration menu.

select device hardware (ethernet)

Notice the entry for “DCB Protocol”. Most likely it is “Disabled” (which is why the NICs don’t work for you).

current DCB

Change that to “Enabled” and now the NICs will work.

new DCB

That’s it. Once again press Esc (choosing to save the changes when prompted) and then reboot the system. Now all the NICs will work as expected and appear as iSCSI adapters too.

reboot

I have no idea what DCB does. From what I can glean via Google it seems to be a set of extensions to Ethernet that provide “hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control” (via TechNet) (also check out this Cisco whitepaper for more info). I didn’t read much into it because I couldn’t find anything that mentioned why DCB mattered in this case – as in why were the NICs not working when DCB was disabled? The NICs are connected to an HP 5920AF switch but I couldn’t find anything that suggested the switch requires DCB enabled for the ports to work. This switch supports DCB but that doesn’t imply it requires DCB.

Anyhow, the FlexFabric adapters have DCB enabled by default which is probably why they worked. That’s how I got the idea to enable DCB on the Ethernet adapters to see if it makes a difference – and it did! The only thing I can think of is that DCB also seems to include a DCBX (Data Centre Bridging Exchange) protocol which is about discovering peers, discovering mismatched configuration etc – so maybe the fact that DCB was disabled on these adapters made the switch not “see” these NICs and soft-disable them somehow. That’s my guess at least.

Creating a Server 2012 Failover Cluster for iSCSI target

This post is about setting up a Server 2012 R2 failover cluster that acts as an iSCSI target server.

I have four servers: WIN-DATA01, WIN-DATA02, WIN-DATA03, WIN-DATA04. I will be putting WIN-DATA03 & WIN-DATA04 in the cluster. As you know, clusters require shared storage, so that’s what WIN-DATA01 and WIN-DATA02 are for. In a real world setup WIN-DATA01 & WIN-DATA02 are your SAN boxes whose storage you want to make available to clients. Yes, you could have clients access the two SAN boxes directly, but by having a cluster in between you can provide failover. Plus, Windows now has a cool thing called Storage Pools which lets you do software RAID sort of stuff.

Prepare the iSCSI target server

First step, prepare the iSCSI target servers that will provide storage for the cluster. The steps for this are in my previous post so very briefly here’s what I did:
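Something along these lines (a rough sketch; the target name, path, and size below are my own examples rather than exactly what I used):

    # create a target that allows the two cluster nodes as initiators,
    # create a virtual disk to back it, and map the disk to the target
    New-IscsiServerTarget -TargetName ClusterDisks -InitiatorIds @("DNSName:WIN-DATA03.rakhesh.local","DNSName:WIN-DATA04.rakhesh.local")
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\ClusterDisk1.vhdx -SizeBytes 10GB   # .vhd on plain Server 2012
    Add-IscsiVirtualDiskTargetMapping -TargetName ClusterDisks -Path C:\iSCSIVirtualDisks\ClusterDisk1.vhdx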

Repeat the above for the other server (you can login to that server and issue commands, or do it remotely like I did below). Everything’s the same as above except for the addition of the -ComputerName switch.
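For example (again a sketch with my own example names; note the -ComputerName switch):

    New-IscsiServerTarget -TargetName ClusterDisks -InitiatorIds @("DNSName:WIN-DATA03.rakhesh.local","DNSName:WIN-DATA04.rakhesh.local") -ComputerName WIN-DATA02
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\ClusterDisk2.vhdx -SizeBytes 10GB -ComputerName WIN-DATA02
    Add-IscsiVirtualDiskTargetMapping -TargetName ClusterDisks -Path C:\iSCSIVirtualDisks\ClusterDisk2.vhdx -ComputerName WIN-DATA02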

Excellent!

Now let’s move on to the two servers that will form the cluster.

Add shared storage to the servers

Here we add the two iSCSI targets created above to the two servers WIN-DATA03 & WIN-DATA04.

Login to one of the servers, open Server Manager > Tools > iSCSI Initiator.

Targets

The easiest option is to enter WIN-DATA01 in the Target field and click Quick Connect. That should list the targets on this server in the Discovered targets box. Select the ones you want and click Connect.

Another option is to go to the Discovery tab.

Discovery tab

Click Discover Portal, enter the two server names (WIN-DATA01, WIN-DATA02), refresh (if needed), and then the first tab will automatically show all targets on these two servers. Select the ones you want and click Connect as before.

The GUI sets these connections as persistent (it calls them “Favorite Target”) so they are always reconnected when the server reboots.

You can also use PowerShell to add these connections, though that isn’t as easy as this point-and-click approach. Instructions are in my earlier post so here they are briefly:
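A sketch of the sort of thing that works (WIN-DATA01 & WIN-DATA02 being my target servers from earlier):

    New-IscsiTargetPortal -TargetPortalAddress WIN-DATA01
    New-IscsiTargetPortal -TargetPortalAddress WIN-DATA02
    # connect to every discovered target, marking the connections as persistent
    Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true }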

Unlike the GUI, PowerShell does not mark these targets as persistent so I have to specify that explicitly when connecting.

Prepare the shared storage

The shared storage we added is offline and needs to be initialized. You can do this via the Disk Management UI or using PowerShell as below. The initializing bits need to be done on any one server only (WIN-DATA03 or WIN-DATA04) but you have to make the disks online on both servers.
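Something like this, for instance (the disk numbers are from my lab; check yours with Get-Disk first):

    # on one node (WIN-DATA03, say) bring the disk online, clear read-only, and initialize it
    Set-Disk -Number 2 -IsOffline $false
    Set-Disk -Number 2 -IsReadOnly $false
    Initialize-Disk -Number 2 -PartitionStyle GPT
    # repeat for the other shared disk(s); on WIN-DATA04 only the Set-Disk (online) bits are needed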

Create the cluster

Login to one of the servers that will form the cluster, open Server Manager, go to Tools > Failover Cluster Manager and click Create Cluster on the right side (the Actions pane). This launches the Create Cluster Wizard.

Click Next on the first screen, enter the server names on the second …

Adding Servers to Cluster

… do or don’t do the validation tests (I skipped as this is a lab setup for me), give a name for the cluster and an IP address, confirm everything (I chose to not add all eligible storage just so I can do that separately), and that’s it.

Couple of things to note here:

  • If your cluster servers don’t have any interfaces with DHCP configured, you will also be prompted for an IP address. Otherwise a DHCP address is automatically assigned. (In my case I had an interface with DHCP).
  • A computer object with the cluster server name you specify is created in the same OU as the servers that make up the cluster. You can specify a different OU by giving the full name as the cluster name – so in the example above I would use “CN=WIN-CLUSTER01,OU=Clusters,OU=Servers,DC=rakhesh,DC=local” to create the object in the Clusters OU within the Servers OU. It would have been good if the wizard mentioned that.
  • Later, when you add roles to this cluster server, it creates more virtual servers automatically. These are placed in the same OU where the cluster server object is so you must give this object rights to add/ remove computers in that OU. So it’s best you have a separate OU which you can delegate rights to. I used the Delegation Control Wizard to give the WIN-CLUSTER01 object full control over Computer Objects in the rakhesh.local/Servers/Clusters OU.


Instead of the Create Cluster Wizard one can use PowerShell as below:
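Roughly like this (the static IP is just an example from my lab range):

    # -NoStorage skips adding all eligible storage, same as not ticking the box in the wizard
    New-Cluster -Name WIN-CLUSTER01 -Node WIN-DATA03,WIN-DATA04 -StaticAddress 192.168.50.10 -NoStorage
    # to place the computer object in a specific OU, give the distinguished name instead:
    # New-Cluster -Name "CN=WIN-CLUSTER01,OU=Clusters,OU=Servers,DC=rakhesh,DC=local" -Node WIN-DATA03,WIN-DATA04 -NoStorage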

Configuring the cluster

A cluster has many resources assigned to it: disks, networks, its name, and so on. These can be seen on the summary page of the cluster or via PowerShell.
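For instance, to list them:

    Get-ClusterResource
    Get-ClusterNetwork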

Cluster Resources

Network

I am interested in changing the IP address of my cluster. Currently it’s taken an IP from the DHCP pool and I don’t want that. Being a lab setup both my servers had a NAT interface to connect to the outside world and so the cluster is currently picking up an IP from that. I want it to use the internal network instead.

If I right click on the IP Address resource I can change it.

IP Address

In my case I don’t want to use this network itself so I have to go to the Networks section in the UI …

Networks

… where I can see there are two networks specified, with one of them having no cluster use while the other is configured for cluster & client use, so I right click on the first network and …

Network Settings

… enable it for cluster access, then I right click on the second network and …

Network Settings

… rename it (for my reference) as well as disable it from the cluster.

Now if I go to the resources section and right click the IP address, I can select the second network and assign a static IP address.

Here’s how to do the above via PowerShell.

To change the network name use the Get-ClusterNetwork cmdlet to select the network you want (the result is an object) and directly assign the new name as a value:
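Like so (the network names here are my examples):

    (Get-ClusterNetwork "Cluster Network 1").Name = "Internal"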

One would expect to be able to set IP addresses too via this, but unfortunately these are read-only properties (notice it only has the get method; properties which you can modify will also have the set method):
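A quick way to see this, using my renamed network as an example:

    # Address and AddressMask show up with {get} only, i.e. read-only
    Get-ClusterNetwork "Internal" | Get-Member -Name Address, AddressMask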

To set the IP address via PowerShell we have to remember that we went to the Cluster Resource view to change it. So we do the same here. Use the Get-ClusterResource cmdlet.

That doesn’t totally help. What we need are the parameters of the resource, so we pipe that to the Get-ClusterParameter cmdlet.
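Something like this (assuming the default resource name of “Cluster IP Address”):

    Get-ClusterResource "Cluster IP Address" | Get-ClusterParameter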

Perfect! And to modify these parameters use the Set-ClusterParameter cmdlet.
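A sketch, with example values for my internal network:

    Get-ClusterResource "Cluster IP Address" | Set-ClusterParameter -Multiple @{
        "Network"    = "Internal"
        "Address"    = "192.168.50.10"
        "SubnetMask" = "255.255.255.0"
        "EnableDhcp" = 0
    }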

To take the Cluster IP Address resource offline & online use the Stop-ClusterResource and Start-ClusterResource cmdlets:
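For example:

    Stop-ClusterResource "Cluster IP Address"
    Start-ClusterResource "Cluster IP Address"
    # the Cluster Name resource needs a restart too (see below)
    Stop-ClusterResource "Cluster Name"
    Start-ClusterResource "Cluster Name"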

Although it doesn’t say so, you have to also stop and start the Cluster Name resource for the name to pick up the new IP address.

Lastly, to change whether a particular network is used for cluster communication or not, one can use the same technique that was used to change the network name. Just use the Role property. I am not sure what the values for it are, but from my two networks I can see that a value of 0 means it is not used for cluster communication, while a value of 3 means it is used for cluster communication and client traffic.
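For example (network names again being my own):

    (Get-ClusterNetwork "Internal").Role = 3   # cluster communication and client traffic
    (Get-ClusterNetwork "NAT").Role = 0        # not used for cluster communication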

The GUI is way easier than PowerShell for configuring this network stuff!

Storage (Disks)

Next I want to add disks to my cluster. Usually all available disks get added by default, but in this case I want to do it manually.

Right click Failover Cluster Manager > Storage > Disks and select Add Disk. This brings up a window with all the available disks. Since this is a cluster, not every disk present on the system can be used; the disk must be visible to all members of the cluster as it is shared by them.

Add Disk

Via PowerShell:

To add these disks to the cluster pipe the output to the Add-ClusterDisk cmdlet. There’s no way to select specific disks so you must either pipe the output to a Where-Object cmdlet first to filter the disks you want, or use the Get-Disk cmdlet (as it lets you specify a disk number) and pipe that to the Add-ClusterDisk cmdlet.
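A sketch of both approaches:

    # add everything the cluster can see ...
    Get-ClusterAvailableDisk | Add-ClusterDisk
    # ... or pick a specific disk (disk number 2 is just an example)
    Get-Disk -Number 2 | Add-ClusterDisk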

(I won’t be adding these disks to my cluster yet as I want to put them in a storage pool. I’ll be doing that in a bit).

Quorum

Quorum is a very important concept for clusters (see an earlier post of mine for more about quorum).

The cluster which I created is currently in the Node Majority mode (because I haven’t added any Disk or File Share witness to it). That’s not a good mode to be in so let’s change that.

Go to the Configure Cluster Quorum Settings as in the screenshot below, click Next …

Quorum

… choose the second option (Select the quorum witness) and click Next …

Witness

… in my case I want to use a File Share witness so I choose that option and click Next …

File Witness

… create a file share someplace, point to that, and click Next …

File Share

… click Next and then Finish.

Now it’s configured.
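The PowerShell equivalent would be something along these lines (the share path is just an example):

    Set-ClusterQuorum -NodeAndFileShareMajority "\\WIN-DATA01\Witness"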

Storage (Pool)

(If you plan to create a storage pool add an extra shared disk to your cluster. I created a new target on the WIN-DATA01 & WIN-DATA02 servers (no need to do it on both, but I did so it’s consistent), mapped another virtual disk to it, and used the iSCSI initiator on WIN-DATA03 & WIN-DATA04 to map it. Storage Pools on Failover Clusters require a minimum of three disks.)

I want to use Storage Spaces (they are called Storage Pools in Failover Cluster Manager). Storage Spaces is a new feature in Server 2012 (and Windows 8) and it lets you combine disks and create virtual disks on them that are striped, mirrored, or have parity (think of it as software RAID).

We can create a new Storage Pool by right clicking on Failover Cluster Manager > Storage > Pools and selecting New Storage Pool.

Pool

Click Next, give the Pool a name …

Pool Name

… select the disks that will make up the pool (notice that the disks shown below are the iSCSI disks that are visible to both nodes; neither disk is actually on the local computer, they are both from a SAN box someplace, in this case the WIN-DATA01 & WIN-DATA02 servers where we created these before) …

Pool Disks

… and click Create.

Right click on the pool that was created and make virtual disks on that. These virtual disks are software RAID equivalents. (The pool is a placeholder for all your disks. The virtual disks are the logical entities you create out of this pool). Here’s the confirmation page of the pool I created:

Pool Virtual Disk

Once the disk is created, be sure not to uncheck the “Create a volume when this wizard closes” check box. If you did uncheck it, you’ll have to go to Server Manager > File and Storage Services > Volumes, click TASKS, and select New Volume. The virtual disk is just a disk; what we have to do now is create volumes on it.

Below is a screenshot of the volume I created. I chose ReFS for no particular reason and assigned the full space to the volume. The screenshot also shows that I didn’t assign a drive letter; that’s not accurate, I did assign one, I just forgot to do it before taking this screenshot. (Note that I assigned the drive letter R. This will be used later).

Volume

Via PowerShell:
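Roughly the following (a sketch; the pool and disk names are mine, and the clustered storage subsystem name can vary so I match it with a wildcard):

    $subsys = Get-StorageSubSystem | Where-Object FriendlyName -like "*Cluster*"
    $disks  = Get-PhysicalDisk -CanPool $true
    New-StoragePool -StorageSubSystemFriendlyName $subsys.FriendlyName -FriendlyName "Cluster Pool 1" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "Cluster Pool 1" -FriendlyName "Mirror1" -ResiliencySettingName Mirror -UseMaximumSize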

As with the GUI, once the disk is created a cmdlet has to be run to create a volume on the disk. It’s probably the New-Volume cmdlet, but it’s throwing errors in my case and I am too tired to investigate further so I’ll skip it for now.

Add Roles

Finally lets add the iSCSI target server role.

Right click Failover Cluster Manager > Roles and select Configure Roles.

Click Next, select iSCSI Target Server, and click Next. This step assumes the iSCSI Target Server role is installed on both nodes of the cluster. If not, install it via Server Manager or PowerShell.

Give the role (the virtual server that hosts the role) a name and IP address …

ISCSI Target Server

… select storage (the previously created volume; if you missed out on creating the volume go back and do it now), click Next, Next, and that’s it.

At this point it’s worth taking a step back to understand what we have done. What we did just now is create a virtual server (a role in the cluster actually) and assigned it some storage space. One might expect this storage to be the one that’s presented by the iSCSI server as a target, but no, that’s not the case. Think of this server and its storage as similar to the WIN-DATA01 and WIN-DATA02 servers that we dealt with initially. What did we do to set these as an iSCSI target? We created a target, created virtual iSCSI disks, and assigned mappings to them. That’s exactly what we have to do here too!

Ideally one should be able to use the GUI and do this, but Server Manager seems to have trouble communicating with the newly created WIN-DATA server. So I’ll use PowerShell instead.
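Here’s the sort of thing I mean (a sketch; the target name, disk path, and size are my examples, and everything is pointed at the clustered WIN-DATA role via -ComputerName):

    New-IscsiServerTarget -TargetName ClusteredTarget -InitiatorIds @("DNSName:WIN-DATA01.rakhesh.local","DNSName:WIN-DATA02.rakhesh.local") -ComputerName WIN-DATA
    New-IscsiVirtualDisk -Path R:\LUN1.vhdx -SizeBytes 5GB -ComputerName WIN-DATA
    Add-IscsiVirtualDiskTargetMapping -TargetName ClusteredTarget -Path R:\LUN1.vhdx -ComputerName WIN-DATA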

And that’s it! Now I can add this target back to the WIN-DATA01 & WIN-DATA02 servers if I want (as those are the Initiator IDs I specified) and whatever I write will be written back to their disks via this clustered iSCSI target. I am not sure if the mirroring will happen across both servers though, but this is all just for fun anyway and not a real life scenario.

Lastly …

Before I conclude here’s something worth checking out.

The WIN-DATA virtual server is currently on the WIN-DATA04 node. This means WIN-DATA04 is the one that is currently providing this role; WIN-DATA03 is on standby.

If I login to WIN-DATA04 and check its disks I will see the mirrored volume I created:
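For example, by running the following (R: being the drive letter I assigned to the volume earlier):

    Get-Volume -DriveLetter R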

If I do the same on WIN-DATA03 I won’t see the volume.

If I right click on the role in Failover Cluster Manager, select Move > Select Node, and select the new node as WIN-DATA03, then this becomes the new active node. Now if I check the volumes and disks of both servers the information will be the other way around. WIN-DATA03 will have the volume and disk, WIN-DATA04 won’t have anything!

This is how clustering ensures both servers don’t write to the shared storage simultaneously. Remember, the shared storage is just a block device. It doesn’t have any file locking nor is it aware of who is writing to it. So it’s up to the cluster to take care of all this.

Also …

Be sure to add WIN-CLUSTER01 and WIN-DATA to DNS with the correct IPs. If you don’t do that Server Manager and other tools won’t be able to resolve the names.

There’s more …

This post is just the tip of the iceberg. There are so many more cool things you can do with iSCSI, Clustering, and Storage Spaces in Server 2012 so be sure to check these out elsewhere!

Server 2012 add/ remove initiator IDs for an iSCSI target

Once you create an iSCSI target on Windows Server 2012 there doesn’t seem to be a way to add/ remove initiator IDs via the GUI. But you can use PowerShell for it. Set-IscsiServerTarget is your friend.
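Something along these lines (the target name and IQNs below are just examples):

    Set-IscsiServerTarget -TargetName Target1 -InitiatorIds @("IQN:iqn.1991-05.com.microsoft:client1.rakhesh.local","IQN:iqn.1991-05.com.microsoft:client2.rakhesh.local")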

Bear in mind this replaces the existing list with the new one.

A cool thing about most of these iSCSI cmdlets is that they work remotely too. So one can add a -ComputerName xxx switch to work with a remote computer.

Notes on iSCSI and Server Core 2012

For the Initiator (client)

Prerequisites

  1. Be sure to enable the MSiSCSI service and set its startup type to Automatic (see the sketch after this list).

  2. Ensure the iSCSI initiator outgoing & incoming rules are allowed on the firewall.
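A sketch covering both prerequisites (the firewall rule group name is what it’s called on my install; double-check yours):

    # 1. set the iSCSI initiator service to start automatically, and start it now
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI
    # 2. allow the built-in iSCSI firewall rules (inbound and outbound)
    Enable-NetFirewallRule -DisplayGroup "iSCSI Service"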

Connecting to a Target

To connect to a target you have to connect to a target portal first. The portal lets you enumerate the targets available.
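For example (WIN-DATA01 here is just an example portal; use your target server’s name or IP):

    New-IscsiTargetPortal -TargetPortalAddress WIN-DATA01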

After connecting, list the targets thus:
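That’s simply:

    Get-IscsiTarget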

Do this for each portal you are aware of:
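For instance, if there’s a second target server (WIN-DATA02 being an example name):

    New-IscsiTargetPortal -TargetPortalAddress WIN-DATA02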

Note that Get-IscsiTarget now lists targets from all the portals.

To connect to a specific target, use the Connect-IscsiTarget cmdlet:
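For example, picking a target by its IQN (the IQN below is made up; use one from the Get-IscsiTarget output):

    Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:win-data01-target1-target"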

Viewing the disks

To view the disks available, use the Get-Disk cmdlet.

Numbers 2 & 3 are the iSCSI disks.

As far as the OS is concerned these are regular disks. To use them: 1) Change their status to online, 2) Initialize the disk, and 3) Partition & Format.
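A sketch for one disk (number 2 is just an example, as is NTFS):

    Set-Disk -Number 2 -IsOffline $false
    Initialize-Disk -Number 2 -PartitionStyle GPT
    New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS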

If you have many disks (physical or via iSCSI) you can also use the Get-IscsiConnection and Get-IscsiSession cmdlets to select just the iSCSI connections you want, and pipe those to the Get-Disk cmdlet.
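For instance:

    Get-IscsiSession | Get-Disk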

Making a session persistent across reboots

By default iSCSI connections are not persistent across reboots (note the IsPersistent : False in the session output). To make a connection persistent there are two options:

  1. When connecting to the iSCSI target the first time use the -IsPersistent switch.

  2. If you want to make an existing session persistent, pipe it to the Register-IscsiSession cmdlet (see the sketch below).

To make a session non-persistent, pipe it to the Unregister-IscsiSession cmdlet.
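Putting those together (a sketch; $iqn stands for whichever target IQN you are working with):

    # option 1: persistent from the start
    Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true
    # option 2: make an existing session persistent
    Get-IscsiSession | Register-IscsiSession
    # and to undo it
    Get-IscsiSession | Unregister-IscsiSession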

Disconnecting a Target

If you would like to disconnect a session use the Disconnect-IscsiTarget cmdlet.
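For example (again with $iqn standing in for the target’s IQN):

    Disconnect-IscsiTarget -NodeAddress $iqn -Confirm:$false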

For the Target (server)

  1. Create a Target.

    When creating the target it is very important to specify the initiator IDs. The cmdlet doesn’t prompt for these if you don’t mention them (unlike the UI, which doesn’t go ahead unless initiator IDs are specified). If the initiator IDs are missing no one can see this target.

    Initiator IDs can be specified via IQNs, IP addresses, IPv6 addresses, DNS names, and MAC addresses, using the format ‘IQN:xxx’. Replace IQN with IPAddress, IPv6Address, DNSName, or MACAddress if you are specifying initiator IDs using those, followed by the address or name. For instance, see the sketch after this list.

    If there’s only one initiator ID it can be specified as it is. If there are multiple, separate them with commas. There’s no need to put them inside @(…) as in the sketch – that’s just to make it explicit to the reader that the input is an array. Separating by commas will have the same effect irrespective of the @(…) notation.

    It’s also worth pointing out that if multiple initiators are allowed to access a target, they will all be able to read/write to the target, but expect corruption. There are exceptions of course.

  2. Create an iSCSI virtual disk which will back the LUN presented by the target.

  3. Create multiple virtual disks if needed.
  4. Map these virtual disks to the target.
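Here’s a sketch tying the above together (all names, paths, and sizes are my own examples; the directory in the path needs to exist):

    # 1. create the target, allowing two initiators by IQN
    New-IscsiServerTarget -TargetName Target1 -InitiatorIds @("IQN:iqn.1991-05.com.microsoft:client1.rakhesh.local","IQN:iqn.1991-05.com.microsoft:client2.rakhesh.local")
    # the same thing specifying an initiator by DNS name instead:
    # New-IscsiServerTarget -TargetName Target1 -InitiatorIds "DNSName:client1.rakhesh.local"
    # 2 & 3. create two virtual disks to back the LUNs (.vhd on plain Server 2012)
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN1.vhdx -SizeBytes 10GB
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN2.vhdx -SizeBytes 10GB
    # 4. map both virtual disks to the target
    Add-IscsiVirtualDiskTargetMapping -TargetName Target1 -Path C:\iSCSIVirtualDisks\LUN1.vhdx
    Add-IscsiVirtualDiskTargetMapping -TargetName Target1 -Path C:\iSCSIVirtualDisks\LUN2.vhdx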

And that’s it. Initiators can now connect to the target. In the example above, since the target has two LUNs, initiators will see two disks after connecting.