NIC teaming in Windows Server 2012

Windows Server 2012 includes NIC teaming as part of the OS now, so there’s no need for additional software. I find that pretty cool. It’s quite easy to set up too. You can do it via the Server Manager GUI or use PowerShell. Just one PowerShell cmdlet, that’s it!
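For instance, this is all it takes to create a team (the adapter and team names below are placeholders – run `Get-NetAdapter` first to see what your adapters are actually called):

```powershell
# List the network adapters on the box to get their names.
Get-NetAdapter

# Create a team called "Team1" from two adapters.
# "Ethernet 1" and "Ethernet 2" are example names - yours will differ.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1","Ethernet 2"
```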

So what is NIC teaming?

Scenario 1: Say your server has one network card (NIC) and is connected to one switch port. If the NIC fails for any reason, your server is kaput – offline! Similarly if the switch fails for any reason, your server is kaput.

Scenario 2: Say your server has two 1GbE NICs. That is, two NICs that support 1Gbps bandwidth each. And say your server runs two network intensive applications, each of which uses 1Gbps bandwidth. Even though you have two NICs of 1Gbps each, you can’t use them together, so there’s no way to balance your two applications over the two NICs. You are stuck with just 1Gbps bandwidth.

These are two areas where NIC teaming can help.

NIC teaming simply means you combine your NICs into one team. (Windows Server 2012 can combine up to 32 such NICs into one team. And these NICs can be from different hardware vendors too!) Before Windows Server 2012 you needed third party software to do NIC teaming – often from the network card manufacturer. But if you have Windows Server 2012 this is built into the OS.

The two usage scenarios above are called Load Balancing (LB) and FailOver (FO) – hence another name for NIC teaming is LBFO.

Load Balancing means you team your NICs into one and traffic is balanced between the two NICs. If you had two 1GbE NICs, the net effect is not as though you had a 2GbE NIC. No, the NICs are still independent in terms of bandwidth, but since you can use both NICs you are able to get more bandwidth. If you have an application on your server communicating with some client, all traffic between them related to a particular “flow” goes over the same NIC. But traffic to other clients can go over the other NIC, and so can traffic to the same client that is unrelated to the prior flow. There is a reason for this. If traffic for a particular flow went over both NICs, you could have packets arriving out of order (maybe one NIC has a faster path than the other), and that leads to the overhead of reassembling these packets.

Failover means you team your NICs into one and if one NIC fails traffic isn’t interrupted as it goes through the other NIC. This also helps in cases where the switch that a NIC is connected to, or the cable between them, goes down. When that NIC goes offline, other NICs will take over.

Now, NIC teaming isn’t a one-sided job. Although the NICs may be teamed, as far as the network switch(es) they connect to are concerned, they are still separate. The switch(es) see one IP address associated with multiple MAC addresses – on different ports, possibly on different switches – and so you need to involve the switch(es) too in your plan of NIC teaming. That said, you may choose not to involve the switches if you want, but that has its own implications.

If you decide to involve the switch, there are two ways of going about the teaming. One is a static way, in which your switch admin designates the ports that the multiple NICs connect to as part of a team, and then the switch will take care of things. This has a drawback: if you move the NICs to any other switch ports, teaming breaks unless the network admin reconfigures the new ports on the switch.

The other way is an automatic way, in which if your switch(es) support the Link Aggregation Control Protocol (LACP; also referred to as IEEE 802.3ad or 802.1AX) you can use any ports on the switch and the ports will automatically figure out they are part of a team. Of course, if you decide to use LACP you have to use LACP-capable switches and also tell the bits in the OS responsible for teaming that the switch knows LACP so it can behave accordingly. LACP works by sending frames (called LACPDUs) from the NIC to the switch to determine which ports are part of the same team and then combining these ports into a single link.

It’s important to note here that teaming which involves the switch, usually called “switch dependent” teaming, requires all the NICs to be connected to the same switch (but don’t quote me on this as I am not a networks guy and this is just my understanding!).

If you decide not to involve the switch, usually called “switch independent” teaming, you can use multiple switches as the switches don’t know anything and all the action happens at the NIC end. This has some implications though, in that traffic from the switch to your teamed NICs will not be distributed, as the switch doesn’t know of the multiple NICs. As far as the switch(es) know, your teamed NIC interface has one MAC address – that of one of the NICs – and so that NIC is the one which will get all the incoming traffic.

(As an aside, this is why, if we decide to take a NIC out of the team and use it independently, it is a good idea to disable and re-enable the team. If the NIC we are taking out has the same MAC address as the team, it won’t work because the switch won’t be aware of it. Disabling and re-enabling the team will cause the teamed NIC to pick up a new MAC address from one of the remaining NICs.)

When setting up NIC teaming in Windows Server 2012 you can choose from the options above.
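In PowerShell the choice is made with the `-TeamingMode` parameter of `New-NetLbfoTeam`. To my understanding it maps onto the options above as follows (team and adapter names are placeholders):

```powershell
# Switch independent (the default) - the switches are not involved.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Static (switch dependent) - the switch admin configures the ports as a team.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static

# LACP (switch dependent) - the ports discover the team automatically via LACPDUs.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp
```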

Apart from deciding how the teaming is set up, we also have to select what algorithm is used for load balancing (the LB in LBFO). Windows Server 2012 gives you two options if you use the GUI, and five options if you use PowerShell. Let’s look at these now.

HyperVPort

  • This is used in case your Windows Server 2012 box functions as the host for a bunch of VMs. Each VM only sees one NIC – the teamed NIC – but internally, the host assigns each VM to a particular NIC. (Say you have two NICs in the team, and 5 VMs. VMs 1,3,5 could be on NIC 1, VMs 2,4 could be on NIC 2).
  • Obviously, selecting such a load balancing algorithm means any particular VM is always limited by the bandwidth of the NIC it is internally assigned to. It can’t aggregate over the other NICs to increase its bandwidth.
  • If you do switch independent teaming, the switch will send incoming traffic for each VM to the NIC assigned to it, as the switch is aware these are different VMs. If you do switch dependent teaming, the incoming traffic is distributed anyway.
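A sketch of creating a team with this algorithm on a Hyper-V host (names are placeholders):

```powershell
# HyperVPort load balancing - each VM gets pinned to one member NIC.
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -LoadBalancingAlgorithm HyperVPort
```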

Address Hashing

  • This option creates a hash based on various components of the packet, and assigns all packets with that hash to a particular NIC. The components chosen for the hash vary. You can choose to use the source and destination IP addresses and TCP & UDP ports. You can choose to use the source and destination IP addresses only. Or you can choose to use the source and destination MAC addresses. (If you choose the first option, for instance, and the packet is, say, an IPSec packet and hence has no TCP or UDP port, the second option is automatically used. Similarly, if you choose the second option and it cannot be used for any reason, the last option is used.)
  • If you do switch independent teaming, even though the outgoing NIC keeps varying, the teamed interface presents the MAC address of one of the NICs, and that’s the MAC address the switch(es) the NICs connect to are aware of. So all incoming traffic from the switch is sent to that particular NIC whose MAC address is used. If you do switch dependent teaming, the incoming traffic is distributed anyway.
  • When using the GUI, the default option is the first one (use source and destination IP addresses and TCP & UDP ports).
  • When using PowerShell too, the default option is the first one (it is called TransportPorts). The second option is called IPAddresses and the third option is called MacAddresses.
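Assuming a team already exists, the hashing mode can be changed with `Set-NetLbfoTeam` and its `-LoadBalancingAlgorithm` parameter ("Team1" is a placeholder name):

```powershell
# The default is TransportPorts (source/destination IPs + TCP/UDP ports).
# Switch an existing team to hashing on IP addresses only:
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses

# Or on MAC addresses only:
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses
```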

Dynamic

I couldn’t find much information on this. It is an option available only in PowerShell (where it is called Dynamic), and it seems to be an algorithm that automatically load balances between NICs depending on their load. When using switch independent teaming, incoming traffic is not distributed and is sent to a particular NIC; when using switch dependent teaming, incoming traffic is distributed.
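If you want to try it, the same `Set-NetLbfoTeam` cmdlet should work (again, the team name is a placeholder, and you can inspect the result afterwards):

```powershell
# Switch an existing team to the Dynamic algorithm...
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic

# ...and verify the team's current mode and algorithm.
Get-NetLbfoTeam -Name "Team1"
```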

That’s all!