Microsoft Network Load Balancing Configuration on VMware vSphere

Network Load Balancing is implemented as a special driver installed on each Windows host in a cluster. The cluster presents a single IP address to its clients. When a client request arrives, it goes to all the hosts in the cluster, and an algorithm implemented in the driver maps the request to a particular host; the remaining hosts in the cluster drop the request.
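This filter-and-drop behaviour can be sketched in Python. The hash below is only an illustrative stand-in, since the actual NLB distribution algorithm is internal to the driver; the client address, port, and host index are hypothetical:

```python
import hashlib

def owning_host(client_ip: str, client_port: int, num_hosts: int) -> int:
    """Map a client request to exactly one cluster host.

    Every host receives the packet and runs the same deterministic
    hash; only the host whose index matches handles the request, and
    the rest silently drop it.
    """
    key = f"{client_ip}:{client_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_hosts

# Host 2 in a 4-node cluster decides whether to accept a packet:
my_index = 2
accept = owning_host("203.0.113.7", 51515, 4) == my_index
```

Because every host computes the same function on the same packet, exactly one host accepts each request without the hosts having to coordinate per packet.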

Hosts in the cluster exchange heartbeat messages so they can maintain consistent information about what hosts are members of the cluster. If a host fails, client requests are rebalanced across the remaining hosts, with each remaining host handling a percentage of requests proportional to the percentage you specified in the initial configuration.
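The proportional rebalancing described above can be sketched in Python; the host names and load weights are hypothetical:

```python
def rebalance(weights: dict[str, float], failed: set[str]) -> dict[str, float]:
    """Redistribute traffic shares after a host failure.

    Surviving hosts keep their configured load weights relative to one
    another; the failed host's share is spread proportionally.
    """
    survivors = {h: w for h, w in weights.items() if h not in failed}
    total = sum(survivors.values())
    return {h: w / total for h, w in survivors.items()}

# node3 fails; node1 and node2 split its 20% share proportionally:
shares = rebalance({"node1": 50, "node2": 30, "node3": 20}, {"node3"})
```

After convergence, node1 handles 50/80 = 62.5% of requests and node2 handles 37.5%.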

Network Load Balancing relies on the fact that incoming packets are directed to all cluster hosts and passed to the Network Load Balancing driver for filtering.

“Each server in a Load Balancing Cluster is configured with a virtual IP address. The virtual IP address is configured on all the servers that are participating in the load balancing ‘cluster’ (a loose term that is unrelated to the Microsoft Cluster Service). When a request is made on this virtual IP, a network driver on each of these machines intercepts the request for the IP address and re-routes the request to one of the machines in the Load Balancing Cluster based on rules that you can configure for each of the servers in the cluster.”

Microsoft NLB Modes:

A Microsoft NLB cluster can be configured in one of two communication modes:
  • Unicast
  • Multicast
Unicast Mode:

In unicast mode, all the NICs assigned to a Microsoft NLB cluster share a common MAC address. This requires that network traffic on the switches be port-flooded to all the NLB nodes. Normally, port flooding is avoided in switched environments because a switch learns the MAC addresses of the hosts sending traffic through it. To keep the switch from learning the cluster's MAC address, the Microsoft NLB driver masks that address on all outgoing traffic.
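The shared cluster MAC can be illustrated with a short Python sketch. NLB derives the unicast cluster MAC as 02-BF-W-X-Y-Z, where W.X.Y.Z are the octets of the cluster's virtual IP (the example IP below is hypothetical):

```python
def unicast_cluster_mac(cluster_ip: str) -> str:
    """Derive the shared unicast-mode cluster MAC (02-BF-W-X-Y-Z)
    from the cluster's virtual IP W.X.Y.Z."""
    w, x, y, z = (int(octet) for octet in cluster_ip.split("."))
    return "-".join(f"{b:02X}" for b in (0x02, 0xBF, w, x, y, z))

unicast_cluster_mac("10.0.0.80")  # "02-BF-0A-00-00-50"
```

Every node's NLB adapter is reassigned this same MAC, which is why the switch must flood traffic to reach all of them.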

Benefits of Unicast Mode:
  • A benefit of unicast mode is that it works out of the box with all routers and switches (since each network card only has one MAC address).
  • In unicast mode, since all hosts in the cluster share the same MAC and IP address, they cannot communicate with each other through their NLB network cards.
Unicast mode reassigns the station (MAC) address of the network adapter for which it is enabled, so all cluster hosts are assigned the same MAC (media access control) address. ESXi/ESX must not send ARP or RARP packets to update the physical switch port with the actual MAC address of the NICs, because doing so breaks unicast NLB communication. To prevent RARP packet transmission for a virtual switch, set Notify Switches to No on the vSwitch.

Multicast Mode:

Multicast mode does not have this problem, because the servers can still communicate with each other via the original addresses of their NLB network cards.

Each server’s NLB network card operating in multicast mode has two MAC addresses (the original one and a virtual one for the cluster), which causes a different problem. Most routers reject the ARP replies sent by hosts in the cluster: the router sees a response that pairs a unicast IP address with a multicast MAC address, considers it invalid, and refuses the update to its ARP table. In this case, you must manually configure static ARP resolution at the switch or router for each port connecting to the ESXi/ESX host’s NICs.
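In multicast mode the virtual cluster MAC is 03-BF-W-X-Y-Z (the 03 prefix sets the multicast bit), again derived from the virtual IP W.X.Y.Z. A small Python sketch, with a hypothetical example IP, shows the address the static ARP entry must point at:

```python
def multicast_cluster_mac(cluster_ip: str) -> str:
    """Derive the multicast-mode cluster MAC (03-BF-W-X-Y-Z) from the
    cluster's virtual IP W.X.Y.Z; 03 marks the address as multicast."""
    w, x, y, z = (int(octet) for octet in cluster_ip.split("."))
    return "-".join(f"{b:02X}" for b in (0x03, 0xBF, w, x, y, z))

# The static ARP entry pairs the unicast virtual IP with this
# multicast MAC, e.g. 10.0.0.80 -> 03-BF-0A-00-00-50.
multicast_cluster_mac("10.0.0.80")
```

It is exactly this unicast-IP-to-multicast-MAC pairing that routers reject in their ARP tables, which is why the mapping has to be entered statically.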

VMware recommends configuring the cluster to use NLB multicast mode, even though NLB unicast mode should function correctly if you complete these steps. This recommendation is based on the possibility that the settings described in these steps might affect vMotion operations on virtual machines. Unicast mode also forces the physical switches on the LAN to flood all NLB cluster traffic to every machine on the LAN. If you plan to use NLB unicast mode, ensure that:
  • All members of the NLB cluster run on the same ESXi/ESX host.
  • All members of the NLB cluster are connected to a single portgroup on the virtual switch.
  • The Forged Transmit security policy on the portgroup is set to Accept.
  • Each NLB server has two NICs, as VMware recommends.
Note that vMotion is not supported for unicast NLB virtual machines.
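The two vSwitch settings mentioned above (Notify Switches set to No, Forged Transmit set to Accept) can also be scripted. The following is only a sketch using the pyVmomi SDK: it assumes a HostSystem object `host` has already been looked up through an authenticated ServiceInstance, and the names "vSwitch0" and "NLB-PG" are placeholders for your own vSwitch and portgroup:

```python
from pyVmomi import vim  # VMware vSphere API Python bindings

def configure_nlb_networking(host):
    """Apply the unicast-NLB vSwitch settings on an ESXi host.

    `host` is a vim.HostSystem; vSwitch/portgroup names below are
    placeholders for this sketch.
    """
    net_sys = host.configManager.networkSystem

    # Notify Switches = No on the vSwitch, so ESXi does not send RARP
    # frames that would reveal the NICs' real MAC addresses.
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name == "vSwitch0":
            spec = vsw.spec
            spec.policy.nicTeaming.notifySwitches = False
            net_sys.UpdateVirtualSwitch("vSwitch0", spec)

    # Forged Transmit = Accept on the portgroup carrying NLB traffic,
    # so frames sourced from the shared cluster MAC are not dropped.
    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name == "NLB-PG":
            spec = pg.spec
            spec.policy.security.forgedTransmits = True
            net_sys.UpdatePortGroup("NLB-PG", spec)
```

The same settings can of course be made interactively in the vSphere Client; the script form is useful when the cluster spans several identically configured portgroups.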
