vSphere and vCloud Host 10Gb NIC Design with UCS & More

I've done vSphere 5 NIC designs using 6 NICs and 10 NICs, but this one is going to be a bit different. I'm only going to focus on 10GbE NIC designs as well as Cisco UCS. Let's be honest with ourselves: 10GbE is what everyone is moving to, and if you are implementing vCloud Director, it's probably going to be in a 10GbE environment anyway.


I've always considered that a good vCloud design is based on a good vSphere design, which still holds true for the most part. In a few recent engagements I've been involved in, I've seen architects want to use 4 NICs for their vCloud Hosts… and here's why.


When you are designing a vCloud environment, most people tend to use VCDNI (vCloud Director Network Isolation, soon to be VXLAN), which adds a little more complexity to the design. During the deployment of a VCDNI network, a new port group is created on the vNetwork Distributed Switch (vDS). This port group is automatically created by vCloud Director and therefore inherits the following settings (which it is strongly recommended to NEVER change):

  • The default NIC behavior is to always choose dvUplink1 on the vSphere Distributed Switch
  • dvUplink1 is set as the Active NIC while all other NICs attached to the vDS are set to Stand-by
  • The port group is set as "route based on originating port id"
  • The security settings are the original defaults with:
    • Promiscuous Mode: Reject
    • MAC Address Changes: Accept
    • Forged Transmits: Accept
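Since vCD expects these defaults to stay untouched, it's worth auditing them periodically. Here's a minimal sketch of such a check in Python; the dictionary keys and values are my own labels for the settings above, not real pyVmomi property names, so treat it as a template rather than a working audit script:

```python
# Hypothetical compliance check for vCD-created VCDNI port groups.
# The keys below are illustrative labels, not pyVmomi property paths.

EXPECTED_VCDNI_SETTINGS = {
    "teaming_policy": "route based on originating port id",
    "active_uplinks": ["dvUplink1"],       # all other uplinks stand-by
    "promiscuous_mode": "reject",
    "mac_address_changes": "accept",
    "forged_transmits": "accept",
}

def vcdni_deviations(actual: dict) -> list:
    """Return the keys where a port group drifts from vCD's defaults."""
    return [key for key, expected in EXPECTED_VCDNI_SETTINGS.items()
            if actual.get(key) != expected]
```

Anything this returns is a setting someone changed by hand and should change back.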


So what does this potentially mean for your design and NIC considerations? It usually means that the NIC assigned to dvUplink1 will be constantly utilized. I honestly don't know if the VCDNI/VXLAN port groups will ever choose a dvUplink other than dvUplink1, but in all of my testing only dvUplink1 has been chosen. You need to take this into account, so here are a few designs I have created using 4x10GbE; 2x10GbE and 2x1GbE; and 2x10GbE with UCS.

[Diagram: 4x10GbE design with two vDSes]

In this 4x10GbE NIC design, we are using 2 vDSes. The first vDS is used for basic vSphere functions including Management, vMotion, Fault Tolerance, and IP Storage. IP Storage isn't necessary if you are using Fibre Channel, but it's there if you use NFS or iSCSI. It's encouraged that all of these port groups be set to "Route Based on Physical NIC Load". You are probably wondering: why am I creating an FT network when it's not even supported by vCloud? You're right, it's not supported by vCloud, but who knows when or if it ever will be. Since you are springing for Enterprise+ licensing already, you might as well have the feature handy. It's not a difficult setup, and it's better to have all the features up and available even if they aren't being used. The second vDS is used for VCDNI traffic. Of course, only 1 of these 10GbE NICs is going to be used because of the Active/Stand-by rules put in place by vCD, but at least there is a backup. This second vDS houses all of the automated networks, so you have a clean separation of vSphere and vCloud functions. I also moved the vCloud External Network connections down to this switch to get better utilization of all the NICs. This scenario also protects you from the failure of an entire NIC card, though it might be the most expensive option.
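To make the separation concrete, here's a rough Python model of this 4x10GbE layout. The vmnic and port group names are assumptions for illustration, not values pulled from a real host:

```python
# Illustrative model of the 4x10GbE / 2-vDS design described above.
# vmnic and port group names are hypothetical placeholders.

HOST_LAYOUT = {
    "vDS1": {  # vSphere functions, "Route Based on Physical NIC Load"
        "uplinks": ["vmnic0", "vmnic1"],
        "port_groups": ["Management", "vMotion", "FT", "IP-Storage"],
    },
    "vDS2": {  # vCloud traffic: VCDNI plus External Networks
        "uplinks": ["vmnic2", "vmnic3"],
        "port_groups": ["VCDNI", "External-Networks"],
    },
}

def uplinks_for(port_group: str) -> list:
    """Find which physical uplinks can carry a given port group."""
    for vds in HOST_LAYOUT.values():
        if port_group in vds["port_groups"]:
            return vds["uplinks"]
    raise KeyError(port_group)
```

The point of the model: no vSphere port group ever shares a physical uplink with vCloud traffic, so a saturated dvUplink1 on the VCDNI side can't starve Management or vMotion.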

[Diagram: 2x10GbE and 2x1GbE design, 1GbE carrying VCDNI]

This next design uses 2x10GbE and 2x1GbE. It very closely resembles the previous design, except the second vDS uses the 1Gb NICs instead of 10Gb. I chose to go with 1Gb for VCDNI and inter-VM traffic because the amount of data that needs to flow through this pipe isn't insane. When you think about how we design for VM traffic in a complete 1Gb infrastructure, having 2x1GbE NICs is sufficient. The 10GbE NICs will be good for external network traffic, vMotion, and more, especially if you're using IP Storage. This option, on the other hand, doesn't protect you from a complete NIC card failure: if the 1GbE on-board card dies, communication on the VCDNI/VXLAN networks will stop, and you may or may not be notified. The host will continue to host VMs, but inter-VM communication on VCDNI/VXLAN networks will stop on that host, so you will be left troubleshooting a hidden problem.

[Diagram: 2x10GbE active with 2x1GbE as failover]

This second option utilizes the two 10GbE NICs and only falls back to the 1GbE NICs in case of a NIC failure. This design scares me a little bit. What happens if you lose your 10GbE NIC card and your host reverts to using 1GbE NICs? Of course, your host will remain alive, but performance for the vApps and VMs sitting on that host will be severely degraded. Communication between VMs on different hosts will also likely suffer some unexpected latency. As soon as this problem is discovered, you will want to put the host in maintenance mode. In addition, if you are hosting VMs on iSCSI, you will want to bind the VMkernel port only to the 10GbE adapters, because if the PSP selects a path over 1GbE, or Round Robin is configured, the storage will see bad response times. If you lose the 10GbE adapter, you then lose access to the iSCSI LUNs, which isn't a good thing. So I basically built this diagram to show you what NOT to do. :)
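A quick back-of-envelope shows why Round Robin across mixed-speed paths hurts: if IOs are dealt evenly across a 10GbE and a 1GbE path, the slow path paces everything. This is a deliberately simplified toy model with illustrative line rates, not measured storage numbers:

```python
# Toy model: Round Robin path selection deals IOs evenly across paths,
# so in steady state every path carries the same share of IOs and the
# slowest path sets the pace. This is a simplification for intuition,
# not a model of real PSP queueing behavior.

def round_robin_throughput_gbps(path_speeds_gbps: list) -> float:
    """Approximate aggregate throughput when IOs alternate evenly:
    each path runs at the slowest path's pace."""
    return min(path_speeds_gbps) * len(path_speeds_gbps)
```

Under this model, Round Robin across a 10GbE and a 1GbE path yields roughly 2 Gbps aggregate, versus 10 Gbps from simply binding iSCSI to the 10GbE adapter alone.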

[Diagram: 2x10GbE single-vDS design]

This last design is a very simple one and uses only 2x10GbE NICs. It pools everything onto a single vDS, and this is really where "Route Based on Physical NIC Load" plays a key role. It's very important to make sure Management, vMotion, FT, and IP Storage are all set to "Route Based on Physical NIC Load". Since VCDNI/VXLAN networks will always use dvUplink1 by default, we let vSphere decide when and where to send all the vSphere-related traffic. This design allows a good amount of redundancy as well: if a 10GbE NIC fails, all traffic fails over to the secondary NIC, and if the complete 10GbE NIC card fails, the host triggers an HA failover event and the VMs are restarted elsewhere. There is a temporary interruption of vApps/VMs, but at least performance doesn't suffer for an indefinite amount of time while you are troubleshooting.
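For intuition, here's a simplified sketch of how Load-Based Teaming decides to move traffic. The 75% saturation threshold matches VMware's documented behavior for "Route Based on Physical NIC Load" (evaluated over 30-second windows); everything else in the model, including moving all of a port's traffic in one step, is my simplification:

```python
# Simplified sketch of Load-Based Teaming: vSphere periodically checks
# mean uplink utilization and moves traffic off an uplink that exceeds
# ~75% saturation. The threshold is documented LBT behavior; the
# one-shot rebalance below is a simplification for illustration.

LBT_THRESHOLD = 0.75  # fraction of uplink capacity

def rebalance(utilization: dict, current_uplink: str) -> str:
    """Return the uplink a port should use after one LBT evaluation."""
    if utilization[current_uplink] <= LBT_THRESHOLD:
        return current_uplink  # current uplink isn't saturated; stay put
    # otherwise move to the least-utilized uplink
    return min(utilization, key=utilization.get)
```

So when VCDNI traffic pins dvUplink1 at high utilization, LBT naturally shifts the vSphere port groups over to dvUplink2 without any manual intervention.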


This is also the design I chose to use when deploying vCloud Hosts with Cisco UCS. You might be wondering: if the Cisco VIC adapters can create 300+ vNICs and multiple vHBAs, why wouldn't I just create multiple vNICs and separate the traffic onto multiple vDSes? It's the same reason VCE's standard practice is to deploy only 2 vNICs and 2 vHBAs on a vSphere host: no matter how many vNICs or vHBAs you spin up, there are only 2 physical 10GbE adapters on the backend actually pushing traffic.


I thought about this in terms of UCS and tried analyzing how vSphere actually defines "Route Based on Physical NIC Load". Does vSphere monitor bandwidth from a top-down approach, or does it actually look at the utilization of the physical adapter? This is something that can often get confusing when using UCS. If vSphere uses a top-down approach, then it is aware of all the traffic being placed on the virtualized adapter, giving vSphere the ability to monitor the load there and decide when to change uplinks. If we were to go with a 4-vNIC and 2-vDS approach, it would mean that vNIC 1 and vNIC 3, while on completely different vDSes, are actually utilizing the same physical adapter on the backend. Thanks to a comment from Scott Lowe ("if UCS were using SR-IOV (and vSphere supported SR-IOV), that might be a different story, but today SR-IOV is not involved"), we know that vSphere is completely unaware of the actual load on that physical NIC since it has been virtualized. That is why we opt for the 2-vNIC approach and allow vSphere to do its magic. Alternatively, you could use UCSM and the Nexus 1000V to put QoS on the uplinks and stick with the 4-vNIC and 2-vDS approach; of course, this adds a bit more complexity, and you need the level of knowledge to implement those features.
