
vSphere Host NIC Configuration

EDIT: 11/29/2011

I have now updated these to reflect vSphere 5 changes!

VMware vSphere 5 Host Network Design Layout and Configuration

 

EDIT: 4/29/2010

I have now created 3 pages of vSphere NIC Design and Configurations. Each one has different diagrams and information. Sorry for not being completely consistent across the board:

vSphere Host NIC Design - 6 NICs

vSphere Host NIC Design - 10 NICs

vSphere Host NIC Design - 12 NICs

 

Thanks - Kenny

 

EDIT: Updated 2/05/2010 - with new pictures of vSphere vSwitch and vDS

**One thing that I completely forgot to mention is that in my diagrams, I have 2 stacked Cisco 3750G switches. I forgot to put the stacking cable in the diagram, but just know that every configuration should be using some sort of stacked switching.**

There has been some talk over the past 2 days about host NIC configurations. @Kiwi_Si is hosting a poll on his site, TechHead.co.uk, to see the most common configuration among our peers, which inspired me to create this post.

Here is what I like to do in my configurations: use the 2 on-board NICs plus 2 quad-port (4x1Gb) expansion NICs, giving me a grand total of 10 NICs to play around with. Here is a diagram of how I plan to design my vSphere NIC layout. I still haven't fully configured Fault Tolerance on a set of physical servers, so let me know if you see something amiss.

Thanks to a pointer from @darylhunter: be careful mixing the on-board Broadcom NICs with the Intel 4x1Gb PCI NICs, as there can be issues such as flow control.
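If you want a quick way to see exactly what you're working with before you start teaming, here is a minimal sketch using pyVmomi (the vSphere Python API bindings) that lists every physical NIC and its driver per host, so mixed Broadcom (bnx2/tg3) and Intel (e1000/igb) uplinks stand out. The vCenter hostname and credentials are placeholders, so adjust for your own environment:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification. Use proper certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for pnic in host.config.network.pnic:
            # Driver names like bnx2/tg3 (Broadcom) vs e1000/igb (Intel)
            # make it easy to spot mixed vendors on one team.
            print("  %s: driver=%s mac=%s" % (pnic.device, pnic.driver, pnic.mac))
finally:
    Disconnect(si)
```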

Click the picture for a bigger shot. This picture was my first attempt and was later changed to the one below.

Here is my current setup for each host. After chatting with @gabvirtualworld in a Google Wave, he made a good point about putting the Service Console and vMotion on one port group across 2 NICs instead of 4, because vMotion and the Service Console only carry traffic when you remote into the host or are moving virtual machines. This frees up the other 2 NICs for complete Fault Tolerance and lets me add another NIC to the virtual machine network traffic group. For each host you add, be sure not to put all of the Actives on one switch and all of the Stand-bys on the other; mix it up a bit so that if you have a switch failure, all of your hosts won't be going through the same changes.
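If you would rather script the Active/Stand-by piece than click through the vSphere Client, here is a minimal pyVmomi sketch of the same idea: a port group on a standard vSwitch with explicit failover order, one vmnic Active and the other Stand-by, reversed between the Service Console and vMotion port groups. The port group names, VLAN IDs, and vmnic numbers are just examples, and the host object is a vim.HostSystem pulled the same way as in the listing above:

```python
# Sketch only: explicit-failover port group on a standard vSwitch via pyVmomi.
# Names, VLAN IDs, and vmnics below are examples, not my production values.
from pyVmomi import vim

def add_failover_portgroup(host, pg_name, vlan_id, vswitch, active, standby):
    """Create a port group whose teaming policy pins Active/Stand-by uplinks."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",  # use the explicit NIC order below, no load balancing
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    host.configManager.networkSystem.AddPortGroup(spec)

# Service Console active on vmnic0, vMotion active on vmnic1, each backing up the other.
# add_failover_portgroup(host, "Service Console", 10, "vSwitch0", ["vmnic0"], ["vmnic1"])
# add_failover_portgroup(host, "vMotion", 20, "vSwitch0", ["vmnic1"], ["vmnic0"])
```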

Another setup I contemplated was @darylhunter's idea of dedicating 1 NIC as a mirror port for any VMs that run Wireshark or some other packet sniffing software.

My original goal was to use jumbo frames on the SAN ports on the switch, but I found out that the Cisco 3750G doesn't support MTU changes on individual interfaces; instead, you have to change the MTU size globally for all interfaces. Bummer. If you have a bigger environment, as a rule of thumb you should have a separate SAN traffic network with dedicated switches. If I had access to a Nexus or a 4507-R, I would use these 3750Gs (or even a 3750E for 10GigE support) as dedicated SAN switches and enable jumbo frames.

EDIT: 1/6/2009 - WE NOW HAVE JUMBO FRAMES!!! I found out that when you enable jumbo frames on a 3750G it is enabled globally, but standard traffic still flows; jumbo frame traffic only flows once you have configured your setup end to end. So set up jumbo frames on your ESX NICs for SAN traffic, and set up jumbo frames on your SAN NICs. Everything seems to be working smoothly and VMs are much faster, so don't be scared to enable jumbo frames globally.
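For the ESX side of that end-to-end setup, here is a hedged pyVmomi sketch of the MTU change: it bumps a storage vSwitch and its VMkernel NIC to 9000. The vSwitch and vmk names are placeholders, and the physical switch ports and the SAN itself still need jumbo frames enabled separately:

```python
# Sketch only: jumbo frames on the ESX side via pyVmomi. The vSwitch/vmk names
# are placeholders; the physical switch and SAN still need MTU 9000 themselves.

def enable_jumbo_frames(host, vswitch_name, vmk_device, mtu=9000):
    """Raise the MTU on a storage vSwitch and its VMkernel NIC to match the SAN."""
    ns = host.configManager.networkSystem

    # Reuse the existing vSwitch spec so ports, uplinks, and policy stay intact.
    for vsw in ns.networkInfo.vswitch:
        if vsw.name == vswitch_name:
            spec = vsw.spec
            spec.mtu = mtu
            ns.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

    # Bump the VMkernel NIC used for SAN traffic to the same MTU.
    for vnic in ns.networkInfo.vnic:
        if vnic.device == vmk_device:
            nic_spec = vnic.spec
            nic_spec.mtu = mtu
            ns.UpdateVirtualNic(device=vmk_device, nic=nic_spec)

# enable_jumbo_frames(host, "vSwitch2", "vmk1")  # example storage vSwitch and vmknic
```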

EDIT: 2/5/2010 - Here is a picture of how I have everything set up inside of vSphere. I use 1 vSwitch to take care of Service Console and vMotion, and 3 Virtual Distributed Switches (vDS) to take care of Virtual Machine traffic, Fault Tolerance, and the Storage Network. Why don't I use a vDS for Service Console and vMotion? I like to have control over this type of traffic. To minimize risks such as flow control issues and improper load balancing, you are better off telling your server where to send traffic during normal operation (Active) and where to go if that NIC fails (Stand-by). To achieve this type of configuration on the vSwitch, you have to set up trunking on your physical switch and tag VLANs through vSphere, just like you do with Virtual Machines on different VLANs.

 

Here is a diagram of an older ESX 3.5 environment with a separate storage network. I would upgrade the 3560 switches and change the NIC configuration so the Service Console and vMotion NICs are on the Nexus or 4507-E. This diagram has the vMotion NICs going to the SAN network; I wouldn't put them there.

Feel free to comment on how you would configure it differently, or hit me back on Twitter: @KendrickColeman

 

