
vSphere 5 iSCSI Disconnects When Setting Virtual Distributed Switch to Jumbo Frames Using Hardware iSCSI Initiator

 

OK readers, I've got a task I need help with. My vSphere 5 license keys don't allow me to file bug reports with VMware, and since I'm running vSphere on my Shuttle boxes, they would automatically call me out for not being on the HCL. I'm asking anyone with a few minutes and a setup they can afford to break to please test something out.

 

I'm currently rebuilding my lab from vSphere 5 beta build 384847 to vSphere 5 GA build 469512; vCenter is running GA build 455964. I mimic a 10GbE environment by running everything over two 1GbE NICs on a Virtual Distributed Switch (vDS). The issue arises when setting the MTU on the vDS to anything greater than 1500. I need the vDS at 1524 MTU at minimum for a nested vCloud Director environment, but as soon as the vDS MTU goes above 1500, my iSCSI datastores all disconnect. This setup worked perfectly on vSphere 4.1 and even on the vSphere 5 beta. I have a feeling the issue is with vCenter 455964 and the vDS, not the host itself, because the ESXi host running build 384847 was running fine until I had to rebuild my vCenter server.

The switch connecting all this is an HP ProCurve that supports Jumbo Frames, and the feature is turned on. Everything uses the hardware iSCSI initiator that is baked into my Broadcom BCM5709 NICs, so only test this if you have NICs capable of doing iSCSI and are not utilizing the Software iSCSI Initiator provided by VMware.
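
For anyone comparing notes, the host's version and build are quick to confirm from the ESXi shell:

    # Print the ESXi version and build (GA should report build-469512)
    ~ # vmware -v
    VMware ESXi 5.0.0 build-469512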


So check it out:

 

  • I create my vDS and some basic port groups

 

  • To configure round-robin multipathing correctly, I have to create two separate port groups with explicit failover so I can properly bind vmknics to the physical iSCSI adapters.

 

  • My vDS is set at 1500 MTU

 

  • I verify that the MTU on my vmknics is set to 1500

 

  • I bind my vmknics to my physical adapters (the esxcli equivalents of these binding, discovery, and rescan steps are sketched after this list)

 

  • Add in my Synology's IP address under dynamic discovery

 

  • Do a rescan of my HBAs and voilà... the iSCSI datastores show up

 

  • Now let's change the vDS to 9000 MTU so I can run nested VCD-NI traffic over these links.

 

  • The change completes without issue, but when I go back to my Storage Adapters and rescan, all my datastores have been lost.
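
For anyone reproducing this from the command line instead of the vSphere Client, the setup steps look roughly like this on ESXi 5.0. This is just a sketch: the adapter names (each BCM5709 port shows up as its own vmhba, e.g. vmhba32/vmhba33) are assumed for illustration, and 192.168.40.100 is my Synology's address, so substitute your own:

    # Bind each iSCSI vmknic to the Broadcom dependent-hardware iSCSI adapter
    # that corresponds to its uplink (adapter names assumed for illustration)
    ~ # esxcli iscsi networkportal add --adapter vmhba32 --nic vmk2
    ~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

    # Add the Synology under dynamic discovery (send targets)
    ~ # esxcli iscsi adapter discovery sendtarget add --adapter vmhba32 --address 192.168.40.100
    ~ # esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.40.100

    # Rescan; the iSCSI datastores should appear
    ~ # esxcli storage core adapter rescan --adapter vmhba32
    ~ # esxcli storage core adapter rescan --adapter vmhba33

    # Note: the vDS MTU itself can only be changed through vCenter, not esxcli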

 

This doesn't make sense because even though the vDS is set at 9000 MTU, the vmknic is set at 1500 MTU. The vmknic is sending traffic at 1500 MTU and the vDS is just *capable* of transmitting jumbo frames.
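
Both values are visible from the shell, and they really are independent settings:

    # The MTU column here reflects the switch-level setting on the vDS (now 9000)
    ~ # esxcfg-vswitch -l

    # The MTU column here is per vmkernel interface; vmk2 and vmk3 still show 1500
    ~ # esxcfg-vmknic -l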

 

  • For giggles, I actually went inside my Synology DS411+ and set the MTU to 9000 just to see what would happen. Surprisingly enough, all traffic continued to work like normal.

 

  • When the vDS was set back to 1500 MTU, the datastores would reappear.

 

Again, these devices are only talking at 1500 MTU packet sizes. I decided to take it a bit further: I set the vDS to 9000 MTU, changed the vmk2 and vmk3 adapters to 9000 MTU, and did a rescan of the datastores. Nothing shows up. Fail. Here is where it gets ridiculously confusing. I opened a PuTTY session to the host, ran "vmkping 192.168.40.100 -I vmk2 -s 9000", and what do you think happened? I was getting ping replies back from my NAS with 9008 bytes. So Jumbo Frames actually do work on my network without issue. There is either a bug within vCenter itself that is messing this up, or it's a combination of vCenter, my physical NICs (Broadcom BCM5709), and my Synology DS411+. But for some odd reason, iSCSI datastores will not stay alive when the vDS is set to anything other than 1500 MTU.
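
One caveat on that vmkping test, in case anyone repeats it: with -s 9000 the payload plus IP and ICMP headers exceeds 9000 bytes, so unless the don't-fragment bit is set the packet can be quietly fragmented and the ping will succeed even over a 1500-MTU path. A stricter version of the same test (vmk names and IP from my lab):

    # Bump the vmkernel interfaces to 9000 MTU (same change I made in the GUI)
    ~ # esxcli network ip interface set --interface-name vmk2 --mtu 9000
    ~ # esxcli network ip interface set --interface-name vmk3 --mtu 9000

    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -d sets don't-fragment
    ~ # vmkping -d -s 8972 -I vmk2 192.168.40.100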

 

  • So what happens if I try to use the Software iSCSI Initiator? I went ahead and added the Software iSCSI Initiator, which became vmhba38. I unbound the vmknics from the physical Broadcom adapters, bound vmk2 and vmk3 to the Software iSCSI Initiator, and added my Synology IP to Dynamic Discovery (esxcli equivalents are sketched after this list).

 

  • Did a rescan operation and the paths are all found once again.

 

  • I set the MTU on the vDS to 9000

 

  • Clicked rescan and all the datastores are still alive.
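
For completeness, the software iSCSI side of the test can be driven from esxcli as well. Again a sketch: vmhba38 is just the name the software initiator happened to get on my host, and the hardware adapter names are assumed:

    # Enable the software iSCSI initiator (it came up as vmhba38 here)
    ~ # esxcli iscsi software set --enabled=true

    # Unbind the vmknics from the hardware adapters...
    ~ # esxcli iscsi networkportal remove --adapter vmhba32 --nic vmk2
    ~ # esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk3

    # ...bind them to the software initiator and point dynamic discovery at the Synology
    ~ # esxcli iscsi networkportal add --adapter vmhba38 --nic vmk2
    ~ # esxcli iscsi networkportal add --adapter vmhba38 --nic vmk3
    ~ # esxcli iscsi adapter discovery sendtarget add --adapter vmhba38 --address 192.168.40.100

    # Rescan; with the software initiator the datastores survive the 9000 MTU change
    ~ # esxcli storage core adapter rescan --adapter vmhba38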

 

This is definitely some odd behavior, and I'm not sure where the miscommunication is happening. I would prefer not to use the Software iSCSI Initiator, considering my BCM5709 NICs have iSCSI capability built in. It makes for a cleaner setup and removes an additional layer of multipathing configuration. If you have any more ideas, I'm willing to try them.
