
Standing Up The Cisco Nexus 1000v In Less Than 10 Minutes

A few weeks ago at geek week, I was assigned the task of getting the Cisco Nexus 1000v distributed virtual switch set up. I probably spent a good four hours on my first cluster because I had no clue what was going on; I ended up losing my management network and spent a lot of time restoring it to the original vSwitch. I followed TrainSignal's 1000v installation training in Pro Series Vol. 1, but the Nexus 1000v setup has changed a bit since then. It has actually gotten much easier. After finding a few walkthroughs and talking to rock star Jason Nash, I was able to set up my second cluster in less than 10 minutes. I'll show you the easiest step-by-step instructions I possibly can.

 

There are a few prerequisites to understand before we get started.

  1. VSM = Virtual Supervisor Module, the "brains" or engine of a Cisco switch. This is an .ovf that is deployed as a VM into our cluster. We are going to set up two of these for HA purposes.
  2. VEM = Virtual Ethernet Module, which will be installed on the ESXi hosts through Update Manager.
  3. Packet VLAN = Used for communication between the VSM and the VEMs within a switch domain by protocols such as CDP, LACP, and IGMP. Cisco recommends keeping this on its own separate VLAN, but I would put Packet and Control on the same L2 VLAN.
  4. Control VLAN = Used to send VSM configuration commands to each VEM, VEM notifications to the VSM, and NetFlow exports. Cisco recommends keeping this on its own separate VLAN, but I would put Packet and Control on the same L2 VLAN.
  5. Management VLAN = The management interface is not used to exchange data between the VSM and VEM; it is used to establish and maintain the connection between the VSM and VMware vCenter Server. It needs to be on a routable VLAN that can communicate with the vCenter Server.
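To make those roles concrete, a VLAN plan for a lab like this might look as follows (these IDs are purely illustrative, not from my actual setup):

Control VLAN: 260
Packet VLAN: 260 (shared with Control on the same L2 VLAN, as suggested above)
Management VLAN: 10 (routable, reachable from the vCenter Server)

Whatever IDs you pick, write them down now; the setup wizard and the installer application will both ask for them.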

 

The setup I'm replicating is an ESXi host with two 10 Gigabit Ethernet adapters. As Jason Nash found out, to move everything over to the Nexus 1000v with the GUI, you must start with Virtual Standard Switches, not with a Distributed Virtual Switch.

 

To make the installation of the VEM seamless, verify that VMware vCenter Update Manager is up and running, because once we add the hosts to the Nexus 1000v, Update Manager will take care of the rest.

 

After downloading the Nexus 1000v package, it's time to deploy the .OVF template.

 

 

I tend to name my first one VSM-01 because we are going to deploy a second one after the initial setup.

 

Let's power it on and begin the configuration. (Note: even though my configuration shows a nested ESXi environment, the VSM must be installed on 64-bit capable hardware.)

 

Enter the password for the admin account.

Do not enter anything for the mode.

Enter the domain ID. This can be anything you want; it's used for clustering the VSMs. Let's choose 500.

Enter basic configuration mode: yes.

 

Create another login and SNMP communities if you want; your choice. I chose no.

Enter the switch name. This is what the 1000v will be called. I chose Nexus1k.

Continue with the out-of-band management and the IP address configuration for the Nexus 1000v management interface.

 

 

Skip the advanced configuration parameters, disable Telnet, enable SSH, set the key type to RSA, enter 1024 for the key length, and enter NTP servers if you want.

 

 

Enter the SVS domain configuration: choose L2 mode and enter the VLANs for Packet and Control.
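Once the wizard finishes, the SVS domain section of the VSM's running config should look roughly like this (domain ID 500 from the earlier step; the control and packet VLAN IDs here are placeholders for your own):

svs-domain
  domain id 500
  control vlan 260
  packet vlan 260
  svs mode L2

You can check it at any point afterwards from the VSM CLI with show svs domain.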

 

Review the configuration, don't edit anything, and save.

 

 

Navigate to the IP address we configured for the management network and choose "Launch Installer Application". (Note: you must have Java installed to launch the application.)

 

 

Enter the password we set up earlier and click Next.

 

Enter your vCenter information and click Next.

 

 

Select the Cluster where the VSM is located.

 

 

Choose the VSM virtual machine from the drop-down and click Advanced to make sure your port groups will be configured correctly.

 

 

Provide the same values used in the VSM configuration and click Next.

 

 

Click Finish and let the configuration do its thing. It will take about 1-2 minutes to configure and reboot the VSM.

 

 

When asked to migrate hosts to the VSM, click No, because we will do that manually, then click Finish.

 

 

We now have to change the system redundancy mode. Open up the console and type:

show system redundancy status

It will say "standalone", but we need it to be primary, so type:

system redundancy role primary

copy run start

 

Now that the VSM has been installed, let's get a secondary VSM up and running and configure it as secondary. Deploy the OVF again; once it boots, type in the same password and domain ID, allow a reboot, and it will reconfigure itself.

 

 

Go back to VSM-01, type show system redundancy status again, and we will see the HA configuration in process.
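Once the sync completes, the output of show system redundancy status should look roughly like this (exact formatting varies by NX-OS release; the point to confirm is "Active with HA standby"):

Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

This supervisor (sup-1)
-----------------------
    Redundancy state:   Active
    Supervisor state:   Active
      Internal state:   Active with HA standby

Other supervisor (sup-2)
------------------------
    Redundancy state:   Standby
    Supervisor state:   HA standby
      Internal state:   HA standby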

 

 

Now it's time to configure our VSM. SSH into it and let's configure VLANs. Before configuring, you must know every VLAN that you want to allow to traverse the switch. Let's say we have VLANs 1, 40-50, 60-70, 100, and 200. Before these can be added to an uplink, they must exist in the VLAN table. Let's add them.

 

conf t

vlan 1,40-50,60-70,100,200

 

Now verify that all the VLANs exist in the database:

exit

sh vlan

 

 

Let's configure the port-profile for the physical ethernet uplink:

conf t
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan all
channel-group auto mode on mac-pinning
system vlan 1,40-50,60-70,100,200
no shutdown
state enabled
end
copy run start
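A note on the channel-group line above: mac-pinning is what lets this uplink work without any port-channel or LACP configuration on the upstream switches, because each virtual interface's traffic is pinned to a single physical NIC rather than hashed across a bundle. Once a host has been added, you can sanity-check the resulting channel from the VSM CLI (output varies by release):

show port-channel summary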


 

Pretty simple. I find it much easier to create a new port-profile for every VLAN on which a virtual machine will reside, instead of transferring port groups over during the port migration process. Let's create a port-profile for VLAN 70 as an example.

 

conf t
port-profile type vethernet 1KV-VM_Network
vmware port-group
switchport mode access
switchport access vlan 70
vmware max-ports 1024
no shutdown
state enabled
end
copy run start
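Before migrating anything onto it, you can verify the new profile from the VSM (using the example profile name from above):

show port-profile name 1KV-VM_Network

It should show the profile as enabled, in access mode on VLAN 70, and published to vCenter as a port group.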

 

 

Now that everything is prepared, we can start migrating hosts to the new Nexus 1000v vDS. Remove an Ethernet adapter from the vSS so it's available for the migration.

 

 

Go to Home -> Networking, and we are ready to add a host to our Nexus 1000v.

 

 

Select the hosts with the free Ethernet adapters, choose system-uplink from the drop-down, and click Next.

 

 

Do not worry about migrating ports or VMs yet.

 

 

Click Finish.

 

 

Watch the recent tasks pane as VUM updates the hosts with the VEM modules.
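Once VUM finishes, you can confirm the VEMs registered with the VSM by running show module on the VSM CLI; each migrated host should appear as a Virtual Ethernet Module alongside the two supervisors (module numbers and versions will vary with your environment):

show module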

 

 

Now that the VEM has been installed, we can start migrating ports by clicking either Manage Hosts or Migrate Virtual Machine Networking.

 

 

Granted, it might have taken you more than 10 minutes the first time through, but now that you know the process, it is actually a very simple procedure.
