This blog series extends my NSX Bytes blog posts to include a more detailed look at how to deploy NSX 6.1.x into an existing vCloud Director environment. Initially we will be working with vCD 5.5.x, which is the non-SP fork of vCD, but as soon as an upgrade path for 5.5.2 -> 5.6.x is released I'll be including the NSX-related improvements in that release.

Part 3: Controller Deployment, Host Preparation and VXLAN Config

With the NSX Manager deployed and configured, and after verifying that vShield Edges are still being deployed by vCloud Director, we can move on to the nuts and bolts of the NSX configuration. There are a number of posts out there on this process so I'll keep it relatively high level…but do check out the bottom of this post for more detailed install links.

Controller Deployment:

Just in case this step hasn't been done in previous product installs…a best practice for most new-generation VMware applications is to have the Managed IP Address set. Under vCenter Server Settings -> Runtime Settings -> Managed IP Address, set the IP to that of your vCenter.

NSX_VCD_P3_1

Next, log in to the Web Client -> Networking and Security -> NSX Managers -> IP Address of the NSX Manager -> Manage -> Grouping Objects -> IP Pools and click Add.

Here we are going to preconfigure the IP Pools that will be used by the NSX Controllers. At this point we can also add the IP Pool that will be used by the VXLAN VMkernel interfaces, which become our VTEPs. If we are routing our VXLAN transport VLAN, add as many IP Pools as you need to satisfy the routed subnets in the Transport Zones.
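Sizing these pools correctly up front saves a re-deploy later: the controller pool needs one address per controller, and the VTEP pool needs one address per host per VTEP (one per host when a Failover teaming policy is used, as chosen later in this post). A quick sanity-check sketch in Python; the addresses and cluster sizes below are placeholders, not from any real environment:

```python
import ipaddress

def pool_capacity(start: str, end: str) -> int:
    """Number of addresses in an inclusive IP pool range."""
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

def vteps_required(hosts_per_cluster, vteps_per_host=1):
    """One VTEP VMkernel IP per host per VTEP (1 with a Failover teaming policy)."""
    return sum(hosts_per_cluster) * vteps_per_host

# e.g. two prepared clusters of 8 and 6 hosts, single VTEP per host
needed = vteps_required([8, 6])                    # 14 addresses
have = pool_capacity("10.0.50.11", "10.0.50.50")   # 40 addresses in the pool
assert have >= needed, "VTEP pool too small for the prepared clusters"
```

The same capacity check applies to the controller pool, which only ever needs a handful of addresses.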

NSX_VCD_P3_3 NSX_VCD_P3_2

For the NSX Controllers it's recommended that 3 be deployed (5 possible…7 max) for increased NSX resiliency. The idea is to split them across the Management Network and across ESXi hosts as diverse as possible. They can be split across different vCenter Clusters in a vCenter Datacenter…ideally they should be configured with DRS Anti-Affinity Rules to ensure a single host failure doesn't result in a loss of cluster quorum.
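The 3/5/7 numbers come straight from majority-quorum math: the controller cluster stays functional only while a majority of nodes are up, which is also why even counts buy you nothing extra. A small illustration:

```python
def quorum(n_controllers: int) -> int:
    """Smallest majority of the controller cluster."""
    return n_controllers // 2 + 1

def tolerated_failures(n_controllers: int) -> int:
    """Controllers that can be lost while a majority remains."""
    return n_controllers - quorum(n_controllers)

for n in (3, 4, 5, 7):
    print(f"{n} controllers: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
# Note that 4 controllers tolerate no more failures than 3, hence the odd counts,
# and why the anti-affinity rules matter: one host holding 2 of 3 controllers
# turns a single host failure into a loss of quorum.
```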

Go to the Web Client -> Networking and Security -> Installation. In the NSX Controller Nodes pane, click Add.

  • Select the NSX Manager, Datacenter, Cluster/Resource Pool, Datastore
  • Leave the Host blank (allow for auto placement)
  • On Connected To, click Select, go to the Distributed PortGroup list and select the Zone Management PortGroup

NSX_VCD_P3_4

  • On IP Pool, click Select and choose the NSX Controller IP Pool created above

NSX_VCD_P3_5

The Deployment of the NSX Controllers can be monitored via vCenter and will take about 5-10 minutes. The deployment will fail if Compute resources are not available or if the Controllers can’t talk to vCenter on the configured PortGroup.
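Controllers can also be deployed via the NSX REST API instead of the Web Client (the Wahl Network link under Further Reading walks through this). The sketch below only builds the XML body for that call; the element names follow my reading of the NSX 6.1 API guide and the MoRef and pool IDs are placeholders, so verify both against your own NSX Manager before POSTing to /api/2.0/vdn/controller:

```python
# Builds the controllerSpec XML payload for the NSX controller deployment API.
# All IDs below are hypothetical examples for illustration only.
from xml.etree.ElementTree import Element, SubElement, tostring

def controller_spec(name, ip_pool_id, resource_pool_id, datastore_id,
                    network_id, password):
    spec = Element("controllerSpec")
    for tag, value in [("name", name),
                       ("ipPoolId", ip_pool_id),
                       ("resourcePoolId", resource_pool_id),
                       ("datastoreId", datastore_id),
                       ("networkId", network_id),
                       ("password", password)]:
        SubElement(spec, tag).text = value
    return tostring(spec, encoding="unicode")

xml = controller_spec("nsx-controller-1", "ipaddresspool-1", "domain-c7",
                      "datastore-11", "dvportgroup-25", "VMware1!VMware1!")
```

This mirrors what the Web Client wizard collects: the cluster/resource pool, datastore, management portgroup and the IP Pool created earlier.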

LINK: NSX Controller Deployment Gotchyas

Host Preparation:

In the Networking & Security menu go to Installation and then Host Preparation. Here you will see the Clusters in the vCenter and their installation status. Once you click Install, all hosts in the Cluster are prepared at once…preparing the hosts involves installing the following components:

  • UWA – User World Agent
  • DFW – Distributed Firewall Module
  • DLR – Distributed Router Module
  • VXLAN Module
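To spot-check that preparation actually landed on a host, `esxcli software vib list` on the ESXi shell shows the installed VIBs. Here's a sketch that checks a captured copy of that output; the sample lines and versions are illustrative, though esx-vxlan, esx-vsip and esx-dvfilter-switch-security are the VIB names NSX 6.x installs:

```python
# Checks captured `esxcli software vib list` output for the NSX host VIBs.
# SAMPLE_VIB_LIST is an illustrative fragment, not real command output.
SAMPLE_VIB_LIST = """\
esx-dvfilter-switch-security  5.5.0-0.0.2107100  VMware  VMwareCertified
esx-vsip                      5.5.0-0.0.2107100  VMware  VMwareCertified
esx-vxlan                     5.5.0-0.0.2107100  VMware  VMwareCertified
net-bnx2                      2.2.3f.v55.2-1     VMware  VMwareCertified
"""

EXPECTED = {"esx-vxlan", "esx-vsip", "esx-dvfilter-switch-security"}

def missing_nsx_vibs(vib_list_output: str) -> set:
    """Return the expected NSX VIB names absent from the vib list output."""
    installed = {line.split()[0]
                 for line in vib_list_output.splitlines() if line.strip()}
    return EXPECTED - installed

print(missing_nsx_vibs(SAMPLE_VIB_LIST))  # empty set means all NSX VIBs present
```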

NSX_VCD_P3_6

The installation of these VIBs is done without interruption to host operations and doesn't trigger Maintenance Mode; a reboot is not required during the install.

NSX_VCD_P3_7

VXLAN Configuration:

Once the hosts have been prepared the Configure option becomes available in the VXLAN column of the Host Preparation tab. This process will create the initial VXLAN PortGroup under the selected Distributed Switch and add new VMkernel interfaces to each host in the prepared Cluster…the IP of each acts as the VTEP (VXLAN Tunnel End Point) and is what all VXLAN-encapsulated VM traffic passes through. There are a number of different Teaming Policies available and the choice depends on the design of your switching network…I chose Fail Over because LACP was not available, each ESXi host has 2x 10Gig pNICs, and I am comfortable with a failover scenario.

  • Select the Distributed Switch relative to each zone
  • Set the VLAN (Configured to be carried throughout the underlying Physical Network)
  • Set the MTU to 1600 (at least, to allow for the VXLAN overhead)
  • Use the VXLAN IP Pool created in the previous steps
  • Set the Teaming Policy to Fail Over
  • Leave the VTEP count at 1 (it can only be 1 when Fail Over is selected)
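The 1600 figure isn't arbitrary: VXLAN wraps the whole inner Ethernet frame in a UDP packet, adding roughly 50 bytes of headers, so the transport network needs an MTU of at least 1550 for a standard 1500-byte guest MTU (a few bytes more if the inner frame carries a VLAN tag). Worked out in Python:

```python
# VXLAN encapsulation overhead for a standard guest frame (IPv4 outer, no
# inner VLAN tag; a tagged inner frame adds 4 more bytes).
GUEST_MTU = 1500   # standard MTU inside the VM
INNER_ETH = 14     # inner Ethernet header carried inside the tunnel
VXLAN_HDR = 8      # VXLAN header
UDP_HDR = 8        # outer UDP header
OUTER_IP = 20      # outer IPv4 header (no options)

required_mtu = GUEST_MTU + INNER_ETH + VXLAN_HDR + UDP_HDR + OUTER_IP
print(required_mtu)  # 1550, so configuring 1600 leaves comfortable headroom
```

Remember the VLAN and MTU set here have to be carried end to end on the physical switches between hosts, or VXLAN traffic will silently fragment or drop.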

NSX_VCD_P3_8

As mentioned above, a new Distributed Port Group will be created and VMkernel NICs added to each host in the Cluster.

NSX_VCD_P3_9


NSX_VCD_P3_10

Once a Cluster is prepared, any host that gets added to the cluster will have the agents and networking automatically configured…the removal of the NSX components does require a host to be restarted. In my experience of upgrading from NSX 6.x to 6.1, host reboots are also required.

Further Reading:

http://networkinferno.net/nsx-compendium

http://wahlnetwork.com/2014/06/02/working-nsx-deploying-nsx-controllers-via-gui-api/

http://wahlnetwork.com/2014/06/12/working-nsx-preparing-cluster-hosts/

http://wahlnetwork.com/2014/07/07/working-nsx-configuring-vxlan-vteps/

http://www.pluralsight.com/courses/vmware-nsx-intro-installation