Search Results for: retrofit

NSX vCloud Retrofit: Overlapping Networks in vCD with NSX Virtual Wires

Part 5: vCloud Director Overlapping Networks:

vCloud Director has the ability to allow overlapping network segments, configurable from the Administration tab of the vCD UI. Traditionally, for those using VLAN-backed External Networks and Network Pools, this represented a potential risk to clients if admins were not careful provisioning network resources. If the same VLAN was mistakenly configured twice, client networks could see each other, meaning a really bad day for providers of multi-tenancy platforms.

Where this is required is when VXLAN is in play…the VXLAN transport network is configured on a single VLAN, which then carries all the Logical Switch network segments or VNIs. Even though vCD is not aware of NSX, you can still connect Virtual Datacenter vApps and VMs to NSX Edge Gateways via VXLAN virtual wires. To achieve that you need to check the Allow Overlapping External Networks box as shown below.

If this isn't checked you will get the following error in the vCD UI:

With overlapping networks in place your Network Pools are also able to be VXLAN-backed and used in conjunction with retrofitted External Networks connected to NSX Edges for advanced Edge Gateway services.

Bonus Tip: vCloud Director filters certain PortGroups based on their name, and NSX-created virtual wire PortGroups are among those filtered out. To have an NSX virtual wire appear in the vCD UI you need to rename the PortGroup, similar to what's shown below.
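If you'd rather script the rename, a minimal pyVmomi sketch along these lines should work…note the vCenter address, credentials and the new name suffix are all placeholders (the exact naming convention vCD expects is the one shown in the screenshot, which isn't reproduced here), so treat this as a starting point:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hypothetical host/credentials - substitute your own)
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='[email protected]',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for NSX virtual wire PortGroups (auto-named 'vxw-...')
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    if pg.name.startswith('vxw-'):
        # Rename so vCD's name-based filter no longer hides the PortGroup.
        # The '-vcd' suffix is a hypothetical convention for illustration only.
        pg.Rename_Task(newName=pg.name + '-vcd')
        print('Renaming %s' % pg.name)
view.Destroy()
Disconnect(si)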


This blog series extends my NSX Bytes blog posts to include a more detailed look at how to deploy NSX 6.1.x into an existing vCloud Director environment. Initially we will be working with vCD 5.5.x, which is the non-SP fork of vCD, but as soon as an upgrade path for 5.5.2 -> 5.6.x is released I'll be including the NSX-related improvements in that release.

NSX vCloud Retrofit: vShield Edges Become Unmanageable – API Fix

If you are familiar with vCloud Director Edge Gateway Services you might have come across situations where Edges become unmanageable and you see the following options greyed out in the vCD UI. All Edge services remain functional, however no changes can be made.

In environments where NSX has been retrofitted via an in-place upgrade over vCNS you may hit a bug in the NSX Manager to do with how it interprets vShield Edge statuses. Basically, vCloud Director talks to the NSX Manager to grab the status of deployed Edge Gateways. A situation can occur where the NSX Manager sends back an incorrect status, resulting in the Edge becoming unmanageable from vCloud Director as shown above.

A redeploy will work in resetting the status and making the Edge manageable again, however this will result in downtime for the services sitting behind the affected Edge device. During the course of an SR I raised with the NSX and vCloud VMware Engineering teams, a fix was created that uses the NSX Manager APIs to POST a new status that makes vCloud Director pick up the Edge as manageable without having to redeploy.

The first step is to find out which Edges might be affected by this condition…apart from going through each Edge in the vCD UI, I suggest looking at this post from @fojta in which he creates a PowerCLI script to grab the current statuses of all Edges. In addition to the Edge name you will also need the EDGE-ID.

Get the EDGE-ID by going into the vSphere Web Client's Networking and Security tab -> NSX Edges and searching for the Edge name that matches the vCD UI.
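If you'd rather hit the NSX API directly than use PowerCLI, a quick Python sketch like the one below should pull back every Edge with its ID and status…the manager address and credentials are placeholders, and the edgeSummary field names are as I understand the 6.1.x API, so verify against your install:

import requests
import xml.etree.ElementTree as ET
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials

# GET /api/4.0/edges returns a pagedEdgeList of all deployed Edges
r = requests.get(NSX + '/api/4.0/edges', auth=AUTH, verify=False)
r.raise_for_status()

for summary in ET.fromstring(r.content).iter('edgeSummary'):
    print(summary.findtext('objectId'),   # the EDGE-ID, e.g. edge-192
          summary.findtext('name'),
          summary.findtext('edgeStatus'))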

Using your favourite REST client, take the EDGE-ID and replace the identifier in the following API call to get more details of the Edge:

https://NSX-MANAGER-IP/api/4.0/edges/edge-xxx

Next, take the EDGE-ID and NAME (checking the DATACENTERID) from the response above and modify the payload below, incrementing the ID number as you go along.

Then execute the following API POST:

https://NSX-MANAGER-IP/api/2.0/systemevent 

You should see a 201 Created status returned after the POST…refresh the list in vCloud Director and the Edge should be manageable. Repeat the process for any affected Edges.
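Pulling those steps together, the fix can also be scripted…a minimal Python sketch, assuming the same placeholder manager address and credentials as above. The actual systemevent XML payload came out of the SR and isn't reproduced here, so the payload variable below is deliberately a stub:

import requests
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials
EDGE_ID = 'edge-xxx'                       # substitute the affected Edge's ID

# Step 1: pull back the Edge details (name, datacenter ID) for the payload
r = requests.get(NSX + '/api/4.0/edges/' + EDGE_ID, auth=AUTH, verify=False)
r.raise_for_status()

# Step 2: POST the corrected status event. The XML body supplied by VMware
# GSS is not reproduced here - build it from the SR guidance, incrementing
# the event ID for each affected Edge.
payload = '<systemEvent>...</systemEvent>'  # stub only
r = requests.post(NSX + '/api/2.0/systemevent', data=payload, auth=AUTH,
                  verify=False, headers={'Content-Type': 'application/xml'})
print(r.status_code)  # expect 201 Created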

Thanks to the NSX and vCloud Engineering team for working through an elegant solution that means zero impact on client services…As I am discovering, there are lots of cool things that can be done with APIs!


NSX vCloud Retrofit: Upgrade Issue – Edge Gateway Unmanageable in vCloud Director or Deployment Fails

We have been working with VMware GSS on an issue for a number of weeks whereby we were seeing some vShield Edge devices go into an unmanageable state from within the vCloud Director portal. In a nutshell, some VSEs (version 5.5.3) were stuck in a configuring loop upon the committal of a service config change. Subsequent reboots of the NSX Manager or vCloud Director cells did not bring the VSE out of this state. While the VSE was not able to be managed from vCD, the Edge services were still functional…i.e. traffic was passing through and all existing rules and features were working as expected.

Looking at the vCD Logs the following entry was seen:

We also saw issues where deployments of some VSEs from vCloud Director failed outright.

If the failed attempt was retried via a redeployment action, the following was seen in the vCD logs, with the vCD GUI stuck showing Redeploying Edge in Progress.

Heading over to the NSX Manager logs we came across the following error log entry being constantly written to the system manager logs…in fact we were seeing this message pop up approximately 25,000 times a day across three NSX instances.

The VIX API:

The NSX Manager…and the vShield Manager before it…uses the VIX API to query vCenter and the ESXi host running the Edge VMs, via VMware Tools, for the status of the Edges. Tom Fojta has written a great article on the legacy VIX method and how it's changed in NSX via a new Message Bus technique.

Searching for the VIX_E_FILE_NOT_FOUND error online, it would seem that the NSX Manager was having issues talking to a subset of VSE 5.5.3 Edges. It was noted by GSS that this was not happening for all VSEs and there were no instances of this happening on the NSX Edge Gateways (ESG 6.1.x). Storage was first suspected as the cause of the issue, so we spent a good deal of time working through ESXi logs and Storage vMotioning the VSEs and NSX Managers to rule it out. Once that was done, GSS took the case to the NSX Engineering team for further analysis. Engineering took an export of one of my NSX Edges (uploading 10GB worth of OVA is a challenge) to try and work out what was happening and why.

The Cause:

The VSE's VM UUID as seen in the NSX Manager database somehow becomes different from that listed in the vCenter inventory…causing the error messages.

The Fix:

There are a couple of options available to resolve the UUID Mismatch.

The self-service workaround:
Attempt a redeployment of all VSEs that report the issue. You can get a list by grabbing logs from the NSX Manager and listing the vm-xxxxxx identifiers as shown above. From there…head to vCD (not the Networking & Security Edge section – that will redeploy NSX 6.1.2 Edges) and click Redeploy from the Edge Gateway menu. The only risk with this is that the VSE might get stuck in a redeploying state, resulting in a time-out. Another thing to note with this option is that client services will be affected while the new Edge is deployed and the config transferred across.
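To build that list without eyeballing the logs, something like the quick Python sketch below over an exported NSX Manager log should do it…the log file name and message format are assumptions based on the errors shown above:

import re

vm_ids = set()
with open('vsm.log') as log:  # assumed name of the exported NSX Manager log
    for line in log:
        if 'VIX_E_FILE_NOT_FOUND' in line:
            match = re.search(r'vm-\d+', line)  # the vCenter MoRef of the VSE
            if match:
                vm_ids.add(match.group())

# Cross-reference these vm-xxxxxx IDs against the vCenter inventory
print('\n'.join(sorted(vm_ids)))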

VMware GSS Database Fix:
If you are seeing these errors in your NSX Manager logs, raise an SR with VMware and they will execute a simple one-line SQL query to update the UUIDs of the VMs that don't match vCenter. Once that's done the errors go away and the potential for VSEs to go into this state is removed.

Further Info and RCA:

VMware GSS together with NSX Engineering are still investigating the cause of the issue, but it seems to be a symptom (though not confirmed) of an in-place vCNS to NSX upgrade, and there are no specific factors that seem to trigger this behaviour…the assumption is that this is a bug that comes into play after an upgrade from vCNS with existing VSE 5.5.3 instances. It's also interesting that the worst symptoms of the issue (apart from the silly amount of logs generated), namely the VSE going into an unmanageable state or the deployment failure, happen intermittently. There is no scientific reason why…but the trigger seems to be any action in vCD on a VSE (new or existing) that executes a config change…if this is done during a health check by the NSX Manager it could leave the VSE in the undesired state.

For those interested, the version numbers where the issue was picked up are listed below.

Platform Versions:

  • vCenter 5.5 Update 2 Build 2001466
  • ESXi 5.5 Update 2 Build 2456374
  • vCloud Director 5.5.2 Builds 2000523 and 2233543
  • NSX-v 6.1.2 Build 2318232
  • VSE 5.5.3 Build 2175697

NSX vCloud Retrofit: Controller Deployment Internal Server Error Has Occurred

During my initial work with NSX-v I was running various 6.0.x Builds together with vCD 5.5.2 and vCenter/ESXi 5.5 without issue. When NSX 6.1 was released I decided to clone off my base Production Environment to test a fresh deployment of 6.1.2 into a mature vCloud Director 5.5.2 instance that had vCNS running a VSM version of 5.5.3. When it came time to deploy the first Controller I received the following error from the Networking & Security section of the Web Client:

Looking at the vCenter tasks and events, the NSX Manager did not even try to kick off the OVF deployment of the Controller VM…the error itself wasn't giving too much away, and suspecting a GUI bug I attempted to deploy the Controller directly via the REST APIs…this failed as well, however the error returned was a little more detailed.
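For reference, the REST attempt looked something like the sketch below…the manager address, credentials, MoRef IDs, pool ID and password are all placeholders, and the controllerSpec fields are as I understand the 6.1.x API, so check the NSX API guide before relying on it:

import requests
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials

# controllerSpec with placeholder MoRef IDs from vCenter and the NSX IP pool
spec = """<controllerSpec>
  <name>nsx-controller-1</name>
  <ipPoolId>ipaddresspool-1</ipPoolId>
  <resourcePoolId>domain-c7</resourcePoolId>
  <datastoreId>datastore-10</datastoreId>
  <connectedToId>dvportgroup-20</connectedToId>
  <password>Str0ngP@ssw0rd!</password>
</controllerSpec>"""

r = requests.post(NSX + '/api/2.0/vdn/controller', data=spec, auth=AUTH,
                  verify=False, headers={'Content-Type': 'application/xml'})
print(r.status_code, r.text)  # the error body here was more detailed than the GUI's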

Looking through the NSX Manager Logs the corresponding System Log Message looked like this:

The Issue:

While the error itself is fairly straightforward to understand, in that a value being entered into the database table was not of the right type…the reason why it had shown up in the 6.1.1 and 6.1.2 releases after no such issue with the 6.0.x releases stumped everyone involved in the ensuing support case. In fact it seemed like this was the only/first instance (at the time) of this error in all global NSX-v installs.

The Fix:

NOTE: This can only be performed by VMware Support via an SR.

The fix is a simple SQL query to alter the KEY_VALUE_STORE table referenced in the error…however this can only be done by VMware Support as it requires special access to the NSX Manager operating system to commit the changes. A word of warning…if you happen to have the secret password to get in the back door of the NSX Manager and apply these changes yourself…your support for NSX could become null and void!

Once that’s been committed and the NSX Manager Service restarted Controllers can be successfully deployed…again, the fix needs to be applied by VMware Support.

The RCA:

In regards to the RCA of this issue, customers with a long upgrade history (5.0 onwards) will hit it: during the database migration that happened as part of the 5.0 to 5.1.x upgrade, the ALTER TABLE script for KEY_VALUE_STORE was missing. As per VMware Engineering, a new upgrade of the NSX Manager is not going to correct the DB schema, since there is no script to alter the table on the upgrade path.

There was no indication of this being fixed in subsequent NSX releases, and no real explanation as to why it didn't happen in 6.0.x, but that aside the fix works and can be actioned in 5 minutes.

This was a fairly unique situation that contributed to this bug being exposed…my environment was a vCNS 5.1 -> 5.5 -> 5.5.2 -> 5.5.3.1 -> NSX 6.1 -> 6.1.1 -> 6.1.2 replica of one of our production vCloud zones that sits in our lab. Previously I'd been able to fully deploy NSX end to end using the same base systems, which sat side by side in working order in a separate #NestedESXi lab…but that was vCNS 5.5.2 upgraded to NSX 6.0.5, which was upgraded to 6.1.2.

So, with not too many deployments of NSX-v out there, the issue has only manifested in mature vCNS environments that were upgraded to 6.1.1 or 6.1.2. Something to look out for if you are looking at doing an NSX vCloud Retrofit. If you have a greenfield site you will not come across the same issue.

Further Reading:

http://anthonyspiteri.net/nsx-bites-nsx-controller-deployment-issues/


NSX vCloud Retrofit: Using NSX 6.1.x with vCloud Director 5.5.2.x VSE Redeployment Issue


In the latest round of VMware KB updates posted this week I came across an interesting KB relating to vCD 5.5.2.x and NSX 6.1.x and an apparent error when redeploying an existing vShield Edge (which runs at version 5.5.3).

VMware KB: Using NSX 6.1.x with vCloud Director 5.5.2.x.

Details: When you redeploy an edge gateway from vCloud Director 5.5.2.x after you upgrade from vCloud Networking and Security 5.5.x to VMware NSX 6.1.x, the edge gateway is upgraded to version 6.x, which is not supported by vCloud Director.

Solution: Configure vCloud Director to use Edge Gateway version 5.5 with NSX 6.1.x on a Microsoft SQL Server database.
Add the following statement to the database:
INSERT INTO config (cat, name, value, sortorder) VALUES ('vcloud', 'networking.edge_version_for_vsm6.1', '5.5', 0);
Restart the vCloud Director cell.

However, during my work deploying NSX 6.1.x into vCD 5.5.2 I couldn't remember coming across this behaviour… In my VSE validation post I go through a test deployment of a 5.5.3 VSE once vCNS is upgraded to NSX Manager. My fellow vCD expert Mohammed Salem triggered this KB and posted about it here, as he saw this behaviour…

After upgrading NSX to 6.1.2, I had an interesting issue, When I was trying to redeploy an existing GW, I found out the GW was upgraded to v6.1.2 instead of 5.5.3. This caused me an issue because vCloud Director (at least v5.5.x) will not recognize GWs with a version higher than 5.5.3 (Which is the latest version supported by vCNS).

I decided to give this a go in my labs…I have two VSEs deployed and managed by vCloud 5.5.2.1…EDGE-192 was brought in from the upgrade to NSX and I created EDGE-208 with the NSX Manager handling the deployment from vCD.

I went through the Re-Deploy option and watched the progress from the Web Client Networking & Security Edges menu.

After this had completed the version remained at 5.5.3 and I was able to manage the VSE from the vCD GUI without issue. I did the same on the other VSE and that worked as well.

I ran the script from Mohammed’s post and found the expected entry before the addition put forward in the KB

So it seems that this behaviour is not consistent across instances…as it stands, NSX and vCloud Director integration is still in its infancy and I expect there to be differing behaviours as more and more people deploy NSX…however this sort of inconsistency is unexpected. One possible answer is that my instances are all mature vCD installs that have been upgraded from 1.5 onwards…that said, it seems strange I don't have the issue.

Case in point: test this behaviour first before looking to apply the DB entry…it would be interesting to see if more people come across it…however there is no harm in adding the entry regardless…as Mohammed commented, this behaviour doesn't seem to exist in vCD SP 5.6.3.


NSX vCloud Retrofit: Logical Network Preparation and Transport Zone Setup


Part 4: Logical Network Preparation and Transport Zone Setup

In the previous posts we have gone through the process to upgrade the vCNS Manager to the NSX Manager…configured the NSX Manager to talk to vCenter…verified that vCD 5.5 can still deploy/manage traditional vShield Edges, and gone through deploying the NSX modules onto ESXi hosts and configuring VTEPs for VXLAN transport.

We are now going to prepare for NSX Logical Networks and configure our Transport Zones which define the boundaries of our VXLAN domains. Recently @dkalintsev has released a series of excellent blog posts relating to NSX…the latest goes through Transport Zones in super deep dive detail. If you are not following Dimitri and you are interested in NSX…NIKE!

If you are used to vCloud Director then you know about Provider vDCs and the part they play in abstracting pools of resources for vCD VMs to be consumed via Virtual Datacenters. In its simplest form you can think of a PvDC as an NSX Transport Zone, with a one-to-one relationship between the two. With vCD 5.1 the concept of merging PvDCs first appeared, which relied on the vCNS implementation of VXLAN using multicast as a control plane…this opened up the possibility of having vDCs spanning different compute pool resources, possibly in different physical locations. With the NSX Controllers now handling the control plane we can use Unicast and much more easily utilise the merged PvDCs feature of vCD…using Transport Zones as our network boundaries.

Segment ID Config:

In the Networking & Security Menu go to Installation -> Logical Network Preparation. Under VXLAN you should see the previously configured Cluster and Host details relating to the setting up of the VXLAN Transport Network on each host.

Go to the Segment ID tab and click Edit. This is where we are going to configure the scope of the VXLAN segments that are created. In retrofitting this with vCloud, Segment IDs will be consumed by VXLAN Network Pools in vCD…which in turn translate to Logical Switches. The same range can also be set via the API, as sketched after the list below.

  • You can have 16 million VXLAN segments
  • You can come back here and adjust the number up or down at any time.
  • As we will be using Unicast, leave the Multicast Addressing option unchecked.
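Here's a minimal Python sketch of setting the Segment ID range via the API…the manager address, credentials and the 5000-5999 range are placeholders, and the segmentRange body is as I understand the 6.1.x API:

import requests
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials

# Placeholder Segment ID range - these VNIs get consumed by vCD VXLAN Network Pools
segment_range = """<segmentRange>
  <name>vCD-SegmentIDs</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>"""

r = requests.post(NSX + '/api/2.0/vdn/config/segments', data=segment_range,
                  auth=AUTH, verify=False,
                  headers={'Content-Type': 'application/xml'})
print(r.status_code)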

Transport Zone Setup:

Go to the Transport Zones tab and click Add. Enter the cluster name as the name of the new Transport Zone, select Unicast and check the desired vSphere Cluster…as mentioned above you can select multiple clusters to be included in the Transport Zone…this is how you will extend logical networks across providers.
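Again, if you'd rather script it, creating the Transport Zone via the API looks something like the sketch below…the cluster MoRef is a placeholder and the nested cluster elements reflect the 6.1.x schema as I understand it, so verify against the NSX API guide:

import requests
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials

# Unicast Transport Zone spanning one placeholder cluster (domain-c7);
# add more <cluster> blocks to span additional clusters/PvDCs
vdn_scope = """<vdnScope>
  <name>TZ-ClusterA</name>
  <clusters>
    <cluster>
      <cluster><objectId>domain-c7</objectId></cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""

r = requests.post(NSX + '/api/2.0/vdn/scopes', data=vdn_scope, auth=AUTH,
                  verify=False, headers={'Content-Type': 'application/xml'})
print(r.status_code, r.text)  # returns the new vdnscope-x ID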

That’s all the ground work done!

The last post in this series will look at how to bring this all together in vCD and leverage some of the power of NSX for vCloud Director.

NSX vCloud Retrofit: Controller Deployment, Host Preparation and VXLAN Config


Part 3: Controller Deployment, Host Preparation and VXLAN Config

With the NSX Manager deployed and configured, and after verifying that vShield Edges are still being deployed by vCloud Director, we can move on to the nuts and bolts of the NSX configuration. There are a number of posts out there on this process so I'll keep it relatively high level…but do check out the bottom of this post for more detailed install links.

Controller Deployment:

Just in case this step hasn't been done in previous product installs…a best practice for most new-generation VMware applications is to have the Managed IP Address set under vCenter Server Settings -> Runtime Settings -> Managed IP Address. Set the IP to that of your vCenter.

Next, log in to the Web Client -> Networking and Security -> NSX Managers -> IP Address of NSX Manager -> Manage -> Grouping Objects -> IP Pools and click Add.

Here we are going to preconfigure the IP Pools that are going to be used by the NSX Controllers. At this point we can also add the IP Pool that will be used by the VXLAN VMkernel interfaces, which become our VTEPs. If we are routing our VXLAN transport VLAN then add as many IP Pools as you need to satisfy the routed subnets in the Transport Zones.
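IP Pools can also be created via the API rather than the Web Client…a sketch with placeholder addressing follows; the ipamAddressPool schema is as I understand the 6.1.x API, so verify before use:

import requests
from requests.auth import HTTPBasicAuth

NSX = 'https://NSX-MANAGER-IP'
AUTH = HTTPBasicAuth('admin', 'password')  # placeholder credentials

# Placeholder controller pool: 192.168.10.50-52 on a /24
pool = """<ipamAddressPool>
  <name>NSX-Controller-Pool</name>
  <prefixLength>24</prefixLength>
  <gateway>192.168.10.1</gateway>
  <ipRanges>
    <ipRangeDto>
      <startAddress>192.168.10.50</startAddress>
      <endAddress>192.168.10.52</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>"""

r = requests.post(NSX + '/api/2.0/services/ipam/pools/scope/globalroot-0',
                  data=pool, auth=AUTH, verify=False,
                  headers={'Content-Type': 'application/xml'})
print(r.status_code)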

For the NSX Controllers it's recommended that 3 (5 possible…7 max) be deployed for increased NSX resiliency. The idea is to split them across the management network and onto ESXi hosts as diverse as possible. They can be split across different vCenter Clusters in a vCenter Datacenter…ideally they should be configured with DRS anti-affinity rules to ensure a single host failure doesn't result in a loss of cluster quorum.

Go to the Web Client -> Networking and Security -> Installation. In the NSX Controller Nodes pane click Add:

  • Select the NSX Manager, Datacenter, Cluster/Resource Pool, Datastore
  • Leave the Host blank (allow for auto placement)
  • On the Connected To, click Select and go to the Distributed PortGroup List and Select the Zone Management PortGroup
  • On the IP Pool, click Select and Choose the NSX Controller IP Pool created above

The Deployment of the NSX Controllers can be monitored via vCenter and will take about 5-10 minutes. The deployment will fail if Compute resources are not available or if the Controllers can’t talk to vCenter on the configured PortGroup.

LINK: NSX Controller Deployment Gotchyas

Host Preparation:

In the Networking & Security menu go to Installation and Host Preparation. Here you will see the clusters in the vCenter and their installation status. Once you click Install, all hosts in the cluster are prepared at once…preparing the hosts involves installing the following components:

  • UWA – User World Agent
  • DFW – Distributed Firewall Module
  • DLR – Distributed Router Module
  • VXLAN Module

The installation of these VIBs is done without interruption to host operations and doesn't result in Maintenance Mode being triggered; a reboot is not required during the install.

VXLAN Configuration:

Once the hosts have been prepared, the Configure option becomes available in the VXLAN column of the Host Preparation tab. This process will create the initial VXLAN PortGroup under the selected Distributed Switch and add new VMkernel interfaces to each host in the prepared cluster…the IP of each acts as the VTEP (VXLAN Tunnel End Point) and is the interface through which all VXLAN-encapsulated VM traffic passes. There are a number of different teaming policies available and the choice depends on the design of your switching network…I chose Failover due to the fact LACP was not available, each ESXi host has 2x10Gig pNICs, and I am comfortable with a failover scenario.

  • Select the Distributed Switch relative to each zone
  • Set the VLAN (configured to be carried throughout the underlying physical network)
  • Set the MTU to 1600 (at least, to allow for the VXLAN overhead)
  • Use the VXLAN IP Pool created in previous steps
  • Set the Teaming Policy to Fail Over
  • Leave the VTEP count at 1 (it can only be one if Failover is selected)

As mentioned above, a new Distributed PortGroup will be created and VMkernel NICs added to each host in the cluster.


Once a cluster is prepared, any host that gets added to the cluster will have the agents and networking automatically configured…the removal of the NSX components does require a host to be restarted. In my experience of upgrading from NSX 6.x to 6.1, host reboots are also required.

Further Reading:

http://networkinferno.net/nsx-compendium

http://wahlnetwork.com/2014/06/02/working-nsx-deploying-nsx-controllers-via-gui-api/

http://wahlnetwork.com/2014/06/12/working-nsx-preparing-cluster-hosts/

http://wahlnetwork.com/2014/07/07/working-nsx-configuring-vxlan-vteps/

http://www.pluralsight.com/courses/vmware-nsx-intro-installation 

NSX vCloud Retrofit: NSX Manager Configuration and vCD VSE Deployment Validation


Part 2 – NSX Manager Configuration and vCD VSE Deployment Validation

Once you have upgraded the VSM to the NSX Manager there are a number of configuration items to work through…some of which will have been carried over from the vCNS upgrade. For user and group management you can reference this post, where I go through the configuration of the management services to allow users and groups to administer NSX through the vCenter Web Client.

Once you have a Green Connected Button for the Lookup Service and vCenter Service as seen above you can configure the rest of the settings. Clicking on the home Icon will give you the menu below:

Go to Manage Appliance Settings -> General and configure the Time Settings, Syslog Server, and keep the Locale that is relevant to your installation. Ensure the NTP server is set and is consistent with the other NTP servers referenced in vCloud, vCenter and ESXi (time sync is critical between the NSX Manager, hosts and other management systems).

Configure a syslog server, or point the NSX Manager at Log Insight, which has a newly released Content Pack for NSX.

Go to Network Settings and enter the new host name details without the domain name specified (those are part of the Search Domains) and double-check the IP and DNS settings.

Note 1: Create a DNS entry (if not already created) for the Host Name ensuring there is a reverse lookup in place for internal name resolution of the Manager.

Go to Backup and Restore and (re)configure the Backup Settings to include an FTP location and an additional Pass Phrase for NSX Manager Restores.

Once done, perform a test backup

vShield Edge Deployment and Validation:

With that done we can now move on to testing vCloud Director initiated deployments of the VSE 5.5.3 Edges, which are deployed as legacy appliances out of the NSX Manager. If you take a look under the covers of the NSX Manager you will see that its DNA is vShield, and more to the point…the NSX portion has itself been retrofitted on top of the vCNS VSM, which has allowed for quick integration with vCenter and legacy interoperability with current versions of vCD.

vCloud Director will call vShield APIs (not NSX) to deploy Edges for use with Virtual Datacenter networking, and all current functionality in the Edges up to 5.5.3 is maintained. vCD will not be able to understand an NSX 6.1 ESG, and if you upgrade (the option is there, as shown below) you will have a fully functional Edge with all settings and config carried over…but not manageable by the vCloud GUI.

To ensure that all previous vCloud Director deployment mechanisms and Edge management are still functional, deploy an Edge Gateway from the vCloud Director GUI, checking to make sure that the OVF is deployed correctly…the service account will now be service.nsx (or the account you chose).

Validate the deployment by working through the following:

  • Validate the vShield version is 5.5.3
  • Test internal/external access and IP connectivity
  • Test service configurations by adding rules and disabling/enabling the firewall
  • Create and attach a vOrg Network and check the PortGroup status

If you are interested in what the 5.5.3 VSE Management looks like under the Network & Security Section of the Web Client, click on Edges and the Name of the Edge…what you see here is similar to what you would see for the 6.1 ESGs but with less functionality and features. What’s managed in the vCD GUI is what you see here.

With that validated, you have ensured that vCloud Director will continue to do its thing and work as expected with the NSX Manager in play…at this point we are not using any VXLAN virtual wires or NSX Transport Zone Network Pools…that's still to come!

NSX vCloud Retrofit: Intro and VSM to NSX Manager Upgrade

I've been working on and off over the past 6 months looking at how to best fit NSX into existing vCloud Director platforms, and while vCD in the Enterprise is going to become less of a thing…vCloud Air Network Service Providers will continue to use vCD SP…The feature set provided by NSX could greatly enhance any SP's offering around enhanced networking services, as well as helping IT operations with SDN abstraction efficiencies.

This blog series extends my NSX Bytes Blog Posts to include a more detailed look at how to deploy NSX 6.1.x into an existing vCloud Director Environment. Initially we will be working with vCD 5.5.x which is the non SP Fork of vCD, but as soon as an upgrade path for 5.5.2 -> 5.6.x is released I’ll be including the NSX related improvements in that release.

With that, if anyone is running or still looking to run vCD in-house as a Private Cloud Abstraction of vSphere and you are looking at implementing NSX this series will still be relevant.

To protect some of the work I’ve done with Zettagrid to productise NSX there will be sections where I will be vague…can’t give you guys all the secret sauce tricks and tweaks 🙂

NSX Deployment Pre-Requisites and Build Numbers*

  • Pre-assigned VLAN for VXLAN – MTU greater than or equal to 1600
  • vCenter 5.5 Update 2 Build 2001466
  • ESXi 5.5 Update 2 Build 2068190
  • vCloud Director 5.5.2 Build 2000523
  • vShield Manager 5.5.3.1 Build 2175698
  • SSO Service Details ([email protected])
  • NSX Service Account Details (service.nsx)
  • NSX Admin Group (NSX.Admins)

* These are based on my internal testing and deployment validations

Disclaimer: At the time of posting, NSX 6.1.x is not officially supported with vCloud 5.6.3 or 5.5.2. While I haven't come across any issues in the retrofit, given the current support status you may want to think twice about putting this into prod until VMware validates the interoperability…I'm working to get more info on that and will update when I know more. The matrix below is not up to date and there is support for NSX 6.0.6 and NSX 6.0.7.

Part 1 – VSM to NSX Manager Upgrade:

Be wary…this is a one-time upgrade…once installed we can't easily roll back. At the time of writing the latest version of the vCNS VSM is 5.5.3.1; if you are not at that build, upgrade the VSM to it before you begin.

NOTE1: vShield Data Security Installs:

If you are upgrading from NSX version 6.1.1, or do not have Data Security in your environment, then you are fine to skip this step…

If you are upgrading from a release prior to NSX 6.1.1 and have Data Security in your environment…as much as it seems like an extreme PITA, the following needs to be done:

  1. Un-install Data Security from all the clusters that have the service installed.
  2. Upgrade NSX Manager to version 6.1.2 – SEE STEPS BELOW. 
  3. Install or upgrade Guest Introspection and other services on appropriate clusters.
  4. Install Data Security on appropriate clusters.
  5. Upgrade the remaining components.

NOTE2: vShield Edge instances prior to version 5.5 need to be upgraded to the latest version. Pre-5.5 vShield Edge instances cannot be managed or deleted after vShield Manager has been upgraded to NSX Manager.

vCNS to NSX Upgrade Process:

Back Up and Snapshot the VSM

  • Ensure that there is a Backup of the VSM Manager Config
  • Snapshot VSM Manager
  • Reboot the VSM to ensure any existing logs are cleared and there is enough space on the filesystem to install (>4GB)

Login to the VSM -> Settings and Reports -> Updates and Click on Choose File

Click on Upload File: VMware-vShield-Manager-upgrade-bundle-to-NSX-6.1.2-2318232.tar.gz

Confirm the Version Number in the New Version and Description Field

Confirm the Install

Let the upgrade go through its paces

Once the VM has rebooted go to the IP of the VSM. We should now have the NSX Manager login Screen

Login using admin and default as the user/password combination…I've found in all my upgrades so far that the existing admin password had been reset, however I have had to use the previous password to access vShield API calls… (possible bug)

Verify that the build is as shown below and that vCenter is registered by going to Manage vCenter Registration

NOTE3: Shut down the VM, upgrade the VM hardware to 4 vCPUs and 12GB of vRAM, and restart the VM. This will allow the upgraded VM to meet the NSX Manager compute requirements.

Once completed the NSX Manager will boot, and it's time to verify the install and ensure that no previous functionality has been lost.

Part 2: NSX Manager Configuration and vCloud Director VSE Deployment Validation

Sneak Peek – Veeam 9.5 vCloud Director Self Service Portal

Last month Veeam announced that they had significantly enhanced the capabilities around the backup and recovery of vCloud Director. This gives vCloud Air Network Service Providers the ability to tap into a new set of RESTful APIs that add tenanted, self-service capabilities, and to offer a more complete service that is totally controlled and managed by the vCloud tenant.

As part of the Veeam Vanguard program, I have been given access to an early beta of Veeam v9.5 and have had a chance to take the new functionality for a spin. Given the fact this is a very early beta of v9.5, I was surprised to see that the installation and configuration of the vCloud Director Self Service functionality was straightforward and, like most things with Veeam…it just worked.

NOTE: The following is based on an early access BETA and as such features, functions and menu items are subject to change.

Basic Overview:

The new vCloud Director integration lets you back up and restore single VMs, vApps, Organization vDCs and whole Organizations. This is all done via a web UI based on Veeam Backup Enterprise Manager. Only vCD SP versions are compatible with the feature. Tenants have access to a self-service web portal where they can manage their vCloud Director jobs, as well as restore VMs, files and application items within their vCloud Director organization.

The Service Provider exposes the following URL to vCD tenants:

https://Enterprise-Manager-IP/vcloud/OrgName:9443

As shown in the diagram below, Enterprise Manager then talks to the vCloud Director cells to authenticate the tenant and retrieve information relating to the tenant's vCloud Organization.

Configuring a Tenant Job:

Anyone who is familiar with Veeam will recognize the steps below and the familiar look of the menu options that the Self Service Portal provides. As shown below, the landing page once the tenant has authenticated is similar to what you see when logging into Enterprise Manager…in fact the beauty of this portal is that Veeam didn't have to reinvent the wheel…they just retrofitted vCD multi-tenancy into the views.

To configure a job, click on the Jobs tab and hit the Create button.

Give the Job a Name and set the number of restore points to keep.

Next select the VMs you want to add to the Job. As mentioned above you can add the whole Org, vDC, vApp and as granular as per VM.

Next select any Guest Processing you want done for Application Aware backups.

And then set the job schedule to your liking.

Finally, configure email notifications.

Once that has been done you have the option to Run the Job manually or wait for the schedule to kick in. As you can see below you have a lot of control over the backup job and you can even start Active Full Jobs.

Once a job has been triggered you have access to logs showing what is happening during the backup process. The detail is just as you would expect from the Veeam Backup & Replication console, and keeps tenants informed as to the status of their jobs.

More to Come:

There is a lot more that I could post, but for the moment I will leave you all with that first sneak peek. Once again Veeam have come to the party in a big way with this feature, and every service provider who runs vCloud Director should be looking at Veeam 9.5 to enhance the value of their IaaS offering.

#LongLivevCD
