Tag Archives: NSX

Released: NSX-v 6.3.4 – Upgrade Notes and Fixes

Last week VMware released NSX-v 6.3.4 (Build 6845891), which contains no new features but addresses a number of bugs from previous releases. Going through the release notes there are a lot of known issues worth being aware of, and more than a few apply to service providers…specifically there are a lot around NSX Edge functions. The other interesting point to highlight about this release is that those on NSX-v 6.3.3 need to run a couple of scripts against the API before upgrading to ensure all controllers are upgradable.

As mentioned, the release notes state that those on NSX-v 6.3.3 should follow this VMwareKB before upgrading. In a nutshell, there is a bug in 6.3.3 where the NSX Controllers are reported as disconnected in the Web Client, as shown below.

To fix that situation you need to execute a couple of API calls that POST a script to the NSX Manager, as documented in the VMwareKB. This needs to be done as the NSX Manager admin user, as I found it didn't work with an NSX domain user or an SSO administrator account with NSX Org admin level permissions.
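
For those who want to script it, below is a minimal Python sketch of the shape of those calls. The endpoint path and script body are deliberately left as placeholders…copy both verbatim from the VMwareKB, as they are specific to the 6.3.3 fix.

```python
import requests

requests.packages.urllib3.disable_warnings()  # lab only - self-signed certs

NSX_MANAGER = "https://nsx-manager.lab.local"
# Use the NSX Manager 'admin' account - domain/SSO users returned
# permission errors when I tried them.
AUTH = ("admin", "VMware1!")

# Placeholders: copy the exact API path and script body verbatim
# from the VMwareKB - both are specific to this fix.
ENDPOINT = "/api/<path-from-the-kb>"
SCRIPT_BODY = "<script body from the KB>"

resp = requests.post(NSX_MANAGER + ENDPOINT,
                     data=SCRIPT_BODY,
                     headers={"Content-Type": "application/xml"},
                     auth=AUTH,
                     verify=False)
resp.raise_for_status()
print(resp.status_code, resp.text)
```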

Once the second script has been run you should see output similar to what's shown above, with all NSX Controllers in a connected state, which allows you to prepare for the upgrade. Once done, you can go through the normal NSX upgrade steps, which will get you to the latest build.

Important Fixes:

  • Fixed Issue 1970527: ARP fails to resolve for VMs when Logical Distributed Router ARP table crosses 5K limit
  • Fixed Issue 1961105: Hardware VTEP connection goes down upon controller reboot. A BufferOverFlow exception is seen when certain hardware VTEP configurations are pushed from the NSX Manager to the NSX Controller. This overflow issue prevents the NSX Controller from getting a complete hardware gateway configuration. Fixed in 6.3.4.
  • Fixed Issue 1955855: Controller API could fail due to cleanup of API server reference files. Upon cleanup of required files, workflows such as Traceflow and Central CLI will fail. If external events disrupt the persistent TCP connections between the NSX Manager and controller, the NSX Manager will lose the ability to make API connections to controllers, and the UI will display the controllers as disconnected. There is no datapath impact.

Those with the correct entitlements can download NSX-v 6.3.4 here.

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_634.html

https://kb.vmware.com/kb/2151719


NSX Bytes: NSX-v 6.3.3 Released – Upgrade Notes and Enhancements

Last week VMware released NSX-v 6.3.3 (Build 6276725) and with it comes a new operating system for the NSX Controllers. Once upgraded, the new controllers will be powered by Photon OS, which is more and more making its way into VMware's appliances. There are a few other new bits in this release but, more importantly, a number of resolved issues. For those running homelabs with one NSX Controller there are some important upgrade notes to be aware of before kicking off…I'll go into those below.

Compatibility:

Before moving to the upgrade there are some important notes around interoperability and supported ESXi versions, as explained in this VMwareKB. The minimum supported versions of ESXi for NSX-v 6.3.3 are shown below:

  • NSX-v 6.3.3 installed in a vSphere 5.5 environment requires a minimum version of ESXi 5.5 GA
  • NSX-v 6.3.3 installed in a vSphere 6.0 environment requires a minimum version of ESXi 6.0 Update 2
  • NSX-v 6.3.3 installed in a vSphere 6.5 environment requires a minimum version of ESXi 6.5a
If NSX 6.3.3 is installed on an earlier version of ESXi 5.5/6.0, the netcpa service will fail to start, preventing communication between ESXi hosts and the NSX Controllers.

In terms of upgrading from previous versions of NSX-v, you can see that the upgrade path does have some blockers. Below is the interoperability matrix, which includes vCloud Director 8.20; at the moment 8.20 is not supported with NSX-v 6.3.3…I expect that to change over the next couple of weeks.

Upgrading to NSX-v 6.3.3:

As mentioned, there are things to look out for during and after the upgrade from previous builds of NSX-v. There are detailed upgrade notes in the release notes so, as always, make sure to read those as well, but below is a brief walkthrough of the upgrade process I conducted in one of my NestedESXi labs.

Once the NSX Manager has been upgraded you should have the following in your Summary tab:

Next, restart the vCenter Web Client to ensure any lingering parts of the previous version are removed. Log in to the Web Client and click through to Networking & Security -> Installation and then the Management tab, where you will see Upgrade Available.

IMPORTANT NOTE: The upgrade notes state that you need a minimum of three NSX Controllers, which I'd say is linked to the underlying OS of the Controllers shifting to Photon OS. This is likely to impact anyone running NSX in a NestedESXi or homelab setup where, generally, only one controller was deployed to preserve resources. Once you click on Upgrade you will get a special upgrade warning before committing, as shown below:
  • The NSX Controller cluster must contain three controller nodes to upgrade to NSX 6.3.3. If it has fewer than three controllers, you must add controllers before starting the upgrade
  • When you upgrade to NSX-v 6.3.3, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses

There is also a slight increase in the size of the controller storage, from 20GB to 28GB. Once upgraded, the NSX Controllers will be at version 6.3.6235594.

The last major step is to upgrade the host components from the Host Preparation tab. On vSphere 6.0 and above, once you have upgraded to NSX 6.3.x, future NSX VIB changes no longer trigger a reboot…only maintenance mode is required to complete the VIB change. In NSX 6.3.3 there is a change to the NSX VIB names on ESXi 6.0 and later: the esx-vxlan and esx-vsip VIBs have been merged and replaced with esx-nsxv, as shown below.

VIB names on ESXi 5.5 remain the same.
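
If you want to verify the VIB change across a cluster, a quick Python sketch along these lines does the job…the `esxcli software vib list` command is standard, but the host names, credentials and SSH access are assumptions about your environment.

```python
import paramiko

# Lab assumptions - adjust hosts and credentials to suit.
HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]
USER, PASSWORD = "root", "VMware1!"

for host in HOSTS:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=USER, password=PASSWORD)
    # On ESXi 6.0+ with NSX 6.3.3 you should see esx-nsxv;
    # on ESXi 5.5 (or older NSX) esx-vxlan and esx-vsip remain.
    _, stdout, _ = ssh.exec_command(
        "esxcli software vib list | grep -E 'esx-nsxv|esx-vxlan|esx-vsip'")
    print(host)
    print(stdout.read().decode().strip())
    ssh.close()
```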

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_633.html

https://kb.vmware.com/kb/2151267


vCloud Director SP 8.20 – NSX Advanced Networking Overview

Many, including myself, thought the day would never come where we would be talking about a new UI for vCloud Director…but a month on from the 8.20 release of vCloud Director SP (the 8th major release of vCD) I'm happy to be writing about the new Advanced Networking features of 8.20 based on NSX-v. Full NSX compatibility and interoperability has been a long time coming; however, the wait has been worthwhile, as the vCloud Director team opted to fully integrate network management into the vCD Cloud Cells over the initial approach that had a separate appliance acting as a proxy between the NSX Manager and the vCD Cells.

But before I dive into the new HTML5 goodness, I thought it would be good to recap the Advanced Networking Services of vCD and how we got to where we are today…

No More vShield…Sort Of:

As everyone should know by now, vCloud Networking & Security was made end of life late last year, and from the release of vCD SP 8.10 vShield Edges should have been upgraded to their NSX equivalents. These Edges remain basic Edges within vCloud Director, and even though at the backend they are on NSX-v versioning, tenants get no extra features or functionality beyond what was available in the existing vCD portal:

  • DHCP
  • NAT
  • Firewall
  • Static Routing
  • IPSec VPN
  • Basic Load Balancer

The version of NSX-v deployed dictates the build number of the NSX Edge; however, as can be seen below, it's still listed as a vShield Edge in vCenter.

As anyone who has worked closely with it would know, NSX-v has a lot of vShield DNA in it, and in truth it's more vShield than NSX when talking about the features that pertain to vCloud Director. However, the power of NSX-v can be taken advantage of once a basic Edge is upgraded to an Advanced Edge.

Advanced Edge Services:

Before the major UI additions that came with vCD SP 8.20, the previous 8.10 version gave us a taste of what was to come with the introduction of a new menu option when you right-clicked on an Edge Gateway.

This option was greyed out unless you were running the initial beta of the Advanced Networking Services (ANS). The option can be executed by anyone with the rights to upgrade the Edge Gateway, but by default this can only be done by a System Administrator or the Org Admin, so it's worthwhile double-checking the roles you have allocated to your tenants to ensure these upgrades can be controlled.

Once you click on the Convert to Advanced Gateway option you get a warning referring to a VMwareKB about an API change that may make previous calling methods obsolete…something to take note of for anyone automating this process. On execution of the conversion there is no physical change to the virtual machine; however, if you now click on the Edge Gateway Services option of the Edge Gateway you will be taken to the new HTML5 web interface for NSX Advanced Networking Services to access all the advanced features:

  • Firewall
  • DHCP
  • NAT
  • Routing (Dynamic)
  • Load Balancer (Advanced)
  • SSL VPN Plus
  • Certificates
  • Grouping Objects
  • Statistics
  • Edge Settings

All new Advanced Networking features are configured from the new HTML5 web interface, which retains the base vCD URL but now adds:

/tenant/network-edges/{ID}?org=ORGNAME

Everything is self-contained; the tenant doesn't have to authenticate again to get to the new user interface. However, if you just upgrade the Edge and go to configure the Advanced Networking Services out of the box, you will only see a couple of the items listed above.

In order to use the new features, a System Administrator must use the vCloud API to grant the new rights the organisation requires. This process has been explained very well by my good friend Giuliano Bertello here: it uses the vCloud API to grant the Distributed Firewall and Advanced Networking Services rights to roles in vCloud Director 8.20 via the new granular role-based access control mechanisms introduced in 8.20. Once configured, your tenants can see all the services listed above and configure the Edge Gateway.
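
For the impatient, below is a rough Python sketch of that workflow…note that I'm hedging here: the org rights URL, media type and payload handling are assumptions to be checked against the vCloud API documentation and Giuliano's post before you rely on them.

```python
import requests

requests.packages.urllib3.disable_warnings()  # lab only - self-signed certs

VCD = "https://vcd.lab.local"
HEADERS = {"Accept": "application/*+xml;version=27.0"}  # 27.0 = vCD 8.20 API

# Log in as a system administrator; vCD returns the session token
# in the x-vcloud-authorization response header.
s = requests.Session()
s.verify = False
login = s.post(VCD + "/api/sessions",
               auth=("administrator@system", "VMware1!"), headers=HEADERS)
login.raise_for_status()
s.headers.update(HEADERS)
s.headers["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Assumption: the org's granted rights live under /api/admin/org/{id}/rights.
# Fetch the current OrgRights document, append RightReference elements for
# the Distributed Firewall / Advanced Networking rights, and push it back.
org_id = "<org-uuid>"  # hypothetical placeholder
rights_url = f"{VCD}/api/admin/org/{org_id}/rights"
rights_xml = s.get(rights_url).text
updated_xml = rights_xml  # insert the new <RightReference/> entries here
result = s.put(rights_url, data=updated_xml,
               headers={"Content-Type":
                        "application/vnd.vmware.admin.org.rights+xml"})
print(result.status_code)
```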

Organisational Distributed Firewall:

Something that is very much new in the 8.20 release is the ability to take advantage of micro-segmentation using the NSX-v Distributed Firewall service. The ability to configure organisation-wide rules logically, without the need for a virtual Edge Gateway, is a significant step forward for vCD tenants, and I hope this feature is exposed by service providers and its value sold to their tenants. To access the Distributed Firewall, in the Virtual Datacenters window of the Administration tab, right-click on the Virtual Datacenter name and select Manage Firewall.

Once again you will be taken to the new HTML5 user interface, and once the correct permissions have been applied to the user you can enable the Distributed Firewall and start configuring your rules. The URL is slightly different to the Edge Gateway URL:

/tenant/dwf/{ID}?org=ORGNAME

But the look and feel is familiar.
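
Since both portal URLs share the same shape, they are easy to construct programmatically. A trivial, hypothetical Python helper…the identifier formats are simply whatever vCD itself puts in these URLs:

```python
# Hypothetical helper - edge_id and vdc_id are the identifiers vCD
# uses in its own tenant portal URLs.
def tenant_ui_urls(base: str, org: str, edge_id: str, vdc_id: str) -> dict:
    return {
        "edge": f"{base}/tenant/network-edges/{edge_id}?org={org}",
        "dfw": f"{base}/tenant/dwf/{vdc_id}?org={org}",
    }

print(tenant_ui_urls("https://vcd.lab.local", "TenantOrg",
                     "<edge-id>", "<vdc-id>"))
```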

Conclusion:

vCloud Director SP 8.20 has finally delivered on what most members of the vCloud Air Network have wanted for some time…that is, full NSX interoperability and feature-set access as well as a new user interface. Over the next few weeks I am going to expand on the Advanced and Distributed Networking features of vCD and NSX, walk through how to configure elements through the UI and API, and give a look into what's happening at the backend in terms of how NSX stores rules and policy items for vCD tenant use.

Compatibility with vSphere 6.5 and NSX-v 6.3.x:

vCloud Director SP 8.20 is compatible with vSphere 6.5 and NSX 6.3.0 and supports interoperability with other versions as shown in the VMware Product Interoperability Matrix. As of GA, vCD 8.20 has passed the functional interoperability test and limited scale testing for these versions:

  • vCD 8.20 with vSphere 6.0 and NSX 6.3.0
  • vCD 8.20 with vSphere 6.5 and NSX 6.3.0

References:

https://kb.vmware.com/kb/2149042
https://kb.vmware.com/kb/2147625

Released: vCloud Director SP 8.20 with HTML5 Goodness!

This week, VMware released vCloud Director SP version 8.20 (build 5070630), which marks the 8th major release for vCloud Director since 1.0 shipped in 2010. Ever since 2010 the user interface, give or take a few minor modifications and additions, has been the same. It also required Flash and Java, which has been a pain point for a long time and in some ways unfairly contributed to a negative perception of vCD as a whole. It's been a long time coming, but vCloud Director finally has a new web UI built on HTML5; for now the new UI is only exposed when accessing the new NSX integration, which is by far and away the biggest addition in this release.

This NSX integration has been in the works for a while now and has gone through a couple of iterations within the vCloud product team. Initially announced as Advanced Networking Services, which was a decoupled implementation of NSX integration, we now have a fully integrated solution that's part of the vCloud Director installer. And while the UI additions only extend to NSX for the moment, it's brilliant to see what the development team have done with the Clarity UI (tbc). I'm going to take a closer look at the new NSX features in another post, but for the moment here are the release highlights of vCD SP 8.20.

New Features:

  • Advanced Edge Gateway and Distributed Firewall Configuration – This release introduces the vCloud Director Tenant Portal with an initial set of controls that you can use to configure Edge Gateways and NSX Distributed Firewalls in your organization.
  • New vCloud Director API for NSX – There is a new proxy API that enables vCloud API clients to make requests to the NSX API. The vCloud Director API for NSX is designed to address NSX objects within the scope of a vCloud Director tenant organization.
  • Role Administration at the Organization Level – From this release role objects exist in each organization. System administrators can use the vCloud Director Web Console or the vCloud API to create roles in any organization. Organization administrators can use the vCloud API to create roles that are local to their organization.
  • Automatic Discovery and Import of vCenter VMs – Organization VDCs automatically discover vCenter VMs that exist in any resource pool that backs the vDC. A system administrator can use the vCloud API to specify vCenter resource pools for the vDC to adopt. vCenter VMs that exist in an adopted resource pool become available as discovered vApps in the new vDC.
  • Virtual Machine Host Affinity – A system administrator can create groups of VMs in a resource pool, then use VM-Host affinity rules to specify whether members of a VM group should be deployed on members of a vSphere host DRS Group.
  • Multi-Cell Upgrade – The upgrade utility now supports upgrading all the cells in your server group with a single operation.

You can see above that this release has some major new features focused on tenant usability, allowing more granular and segmented control of networks, user access and VM discovery. Automatic VM discovery and import is a significant feature that builds on the 8.10 feature of live VM imports and helps administrators bring VM workloads into vCD from vCenter.
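
As a quick illustration of the new vCloud Director API for NSX listed above: clients authenticate against vCD as usual and then address NSX objects through the vCD endpoint. Below is a rough Python sketch…treat the /network/ path as an assumption to verify against the vCloud Director API for NSX Programming Guide.

```python
import requests

requests.packages.urllib3.disable_warnings()  # lab only - self-signed certs

VCD = "https://vcd.lab.local"
HEADERS = {"Accept": "application/*+xml;version=27.0"}

s = requests.Session()
s.verify = False
login = s.post(VCD + "/api/sessions",
               auth=("administrator@system", "VMware1!"), headers=HEADERS)
login.raise_for_status()
s.headers.update(HEADERS)
s.headers["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Assumption: NSX edge objects are proxied under /network/, scoped to
# the caller's organization - confirm the paths in the programming guide.
edges = s.get(VCD + "/network/edges")
print(edges.status_code, edges.text[:300])
```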

“VMware vCloud Director 8.20 is a significant release that adds enhanced functionality.  Fully integrating VMware NSX into the platform allows edge gateways and distributed firewalls to be easily configured via the new HTML5 interface.  Additional enhancements such as seamless cell upgrades and vCenter mapping illustrate VMware is committed to the platform and to vCloud Air Network partners.”

A list of known issues can be found in the release notes, and I'd like to highlight the note around virtual machine memory for the vCD cells…I had my NestedESXi lab instances crash under memory pressure because the VMs were configured with only 5GB of RAM. vCloud Director SP 8.20 needs at least 6GB, so ensure your cells are modified before you upgrade.

Well done to the vCloud Director product and development team for this significant release; I'll look to dig into some of the new features in detail in upcoming posts. You can also read the official vCloud blog release post here. I'm looking forward to what's coming in the next release now…hopefully more functionality placed into the HTML5 UI and maybe integration with VMwareonAWS 😉

References:

http://pubs.vmware.com/Release_Notes/en/vcd/8-20/rel_notes_vcloud_director_8-20.html

https://www.vmware.com/support/pubs/vcd_sp_pubs.html

https://blogs.vmware.com/vcloud/2017/02/vmware-announces-general-availability-vcloud-director-8-20.html

NSX Bytes: NSX-T 2.0 Released

A couple of months ago in my NSX-v 6.3 and NSX-T 1.1 release post I focused on NSX-v features, as that has become the mainstream version most people know and work with…however NSX, from its Nicira roots, has always been about multi-hypervisor support and has always had an MH version that worked with OpenStack deployments. The NSBU has big plans for NSX beyond vSphere, and during the NSX vExpert session we got to see a little of how NSX-T will look beyond version 1.1.

NSX-T's main drivers relate to new data centre and cloud architectures where greater heterogeneity drives a different set of requirements to those of vSphere: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see, the existing use cases for NSX-T are mainly focused around DevOps, micro-segmentation and multi-tenant infrastructure.

What's in NSX-T 2.0:

The short answer is a focus on expanding NSX to public clouds, containers and platform-as-a-service workloads. We have already seen a tech preview at VMworld of NSX working with AWS instances, and the partnership between VMware and AWS is even more of a driver for this cross-cloud compute and networking landscape in which NSX-T can shine.

Expanded Networking and Security into Public Cloud and Containers:
  • Centralised security policy management
  • NSX for Public Cloud (AWS)
  • NSX for Cross-Cloud Services (AWS)
  • NSX for Containers and PaaS (Kubernetes, OpenShift)

Platform Capabilities:

  • Distributed L3 at scale decoupled from vCenter
  • Intel DPDK Edge Line Rate packet performance
  • L2/L3 redundant control and data plane
  • ESXi and KVM (RHEL/Ubuntu)
  • Independent NSX interface that is multi-vCenter
  • Scale out control plane and scale out edge cluster
  • VM and container hosts

Feature Capabilities:

  • Distributed Routing, eBGP, NAT, BFD, ECMP, route-maps, 4 byte ASN
  • REST/JSON OpenAPI Specification
  • VIO, upstream OpenStack support
  • Geneve Encapsulation, QoS, Software L2 Bridge
  • Distributed stateful firewall, tag based security grouping
  • DHCP Server and Relay
  • IPFIX, Port Mirroring, Port Connectivity, Trace Flow, Backup & Restore
  • Log Insight Content Management Pack

Where do NSX-v and NSX-T Play:

Conclusion:

When it comes to the NSX-T 2.0 feature capabilities, many of them are a case of bringing NSX-T up to speed with NSX-v; however, the thing to think about is how those capabilities will or could be used beyond vSphere environments…that is the big picture to consider here around the future of NSX!

For an overview of what was released in NSX-T 2.0, the release notes can be found here, or have a read of my launch post here.

NSX Bytes: NSX for vSphere 6.3 and NSX-T 1.1 Release Information

VMware's NSX has been in the wild for almost three years, and while initial adoption was slow, of late there has been a calculated push to make NSX more mainstream. The change in licensing last year was done not only to help drive adoption by traditional VMware customers running vSphere who previously couldn't look at NSX due to price; the Transformers project has also looked to build on Nicira's roots in the heterogeneous hypervisor market and offer network virtualization beyond vSphere, beyond open-source platforms and into the public cloud space. The vision for VMware with NSX is to manage security and connectivity for heterogeneous endpoints through:

  • Security
  • Automation
  • Application Continuity

NSX has seen significant growth for VMware over the past twelve to eighteen months, driven mostly by customer demand around micro-segmentation, IT automation and efficiency, and the need to pool together multiple data centre locations. To highlight the potential that remains, less than 5% of the total available vSphere install base has NSX-v installed…and while that could have something to do with the initial restrictions and cost of the software, it still represents an enormous opportunity for VMware and their partners.

Last week the NSX vExpert group was given a first look at what's coming in the new releases…below is a summary of what to expect from both NSX-v 6.3 and NSX-T 1.1. Note that we were not given an indication on vSphere 6.5 support so, like the rest of you, we are all waiting for the official release notes.

[Update] vSphere 6.5 will be supported with NSX-v 6.3

Please note that VMware vSphere 6.5a is the minimum supported version with NSX for vSphere 6.3.0. For the most up-to-date information, see the VMware Product Interoperability Matrix. Also, see 2148841.

NSX for vSphere 6.3 Enhancements:

Security:

  • NSX Pre-Assessment Tool based on vRealize Network Insight
  • Micro-Segmentation Planning and application visibility
  • New Security Certifications around ICSA, FIPS, Common Criteria and STIG
  • Linux Guest VM Introspection
  • Increased performance in service chaining
  • Larger scalability of VDI up to 50K desktops
  • NSX IDFW for VDI
  • Active Directory Integration for VDI at scale

Automation:

  • Routing Enhancements
  • Centralized Dashboard for service and ops
  • Reduced Upgrade windows with rebootless upgrades
  • Integration with vRA 7.2 enhancing LB and NAT
  • vCloud Director 8.20 support with advanced routing, DFW, VPN
  • VIO Updates to include multi-vc deployments
  • vSphere Integrated Container Support
  • New Automation Frameworks for PowerNSX, PyNSXv, vRO

Application Continuity:

  • Multi-DC deployments with Cross VC NSX enhancements for security tags
  • Operations enhancements with improved availability
  • L2VPN performance enhancements for cross DC/Cloud Connectivity

Where does NSX-T Fit:

Given there was some confusion about NSX-v vs. NSX-T in terms of everything moving to a common code base starting from the Transformers release, it was highlighted that VMware's primary focus for 2017 hasn't shifted away from NSX for vSphere: it will still be heavily invested in, with new capabilities in and beyond 6.3, a robust roadmap of new capabilities in future releases, and support extended well into the future.

NSX-T's main drivers relate to new data centre and cloud architectures where greater heterogeneity drives a different set of requirements to those of vSphere: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors. As you can see, the existing use cases for NSX-T are mainly focused around DevOps, micro-segmentation and multi-tenant infrastructure.

NSX-T 1.1 Brief Overview:

Again the focus is on private IaaS and multi-hypervisor support for development teams using dev clouds and employing more DevOps methodologies. There isn't too much to write home about in the 1.1.0 release, but there is extended hypervisor support for KVM and ESXi, more single and multi-tenant support, and some performance and resiliency optimizations.

Conclusion:

There is a lot to like about where VMware is taking NSX, and both product streams offer strong network virtualization capabilities for customers to take advantage of. There is no doubt in my mind that the release of NSX-v 6.3 will continue to build on the great foundation laid by previous NSX versions. When the release notes are made available I will take a deeper look into all the new features and enhancements and tie them into what's most useful for service providers.

NSX Bytes: Important Bug in 6.2.4 to be Aware of

[UPDATE] In light of this post being quoted on The Register I wanted to clarify a couple of things. First off, as mentioned there is a fix for this issue (the KB should be rewritten to clearly state that) and secondly, if you read below, you will see that I did not state that just about anyone running NSX-v 6.2.4 will be impacted. Greenfield deployments are not impacted.

Here we go again…I thought maybe we were past these, but it looks like NSX-v 6.2.4 contains a fairly serious bug impacting VMs after vMotion operations. I had intended to write about this earlier in the week when I first became aware of the issue, but the last couple of days have gotten away from me. That said, please be aware of this issue as it will impact those who have upgraded NSX-v from 6.1.x to 6.2.4.

As the KB states, the issue appears if you have the Distributed Firewall enabled (it's enabled and inline by default) and you have upgraded NSX-v from 6.1.x to 6.2.3 or above, though for most this will apply to 6.2.4 upgrades due to all the issues in 6.2.3. If VMs are migrated between upgraded hosts they lose network connectivity and require a reboot to bring connectivity back.

If you check the vmkernel.log file you will see entries similar to those below.

Cause

This issue occurs when the VSIP module at the kernel level does not handle the export_version deployed in NSX for vSphere 6.1.x correctly during the upgrade process.

There is no current resolution to the issue apart from the VM reboot, but there is a workaround in the form of a script that can be obtained via GSS if you reference KB2146171. Hopefully there will be a proper fix in a future NSX release.

<RANT>

I can't believe something as serious as this was missed by QA for what is VMware's flagship product. It's beyond me that this sort of error wasn't picked up in testing before release. It's simply not good enough that a major release goes out with this sort of bug, and I don't know how it keeps happening. This one specifically impacted customers, and for service providers or enterprises that upgraded in good faith, it puts egg on the faces of those who approve and execute the upgrades, and results in unhappy customers or internal users.

Most organisations can't fully replicate production when testing upgrades due to a lack of resources or a lack of real-world test scenarios…VMware could and should have the resources to stop these bugs leaking into release builds. For now, if possible, I would suggest that people add more stringent vMotion tests to their NSX-v lab testing before promoting into production; see the sketch after this rant.

VMware customers shouldn’t have to be the ones discovering these bugs!

</RANT>
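
To make that concrete, here's a rough pyVmomi sketch of the kind of scripted vMotion-plus-connectivity check I have in mind. All names, addresses and credentials are lab assumptions, and the task polling is deliberately crude:

```python
import subprocess
import time

from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

si = SmartConnectNoSSL(host="vc.lab.local",
                       user="administrator@vsphere.local", pwd="VMware1!")
content = si.RetrieveContent()

def find(vimtype, name):
    """Naive inventory lookup by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "dfw-test-vm")
target = find(vim.HostSystem, "esxi02.lab.local")

# vMotion the VM to an upgraded host, then check the guest still answers.
task = vm.MigrateVM_Task(host=target,
                         priority=vim.VirtualMachine.MovePriority.highPriority)
while task.info.state not in ("success", "error"):
    time.sleep(2)

alive = subprocess.call(["ping", "-c", "3", "10.0.0.25"]) == 0
print("connectivity after vMotion:", "OK" if alive else "LOST - reboot the VM")
```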

[UPDATE] While I am obviously not happy about this issue coming in the wake of previous issues, I still believe in NSX and would recommend all shops looking to automate networking keep faith in what the platform offers. Bugs will happen…I get that, but I know in the long run there is huge benefit in running NSX.

References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2146171

NSX Bytes: Updated – NSX Edge Feature and Performance Matrix

A question came up today around throughput numbers for an NSX Edge Services Gateway, which jogged my memory back to a previous blog post where I compared features and performance metrics between vShield Edges and NSX Edges. In the original post I had left out some key metrics, specifically around firewall and load balancer throughput, so I thought it was time for an update. Thanks to a couple of people in the vExpert NSX Slack channel I was able to fill some gaps and update the tables below.

A reminder that VMware has announced the End of Availability ("EOA") of VMware vCloud Networking and Security 5.5.x, which kicked in on September 19, 2016, and that vCloud Director 8.10 no longer supports vShield Edges…hence why I have removed the VSE from the tables.

As a refresher…what is an Edge device?

The Edge Services Gateway (NSX-v) connects isolated, stub networks to shared (uplink) networks by providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and load balancing. Common deployments of Edges include the DMZ, VPN extranets, and multi-tenant cloud environments where the Edge creates virtual boundaries for each tenant.

Below is a list of services provided by the NSX Edge.

  • Firewall – supported rules include IP 5-tuple configuration with IP and port ranges for stateful inspection of all protocols
  • NAT – separate controls for source and destination IP addresses, as well as port translation
  • DHCP – configuration of IP pools, gateways, DNS servers, and search domains
  • Site-to-Site VPN – uses standardized IPsec protocol settings to interoperate with all major VPN vendors
  • SSL VPN – SSL VPN-Plus enables remote users to connect securely to private networks behind an NSX Edge gateway
  • Load Balancing – simple and dynamically configurable virtual IP addresses and server groups
  • High Availability – ensures an active NSX Edge on the network in case the primary NSX Edge virtual machine is unavailable
  • Syslog – syslog export for all services to remote servers
  • L2 VPN – provides the ability to stretch your L2 network
  • Dynamic Routing – provides the necessary forwarding information between layer 2 broadcast domains, allowing you to decrease layer 2 broadcast domains and improve network efficiency and scale; provides North-South connectivity, enabling tenants to access public networks

Below is a table that shows the different sizes of each Edge appliance and what (if any) impact size has on the performance of each service. As a disclaimer, the numbers below have been cherry-picked from different sources and are subject to change…I'll keep them as up to date as possible.

                                      Compact     Large       Quad-Large  X-Large
vCPU                                  1           2           4           6
Memory                                512MB       1GB         1GB         8GB
Disk                                  512MB       512MB       512MB       4.5GB
Interfaces                            10          10          10          10
Sub Interfaces (Trunk)                200         200         200         200
NAT Rules                             2000        2000        2000        2000
FW Rules                              2000        2000        2000        2000
FW Performance                        3Gbps       9.7Gbps     9.7Gbps     9.7Gbps
DHCP Pools                            25          25          25          25
Static Routes                         2048        2048        2048        2048
LB Pools                              64          64          64          64
LB Virtual Servers                    64          64          64          64
LB Server / Pool                      32          32          32          32
IPSec Tunnels                         512         1600        4096        6000
SSLVPN Tunnels                        50          100         100         1000
Concurrent Sessions                   64,000      1,000,000   1,000,000   1,000,000
Sessions/Second                       8,000       50,000      50,000      50,000
LB Throughput (L7 Proxy)              -           2.2Gbps     2.2Gbps     3Gbps
LB Throughput (L4 Mode)               -           6Gbps       6Gbps       6Gbps
LB Connections/s (L7 Proxy)           -           46,000      50,000      50,000
LB Concurrent Connections (L7 Proxy)  -           8,000       60,000      60,000
LB Connections/s (L4 Mode)            -           50,000      50,000      50,000
LB Concurrent Connections (L4 Mode)   -           600,000     1,000,000   1,000,000
BGP Routes                            20,000      50,000      250,000     250,000
BGP Neighbors                         10          20          50          50
BGP Routes Redistributed              No Limit    No Limit    No Limit    No Limit
OSPF Routes                           20,000      50,000      100,000     100,000
OSPF Adjacencies                      10          20          40          40
OSPF Routes Redistributed             2000        5000        20,000      20,000
Total Routes                          20,000      50,000      250,000     250,000

Of interest from the above table: it doesn't list any load balancing performance numbers for the Compact Edge…take that to mean that if you want to do any sort of load balancing you will need an NSX Large Edge or above. To finish up, below is a table describing the use case for each NSX Edge size.

  • NSX Edge (Compact) – small deployments, POCs and single-service use
  • NSX Edge (Large) – small/medium DC or multi-tenant
  • NSX Edge (Quad-Large) – high-throughput ECMP or high-performance firewall
  • NSX Edge (X-Large) – L7 load balancing, dedicated core
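
As a quick way to internalise the tables, here's a toy Python selector built from the figures above…the thresholds are the cherry-picked numbers quoted in the table, so treat the output as indicative only.

```python
# Toy sizing helper based on the tables above - not an official rule.
def pick_edge_size(fw_gbps=0.0, needs_lb=False, l7_lb=False,
                   concurrent_sessions=0):
    if l7_lb:
        return "X-Large"      # L7 load balancing, dedicated core
    if fw_gbps > 3 or concurrent_sessions > 64000:
        return "Quad-Large"   # high-throughput ECMP / firewall
    if needs_lb:
        return "Large"        # Compact lists no LB numbers at all
    return "Compact"          # POCs and single-service use

print(pick_edge_size(fw_gbps=5))       # -> Quad-Large
print(pick_edge_size(needs_lb=True))   # -> Large
```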

References:

https://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf

https://pubs.vmware.com/NSX-6/index.jsp#com.vmware.nsx.admin.doc/GUID-3F96DECE-33FB-43EE-88D7-124A730830A4.html

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2042799

First Look: vRealize Network Insight (Arkin)

Last year Arkin burst onto the scene offering a solution focused on deep virtual and physical network analytics. Arkin was recognised at VMworld 2015, nearly taking out Best of Show, and fast forward twelve months: Arkin was acquired by VMware and the product later rebadged as vRealize Network Insight. One of the product's main strengths that attracted VMware to the acquisition was its tight integration with NSX by way of a simple and intuitive user interface that lets admins easily manage and troubleshoot NSX, while offering best-practice checks that can guide users through VXLAN and firewall implementations and alert them to any issues in their design and implementation of NSX.

Arkin removes barriers to SDDC adoption and operation by providing converged visibility and contextual analytics across virtual and physical infrastructure, the ability to implement newer security models such as micro-segmentation, and by ensuring application uptime, all while letting IT collaborate better. The platform helps IT organizations plan, operate, visualize, analyze, and troubleshoot their complex software-defined data center environments.

As vRealize Network Insight, the key benefits are:

  • East-west traffic analytics for security and micro-segmentation design
  • Control and tracking to meet audit and compliance requirements for virtual distributed firewalls
  • 360 Overlay-underlay visibility and topology mapping
  • Extensive 3rd party physical switch integrations
  • VXLAN to VLAN logical path mappings
  • Advanced NSX Operations Management
  • Natural language search and enhanced user experience for rapid troubleshooting

What surprised me when I was able to dig into the product was that it offered more than just network insights…in fact it offered surprisingly deep analytics and metrics for hosts and virtual machines that rival most similar products on the market today.

Installation Overview:

To install Network Insight you download two OVAs from MyVMware and deploy the two appliances into vCenter. It's an interesting setup, shown below: after deployment you are left with two appliances, a Platform and a Proxy, with the following specifications.

Platform OVA

  • 6 CPU cores (3072 MHz reservation)
  • 32 GB RAM (reserved)
  • 600 GB HDD (thin provisioned)

Proxy OVA

  • 2 CPU cores (1024 MHz reservation)
  • 4 GB RAM (reserved)
  • 100 GB HDD (thin provisioned)

A note before continuing…only Chrome is supported as a browser at this stage.

You start the install by deploying the Platform appliance…once the Platform OVA is deployed and the appliance VM settings have been configured, you can hit the IP specified during the OVA deployment and continue the installation.

After the license key has been validated you are then asked to Generate a shared secret that is used to pair the Platform with the Proxy appliance.

From here you can initiate the deployment of the Proxy appliance. During the OVA deployment you are asked to enter the shared key before continuing to configure the appliance networking and naming. As shown below, the configuration wizard waits to detect the deployed Proxy appliance, at which point the installation is complete and you can log in.

The default username is [email protected] with a password of admin.

When you log in for the first time you are presented with a product evaluation pop-up letting you know you are in NSX Assessment Mode and that you can switch to Full Product Mode at the bottom right of the window. NSX Assessment Mode is an interesting feature that looks like it will be used to install Network Insight as part of an onboarding or discovery engagement and produce reports on what is happening inside an NSX environment.

In either mode you need to register at least one vCenter and, if in a site with NSX, register the NSX Manager as well. As mentioned in the opening, you can also plug into a small subset of popular physical networking equipment from the likes of Cisco, Arista, Dell, Brocade and Juniper.

Once the vCenter has been connected and verified you then have the option to select the vDS and PortGroups you want monitored. This enables NetFlow (IPFIX) across all selected PortGroups…it makes these changes live, so be wary of any possible disruption to vDS traffic flow.
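
Under the covers this is standard vDS IPFIX configuration driven through vCenter. Network Insight does all of this for you, but for the curious, a rough pyVmomi sketch of the switch-level part would look something like the below…the switch name and collector details are lab assumptions.

```python
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

si = SmartConnectNoSSL(host="vc.lab.local",
                       user="administrator@vsphere.local", pwd="VMware1!")
content = si.RetrieveContent()

# Find the target vDS by name (lab assumption).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "Lab-vDS")
view.Destroy()

# Point the switch's IPFIX exporter at the collector - for Network
# Insight that would be the Proxy appliance's IP (an assumption here).
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.ipfixConfig = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress="10.0.0.60",
    collectorPort=2055,
    activeFlowTimeout=60,
    idleFlowTimeout=15,
    samplingRate=0,            # 0 = no sampling, export every flow
    internalFlowsOnly=False)
dvs.ReconfigureDvs_Task(spec)  # per-PortGroup ipfixEnabled is set separately
```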

Due to a rather serious PSOD bug in previous versions of ESXi when NetFlow is enabled, the configurator blocks any host that doesn't meet the minimum ESXi builds shown below.

Below are the minimum requirements for Network Insight to be configured and start collecting and analyzing.

Infrastructure

  • vCenter 5.5 or above
  • ESXi 5.5 Update 2 (build 2068190) and above
  • ESXi 6.0 Update 1b (build 3380124) and above
  • NSX for vSphere 6.1 or greater
  • Netflow enabled on vDS

Reading through the FAQ you get to learn about IPFIX and how it's used with the vDS to collect network traffic data…it's worth spending some time going through the FAQ, but I've pulled out an overview of how it all works below.

IPFIX is an IETF protocol for exporting flow information. A flow is defined as a set of packets transmitted in a specific timeslot, and sharing 5-tuple values – source IP address, source port, destination IP address, destination port, and protocol. The flow information may include properties such as timestamps, packets/bytes count, Input/output interfaces, TCP Flags, VXLAN Id, Encapsulated flow information and so on.


Network Insight uses VMware vDS IPFIX to collect network traffic data. Every session has two paths; for example, session A↔C has A→C packets and C→A packets. To analyze the complete information of any session, IPFIX data about packets in both directions is required. Refer to the following diagram, where VM-A is connected to DVPG-A and is talking to VM-C. Here DVPG-A will only provide data about the C→A packets, and DVPG-Uplink will provide data about the A→C packets. To get complete information about A's traffic, IPFIX should be enabled on both DVPG-A and DVPG-Uplink.
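
A small Python illustration of that point: normalising the 5-tuple makes the A→C and C→A records land on the same session key, which is why IPFIX must be enabled on both port groups before a session is complete. The sample records here are made up.

```python
from collections import defaultdict

def session_key(src_ip, src_port, dst_ip, dst_port, proto):
    # Sort the two endpoints so both directions map to one key.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (min(a, b), max(a, b), proto)

flows = defaultdict(int)
# (src, sport, dst, dport, proto, bytes): DVPG-A sees C->A while
# DVPG-Uplink sees A->C - together they complete the session.
records = [
    ("10.0.0.3", 443, "10.0.0.1", 50311, "tcp", 9000),   # C -> A
    ("10.0.0.1", 50311, "10.0.0.3", 443, "tcp", 1200),   # A -> C
]
for src, sport, dst, dport, proto, nbytes in records:
    flows[session_key(src, sport, dst, dport, proto)] += nbytes

print(flows)  # one merged session carrying both directions' byte counts
```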

That wraps up this post…I'll be looking at doing a follow-up post on the Network Insight user interface and what information about network traffic, flows and routing can be viewed and analysed, as well as taking a look at the surprisingly good VM, host and cluster level metrics.

References:

http://www.arkin.net/

https://www.vmware.com/products/vrealize-network-insight.html

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vmware-vrealize-network-insight-faq.pdf

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vmware-vrealize-network-insight-user-guide.pdf

NSX Bytes: vCloud Director Can’t Deploy NSX Edges

Over the weekend I was tasked with the recovery of a #NestedESXi lab that had vCloud Director and NSX-v components as part of the platform. Rather than being a straightforward restore from the Veeam backup, I also needed to downgrade the NSX-v version from 6.2.4 to 6.1.4 for testing purposes. That process was relatively straightforward and involved essentially working backwards in terms of installing and configuring NSX, removing all the components from vCenter and the ESXi hosts.

To complete the NSX-v downgrade I deployed a new 6.1.4 appliance, connected it back up to vCenter, configured the hosts, set up VXLAN and the transport components, and tested NSX Edge deployments through the vCenter Web Client. However, when it came time to test Edge deployments from vCloud Director, I kept getting the error shown below.

Checking through the NSX Manager logs there was no reference to any API call hitting the endpoint, as suggested by the error detail above. Moving over to the vCloud Director cells I was able to trace the error message in the log folder…eventually seeing the error below in the vcloud-container-info.log file.

As a test I hit the API endpoint referenced in the error message from a browser and got the same result.

This got me thinking that the error was either DNS related or permission related. After confirming that the vCloud cells were resolving the NSX Manager hostname correctly, I looked at permissions as the cause of the 403 error, as the error suggested. vCloud Director was configured to use the service.vcloud service account to connect to the previous NSX/vShield Manager, and it dawned on me that I hadn't set up user rights in the Web Client under Networking & Security. Under the Users section of the Manage tab, the service account used by vCloud Director wasn't configured and needed to be added. After adding the user I retried the vCD job and the Edge deployed successfully.

While I was in this menu I thought I'd test what level of NSX user the service account requires in order to execute operations against vCloud Director and NSX. As shown below, anything but NSX Administrator or Enterprise Administrator triggered a "VSM response error (254). User is not authorized to access object" error.

At the very least, to deploy Edges the service account needs to be an NSX Administrator…the Auditor and Security Administrator levels are not enough to perform the required operations. More importantly, don't forget to add the service account configured in vCloud Director to the NSX Manager instance, otherwise you won't be able to have vCloud Director deploy Edges using NSX-v.
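
A quick way to test a service account's access without bouncing through vCD is to hit a read-only NSX API endpoint directly, such as the transport zone listing…a 403 here corresponds to the "VSM response error (254)" that vCloud Director surfaces. The hostname and credentials below are lab assumptions.

```python
import requests

requests.packages.urllib3.disable_warnings()  # lab only - self-signed cert

# Probe the NSX-v API as the vCD service account; /api/2.0/vdn/scopes
# lists transport zones. 200 means the account is authorised, while a
# 403 means its NSX role isn't sufficient for the object.
resp = requests.get("https://nsx-manager.lab.local/api/2.0/vdn/scopes",
                    auth=("service.vcloud", "VMware1!"), verify=False)
print(resp.status_code)
```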
