Tag Archives: Networking

Released: Veeam PN v2…Making VPNs Simple, Reliable and Scalable

Networking has always been one of the most complex parts of any IT platform, and when it comes to connecting remote sites, branch offices or extending on-premises networks to the cloud, that complexity has traditionally been high. There has also always been a significant cost associated with connecting sites, both from a hardware and a software point of view, along with the man hours needed to ensure things are set up correctly and continue to work. On top of that, security and performance are important factors in any networking solution.

Simplifying Networking with Veeam

At VeeamON in 2017, we announced the release candidate for Veeam Powered Network (Veeam PN), which in combination with our Restore to Azure functionality created a new solution to ease the complexities of extending an on-premises network to an Azure network, ensuring connectivity during restore scenarios. In December of that year, Veeam PN became generally available as a FREE solution.

What Veeam PN does well is present a simple and intuitive web-based user interface for the setup and configuration of site-to-site and point-to-site VPNs. Moving away from the intended use case, Veeam PN became popular in the IT enthusiast and home lab worlds as a simple and reliable way to remain connected while on the road, or to easily mesh together networks that were spread across disparate platforms.

By utilizing OpenVPN under the surface and automating and orchestrating the setup of site-to-site and point-to-site networks, we leveraged a mature open source tool that offered a level of reliability and performance suited to most use cases. However, we didn’t want to stop there, and looked at ways to continue enhancing Veeam PN to make it more useful for IT organizations while increasing underlying performance to maximize the potential use cases.

Introducing Veeam Powered Network v2 featuring WireGuard®

With the release of Veeam PN v2, we have enhanced what is possible for site-to-site connectivity by incorporating WireGuard into the solution (replacing OpenVPN for site-to-site) as well as improving usability. We have also added DNS support for site-to-site connectivity, making it easier to connect to remote devices by name.

WireGuard has replaced OpenVPN for site-to-site connectivity in Veeam PN v2 due to its rise in the open source world as a new standard in VPN technologies, offering a higher degree of security through modern cryptography while operating more efficiently, leading to increased performance and security. It achieves this by working in the kernel and by using far fewer lines of code (roughly 4,000 compared to some 600,000 for OpenVPN), and it offers greater reliability when connecting hundreds of sites, therefore increasing scalability.
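Veeam PN hides all of this behind its web console, but it helps to see just how little configuration WireGuard itself needs. The sketch below is illustrative only (the keys, interface names, addresses and subnets are placeholders, not anything Veeam PN generates) and shows what a bare site-to-site peering boils down to on a Linux box with the WireGuard tools installed.

```bash
# Illustrative placeholders throughout: keys, names, addresses and subnets are examples only.
# Generate a key pair for this site
wg genkey | tee /etc/wireguard/site-a.key | wg pubkey > /etc/wireguard/site-a.pub

# Minimal site-to-site interface definition
cat <<'EOF' > /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <contents of site-a.key>
Address    = 10.255.0.1/24
ListenPort = 51820

[Peer]
PublicKey  = <public key of the remote site>
Endpoint   = remote-site.example.com:51820
AllowedIPs = 192.168.10.0/24        # remote subnet reachable through this peer
PersistentKeepalive = 25
EOF

# Bring the tunnel up and check the peer status
wg-quick up wg0
wg show
```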

For a deeper look at why we chose WireGuard, have a read of my official veeam.com blog. The story is very compelling!

Increased Security and Performance

By incorporating WireGuard into Veeam PN we have further simplified WireGuard’s already simple setup, allowing Veeam PN users to consume it for site-to-site connectivity even faster via the Veeam PN web console. Security is always a concern with any VPN, and WireGuard takes a more streamlined approach here by relying on crypto versioning to deal with cryptographic attacks: in a nutshell, it is easier to move through versions of the cryptographic primitives than to rely on client-server negotiation of cipher types and key lengths.

Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN. This means that Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x depending on CPU configuration), which opens up the use cases to more than just basic remote office or home lab scenarios. Veeam PN can now be considered as a way to connect multiple sites together and sustain transfers of hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.
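If you want to verify that kind of throughput on your own links, a simple way is to run iperf3 across the tunnel; the address below is a placeholder for the remote site's tunnel IP, and your numbers will obviously depend on CPU, MTU and the WAN in between.

```bash
# 10.255.0.2 is a placeholder for the remote end of the tunnel.
# On the remote site, start a listener:
iperf3 -s

# From the local site, push traffic across the tunnel for 30 seconds:
iperf3 -c 10.255.0.2 -t 30
```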

Other Enhancements

The addition of WireGuard is easily the biggest enhancement over Veeam PN v1, however there are a number of other enhancements listed below:

  • DNS forwarding and configuring to resolve FQDNs in connected sites.
  • New deployment process report.
  • Microsoft Azure integration enhancements.
  • Easy manual product deployment.

Conclusion

Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process around the configuration, creation and management of site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications due to the enhancements that WireGuard offers.

Quick Post – Installing the WireGuard® Client on macOS

For those that have been monitoring my Twitter posts over the past month or so, I’ve been hinting at some upcoming news around WireGuard and the research I’ve been doing to get to know what makes it tick. In a nutshell, WireGuard is a VPN protocol similar to OpenVPN or IPsec, but modern and more streamlined.

WireGuard is an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. It intends to be considerably more performant than OpenVPN. WireGuard is designed as a general purpose VPN for running on embedded interfaces and super computers alike, fit for many different circumstances. Initially released for the Linux kernel, it is now cross-platform and widely deployable. It is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry.

This specific post isn’t about the installation and configuration of WireGuard in the context of a VPN server (stand by for some news about that over the next couple of days), but a quick look at how to install the WireGuard Toolkit on a macOS system. Unlike OpenVPN, which has clients for almost any platform you can think of, WireGuard is still in its infancy when it comes to stable clients.

Even the official Installation page states that a lot of the steps and clients involved are still in the experimental stages. For me, on my MBP running Mojave 10.14.2, I was having issues installing the Toolkit from the App Store.

It wouldn’t install the client, full stop. I’m not sure why exactly, but rather than troubleshoot, I decided to go down the tried and tested path of using Homebrew to install it. Below are the very quick and easy steps to install from the Terminal.
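The original terminal screenshots aren't reproduced here, but assuming Homebrew is already installed, the install boils down to something like the following (wireguard-tools is the Homebrew formula that provides the wg and wg-quick utilities):

```bash
# Install the WireGuard command line tools via Homebrew
brew install wireguard-tools

# Confirm the tools are on the path
wg --version
which wg-quick
```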

Once installed, you can start the desktop tray application by searching for WireGuard. The WireGuard Toolkit icon will appear in the tray as shown below.

From here you can manage the tunnels manually or import a configuration file obtained from a WireGuard server.

And that’s it… the client has a similar look and feel to the Tunnelblick OpenVPN client I have been using to connect to my Veeam PN network while I am on the road… maybe there is a clue in that as to why I have been looking at WireGuard… or maybe not.

Either way, the easiest way to install the WireGuard Toolkit client on macOS is with Homebrew as shown above… quick, simple and no fuss!

WireGuard is a registered trademark of Jason A. Donenfeld.

References:

https://www.wireguard.com

https://www.stavros.io/posts/how-to-configure-wireguard

NSX Bytes – What’s New in NSX-T 2.4

A little over two years ago, in February of 2017, VMware released NSX-T 2.0 and with it came a variety of updates that looked to continue to push NSX-T beyond NSX-v while catching up in the areas where NSX-v was ahead. The NSBU has had big plans for NSX beyond vSphere for as long as I can remember, and during the NSX vExpert session we saw how this is becoming more of a reality with NSX-T 2.4. NSX-T is targeted at more cloud native workloads, which also leads to a more devops focused marketing effort on VMware’s end.

NSX-T’s main drivers relate to new data centre and cloud architectures with more heterogeneity, driving a different set of requirements to those of vSphere-only environments: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors.

What’s new in NSX-T 2.4:

[Update] – The official Release Notes for NSX-T 2.4 have been released and can be found here, as mentioned by Anthony Burke.

I only touch on the main features below… this is a huge release and I don’t think I’ve seen a larger set of release notes from VMware. There are also a lot of Resolved Issues in the release, which are worth a look for those who have already deployed NSX-T in anger. [/Update]

While there are a heap of new features in NSX-T 2.4, for me one of the standout enhancements is the migration options that now exist to take NSX-v platforms and migrate them to NSX-T. While there will be ongoing support for both platforms, and in my opinion NSX-v still holds court in more traditional scenarios, there is now clear direction on the migration options.

In terms of the full list of what’s new:

  • Policy Management
    • Simplified UI with rich visualisations
    • Declarative Policy API to configure networking, security and services
  • Advanced Network Services
    • IPv6 (L2, L3, BGP, FW)
    • ENS Support for Edge and DFW
    • VPN (L2, L3)
    • BGP Enhancements (allow-as in, multi-path-asn relax, iBGP support, Inter-SR routing)
  • Intrinsic Security
    • Identity Based FW
    • FQDN/URL whitelisting for DFW
    • L7 based application signatures for DFW
    • DFW operational enhancements
  • Cloud and Container Updates
    • NSX Containers (Scale, CentOS support, NCP 2.4 updates)
    • NSX Cloud (Shared NSX gateway placement in Transit VPC/VNET, VPN, N/S Service Insertion, Hybrid Overlay support, Horizon Cloud on Azure integration)
  • Platform Enhancements
    • Converged NSX Manager appliance with 3 node clustering support
    • Profile based installs, Reboot-less maintenance mode upgrades, in-place mode upgrades for vSphere Compute Clusters, n-VDS visualization, Traceflow support for centralized services like Edge Firewall, NAT, LB, VPN
    • v2T Migration: In-built UI wizards for “vDS to N-vDS” as well as “NSX-v to NSX-T” in-place migrations
    • Edge Platform: Proxy ARP support, Bare Metal: Multi-TEP support, In-band management, 25G Intel NIC support

Infrastructure as Code and NSX-T:

As mentioned in the introduction, VMware is targeting cloud native and devops with NSX-T, and there is a big push towards being able to deploy and consume networking services across multiple platforms with multiple tools via the NSX API. At its heart, we see here the core of what was Nicira back in the day. NSX (even NSX-v) has always been underpinned by APIs and, as you can see below, the idea of consuming those APIs with Infrastructure as Code, no matter what the tool, is central to NSX-T’s appeal.
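As a rough illustration of what that declarative consumption looks like, the sketch below pushes a desired-state segment definition at the 2.4 Policy API with nothing more than curl. The manager hostname, credentials and segment details are placeholders, and the exact resource paths and payload fields should be checked against the NSX-T Policy API documentation for your release.

```bash
# Placeholders throughout: manager address, credentials and segment values are examples only.
NSX_MGR="nsx-mgr.lab.local"

# Declare (create or update) a segment by describing the desired state
curl -k -u admin:'VMware1!' \
  -X PATCH "https://${NSX_MGR}/policy/api/v1/infra/segments/web-segment" \
  -H "Content-Type: application/json" \
  -d '{
        "display_name": "web-segment",
        "subnets": [ { "gateway_address": "172.16.10.1/24" } ]
      }'

# Read back what was realized
curl -k -u admin:'VMware1!' \
  "https://${NSX_MGR}/policy/api/v1/infra/segments/web-segment"
```

The same intent could just as easily come from Terraform, Ansible or PowerCLI; the point is that the API, not the UI, is the primary interface.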

Conclusion:

It’s time to get into NSX-T! Lots of people who work in and around the NSBU have been preaching this for the last three to four years, but it’s now apparent that this is the way of the future and that anyone working on virtualization and cloud platforms needs to get familiar with NSX-T. There has been no better time to set it up in the lab and get things rolling.

For a more in depth look at the 2.4 release, head to the official launch blog post here.

References:

vExpert NSX Briefing

https://blogs.vmware.com/networkvirtualization/2019/02/introducing-nsx-t-2-4-a-landmark-release-in-the-history-of-nsx.html/

Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking for interactions between a VMware Cloud on AWS SDDC and an Amazon VPC there is a little bit to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, if you want to connect to S3, the default configuration is to go through the Amazon ENI (Elastic Network Interface), which means that unless things are configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on Networking and Security Groups when working on VMware Cloud on AWS, which are worth a read to get a deeper understanding of VMC to AWS networking.

There is a way to change this behaviour to make connectivity to Amazon S3 connect via the SDDCs Internet Gateway. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low-latency connection between the VPC and the SDDC that also provides free egress. In the case of Amazon S3 and the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI the following needs to be actioned.

Create Endpoint:

The first step is to go into the AWS Console, go to the VPC that’s connected to the VMC service and create a new Endpoint for S3 as shown below, making sure you select the correct Route Table.
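If you prefer the CLI to the console, the same gateway endpoint can be created with the AWS CLI along the lines of the following; the region, VPC ID and route table ID are placeholders for your own values.

```bash
# Placeholders: substitute your own region, VPC ID and the route table used by the connected VPC.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0123456789abcdef0
```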

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic from the logical network or networks. It’s a basic HTTPS inbound rule where the source is the SDDC network or networks you want access from.
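The equivalent CLI call is a single ingress rule; the security group ID and SDDC CIDR below are examples only.

```bash
# Placeholders: use your own security group ID and SDDC logical network CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 10.10.20.0/24
```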

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks you want to access Amazon S3 from.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

Deploying Veeam Powered Network into an AWS VPC

Veeam PN is a very cool product that has been GA for about four months now. Initially we combined the free product with Veeam Direct Restore to Microsoft Azure to create Veeam Recovery to Microsoft Azure. Of late there has been a push to get Veeam PN out into the community as a standalone product that’s capable of simplifying the orchestration of site-to-site and point-to-site VPNs.

I’ve written a few posts on some of the use cases for Veeam PN as a standalone product. This post will focus on getting Veeam PN installed into an AWS VPC to be used as the VPN gateway. Given that AWS has VPN solutions built in, why would you look to use Veeam PN? The answer to that is one of the core reasons why I believe Veeam PN is a solid networking tool… the simplicity of the setup and the ease of use for those looking to connect or extend on-premises or cloud networks quickly and efficiently.

Overview of Use Case and Solution:

My main use case for wanting to extend the AWS VPC network into an existing Veeam PN Hub connected to my homelab and Veeam Product Strategy lab was to test out using an EC2 instance as a remote Veeam Linux repository. Looking at the diagram below you can see the basics of the design, with the blue dotted line representing the traffic flow.


The traffic flows between the Linux repository EC2 instance and the Veeam Backup & Replication server in my homelab through the Veeam PN EC2 instance, that is, via the Veeam PN Hub that lives in Azure and the Veeam PN Site Gateway in the homelab.

The configuration for this includes the following:

  • A virtual private cloud with a public subnet with a size /24 IPv4 CIDR (10.0.100.0/24). The public subnet is associated with the main route table that routes to the Internet gateway.
  • An Internet gateway that connects the VPC to the Internet and to other AWS products.
  • The VPN connection between the VPC network and the Homelab network. The VPN connection consists of a Veeam PN Site Gateway located in the AWS VPC and the Veeam PN Hub and Site Gateway located at the Homelab side of the VPN connection.
  • Instances in the External subnet with Elastic IP addresses that enable them to be reached from the Internet for management.
  • The main route table associated with the public subnet. The route table contains an entry that enables instances in the subnet to communicate with other instances in the VPC, and two entries that enable instances in the subnet to communicate with the remote subnets (172.17.0.0/24 and 10.0.30.0/24).

AWS has a lot of knobs that need adjusting, even for what would normally be assumed functionality. With that in mind, I had to work out which knobs to turn to make things work as expected and get the traffic flowing between sites.

Veeam PN Site Gateway Configuration:

To get a Veeam PN instance working within AWS you need to deploy an Ubuntu 16.04 LTS instance from the Instance Wizard or Marketplace into the VPC (see below for specific configuration items). In this scenario a t2.small instance works well, with a 16GB SSD hard drive as provided by the instance wizard. To install the Veeam PN services onto the EC2 instance, follow my previous blog post on Installing Veeam Powered Network Direct from a Linux Repo.

Once that is deployed, along with the EC2 instance that I am using as a Veeam Linux repository, I have two EC2 instances in the AWS Console that are part of the VPC.

From here you can configure the Veeam PN instance as a Site Gateway. This can be done via the exposed HTTP/S Web Console of the deployed VM. First you need to create a new Entire Site Client from the HUB Veeam PN Web Console with the network address of the VPC as shown below.

Once the configuration file is imported into the AWS Veeam PN instance it should connect up automatically.

Jumping on the Veeam PN instance to view the routing table, you can see what networks the Veeam HUB has connected to.

The last two entries there are referenced in the design diagram and are the subnets that have the static routes configured in the VPC. You can see the path the traffic takes, which is reflected in the diagram as well.

Looking at the same info from the Linux Repository instance you can see standard routing for a locally connected server without any specific routes to the 172.17.0.0/24 or 10.0.30.0/24 subnets.

Notice though that the traffic path to get to the 172.17.0.0/24 subnet now goes through an extra hop, which is the Veeam PN instance.
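For anyone wanting to check the same thing in their own setup, the checks are just standard Linux routing commands; the destination address below is a placeholder for a host on the remote 172.17.0.0/24 subnet.

```bash
# On the Veeam PN instance, confirm the remote subnets are present in the routing table:
ip route

# From the Linux repository instance, confirm the extra hop via the Veeam PN instance
# (172.17.0.10 is a placeholder for a host on the remote subnet):
traceroute 172.17.0.10
```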

Amazon VPC Configuration:

For the most part this was a straightforward VPC creation with an IPv4 CIDR block of 10.0.100.0/24 configured. However, to make the routing work and keep the traffic flowing as desired, you need to tweak some settings. After the initial deployment of the Veeam PN EC2 instance I had some issues resolving both forward and reverse DNS entries, which meant I couldn’t update the servers or install anything from the Veeam Linux software repositories.

By default there are a couple of VPC options turned off that need to be enabled to make all of that work.

Enable both DNS Resolution and DNS Hostnames via the menu options highlighted above.

For the Network ACLs, the default Allow ALL/ALL for inbound and outbound can be left as is. In terms of Security Groups, I created a new one and added both the Veeam PN and Linux repository instances into the group. Inbound, we are catering for SSH access to connect to and configure the instances externally, and as shown below there are also rules to allow HTTP and HTTPS traffic to access the Veeam PN Web Console.

These, along with the Network ACLs, are pretty open rules, so feel free to get more granular if you like.

From the Route Table menu, I added static routes for the remote subnets so that anything on the 10.0.100.0/24 network trying to get to 172.17.0.0/24 or 10.0.30.0/24 will use the Veeam PN EC2 instance as its next hop.
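The same routes can be added with the AWS CLI; the route table ID and instance ID below are placeholders for your own route table and the Veeam PN EC2 instance.

```bash
# Placeholders: use your own route table ID and the Veeam PN instance ID.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 172.17.0.0/24 --instance-id i-0123456789abcdef0

aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.0.30.0/24 --instance-id i-0123456789abcdef0
```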

EC2 Configuration Gotcha:

A big shout out to James Kilby, who helped me diagnose an initial static routing issue by discovering that you need to adjust the Source/Destination Check attribute, which controls whether source/destination checking is enabled on the instance. This can be done either from the EC2 instance right-click menu, or from the Network Interfaces menu as shown below.

Disabling this attribute enables an instance to handle network traffic that isn’t specifically destined for the instance. For example, instances running services such as network address translation, routing, or a firewall should set this value to disabled. The default value is enabled.
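For reference, the console steps above map to a single CLI call; the instance ID is a placeholder.

```bash
# Placeholder instance ID; disables the source/destination check so the instance can route traffic.
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```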

Conclusion:

The end result of all that was the ability to configure my Veeam Backup & Replication server in my homelab to add the EC2 Veeam Linux instance as a repository, which allowed me to back up to AWS from home through the Veeam PN site-to-site connectivity.

Bear in mind this is a POC, however the ability to consider Veeam PN as another option for extending AWS VPCs to other networks in a quick and easy fashion should make you think of the possibilities. Once the VPC/EC2 knobs were turned and the correct settings put in place, the end to end deployment, setup and connection into the extended Veeam PN Hub network took no more than 10 minutes.

That is the true power of the Veeam Powered Network!

References:

https://docs.aws.amazon.com/glue/latest/dg/set-up-vpc-dns.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#change_source_dest_check

NSX-v 6.4.0 Released! What’s in it for Service Providers

This week VMware released NSX-v 6.4.0 (Build 7564187), and with it comes a new UI plug-in for the vSphere Client (HTML5), which includes some new dashboards and a new Update Lifecycle Manager built right into the Web Client. Reading through the release notes, for me the biggest improvements seem to be around NSX Edges and Edge services. These are central to service providers who offer NSX services with vCloud Director or otherwise via their service offerings. There are also, as usual, a number of Resolved Issues which can be skimmed through on the release notes page.

What’s New:

As mentioned above there is a lot to get through, and there are a lot of new enhancements and features packed into this release. I’ve gone through and picked out the major ones as they might pertain to service providers running NSX on their platforms. I’ve basically followed the sections in the Release Notes but summarised them for those that don’t want to trawl through the page. At the end of each section I’ve commented on the benefits of the improvements.

Security Services

  • Identity Firewall now supports user sessions on remote desktop and application servers (RDSH) sharing a single IP address, and a new “fast-path” architecture improves processing speed of IDFW rules. Active Directory integration now allows selective synchronization for faster AD updates.
  • Distributed Firewall adds layer-7 application-based context for flow control and micro-segmentation planning.
  • Distributed Firewall rules can now be created as stateless rules at a per DFW section level.
  • Distributed Firewall supports VM IP realization in the hypervisor. This allows users to verify if a particular VM IP is part of a security group/cluster/resource pool/host.

These security features listed above will make a lot of people happy and improve the end user experience, and the DFW VM IP realization check is a small but important feature.

NSX User Interface

  • Support for vSphere Client (HTML5): Introduces VMware NSX UI Plug-in for vSphere Client (HTML5).
  • HTML5 Compatibility with vSphere Web Client (Flash): NSX functionality developed in HTML5 (for example, Dashboard) remains compatible with both vSphere Client and vSphere Web Client, offering seamless experience for users who are unable to transition immediately to vSphere Client.
  • Improved Navigation Menu: Reduced number of clicks to access key functionality, such as Grouping Objects, Tags, Exclusion List and System Configuration.

It’s great to see NSX jump over to the HTML5 Web Client, and even though it’s a small first step it’s a great preview of what’s to come in future releases. The fact that it goes both ways, meaning older Flash clients still have the features, is important as well.

Operations and Troubleshooting

  • Upgrade Coordinator provides a single portal to simplify the planning and execution of an NSX upgrade. Upgrade Coordinator provides a complete system view of all NSX components with current and target versions, upgrade progress meters, one-click or custom upgrade plans and pre- and post-checks.
  • A new improved HTML5 dashboard is available along with many new components. Dashboard is now your default homepage. You can also customize existing system-defined widgets, and can create your own custom widgets through API.
  • New System Scale dashboard collects information about the current system scale and displays the configuration maximums for the supported scale parameters. Warnings and alerts can also be configured when limits are approached or exceeded.
  • A Central CLI for logical switch, logical router and edge distributed firewall reduces troubleshooting time with centralized access to distributed network functions.
  • New Support Bundle tab is available to help you collect the support bundle through UI on a single click. You can now collect the support bundle data for NSX components like NSX Manager, hosts, edges, and controllers.
  • New Packet Capture tab is available to capture packets through UI.
  • Multi-syslog support for up to 5 syslog servers.
  • API improvements including JSON support. NSX now offers the choice of JSON or XML for data formats. XML remains the default for backwards compatibility.

There is a lot going on here, but for me it continues to solidify the vision that Martin Casado had around Nicira: being efficient in software to get a deep view of what’s happened and what’s happening in your network. The System Scale dashboard (shown below) is also a great way to get an understanding of how loaded an NSX environment is… one of my favourite new features.

NSX Edge Enhancements

  • Enhancement to Edge load balancer health check. Three new health check monitors have been added: DNS, LDAP, and SQL.
  • You can now filter routes for redistribution based on LE/GE in prefix length in the destination IP.
  • Support for BGP and static routing over GRE tunnels.
  • NAT64 provides IPv6 to IPv4 translation.
  • Faster failover of edge routing services.
  • Routing events now generate system events in NSX Manager.
  • Improvements to L3 VPN performance and resiliency.

I’ve highlighted this in red because the improvements above continue to build on the very strong foundation that is the NSX Edge Gateway, which still carries vShield DNA. Though I’ve been away from the day-to-day of a service provider for almost a year and a half, I recognise that these new features create a more enterprise class of edge device. The little things added will make network engineers happy.

Conclusion:

Overall this looks like a strong release for NSX-v, and it’s good to see that there is still a ton of development going into the platform. Service providers have the most to gain from this release, which is a good thing! The only thing I do hope is that, as a 6.x.0 release, it’s stable and without any major bugs… the history of these first major release builds hasn’t been great, but hopefully that’s a thing of the past with 6.4.0.

EDIT: Just to clarify after a couple of comments, it seems that for the moment vCD 9.0 and 8.20 are not compatible with NSX-v 6.4.0 just yet. More news when it comes to hand.

Resources:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/rn/releasenotes_nsx_vsphere_640.html

Released: NSX-v 6.3.5 with New Features and Fixes

Last week VMware released NSX-v 6.3.5 (Build 7119875), which contains a few new features and addresses a number of bugs from previous releases. Going through the release notes there are a lot of known issues to be aware of, and more than a few apply to service providers… specifically, there are a lot around Logical and Edge routing functions. The other interesting point to highlight about this release is that this is apparently the same build that runs on VMware Cloud on AWS instances, as mentioned by Ray Budavari.

The new features in this build are:

  • For vCenter 6.5 and later, Guest Introspection VMs, on deployment, will be named Guest Introspection (XX.XX.XX.XX), where XX.XX.XX.XX is the IPv4 address of the host on which the GI machine resides. This occurs during the initial deployment of GI.
  • The Guest Introspection service VM will now ignore network events sent by guest VMs unless Identity Firewall or Endpoint Monitoring is enabled
  • You can also modify the threshold for CPU and memory usage system events with this API: PUT /api/2.0/endpointsecurity/usvmstats/usvmhealththresholds
  • Serviceability enhancements to L2 VPN including
    • Changing and/or enabling logging on the fly, without a process restart
    • Enhanced logging
    • Tunnel state and statistics
    • CLI enhancements
    • Events for tunnel status changes
  • Forwarded syslog messages now include additional details previously only visible on the vSphere Web Client
  • Host prep now has troubleshooting enhancements, including additional information for “not ready” errors

That last new feature above is shown below… you can see the EAM Status message just below the NSX Manager IP, which is a nice touch given the issues that can happen if EAM is down.

If you click on the Not Ready Installation Status you now get a more detailed report of what could be wrong and suggestions on how to resolve it.

Important Fixes:

  • VMs migrated from 6.0.x can cause host PSOD: when upgrading a cluster from 6.0.x to 6.2.3-6.2.8 or 6.3.x, the exported VM state can be corrupted and cause the receiving host to PSOD.
  • “Upgrade Available” link not shown if cluster has an alarm: users are not able to push the new service spec to EAM because the link is missing, and the service will not be upgraded.
  • NSX Manager crashes with high NSX Manager CPU: NSX Manager hits an OOM (out of memory) error and continuously restarts.
  • NSX Controller memory increases with hardware VTEP configuration, causing high CPU usage: a controller process memory increase is seen with hardware VTEP configurations running for a few days. The memory increase causes high CPU usage that lasts for some time (minutes) while the controller recovers the memory. During this time the data path is affected.
  • Translated IPs are not getting added to vNIC filters, causing Distributed Firewall to drop traffic: when new VMs are deployed, the vNIC filters do not get updated with the right set of IPs, causing Distributed Firewall to block the traffic.

Those with the correct entitlements can download NSX-v 6.3.5 here.

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_635.html

Veeam Powered Network: Quick Video Walkthrough

Earlier this year at VeeamON we announced Veeam PN as part of the Restore to Microsoft Azure product. While Veeam PN is still in RC, I’ve written a series of posts around how Veeam PN can be used for a number of different use cases (see list below), and at VMworld 2017 I delivered a vBrownBag TechTalk on Veeam Powered Network which goes through an overview of what it is, how it works and an example of how easy it is to set up.

As mentioned, I’ve blogged about the three different use cases talked about in the presentation:

Click on the links to visit the blog posts that go through each scenario, and watch out for news around the GA of Veeam Powered Network happening shortly. Until then, download or deploy the RC from the Veeam.com website or Azure Marketplace and give it a try. Again, it’s free, simple, powerful and a great way to connect or extend networks securely with minimal fuss.

NSX Bytes: NSX-v 6.3.3 Released – Upgrade Notes and Enhancements

Last week VMware released NSX-v 6.3.3 (Build 6276725), and with it comes a new operating system for the NSX Controllers. Once upgraded, the new controllers will be powered by Photon OS, which is making its way into more and more of VMware’s appliances. There are a few other new bits in this release, but more importantly a number of Resolved Issues. For those running homelabs with one NSX Controller there are some important upgrade notes to be aware of before kicking off… I’ll go into those below.

Compatibility:

Before moving to the upgrade, there are some important notes around interoperability and supported ESXi versions, as explained in this VMware KB. The minimum supported version of ESXi running with NSX-v 6.3.3 is shown below:

  • NSX-v 6.3.3 installed in a vSphere 5.5 environment requires a minimum version of ESXi 5.5 GA
  • NSX-v 6.3.3 installed in a vSphere 6.0 environment requires a minimum version of ESXi 6.0 Update 2
  • NSX-v 6.3.3 installed in a vSphere 6.5 environment requires a minimum version of ESXi 6.5a

If NSX 6.3.3 is installed on an earlier version of 5.5/6.0 ESXi, the netcpa service will fail to start, preventing communication between ESXi hosts and the NSX Controllers.

In terms of upgrading from previous versions of NSX-v, you can see that the upgrade path does have some stoppers. Below is the interoperability matrix that includes vCloud Director 8.20 which, at the moment, is not supported with NSX-v 6.3.3… I expect that to change over the next couple of weeks.

Upgrading to NSX-v 6.3.3:


As mentioned, there are things to look out for during and after the upgrade from previous builds of NSX-v. There are detailed upgrade notes in the release notes so, as always, make sure to read those as well, but below is a brief walkthrough of the upgrade process I conducted in one of my Nested ESXi labs.
Once the NSX Manager has been upgraded you should have the following in your Summary tab:

Once the NSX Manager has been upgraded you should also restart the vCenter Web Client to ensure any lingering parts of the previous version are removed. Log in to the Web Client and click through to Networking & Security -> Installation and then the Management tab, where you will see Upgrade Available.

IMPORTANT NOTE: The upgrade notes state that you need to have a minimum of three NSX Controllers, which I’d say is linked to the fact that the underlying OS of the Controllers has been shifted to Photon OS. This is likely to impact anyone running NSX in a Nested ESXi or homelab environment, as generally only one controller was deployed to preserve resources. Once you click on upgrade you will get a special upgrade warning before committing to the upgrade, as shown below:
  • The NSX Controller cluster must contain three controller nodes to upgrade to NSX 6.3.3. If it has fewer than three controllers, you must add controllers before starting the upgrade
  • When you upgrade to NSX-v 6.3.3, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses

There is also a slight increase in the size of the storage for the controllers, from 20GB to 28GB. Once upgraded, the NSX Controllers will be at version 6.3.6235594.

The last major step is to upgrade the host components from the Host Preparation tab. On vSphere 6.0 and above, once you have upgraded to NSX 6.3.x, future NSX VIB changes no longer trigger a reboot… only maintenance mode is required to complete the VIB change. In NSX 6.3.3 there is a change to the NSX VIB names on ESXi 6.0 and later, where the esx-vxlan and esx-vsip VIBs have been merged and replaced with esx-nsxv as shown below.

VIB names on ESXi 5.5 remain the same.
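A quick way to confirm which VIBs a host ends up with after host preparation is to check from an SSH session on the host; on upgraded 6.0/6.5 hosts you would expect to see only esx-nsxv, while 5.5 hosts keep the old names.

```bash
# Run on an ESXi host after host preparation; lists any NSX-related VIBs installed.
esxcli software vib list | grep -E 'esx-nsxv|esx-vxlan|esx-vsip'
```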

References:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/rn/releasenotes_nsx_vsphere_633.html

https://kb.vmware.com/kb/2151267


Cloud to Cloud to Cloud Networking with Veeam Powered Network

I’ve written a couple of posts on how Veeam Powered Network can make accessing your homelab easy with its straightforward approach to creating and connecting site-to-site and point-to-site VPN connections. For a refresher on the use cases that I’ve gone through: I had a requirement where I needed access to my homelab/office machines while on the road, and to achieve this I went through two scenarios on how you can deploy and configure Veeam PN.

In this blog post I’m going to run through a very real world solution with Veeam PN, where it will be used to easily connect geographically disparate cloud hosting zones. One of the most common questions I used to receive from sales and customers in my previous roles with service providers was how to easily connect two sites so that some form of application high availability could be achieved, or even just to allow access to applications or services across sites.

Taking that further… how is this achieved in the most cost effective and operationally efficient way? There are obviously solutions available today that achieve connectivity between multiple sites, whether that be via some sort of MPLS, IPsec, L2VPN or stretched network solution. What Veeam PN achieves is a simple to configure, cost effective (remember it’s free) way to connect one to one, or one to many, cloud zones with little to no overhead.

Cloud to Cloud to Cloud Veeam PN Appliance Deployment Model

In this scenario I want each vCloud Director zone to have access to the other zones and be always connected. I also want to be able to connect in via the OpenVPN endpoint client and have access to all zones remotely. All zones will be routed through the Veeam PN Hub server deployed into Azure via the Azure Marketplace. To go over the Veeam PN deployment process, read my first post and also visit this Veeam KB that describes where to get the OVA and how to deploy and configure the appliance for first use.

Components

  • Veeam PN Hub Appliance x 1 (Azure)
  • Veeam PN Site Gateway x 3 (One Per Zettagrid vCD Zone)
  • OpenVPN Client (For remote connectivity)

Networking Overview and Requirements

  • Veeam PN Hub Appliance – Incoming Ports TCP/UDP 1194, 6179 and TCP 443
    • Azure VNET 10.0.0.0/16
    • Azure Veeam PN Endpoint IP and DNS Record
  • Veeam PN Site Gateways – Outgoing access to at least TCP/UDP 1194
    • Perth vCD Zone 192.168.60.0/24
    • Sydney vCD Zone 192.168.70.0/24
    • Melbourne vCD Zone 192.168.80.0/24
  • OpenVPN Client – Outgoing access to at least TCP/UDP 6179

In my setup the Veeam PN Hub appliance has been deployed into Azure, mainly because that’s where I was able to test out the product initially, but also because in theory it provides a centralised, highly available location for all the site-to-site connections to terminate into. This central hub can be deployed anywhere, as long as it has HTTPS connectivity configured correctly so that you can access the web interface and start to configure your site and standalone clients.

Configuring Site Clients for Cloud Zones (site-to-site):

To configure the Veeam PN Site Gateways you need to register the sites from the Veeam PN Hub appliance. When you register a client, Veeam PN generates a configuration file that contains the VPN connection settings for that client. You must use the configuration file (downloadable as an XML) to set up the Site Gateways. Referencing the diagram at the beginning of the post, I needed to register three separate client configurations as shown below.

Once this has been completed you need to deploy a Veeam PN Site Gateway in each vCloud hosting zone… because we are dealing with an OVA, OVFTool needs to be used to upload the Veeam PN Site Gateway appliances. I’ve previously created and blogged about an OVFTool upload script using PowerShell, which can be viewed here. Each Site Gateway needs to be deployed and attached to the vCloud vORG network that you want to extend… in my case that’s the 192.168.60.0, 192.168.70.0 and 192.168.80.0 vORG networks.

Once each vCloud zone has the Site Gateway deployed and the corresponding XML configuration file added, you should see all sites connected in the Veeam PN dashboard.

At this stage we have connected each vCloud zone to the central hub appliance, which is now configured to route to each subnet. If I connect an OpenVPN client to the hub appliance, I can access all subnets and connect to systems or services in each location. Shown below is the Tunnelblick OpenVPN client connected to the hub appliance, showing the injected routes in the network settings.

You can see above that the 192.168.60.0, 192.168.70.0 and 192.168.80.0 static routes have been added and set to use the tunnel interface’s default gateway, which is on the central hub appliance.
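A quick way to confirm the same thing from the terminal on the client is to look at the routing table; the subnets below are the three vCD zone networks from my lab, so adjust the pattern for your own ranges.

```bash
# On the macOS client, check that the three zone subnets were injected by the OpenVPN connection:
netstat -rn | grep -E '192\.168\.(60|70|80)'
```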

Adding Static Routes to Cloud Zones (Cloud to Cloud to Cloud):

To complete the setup and have each vCloud zone talking to the others, we need to configure static routes on each zone’s network gateway/router so that traffic destined for the other subnets is routed through to the Site Gateway IP, on through the central hub appliance to the destination, and then back. To achieve this you just need to add static routes to the router. In my example I have added the static route to the vCloud Edge Gateway through the vCD portal, as shown below for the Melbourne zone.

Conclusion:

To summarise, these are the steps taken to set up a cloud to cloud to cloud network using Veeam PN, with its site-to-site connectivity feature providing cross site connectivity while still allowing access to systems and services via the point-to-site VPN:

  • Deploy and configure Veeam PN Hub Appliance
  • Register Cloud Sites
  • Register Endpoints
  • Deploy and configure Veeam PN Site Gateway in each vCloud Zone
  • Configure static routes in each vCloud Zone

Those five steps took me less than 30 minutes, and that included the OVA deployments as well. At the end of the day I’ve connected three disparate cloud zones at Zettagrid which all access each other through a Veeam PN Hub appliance deployed in Azure. From here there is nothing stopping me from adding more cloud zones that could be situated in AWS, IBM, Google or any other public cloud. I could even connect my home office or a remote site to the central hub to give full coverage.

The key here is that Veeam Powered Network offers a simple solution to what is traditionally a complex and costly problem. Again, this will not suit all use cases, but at its most basic functional level it would have been the answer to the cross cloud connectivity questions I mentioned at the start of the article.

Go give it a try!
