
Quick Fix: Terraform Plan Fails on Guest Customizations and VMware Tools

Last week I was looking to add the deployment of a local CentOS virtual machine to the Deploy Veeam SDDC Toolkit project so that it included the option to deploy and configure a local Linux Repository, which could then be added to the Backup & Replication server. As part of the deployment I call the Terraform vSphere Provider to clone and configure the virtual machine from a preloaded CentOS template.

As shown below, I am using the Terraform customization commands to configure the VM name and domain details, as well as the network configuration.
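
The relevant part of the plan looks something like this trimmed sketch (the names, IP addresses and referenced data sources are placeholders defined elsewhere in the plan, not the toolkit's actual values):

    resource "vsphere_virtual_machine" "repo" {
      name             = "veeam-linux-repo"
      resource_pool_id = data.vsphere_resource_pool.pool.id
      datastore_id     = data.vsphere_datastore.datastore.id
      num_cpus         = 2
      memory           = 4096
      guest_id         = data.vsphere_virtual_machine.template.guest_id

      network_interface {
        network_id = data.vsphere_network.network.id
      }

      disk {
        label = "disk0"
        size  = data.vsphere_virtual_machine.template.disks[0].size
      }

      clone {
        template_uuid = data.vsphere_virtual_machine.template.id

        # Guest customization: VM name, domain and network settings
        customize {
          linux_options {
            host_name = "veeam-linux-repo"
            domain    = "lab.local"
          }

          network_interface {
            ipv4_address = "192.168.1.50"
            ipv4_netmask = 24
          }

          ipv4_gateway = "192.168.1.1"
        }
      }
    }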

In configuring the CentOS template I did my usual install of Open VM Tools. When we applied the Terraform plan the VM was cloned without issue, but it failed at the Guest Customization step.

The error is pretty clear, and to test both the error and the fix I first tried applying the plan without any VMware Tools installed. Without VMware Tools the VM will not finish the initial deployment after the clone and is deleted by Terraform. I next installed open-vm-tools, but ended up with the same scenario of the plan failing and the VM not being deployed. For some reason the customization does not like this version of the package being deployed.

The next test was to deploy the open-vm-tools-deploypkg package as described in this VMware KB. Now the Terraform plan executed to the point of cloning the VM and setting up the desired VM hardware and virtual network port group settings, but it still failed on the custom IP and hostname components of the customization, this time with a slightly different error.

The final requirement is to pre-install the perl package onto the template, which allows the in-guest customizations to take place together with VMware Tools. Once I added that to the template, the Terraform plan succeeded without issue.
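
For reference, baking the fix into the template boils down to something like this (assuming the VMware package repository from the KB below is already configured in the guest):

    # Inside the CentOS template, before shutting down and converting back to a template
    yum install -y open-vm-tools-deploypkg perl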

References:

https://kb.vmware.com/s/article/2075048


Automated Configuration of Backup & Replication with PowerShell

As part of the Veeam Automation and Orchestration for vSphere project that Michael Cade and I worked on for VMworld 2018, we combined a number of separate projects to showcase an end to end PowerShell script that called a number of individual modules. Split into three parts, we had a Chef/Terraform module that deployed a server with Veeam Backup & Replication installed, a Terraform module that deployed and configured an AWS VPC to host a Linux Repository with a Veeam PN Sitegateway, and finally a PowerShell module that configured the Veeam server with a number of configuration items ready for first use.

The goal of the project was to release a PowerShell script that fully deployed and configured a Veeam platform on vSphere with backup repositories, vCenter server and default policy based jobs automatically configured and ready for use. This could then be adapted for customer installs, used on SDDC platforms such as VMware Cloud on AWS, or for POCs or lab use.

While we are close to releasing the final code for the project on GitHub, I thought I would branch out the last section of the code and release it separately. As I was creating this script, it became apparent that it would be useful for others to use as is, or as an example for simplifying the manual and repetitive tasks that go along with configuring Backup & Replication after installation.

Script Overview:

The PowerShell script (found here on GitHub) performs a number of configuration actions against any Veeam Backup & Replication Server as per the included functions.

All of the variables are configured in a config.json file, meaning nothing needs to be modified in the main PowerShell script. There are a number of parameters that can be called to trigger or exclude certain functions.

There are some prerequisites that need to be in place before the script can be executed…most importantly, the script needs to be run on a system where the Backup & Replication Console is installed, to allow access to the Veeam PowerShell Snap-in. From there you just need a new Veeam Backup & Replication server and a vCenter server, plus their login credentials. If you want to add a Cloud Connect Provider offering Cloud Connect Backup and/or Replication, you enter all of those details in the config.json file as well. Finally, if you want to add a Linux Repository you will need its details, plus have it configured for key based authentication.

You can combine any of the parameters listed above. For example, -ClearVBRConfig can be used to reverse a -RunVBRConfigure run that was executed first to do an end to end configure. For Cloud Connect Replication, if you want to configure and deploy an NEA there is a specific parameter for that. If you don't want to configure Cloud Connect or the Linux Repository, those parameters can be used individually or together; when they are used, the Default Backup Repository is used for the jobs that are created.
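
As a sketch of how the switches combine (the script filename here is illustrative; -RunVBRConfigure and -ClearVBRConfig are the parameters described above):

    # Full end to end configuration, driven entirely by config.json
    .\BR-Configure-Veeam.ps1 -RunVBRConfigure

    # Reverse everything the previous run configured
    .\BR-Configure-Veeam.ps1 -ClearVBRConfig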

Automating Policy Based Backup Jobs:

Part of the automation that we were keen to include was the automatic creation of default backup jobs based on vSphere Tags. The idea was to have everything in place so that, once the script had been run, VMs could be backed up simply by being assigned a vSphere Tag. From there the backup jobs protect those VMs based on the policies set in the config.json.

The corresponding jobs are all scoped to those vSphere Tags, so the jobs don't need to be modified when VMs are added…any VM assigned one of the Tags is automatically included in the matching job.
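
Under the hood, creating a Tag based job with the snap-in boils down to something like this minimal sketch (the tag, job and repository names are placeholders):

    Add-PSSnapin VeeamPSSnapin

    # Resolve the vSphere Tag and target repository
    $tag  = Find-VBRViEntity -Tags -Name "Backup-Silver"
    $repo = Get-VBRBackupRepository -Name "Default Backup Repository"

    # Job scoped to the Tag: VMs assigned the Tag are picked up automatically
    Add-VBRViBackupJob -Name "Policy - Silver" -Entity $tag -BackupRepository $repo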

Conclusion:

Once the script has been run you are left with a fully configured Backup & Replication server that’s connected to vCenter and if desired (by default) has local and Cloud Connect repositories added with a set of default policy based jobs ready to go using vSphere Tags.

There are a number of improvements that I want to implement, and I am looking for contributors on GitHub to help develop this further. At its base it is functional…but not perfect. However, it highlights the power of the automation that is possible with Veeam's PowerShell Snap-in and PowerCLI. One of the use cases for this was repeatable deployments of Veeam Backup & Replication into POCs or labs, and for those looking to stand up those environments this is a perfect companion.

Look out for the full Veeam SDDC Deploy Toolkit being released to GitHub shortly.

References:

https://github.com/anthonyspiteri/powershell/tree/master/BR-Configure-Veeam

Creating Policy Based Backup Jobs for vCloud Director Self Service Portal with Tenant Creation

For a long time Veeam has led the way in the protection of workloads running in vCloud Director. Veeam first released deep integration into vCD back in version 7 of Backup & Replication, talking directly to the vCD APIs to facilitate the backup and recovery of vCD workloads and their constructs. More recently, in version 9.5, the vCD Self Service Portal was released, which also taps into vCD for tenant authentication.

This portal leverages Enterprise Manager and allows service providers to grant their tenants self-service management of their vCD workloads. It's possible that some providers don't even know that this portal exists, let alone the value it offers. I've covered the basics of the portal here…but in this post I want to talk about how to use the Veeam APIs and PowerShell Snap-in to automatically enable a tenant, create default backup jobs based on policies, tie backup copy jobs to the default jobs for longer retention and, finally, import the jobs into the vCD Self Service Portal ready for use.

I recently worked with a service provider who requested to have previously defined service definitions for tenant backups ported to Veeam and the vCD Self Service Portal. Part of this requirement was to have tenants apply backup policies to their VMs…this included short term retention and longer term GFS based backup.

One of the current caveats with the Veeam vCD Self Service Portal is that backup copy jobs are not configurable via the web based portal. The reason for this is that it's our belief that service providers should be in control of longer term restore operations; however, some providers and their tenants still request this feature.

Translated into a working solution, the PowerShell script combines a previously released set of code by Markus Kraus, which uses the Enterprise Manager API to set up a new tenant in the vCD Self Service Portal, with a set of new functions that create default backup and backup copy jobs for vCD and then import them into the portal ready for use. The variables are controlled by a JSON file, making the script portable for Veeam Cloud and Service Providers to use as a base and build upon.

The end result is that when a tenant first logs into the vCD Self Service Portal they have jobs, dictated by the desired policies, ready for use. The backup jobs are created disabled, without a schedule set, and the scope of the default jobs is the tenant's Virtual Datacenter. If there is a corresponding backup copy job, it is tied to the backup job and ready to do its thing.
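
As a rough sketch of the job creation step (the cmdlet usage here is indicative only and the names are placeholders; the full logic, including the backup copy jobs and the Enterprise Manager import, lives in the script on GitHub):

    Add-PSSnapin VeeamPSSnapin

    # Scope the default job to the tenant's Virtual Datacenter
    $vdc  = Find-VBRvCloudEntity -Name "Tenant01-vDC"
    $repo = Get-VBRBackupRepository -Name "Tenant Repo"

    $job = Add-VBRvCloudJob -Name "Tenant01 - Bronze" -Entity $vdc -BackupRepository $repo

    # Created disabled and without a schedule; the tenant enables it via the portal
    Disable-VBRJob -Job $job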

From here the tenant can choose which policy they want to apply to their workloads, edit the desired job, change or leave the scope and add a schedule. The job name in the Backup & Replication console is modified to indicate which policy the tenant selected.

Again, if the tenant chooses a policy that requires longer term retention, the corresponding backup copy job is enabled in the Backup & Replication console…though it is not managed by the tenant.

Self service recovery is possible for the tenant through the portal as per usual, including full VM recovery and file and application item level recovery. Recovery of the longer term workloads and/or items is done by the service provider.

This is a great example of the power of the Veeam API and PowerShell Snap-in, providing a solution that offers more than what comes out of the box and enhancing the offering around the backup of vCloud Director workloads with Veeam's integration. Feel free to use it as is, or modify it and integrate it into your service offerings.

GitHub Page: https://github.com/anthonyspiteri/powershell/tree/master/vCD-Create-SelfServiceTenantandPolicyJobs

Automating the Creation of AWS VPC and Subnets for VMware Cloud on AWS

Yesterday I wrote about how to deploy a Single Host SDDC through the VMware Cloud on AWS web console. I mentioned some prerequisites that were required for the deployment to be successful. Part of that is to set up an AWS VPC with networking in place so that the VMC components can be deployed. While it's not too hard a task to perform through the AWS console, in the spirit of the work I'm doing around automation I have gotten this done via a Terraform plan.

The max lifetime for a Single Instance deployment is 30 days from creation, but the reality is most people will (or should) be using this to test the waters, and may only want to spin the SDDC up for a couple of hours a day, run some tests and then destroy it. That obviously has its disadvantages as well, the main one being that you have to start from scratch every time. Given the nature of the VMworld session around the automation and orchestration of Veeam and VMC, starting from scratch is not an issue; however, it was desirable to look for efficiencies during the re-deployment.

For those looking to save time and automate parts of the deployment beyond the AWS VPC, there are a number of PowerShell code examples and modules available that, along with the Terraform plan, reduce the time to get a new SDDC firing.

I'm using a combination of the above scripts to deploy a new SDDC once the AWS VPC has been created. The first actually deploys the SDDC through PowerShell, while the second is a module that allows some interactivity via cmdlets to do things such as export and import firewall rules.

Using Terraform to Create AWS VPC for VMware Cloud on AWS:

The Terraform plan linked here on GitHub does a few things:

  • Creates a new VPC
  • Creates a VPC Network
  • Creates three VPC subnets across different Availability Zones
  • Associates the three VPC subnets to the main route table
  • Creates desired security group rules

https://github.com/anthonyspiteri/vmc_vpc_subnet_create
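
At its core the plan boils down to something like the following sketch (region, CIDRs and AZ names are placeholders, and the security group rules are omitted for brevity):

    provider "aws" {
      region = "us-west-2"
    }

    resource "aws_vpc" "vmc" {
      cidr_block = "10.2.0.0/16"

      tags = {
        Name = "VMC-VPC"
      }
    }

    # One subnet per Availability Zone for the SDDC cross-link
    resource "aws_subnet" "vmc" {
      count             = 3
      vpc_id            = aws_vpc.vmc.id
      cidr_block        = cidrsubnet(aws_vpc.vmc.cidr_block, 8, count.index)
      availability_zone = element(["us-west-2a", "us-west-2b", "us-west-2c"], count.index)
    }

    # Associate each subnet with the VPC's main route table
    resource "aws_route_table_association" "vmc" {
      count          = 3
      subnet_id      = aws_subnet.vmc[count.index].id
      route_table_id = aws_vpc.vmc.main_route_table_id
    }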

[Note] Even for the Single Instance Node SDDC it will take about 120 minutes to deploy…so that needs to be factored into the window you have to work on the instance.

Using Terraform to Deploy and Configure a Ready to use Backup Repo into an AWS VPC

A month or so ago I wrote a post on deploying Veeam Powered Network into an AWS VPC as a way to extend the VPC network to a remote site, to leverage a Veeam Linux Repository running as an EC2 instance. During the course of deploying that solution I came across a lot of little check boxes and settings that needed to be tweaked in order to get things working. After that, I set myself the goal of trying to automate and orchestrate the deployment end to end.

For an overview of the intended purpose behind the solution, head to the original blog post here. That post was mainly focused on the Veeam PN component; however, I was using that as a mechanism to create a site-to-site connection to allow Veeam Backup & Replication to talk to the other EC2 instance, which was the Veeam Linux Repository.

Terraform by HashiCorp:

In order to automate the deployment into AWS, I looked at CloudFormation first…but found the learning curve to be a little steep…so I turned to HashiCorp's Terraform, which I had been familiar with for a number of years but never gotten my hands dirty with. HashiCorp specialise in Cloud Infrastructure Automation, and their provisioning product is called Terraform.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).

Terraform supports a host of providers, and once you wrap your head around the basics and view some example code, provisioning Infrastructure as Code can be achieved with virtually no coding experience…however, as I did find out, you need to be careful in this world and not make the same initial mistake I did, as explained in this post.

Going from Manual to Orchestrated with Automation:

The Terraform AWS provider is what I used to create the code required to deploy the required components. Like everything that's automated, you need to understand the manual process first, and that is where the previous experience came in handy. I knew what the end result was…I just needed to work backwards and make sure that the Terraform provider had all the instructions it needed to orchestrate the build.

The basic flow is:

  • Fetch AWS Access Key and Secret
  • Fetch AWS Key Pair
  • Create AWS VPC
    • Configure Networking and Routing for VPC
  • Create CentOS EC2 Instance for Veeam Linux Repo
    • Add new disk and set size
    • Execute configuration script
      • Install Perl modules
  • Create Ubuntu EC2 Instance for Veeam PN
    • Execute configuration script
      • Install VeeamPN modules from repo
  • Log in to the Veeam PN Web Console and import the site configuration.
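
The interesting part is the repo instance itself, which adds the second disk and then runs the configuration script over SSH via Terraform's remote-exec provisioner. A cut-down sketch, with the AMI ID, key and script names as placeholders:

    variable "subnet_id" {}

    resource "aws_instance" "veeam_repo" {
      ami           = "ami-0123456789abcdef0"   # CentOS 7 AMI for the region
      instance_type = "t2.medium"
      subnet_id     = var.subnet_id             # subnet created earlier in the plan
      key_name      = "veeam-keypair"

      # Second disk that becomes the backup repository
      ebs_block_device {
        device_name = "/dev/sdb"
        volume_size = 250
      }

      # Run the in-guest configuration (installs the Perl modules Veeam needs)
      provisioner "remote-exec" {
        script = "scripts/configure_repo.sh"

        connection {
          type        = "ssh"
          user        = "centos"
          private_key = file("~/.ssh/veeam-keypair.pem")
          host        = self.public_ip
        }
      }
    }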

I've uploaded the code to a GitHub project. An overview and instructions for the project can be found here. I've also posted a video to YouTube showing the end to end process, which I've embedded below (best watched at 2x speed):

In order to get the Terraform plan to work there are some variables that need modifying in the GitHub project, and you will need to download, install and initialise Terraform. I'm intending to continue tweaking the project and complete the provisioning end to end, including the Veeam PN site configuration part at the end. The remote execution feature of Terraform allows some pretty cool things by way of script initiation.

References:

https://github.com/anthonyspiteri/automation/aws_create_veeamrepo_veeampn

https://www.terraform.io/intro/getting-started/install.html


Veeam Vault #9: Backup for Office 365 1.5 GA, Azure Stack and Vanguard Roundup

Welcome to another Veeam Vault! This is the ninth edition, and given the last edition was focused around VMware and VMworld, I thought that just for a change the focus for this edition would be Microsoft. The reason for that is that over the past couple of weeks we have had some significant announcements around Azure Stack and the GA release of Backup for Office 365 1.5. I'll cover both of those announcements, share some Veeam employee automation work that shows off the power of our new APIs, and see what the Veeam Vanguards have been blogging about in the last month or so.

Backup for Office 365 1.5 GA:

The early part of my career was dedicated to Exchange Server; however, I drifted away from that as I made the switch to server virtualization and cloud computing. The old Exchange admin in me is still there, and it's for that reason that I'm excited about the GA of our Backup for Office 365 product, which is now at version 1.5. This release caters specifically to service providers, adding scalability and automation enhancements as well as extended support for on-premises and hybrid Exchange setups.

New features and enhancements:

  • Distributed, scalable architecture: Enhanced scalability in distributed environments with several remote/branch offices and in service provider infrastructures.
  • Backup proxies: Take the workload off the management server, providing flexible throttling policy settings for performance optimization.
  • Support for multiple repositories: Streamlines data backup and restore processes.
  • Support for backup and restore of on-premises and hybrid Exchange organizations: Allows a variety of configurations and usage scenarios so you can implement those that meet your particular needs.
  • Increased performance: Restore operations are up to 5 times faster than in v1.0.
  • Restore of multiple datastore mailboxes using Veeam Explorer for Microsoft Exchange: Simplifies workflow and minimizes workload for restore operators, along with 1-Click restore of a mailbox to the original location.
  • RESTful API and PowerShell cmdlets: Helpful for automation of routine tasks and integration into existing or new portals.
  • UI enhancements: Including the main window, wizards, dialogs and other elements, facilitating administration of the solution.

Examples of the Power of the Veeam APIs:

One of the features of Backup for Office 365 1.5 is the addition of a powerful set of RESTful APIs and PowerShell cmdlets that are aimed at service providers automating the setup and management of their offerings around the product. A couple of our employees have written example interfaces for the Backup for Office 365 product, and it shows that any service provider with some in-house programming skills can build customer portals that enhance their offerings and increase efficiency through automation.
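
As a tiny taste of what the PowerShell side looks like (module and cmdlet names as I know them from the Backup for Office 365 PowerShell reference; treat this as an indicative sketch rather than a production example):

    Import-Module Veeam.Archiver.PowerShell

    # Connect to the Backup for Office 365 server and kick off all jobs
    Connect-VBOServer -Server "vbo365.lab.local"
    foreach ($job in Get-VBOJob) {
        Start-VBOJob -Job $job
    }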

A special welcome to Niels, who joined our team this week. Great to have you on board!

Microsoft Azure Stack Support:

Last week at Microsoft Ignite, we announced our support for Azure Stack. This is based around our Windows Agent, Cloud Connect and Availability Console products, which combine to offer an availability solution.

Key benefits of Veeam's support for Azure Stack include:

  • Multi-tenancy: Veeam Cloud Connect isolates backup copies for each tenant, ensuring security and compliance;
  • Multiple recovery options: Veeam Backup & Replication supports granular item level recovery through Veeam Explorers for Microsoft Exchange, SQL Server, Microsoft SharePoint, Microsoft Active Directory and Oracle, as well as full file level restores for tenant files that were deleted or corrupted;
  • Reporting & billing: Veeam Availability Console supports real-time monitoring and chargeback on tenant usage, allowing either hosting providers or enterprise organizations to easily manage and bill their tenants for Availability usage.

Veeam Vanguard Blog Post Roundup:

References:

https://helpcenter.veeam.com/docs/vbo365/guide/vbo_what’s_new_in_v1_5.html?ver=15

It's ok to steal… VMUG UserCon Key Takeaways

Last week I attended the Sydney and Melbourne VMUG UserCons, and apart from sitting in on some great sessions I came away from both events with a renewed sense of community spirit, having enjoyed catching up with industry peers and good friends that I don't see often enough. While the VMUG is generally struggling a little around the world at this point in time, kudos goes to both the Sydney and Melbourne chapter leaders and steering committees for being able to bring out a superstar bunch of presenters…there might not be a better VMUG lineup anywhere in the world this year!

There was a heavy automation focus this year…which in truth was the same as last year's events; however, last year's messaging was more around the theory of "change or die", while this year it was more around the practical. This was a welcome change because, while it's all well and good to beat the change messaging into people, actually taking them through real world examples and demos tends to get people more excited and keen to dive into automation, as they get a sense of how to apply it to their everyday jobs.

In the VMware community there are no better examples of automation excellence than Alan Renouf and William Lam, and their closing keynote session, where they deployed a fully functional SDDC vSphere environment on a single ESXi host from a USB key, was brilliant and hopefully will be repeated at other VMUGs and VMworld. This project was born out of last year's VMworld Hackathon and ended up being a really fun and informative presentation that showed off the power of automation, along with the benefits of what undertaking an automation project can deliver.

"It's not stealing, it's sharing"

During the presentation Alan Renouf shared this slide, which got many laughs and resonated with me: apart from my very early failed uni days, I don't think I have ever created a bit of code or written a script from scratch. There is somewhat of a stigma attached to "borrowing" or "stealing" code used to modify or create scripts within the IT community. There might also be some shame associated with admitting that a bit of code wasn't 100% created by someone from scratch…I've seen this before, and I've personally been taken to task when presenting some of the scripts that I've modified for purpose during my last few roles.

What Alan is pointing out is that it's totally ok to stand on the shoulders of giants and borrow from what's out there in the public domain…if code is published online via someone's personal blog or put up on GitHub, then it's fair game. There is no shame in being efficient…no shame in not having to start from scratch, and certainly no shame in claiming success after any mods have been done… Own it!

Conclusion and Event Wrap Up:

Overall, the 2017 Sydney and Melbourne UserCons were excellent events, and on a personal note I enjoyed being able to attend with Veeam as the Platinum Sponsor, present a session on our vSAN/VVol/SPBM support and introduce our Windows and Linux Agents to the crowd. The Melbourne crowd was especially engaged, asked lots of great questions around our agent story and was looking forward to the release of Veeam Agent for Windows.

Again, the networking with industry peers and customers was invaluable, and there was a great sense of community once again. The UserCon events are of a high quality, and my thanks goes out to the leaders of both Sydney and Melbourne for working hard to organise them. And which one was better? …I won't go there, but those who listened to my comment during our sponsor giveaways at the end of the event know how I really feel.

Until next year UserCon!

First Look: ManageIQ vCloud Director Orchestration

Welcome to 2017! To kick off the year I thought I'd do a quick post on a little known product (at least in my circles) from Red Hat called ManageIQ. I stumbled across ManageIQ by chance, having caught wind that they were soon to have vCloud Director support added to the product. Reading through some of the history behind ManageIQ, I found out that in December 2012 Red Hat acquired ManageIQ and integrated it into its CloudForms cloud management program…they then made it open source in 2014.

ManageIQ is the open source project behind Red Hat CloudForms. The latest product features are implemented in the upstream community first, before eventually making it downstream into Red Hat CloudForms. This process is similar for all Red Hat products. For example, Fedora is the upstream project for Red Hat Enterprise Linux and follows the same upstream-first development model.

CloudForms is a cloud management platform that also manages traditional server virtualization products such as vSphere and oVirt. This broad capability makes it ideal as a hybrid cloud manager, as it's able to manage both public clouds and on-premises private clouds and virtual infrastructures. It acts as a single management interface into hybrid environments, enabling cross platform orchestration to be achieved with relative ease, backed by a community that contributes workflows and code to the project.

The supported platforms span the major public clouds as well as on-premises virtualization and container platforms.

The October release was the first iteration of the vCloud provider, which supports authentication, inventory (including vApps), provisioning, power operations and events, all done via the API provided by vCloud Director. First and foremost I see this as a client facing tool rather than an internal orchestration tool for vCAN SPs; however, given it can go cross platform, there can be a use for VM or Container orchestration that SPs could tap into.

While it's still relatively immature compared to the other platforms it supports, I see great potential in this, and I think all vCAN Service Providers running vCloud Director should look at it as a way for their customers to better consume and operate vCD from a more modern approach, rather than depending on the UI.

Adding vCloud Director as a Cloud Provider:

Once the appliance is deployed, head to Compute and Add New Cloud Provider. From the Type dropdown select VMware vCloud.

Depending on which version of vCD SP your service provider is running, select the appropriate API version. For vCD SP 8.x it should be vCloud API 9.0.

Next, add in the URL of the vCloud Director endpoint with its port…which is generally 443. For the username, you use the user@org convention, which allows you to log in specifically to your vCD Organization. If you want to log in as an admin, enter administrator@system to get top level access.

Once connected, you can add as many vCD endpoints as you have. As you can see below, I am connected to four separate instances of vCloud.

Clicking through, you get a summary of the vCloud zone with its relationships.

Clicking on Instances, you get a list of your VMs, but there are also views for Virtual Datacenters, vApps and other vCD objects. As you can see below, there are detailed views on each VM, and it does have basic power functions in this build.

I've just started to look into the power of CloudForms and have been reading through the ManageIQ automation guide. It's one of those things that needs a little research plus some trial and error to master, but I see this form of cloud consumption, where the end user doesn't have to directly manipulate the various API endpoints, as the future. I'm looking forward to seeing how the vCloud Director provider matures, and I'll be keeping an eye on the forums and the ManageIQ GitHub page for more examples.

Resources:

http://manageiq.org/docs/get-started/
http://manageiq.org/docs/reference/
https://pemcg.gitbooks.io/mastering-automation-in-cloudforms-and-manageiq/content/chapter1.html

VCA-CLI for vCloud Director: New Networking Features

There is a lot of talk going around about how IT pros can more efficiently operate and consume cloud based services…AWS has led the way in offering a rich set of APIs for its clients to use to help build out cloud applications and infrastructure, and there are a ton of programming libraries and platforms that have seen the rise of the DevOps movement…and while AWS has led the way, other public clouds such as Azure (with PowerShell packs) and Google have also built self service capability through APIs.

vCloud Director has always had a rich set of APIs (API online documentation here), and as I blogged about last year, Paco Gomez has been developing a tool called VCA-CLI, which is based on pyvcloud, a Python SDK for vCloud Director and vCloud Air. This is an alternative to web based creation and management of vCloud Director vDCs and vApps. Being Python based, you have the option of running it on pretty much any OS you like…the posts below show you how to install and configure VCA-CLI on Mac OS X and Windows, and how to connect up to a vCloud Director based Cloud Org.

Initial releases of VCA-CLI didn't have the capability to configure the firewall settings of a vDC Edge Gateway, but since the release of version 16, firewall rule management has been added. In the below example, I connect up to my vCD Org in Zettagrid, gather some information about my vDC, deploy a SexiLog VM template, set the syslog setting on the Gateway and then configure new NAT and firewall rules to open up port 8080 to the SexiLog web interface.


Again, this highlights the power of the vCloud Director API and what can be done with the pyvcloud Python SDK. Once perfected, the set of commands above can be used to deploy vApps and configure networking in seconds, instead of having to work through the vCloud Director UI…and that's a win-win!

References:

https://pypi.python.org/pypi/vca-cli

https://github.com/vmware/vca-cli

http://www.sexilog.fr/


The Power of Network Automation: How a Huge Low Turned into a Great High!

A few weeks back at Zettagrid we released our NSX Advanced Networking product, which we have been working on for the best part of 12 months. I'm particularly proud of this release, as it represents a significant realisation of a vision that I and others have had in trying to integrate NSX into the Zettagrid IaaS platform. Furthermore, the release held a deeper meaning, as it showed off what can be achieved when faced with disappointment and failure.

Taking myself back to February 2014, I was presenting to a government panel for a cloud computing tender, and it ended up going horribly wrong…notwithstanding the fact that the tender had specified IaaS as its basis, the presentation actually ended up being a practical test on deploying a three tier application into a Virtual Datacenter in an allotted time period, which was more akin to a Managed Services Provider than an Infrastructure Provider. Cutting a long story short, I was able to configure vCloud Director in such a way as to get vShield to do basic load balancing, but failed to produce a working IIS default page externally, which would have meant passing the test and us making it through to the next stage of the process.

I came out of that presentation as deflated as I have ever been in my career…I don't usually fail, and up until that point every presentation and demo I had given had resulted in success…as I sank a couple of whiskeys in the pub next to the government agency building, I tried to think through what went wrong. Surely there had to be a more efficient way to deploy, configure and manage networks in a cloud environment…it was decided there and then that Zettagrid would look at NSX as a way to improve network efficiency via automation.

Looking back at the tender process, the government agency got it all wrong…they expected the tenderer to deploy and configure the full environment themselves…they expected a Managed Service instead of pure IaaS. In fact, the roles should have been reversed: instead of us being handed the practical example to work through the design, configuration and setup, it should have been them doing the configuring. They needed the tools to achieve the goal, and at that stage we were not able to provide them.

That said, even with this initial release of NSX Advanced Networking the outcome might have been much the same, though we would have had much better load balancing options than those that ultimately cost us a shot at the next round. What resonated strongly out of that afternoon was that we needed to look at network automation more seriously.

In deploying NSX across our vCloud hosting zones we have not only been able to release enhanced networking services for our vCloud Director Virtual Datacenters, but we have also laid the groundwork for future releases to be more software defined, so that these sorts of tiered applications can be deployed in minutes through automated blueprints…this isn't something new or particularly groundbreaking…there are many automation platforms that allow for the orchestration and automation of pre-defined template solutions, however these are for the most part private cloud or enterprise solutions.

There are not too many cloud providers (that don’t start with an A) that offer this service to their clients within APAC.

The Hybrid Cloud is the future of IaaS, and even though the landscape might change over the next 5-10 years with containerised applications and services superseding more "traditional" virtual machine based applications, the one thing that won't change is the way in which the networking connects the client to the server and back. NSX is a great platform built from the ground up to be consumed by APIs, and because of that failure 18 months ago I'm proud to have helped deliver this release (along with a super talented team of developers and engineers) and to now work for a company that's embraced change and is at the cutting edge of changing the way in which networks are both created and consumed, using NSX as the overlay technology.