
There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of its operations, adding three new hosting zones to go along with its existing four zones in Australia and Indonesia, as well as opening operations in the US. Apart from the fact that I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That is now more than three years ago, and in that time the industry has pulled back significantly from the previously implied position of total and complete hyper-scale dominance of the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as the one, all-encompassing destination for workloads, and more and more over the past couple of years the move has been towards multi-cloud and hybrid cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year, when AWS Outposts was announced. That was a shift in mindset from the undisputed leader in the public cloud space towards consuming on-premises resources in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might be in the form of enhancing their IaaS platform by adding more avenues to consume services, for example by adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past few years… even with the aggressively low prices the hyper-scalers charge for their cloud object storage platforms.

All this leads me to believe that there is still a very significant role to be played by Service Providers, working in conjunction with other cloud platforms, for a long time to come. The service providers that are succeeding and growing are not sitting on their hands expecting what once worked to continue working. The successful ones are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky and the likelihood is that you will have them as a customer for a number of years. I don’t know how accurate that actually is, but I’ve always carried that belief. This flies in the face of modern thinking around service mobility, which has been reinforced by improvements in underlying network technologies that allow the portability and movement of workloads. It also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platform, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk about the hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Using Variable Maps to Dynamically Deploy vSphere VMs with Terraform

I’ve been working on a project over the last couple of weeks that has enabled me to sharpen my Terraform skills. There is nothing better than learning by doing and there is also nothing better than continuously improving code through more advanced constructs and methods. As this project evolved it became apparent that I would need to be able to optimize the Terraform/PowerShell to more easily deploy VMs based on specific vSphere templates.

Rather than have one set of Terraform declarations per template (resulting in a lot of code duplication), or have the declaration variables tied to specific operating systems change (resulting in more manual work) depending on what was being deployed, I looked for a way to make it even more “singularly declarative”.

Where this became very handy was when I was looking to deploy VMs based on Linux distro. 98% of the Terraform code is the same no matter whether Ubuntu or CentOS is being used. The only differences were the vSphere Template being used to clone the new VM from, the Template Password and also, in this case, a remote-exec call that needed to be made to open a firewall port.

To get this working I used Terraform variable maps. As you can see below, the idea behind using maps is to allow groupings of like variables in one block declaration. These map values are then fed through to the rest of the Terraform code. Below is an example of a maps.tf file that I keep separate from the variables.tf file. This was an easier way to logically separate what was being configured using maps.
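Something along these lines illustrates the structure (vsphere_linux_distro and linux_template match the variable names used below; the password and firewall command maps, template names, passwords and port are illustrative placeholders rather than my actual values):

```
# maps.tf

# The only variable that changes per deployment
variable "vsphere_linux_distro" {
  description = "Which Linux distro template to deploy: ubuntu or centos"
  default     = "ubuntu"
}

# vSphere template to clone from, per distro
variable "linux_template" {
  type = "map"
  default = {
    ubuntu = "TPL-UBUNTU-1804"
    centos = "TPL-CENTOS-7"
  }
}

# Local password baked into each template, per distro
variable "linux_template_password" {
  type = "map"
  default = {
    ubuntu = "CHANGE-ME-ubuntu"
    centos = "CHANGE-ME-centos"
  }
}

# Distro-specific command to open the required firewall port (port is illustrative)
variable "linux_firewall_command" {
  type = "map"
  default = {
    ubuntu = "sudo ufw allow 10001/tcp"
    centos = "sudo firewall-cmd --permanent --add-port=10001/tcp && sudo firewall-cmd --reload"
  }
}
```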

At the top I have a standard variable that is the only variable that changes and needs setting. If ubuntu is set as the vsphere_linux_distro, then all the map values keyed with ubuntu = will be used. The same applies if that variable is set to centos.

This is set in the terraform.tfvars file and links back to the mappings. From here, Terraform will look up the linux_template variable and resolve it against the mapped values.
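As a sketch, the terraform.tfvars entry and the template data source might look like this (the datacenter name is a placeholder):

```
# terraform.tfvars
vsphere_linux_distro = "ubuntu"
```

```
# Datacenter name here is illustrative
data "vsphere_datacenter" "dc" {
  name = "Datacenter-01"
}

# Resolve the correct template name from the map, keyed by the distro variable
data "vsphere_virtual_machine" "template" {
  name          = "${lookup(var.linux_template, var.vsphere_linux_distro)}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
```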

The data source above, which dictates what template is used, builds the template name from the lookup function using the base variable and the map value.
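Trimmed down to the distro-specific parts (the SSH user, and the password and firewall command maps from the sketch above, are illustrative), the VM resource ends up looking something like this:

```
resource "vsphere_virtual_machine" "vm" {
  # ... name, CPU, memory, network and disk settings omitted; they are
  #     identical no matter which distro is selected ...

  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"
  }

  # Firewall command and template password are resolved from the maps
  provisioner "remote-exec" {
    inline = [
      "${lookup(var.linux_firewall_command, var.vsphere_linux_distro)}",
    ]

    connection {
      type     = "ssh"
      host     = "${self.default_ip_address}"
      user     = "root"
      password = "${lookup(var.linux_template_password, var.vsphere_linux_distro)}"
    }
  }
}
```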

Above, the values we set in the maps are used to execute the right command depending on whether it is Ubuntu or CentOS, and also to use the correct password depending on the linux_distro that is set. As mentioned, the declared variable can either be set in the terraform.tfvars file or passed in at the time the plan is executed.
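For example, to override the distro without touching terraform.tfvars:

```
terraform plan -var 'vsphere_linux_distro=centos'
terraform apply -var 'vsphere_linux_distro=centos'
```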

The result is a more controlled and easily managed way to use Terraform to deploy VMs from different pre-existing VM templates. The variable mappings can be built up over time and used as a complete library of different operating systems with different options. Another awesome feature of Terraform!

References:

https://www.terraform.io/docs/configuration-0-11/variables.html#maps