Author Archives: Anthony Spiteri

Heading to Tech Field Day 20 #TFD20

I’m currently sitting in my hotel room in sunny San Jose. Today and tomorrow will be spent finishing off preparations for Tech Field Day 20. This will be my second Tech Field Day event, following on from Cloud Field Day 5 in April. #TFD20 marks the 10th anniversary of Tech Field Day, and with that, special significance is being placed on this event, which adds to the excitement of presenting with my fellow Product Strategy team members, Michael Cade and Rick Vanover.

Veeam at Tech Field Day 20

Once again, this is an important event for us at Veeam as we are given the #TFD stage for two hours on the Wednesday morning as we look to talk about what’s coming in our v10 release… and beyond. We have crafted the content with the delegates in mind and focused on core data protection functionality that looks to protect organizations from modern attacks and data loss.

Michael, Rick and I will focus on reiterating how Veeam has led the industry in innovation for a number of years, while also looking at the progress we have made in recent times transitioning to a truly software-defined, hardware-agnostic platform that offers customers absolute choice.

The Veeam difference also lies in the way we release ready, stable and reliable solutions, developed in innovative ways that sit outside the norm in the backup industry. What we will be showing on Wednesday will, I believe, highlight that as one of our strongest selling points.

Veeam is presenting at 10am (Pacific Time) on Wednesday, 13th November 2019

I am looking forward to presenting to all the delegates as well as those who join via the livestream.

v10 Enhancements – Downloading Object Storage Data per Tenant for SOBR

Version 10 of Veeam Backup & Replication isn’t too far away and we are currently in the middle of a second private BETA for our customers and partners. There has been a fair bit of content released around v10 functionality and features from our Veeam Vanguards over the past couple of weeks, and as we move closer to GA, as part of the lead-up, I am doing a series on some of the cool new enhancements coming as part of the release. These will be short takes that give a glimpse into what’s coming in v10.

Downloading Tenant Data from SOBR Capacity Tier

Cloud Tier was by far the most significant feature of Update 4 for Backup & Replication 9.5, and we have seen the consumption of Object Storage in AWS, Azure and other platforms grow almost exponentially since its release. Our VCSPs have been looking to take advantage of the Move functionality that came in Update 4, but have also requested a way to pull offloaded data back from the Capacity Tier to the Performance Tier on a per-tenant basis.

The use case for this might be tenant off-boarding, or migration of backup data back onsite. In any case, our VCSPs needed a way to get the data back, rehydrate the VBK files and remove the data from Object Storage. In this quick post I’ll show how this is achieved through the UI.

First, looking at the image below, you can see a couple of dehydrated VBK files belonging to a specific tenant’s Cloud Connect Backup job that are no bigger than 17MB, sitting next to ones that are about 1GB.

To start a Download job, we have the option of clicking the Download icon in the Tenant ribbon, or right-clicking on the tenant account and selecting Download.

An information box will appear, letting you know that there is a backup chain on the performance extent and showing the disk space required to download the backup data back to the performance tier from the capacity tier. The progress of the SOBR Download job can then be tracked as it runs.

When completed, we can see the details of the download from Object Storage to the Performance Tier. In the example below, a lot of the blocks that were present in the Performance Tier were used to rehydrate the previously offloaded VBKs: this new feature leverages Intelligent Block Recovery to save on egress and also reduce download time. Going back to the file view, the previously smaller 17MB VBKs have been rehydrated to their previous size and we have all the tenant’s data back on the Performance Tier, ready to be accessed.

Wrap Up:

That was a quick look at one of the cool smaller enhancements coming in v10. The ability to download data on a per-tenant basis from the Capacity Tier back to the Performance Tier is one that I know our VCSPs will be happy with.

Stay tuned over the next few weeks as I go through some more hidden gems.

Disclaimer: The information and screenshots in this post are based on BETA code and may be subject to change come final GA.

vCloud Director is no more… Long Live vCD! [VMware Cloud Director Service for VMC]

There was a very significant announcement at VMworld Barcelona overnight, with the unveiling of a new service targeted at Managed Service Providers. VMware Cloud Director Service (CDS) looks to leverage a hosted, SaaS-based instance of vCloud Director to offer multi-tenancy on VMware Cloud on AWS. The VMware Cloud on AWS SDDC becomes the provider, and MSPs can look to consume VMC resources more efficiently and expand as more capacity is required.

Daniel Paluszek has written a great overview blog post about what the service is, its key value points and some questions and answers that those in the VMware Cloud Provider Program may have. I’m personally looking forward to trying out the service myself and starting to look at the data protection scenarios that can be supported.

They Said it Would Never Happen:

Back in 2016, when VMware first announced VMware Cloud on AWS, I saw the potential straight away and what it could mean for VMware Cloud Provider Partners (the vCloud Air Network, as it was then) to extend their Provider vDCs to ones backed by VMC.

At the time I hacked together what I saw to be the future.

This isn’t quite what the newly announced solution is, and it will be interesting to see if VMware eventually allows SP-based vCD installs to go out and source a VMware Cloud on AWS SDDC as a Provider of their own. I was told by a number of people that vCD would never be used with VMC…

Further improving on vCloud Director’s maturity and extensibility, if the much-maligned UI is improved as promised…with the upcoming addition of full NSX integration completing the network stack, the next step in greater adoption beyond the 300-odd vCAN SPs currently using vCloud Director needs a hook…and that hook should be VMWonAWS.

Time will tell…but there is huge potential here. VMware needs to deliver for its partners in order to have that VMWonAWS potential realised.

That said, vCloud Director has evolved tremendously since then and has delivered on almost everything I was after at the time. This seems to be the last piece of the puzzle… though the fact that Cloud Director Service is delivered as a service does have me a little worried in terms of the ghosts of vCloud Air past.

Targeting MSPs over SPs:

I’ve already had some conversations as to who this new Cloud Director SaaS offering might be targeting. While I need to find out more information, it seems as though the main targets of the service initially are MSPs. Depending on where you come from, the definition of an MSP will differ from that of an SP; in some regions they are one and the same, however in the North American market an MSP would typically leverage an SP to offer infrastructure or software as a service.

Whichever way you look at it, there is now the ability to spin up a managed instance of vCD and have it abstract resources in VMware Cloud on AWS. In a way, this may lead some MSPs to ditch existing reseller relationships with VCPPs offering IaaS with vCD and go direct to VMware for an end-to-end managed multi-tenant service, with a direct reseller agreement with VMware.

Again, I need some more information before passing judgement and seeing how this fits into existing VCPP service offerings. Obviously, the ability for existing VCPPs to land and expand into any VMC-enabled AWS Region with this new service is also significant… but will they be able to use their existing provisioning and automation tools with the new service… and will the SaaS-based version of Cloud Director be first to market with new features, followed by the VCPP versions?

Dropping the little v:

When VMware acquired Lab Manager and turned it into vCloud Director in 2010, it was hard to envision that the platform would still be going strong nearly ten years later. It’s now stronger than ever and set to go through its next evolution, with the platform looking to extend beyond traditional vSphere IaaS-based platforms… which explains why the little v has been dropped. We are not just talking about vCloud anymore… the premise is that Cloud Director will span multiple clouds and multiple platforms.

It will be interesting to see when the name change takes place for the main product line offered to VMware Cloud Providers… for the time being, it will still be vCD to me!

#LongLivevCD
#VCDpowered

References:

https://cloudsolutions.vmware.com/bite-sized-vmc

VMware Cloud Director – A New Day.

Public Beta – Backup for Microsoft Office 365 v4

Overnight at Microsoft Ignite, we announced the availability of the Public Beta for the next version of Veeam Backup for Microsoft Office 365. This is again a much-awaited update for VBO, with a ton of enhancements and the introduction of Object Storage support for Backup Repositories. The product has done extremely well and is one of our fastest growing in the Veeam Availability Platform, largely because organizations now understand the need to back up their Office 365 data.

Backup for Office 365 3.0 Yes! You Still Need to Backup your SaaS

While we have enhanced a number of features and added some more reporting and user account management options, the biggest addition is the ability to leverage Object Storage to store longer-term backup data. This has been a huge request since around version 1.5 of VBO, mainly due to the amount of data required to back up Exchange, SharePoint and OneDrive for organizations.

Similar to Cloud Tier in Backup & Replication 9.5 Update 4, the premise of the feature is to release pressure (be it cost, management or maintenance) on local Backup Repositories and offload the bulk of the data to cheaper Object Storage.

There is support in the beta for the major Object Storage platforms, including Amazon S3, Microsoft Azure Blob Storage and S3-compatible repositories.

Though similar in name to Cloud Tier in Backup & Replication, the way in which the data is offloaded, stored and referenced in the VBO implementation is vastly different to that of Cloud Tier. As we get to GA for the v4 release, there will be more information forthcoming about the underlying mechanics.

The Beta is available now and can be installed on a separate server for side-by-side testing against Office 365 Organizations. For those looking to test the new functionality before the official GA later in the year, head to the Beta Download page and try it out!

Quick Post: Using Terraform to Deploy an Ansible Control Node on vSphere

In the continuing spirit of Terraforming all things, when I started to look into Ansible I wanted a way to have the base Control Node installed in a repeatable and consistent way. The setup and configuration of Ansible can be tricky, and what I learnt in configuring the Ansible Control Node is that there are a few dependencies that need to be in sync to line everything up. I also wanted to include some other modules and dependencies specifically related to the work I’ll be doing with Windows Servers.

Requirements:

CentOS Template prepared and ready for deployment from vCenter – see this example configuration: http://everything-virtual.com/2016/05/06/creating-a-centos-7-2-vmware-gold-template/

The Terraform templates included in this repository require Terraform to be available locally on the machine running them. Before you begin, please work through the following:

  1. Download the Terraform binary (tested with version 0.12.07) to your workstation.
  2. Install the Terraform vSphere Provider.
  3. Gather the VMware credentials required to communicate with vCenter.
  4. Update the variable values in the terraform.tfvars file.
  5. Update the resource values in the main.tf file.

I borrowed some of the steps from Markus Kraus’s great work on configuring his Ansible development environment, but I also had to work through some complications with my CentOS 7 VMware Template, due to Python 2.7x being the default version that ships with that distribution build. I also included the modules for Kerberos authentication when working with Windows Servers connected to Active Directory Domains.

While it wasn’t directly impacting the Playbooks I was running, I was getting a Python deprecation warning while running NTLM or Kerberos authentication against any Windows server.

Given that Python 2.7 was set to be unsupported early next year, I was determined to have Ansible running off Python 3. The combination and order of Linux packages and dependencies to get that working wasn’t straightforward, and as you can see below in the main VM Terraform resource declaration, there are a lot of commands to make that happen.

Terraform Breakdown:

There isn’t a lot to the Terraform code, other than deploying a cloned CentOS 7 Virtual Machine with its networking configured via Terraform Guest Customizations. Once the VM has been deployed and configured, the initial package deployment takes place… there are then two separate configuration scripts which are uploaded and executed over SSH via remote-exec blocks.

The last remote-exec block is the list of commands that installs Ansible with pip, running on Python 3.
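What follows is a sketch rather than the exact declaration from the repo: the package list and the Kerberos library names are assumptions, as is the template_password variable, but the flow is the same; install Python 3 and pip, then install Ansible and the Windows/Kerberos modules with pip3.

```
# Sketch only: the final provisioner on the vsphere_virtual_machine resource.
# Assumes a CentOS 7 guest reachable over SSH as root; the krb5 package names
# and var.template_password are illustrative.
provisioner "remote-exec" {
  inline = [
    "yum install -y epel-release",
    "yum install -y python3 python3-pip gcc krb5-devel krb5-workstation",
    "pip3 install --upgrade pip",
    "pip3 install ansible 'pywinrm[kerberos]'",
    "ansible --version",
  ]

  connection {
    type     = "ssh"
    host     = self.default_ip_address
    user     = "root"
    password = var.template_password
  }
}
```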

The final command of the Terraform Plan execution is to list the installed Ansible version.

End to end, the deployment takes about 7-8 minutes depending on your storage… once done, we have a fully functional Ansible Control Node ready for automation goodness!

This might seem like a bit of a chicken-and-egg situation… but Terraform and Ansible represent both sides of the IaC spectrum. As I mention in the README.md… time to work this Ansible thing out a little more!

References:

Ansible Development environment

Quick Fix: Deploying Multiple Ubuntu 18.04 VMs From Template with DHCP Results in Same IP Allocation

In the continuing work I’ve been doing with Terraform, I’ve come across a number of gotchas when working with VM Templates and deploying them en masse. The nature of the work means I’m creating and destroying VMs often. Generally speaking I like using static IP addresses, but for the project I’m working on I needed an option to deploy and configure the networking with DHCP. Windows and CentOS gave me no issues; however, when I went to deploy the Ubuntu 18.04 template I started getting errors on the plan execution.

When I looked at the Terraform output where I export the VM IP addresses, the JSON output showed that all the cloned VMs had been assigned the same IP address.

At first I assumed it was due to the same MAC address being assigned by ESXi to the cloned VMs, resulting in the machines being allocated the same IP; however, when I checked, the MAC addresses were all different.

What is Machine-ID:

After some digging online, I came across a change in behaviour where Ubuntu uses the machine-id to request DHCP addresses. Ubuntu Server’s default networking goes through cloud-init, which by default sends /etc/machine-id in the DHCP request. This leads to the duplicate IP situation.

The /etc/machine-id file contains the unique machine ID of the local system that is set during installation or boot. The machine ID is a single newline-terminated, hexadecimal, 32-character, lowercase ID. When decoded from hexadecimal, this corresponds to a 16-byte/128-bit value. This ID may not be all zeros.

The machine ID is usually generated from a random source during system installation or first boot and stays constant for all subsequent boots. Optionally, for stateless systems, it is generated during runtime during early boot if necessary.

Quick Fix:

From a template perspective, there is a quick fix that can be applied where the machine-id file is blanked out, meaning a new ID is generated on first boot. You can’t just delete the machine-id file, as it needs to exist; if it doesn’t, the deployment will fail because it expects the file to be there in some form.

The simplest way I achieved this was by zeroing out the file:
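```
# One straightforward way to empty the file: truncate it to zero bytes
# (run as root inside the template before converting it back to a template).
# A fresh machine-id is then generated on first boot of each clone.
truncate -s 0 /etc/machine-id
```

If /var/lib/dbus/machine-id exists as a regular file in your template, it is worth checking too, as some services read the ID from there.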

Once done, the VM can be saved again as a template and the cloning operation will result in unique IPs being handed out by the DHCP server.

References:

http://manpages.ubuntu.com/manpages/bionic/man5/machine-id.5.html

https://www.freedesktop.org/software/systemd/man/machine-id.html


There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of their operations, with the addition of three new hosting zones to go along with their existing four zones across Australia and Indonesia. They also announced the opening of operations in the US. Apart from the fact that I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That is now more than three years ago, and in that time the industry has pulled back significantly from the previously inferred position of total and complete hyper-scale dominance of the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as a holistic destination for workloads; more and more over the past couple of years, the move has been towards multi-cloud and hybrid cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year when AWS Outposts was announced: a shift in mindset by the undisputed leader in the public cloud space towards consuming an on-premises resource in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might be in the form of enhancing their IaaS platform by adding more avenues to consume services; examples include adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider and tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past few years, even with the extremely cheap pricing the hyper-scalers offer for their cloud object storage platforms.

All this leads me to believe that there is still a very significant role to be had for Service Providers in conjunction with other cloud platforms for a long time to come. The service providers that are succeeding and growing are not sitting on their hands and expecting what once worked to continue working. The successful service providers are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky, and the likelihood is that you will have them as a customer for a number of years. I don’t know how accurate that figure actually is, but I’ve always carried that belief. It flies in the face of modern thinking around service mobility, which has been reinforced by improvements in underlying network technologies that allow the portability and movement of workloads; this also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platforms, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk of hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Using Variable Maps to Dynamically Deploy vSphere VMs with Terraform

I’ve been working on a project over the last couple of weeks that has enabled me to sharpen my Terraform skills. There is nothing better than learning by doing and there is also nothing better than continuously improving code through more advanced constructs and methods. As this project evolved it became apparent that I would need to be able to optimize the Terraform/PowerShell to more easily deploy VMs based on specific vSphere templates.

Rather than have one set of Terraform declarations per template (resulting in a lot of code duplication), or have the declaration variables tied to specific operating systems change depending on what was being deployed (resulting in more manual work), I looked for a way to make it even more “singularly declarative”.

This became very handy when I was looking to deploy VMs based on Linux distro. 98% of the Terraform code is the same whether Ubuntu or CentOS is being used; the only differences were the vSphere Template being used to clone the new VM from, the Template password and, in this case, a remote-exec call that needed to be made to open a firewall port.

To get this working I used Terraform variable maps. As you can see below, the idea behind using maps is to allow groupings of like variables in one block declaration. These map values are then fed through to the rest of the Terraform code. Below is an example of a maps.tf file that I keep separate from the variables.tf file, as an easier way to logically separate what is being configured using maps.
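A minimal sketch of such a maps.tf is below (Terraform 0.11 syntax, matching the docs linked in the references). The template names, passwords and firewall commands are placeholder values:

```
# maps.tf - illustrative values only
variable "vsphere_linux_distro" {
  description = "The distro to deploy: ubuntu or centos"
  default     = "ubuntu"
}

variable "linux_template" {
  type = "map"
  default = {
    ubuntu = "UBUNTU-1804-TEMPLATE"
    centos = "CENTOS-7-TEMPLATE"
  }
}

variable "linux_template_password" {
  type = "map"
  default = {
    ubuntu = "UbuntuTemplatePassword"
    centos = "CentosTemplatePassword"
  }
}

variable "linux_firewall_command" {
  type = "map"
  default = {
    ubuntu = "ufw allow 8080/tcp"
    centos = "firewall-cmd --permanent --add-port=8080/tcp && firewall-cmd --reload"
  }
}
```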

At the top there is a standard variable, which is the only variable that changes and needs setting. If ubuntu is set as the vsphere_linux_distro, then all the map values keyed ubuntu will be used; the same applies if the variable is set to centos.

This is set in the terraform.tfvars file and links back to the mappings; from here, Terraform will look up the linux_template variable and resolve it against the mapped values.
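In terraform.tfvars that’s a single line, for example:

```
vsphere_linux_distro = "centos"
```

The template data source can then resolve the name with lookup() — a sketch, assuming a data.vsphere_datacenter.dc data source is declared elsewhere:

```
# the template data source resolves the template name via lookup()
data "vsphere_virtual_machine" "template" {
  name          = "${lookup(var.linux_template, var.vsphere_linux_distro)}"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}
```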

The data source above, which dictates what template is used, builds the template name via the lookup function from the base variable and the map values.
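The same pattern drives the distro-specific provisioning. A hedged sketch, sitting inside the vsphere_virtual_machine resource:

```
# both the firewall command and the SSH password come from the maps
provisioner "remote-exec" {
  inline = [
    "${lookup(var.linux_firewall_command, var.vsphere_linux_distro)}",
  ]

  connection {
    type     = "ssh"
    host     = "${self.default_ip_address}"
    user     = "root"
    password = "${lookup(var.linux_template_password, var.vsphere_linux_distro)}"
  }
}
```

(The distro can also be passed at plan time with terraform apply -var 'vsphere_linux_distro=centos'.)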

Above, the values we set in the maps are used to execute the right command depending on whether it is Ubuntu or CentOS, and to use the correct password depending on the distro set. As mentioned, the declared variable can either be set in the terraform.tfvars file or passed through at the time the plan is executed.

The result is a more controlled and easily managed way to use Terraform to deploy VMs from different pre-existing VM templates. The variable mappings can be built up over time and used as a complete library of different operating systems with different options. Another awesome feature of Terraform!

References:

https://www.terraform.io/docs/configuration-0-11/variables.html#maps

Veeam Vault #11 – VBR, Veeam ONE, VAC Releases plus Important Update for Service Providers

Welcome to the 11th edition of Veeam Vault, and the first one for 2019! It’s been more than a year since the last edition; however, in light of some important updates that have been released over the past couple of weeks and months, I thought it was time to open up the Vault again! Getting stuck into this edition, I’ll cover the releases of Veeam Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a, as well as an update for Veeam Availability Console and some supportability announcements.

Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a:

In July we released Update 4b for Veeam Backup & Replication 9.5. It brought with it a number of fixes for common support issues, as well as a number of important platform supportability milestones. If you haven’t moved to 4b yet, it’s worth getting there as soon as possible. You will need to be on at least 9.0 Update 2 (build 9.0.0.1715) or later prior to installing this update; after a successful upgrade, your build number will be 9.5.4.2866.

Veeam ONE 9.5 Update 4a was released in early September and contains similar platform supportability to Backup & Replication, as well as a number of fixes. Details can be found in this VeeamKB.

Backup & Replication Platform support

  • VMware vCloud Director 9.7 compatibility at the existing Update 4 feature levels.
  • VMware vSphere 6.5 U3 and 6.7 U3 supportability – both are officially supported with Update 4b.
  • Microsoft Windows 10 May 2019 Update and Microsoft Windows Server version 1903 support, as guest VMs and for the installation of Veeam Backup & Replication and its components, plus Veeam Agent for Windows 3.0.2 (included in the update).
  • Linux kernel version 5.0 support via the updated Veeam Agent for Linux 3.0.2 (included in the update).

For a full list of updates and bug fixes, head to the official VeeamKB. Update 4b is a cumulative update, meaning it includes all enhancements delivered as part of Update 4a. There are also a number of fixes specifically for Veeam Cloud & Service Providers that offer Cloud Connect services; for the full change log, please see the topic on the private VCSP forum.

https://www.veeam.com/kb2970

VAC 3.0 Patch:

Update 3 for Veeam Availability Console v3 (build 2762) was released last week and contains a number of important fixes and enhancements. The VeeamKB lists all the resolved issues, but I’ve summarized the main ones below. It is suggested that all VAC installations are updated as soon as possible. As a reminder, don’t forget to ensure you have a backup of the VAC server before applying the update.

  • UI – Site administrators can select Public IP Addresses belonging to a different site when creating a company. Also, under certain conditions, the “Used Storage” counter may display incorrect data on the “Overview” tab.
  • Server – Veeam.MBP.Service fails to start when managed backup agents have duplicate IDs (due to cloning operations) in the configuration database.
  • Usage Reporting – Under certain conditions, the usage report for managed Veeam Backup & Replication servers may not be created within the first ten days of a month.
  • vCloud Director – Under certain conditions, the management agent may connect to a VAC server without authentication.
  • Reseller – A reseller can change his or her backup quota to “unlimited” when creating a managed company with an “unlimited” quota.
  • RESTful APIs – Querying “v2/tenants/{id}” and “/v2/tenants/{id}/backupResources” information may take a considerable amount of time.

https://www.veeam.com/kb3003

Veeam Cloud Connect Replication Patch:

Probably one of the more important patches we have released of late addresses a bug found in the stored procedure that generates automated monthly license usage reports for Cloud Connect Replication VMs. The report displays an unexpected number of replicated VMs and licensed instances, which has been throwing off some VCSP license usage reporting. If VCSPs were using the PowerShell command Get-VBRCloudTenant -Name “TenantName”, the correct information was returned.

To fix this right now, VCSPs offering Cloud Connect Replication can visit this VeeamKB, download the SQL script and apply it to the MSSQL server as instructed. There will also be an automated patch released, and the fix will be baked into future Updates for Backup & Replication.

https://www.veeam.com/kb3004

Quick Round Up:

Along with a number of platform supportability announcements at VMworld 2019, it’s probably important to reiterate that we now have a patch available that supports restores into NSX-T for VMware Cloud on AWS SDDC environments. This also means that NSX-T is supported in all vSphere environments. The patch will be baked into the next major release of Backup & Replication.

Finally, the Dell EMC SC Series storage plug-in is now available, which I know will be popular among our VCSP community who leverage SCs in their Service Provider platforms. Being able to offload the data transfer of backup and replication jobs to the storage layer introduces a performance advantage: backups from storage array snapshots provide a fast and efficient way for the Veeam backup proxy to move data to a Veeam backup repository.

Quick Fix – OS Not Found Deploying Windows Template with Terraform

During the first plan execution for a new VM based on a Windows Server Core VM Template, my Terraform plan timed out on Guest Customizations. The same plan had worked without issue previously with an existing Windows Template, so I was a little confused as to what had gone wrong. When I checked the console of the cloned VM in vSphere, I found that it was stuck at the boot screen, unable to find the Operating System.

Operating System not found – Obviously having issues booting into the templated disk.

After a little digging around, I came across this post, which describes the error as being related to the VM Template being configured with EFI firmware, now the default for vSphere 6.7 VMs. Upon cloning, Terraform deploys the new VM with BIOS firmware, leaving the disk unable to boot.

Checking the VM Template, it did in fact have EFI set.

One option was to reconfigure the Template to default to BIOS; however, the Terraform vSphere Provider was updated last year to include an option to set the firmware on deployment.

In the instance declaration file we can set the firmware as shown below.
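Something along these lines (a sketch, with the resource name illustrative and Terraform 0.11 syntax to match the version noted at the end of the post):

```
resource "vsphere_virtual_machine" "vm" {
  # ... clone and customization settings omitted ...
  firmware = "${var.vm_firmware}"
}
```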

By setting it up so that the resource reads that value from a variable, as above, we only have to configure the efi or bios setting once in the terraform.tfvars file.

In the variables.tf file, the variable is set with a default value of bios.
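Something like the following (the variable name is illustrative):

```
# variables.tf - defaults to bios; set vm_firmware = "efi" in terraform.tfvars
# when the template uses EFI
variable "vm_firmware" {
  description = "Firmware type for the cloned VM: efi or bios (must match the template)"
  default     = "bios"
}
```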

Once this was configured, the plan was able to successfully deploy the new VM from the Windows Template without issue, and Guest Customizations were able to continue.

Terraform Version: 0.11.7

Resources:

https://github.com/terraform-providers/terraform-provider-vsphere/issues/441

https://github.com/terraform-providers/terraform-provider-vsphere/pull/485

https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html#firmware
