Category Archives: General

Heading to Tech Field Day 20 #TFD20

I’m currently sitting in my hotel room in sunny San Jose. Today and tomorrow will be spent finishing off preparations for Tech Field Day 20. This will be my second Tech Field Day event, following on from Cloud Field Day 5 in April. #TFD20 marks the tenth anniversary of Tech Field Day, and with that there is special significance being placed on this event, which adds to the excitement of presenting with my fellow Product Strategy team members, Michael Cade and Rick Vanover.

Veeam at Tech Field Day 20

Once again, this is an important event for us at Veeam, as we have been given the #TFD stage for two hours on Wednesday morning to talk about what’s coming in our v10 release… and beyond. We have crafted the content with the delegates in mind, focusing on core data protection functionality that protects organizations from modern attacks and data loss.

Michael, Rick and I will focus on how Veeam has led the industry in innovation for a number of years, while also looking at the progress we have made in recent times in transitioning to a truly software-defined, hardware-agnostic platform that offers customers absolute choice.

The Veeam difference also lies in the way we release ready, stable and reliable solutions, developed in innovative ways that sit outside the norm for the backup industry. What we show on Wednesday will, I believe, highlight that as one of our strongest selling points.

Veeam is presenting at 10am (Pacific Time) on Wednesday 13th November 2019

I am looking forward to presenting to all the delegates as well as those who join via the livestream.

Quick Fix: Deploying Multiple Ubuntu 18.04 VMs From Template with DHCP Results in Same IP Allocation

In the continuing work I’ve been doing with Terraform, I’ve come across a number of gotchas when working with VM Templates and deploying them en masse. The nature of the work is that I’m creating and destroying VMs often. Generally speaking I like using static IP addresses, but for the project I’m working on I needed the option to deploy and configure the networking with DHCP. Windows and CentOS gave me no issues; however, when I went to deploy the Ubuntu 18.04 template I started getting errors on the plan execution.

When I looked at the Terraform output where I export the VM IP addresses, the JSON showed that all the cloned VMs had been assigned the same IP address.

At first I assumed it was due to the same MAC address being assigned by ESXi to the cloned VMs, resulting in the machines being allocated the same IP; however, when I checked the MAC addresses they were all different.

What is Machine-ID:

After some digging online I came across a change in behaviour where Ubuntu uses the machine-id to request DHCP addresses. Ubuntu server default networking goes through cloud-init which by default sends /etc/machine-id in the DHCP request. This leads to the duplicate IP situation.

The /etc/machine-id file contains the unique machine ID of the local system that is set during installation or boot. The machine ID is a single newline-terminated, hexadecimal, 32-character, lowercase ID. When decoded from hexadecimal, this corresponds to a 16-byte/128-bit value. This ID may not be all zeros.

The machine ID is usually generated from a random source during system installation or first boot and stays constant for all subsequent boots. Optionally, for stateless systems, it is generated during runtime during early boot if necessary.
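
You can confirm the behaviour on the clones themselves. A quick sanity check (a sketch, assuming SSH access to a couple of the cloned VMs):

    # Run on each cloned VM; identical output across the clones confirms
    # they are presenting the same machine-id in their DHCP requests
    cat /etc/machine-id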

Quick Fix:

From a template perspective there is a quick fix: blank out the machine-id file so that a new ID is generated on first boot. You can’t simply delete the file, as the deployment expects it to exist in some form and will fail if it doesn’t.

The simplest way I achieved this was by zeroing out the file.
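
Something along these lines should do it (a sketch, assuming sudo access on the VM before it’s converted back to a template):

    # Truncate /etc/machine-id to zero bytes; the file must still exist,
    # so don't delete it outright
    sudo truncate -s 0 /etc/machine-id

    # If /var/lib/dbus/machine-id is a regular file rather than a symlink,
    # point it at /etc/machine-id so the two stay in sync
    sudo ln -sf /etc/machine-id /var/lib/dbus/machine-id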

Once done, the VM can be saved again as a template and the cloning operation will result in unique IPs being handed out by the DHCP server.

References:

http://manpages.ubuntu.com/manpages/bionic/man5/machine-id.5.html

https://www.freedesktop.org/software/systemd/man/machine-id.html


There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of their operations, adding three new hosting zones to go along with their existing four zones in Australia and Indonesia. They also announced the opening of operations in the US. Apart from the fact that I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That is now more than three years ago, and in that time the industry has pulled back significantly from the previously inferred position of total and complete hyper-scale dominance in the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as a holistic destination for workloads, and more and more over the past couple of years the move has been towards multi and hybrid cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year, when AWS Outposts was announced: a shift in mindset by the undisputed leader in the public cloud space towards consuming an on-premises resource in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might be in the form of enhancing their IaaS platform by adding more avenues to consume services: examples include adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past number of years, even with the extremely cheap object storage the hyper-scalers offer on their cloud platforms.

All this leads me to believe that there is still a very significant role for Service Providers to play, in conjunction with other cloud platforms, for a long time to come. The service providers that are succeeding and growing are not sitting on their hands expecting what once worked to continue working. The successful service providers are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky and the likelihood is that you will have them as a customer for a number of years. I don’t know the actual accuracy of that, but I’ve always carried that belief. This flies in the face of modern thinking around service mobility, which has been reinforced by improvements in underlying network technologies that allow the portability and movement of workloads. This also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platform, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk about the hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Veeam Vault #11 – VBR, Veeam ONE, VAC Releases plus Important Update for Service Providers

Welcome to the 11th edition of Veeam Vault and the first one for 2019! It’s been more than a year since the last edition; however, in light of some important updates that have been released over the past couple of weeks and months, I thought it was time to open up the Vault again! Getting stuck into this edition, I’ll cover the releases of Veeam Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a, as well as an update for Veeam Availability Console and some supportability announcements.

Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a:

In July we released Update 4b for Veeam Backup & Replication 9.5. It brought with it a number of fixes to common support issues, as well as some important platform supportability milestones. If you haven’t moved onto 4b yet, it’s worth getting there as soon as possible. You will need to be on at least 9.0 Update 2 (build 9.0.0.1715) prior to installing this update. After a successful upgrade, your build number will be 9.5.4.2866.

Veeam ONE 9.5 Update 4a was released in early September and contains similar platform supportability to Backup & Replication, as well as a number of fixes. Details can be found in this VeeamKB.

Backup & Replication Platform support

  • VMware vCloud Director 9.7 compatibility at the existing Update 4 feature levels.
  • VMware vSphere 6.5 and 6.7 U3 supportability: the GA builds are officially supported with Update 4b.
  • Microsoft Windows 10 May 2019 Update and Microsoft Windows Server version 1903 support, as guest VMs and for the installation of Veeam Backup & Replication and its components, plus Veeam Agent for Windows 3.0.2 (included in the update).
  • Linux kernel version 5.0 support via the updated Veeam Agent for Linux 3.0.2 (included in the update).

For a full list of updates and bug fixes, head to the official VeeamKB. Update 4b is a cumulative update, meaning it includes all enhancements delivered as part of Update 4a. There are also a number of fixes specifically for Veeam Cloud & Service Providers that offer Cloud Connect services. For the full change log, please see this topic on the private VCSP forum.

https://www.veeam.com/kb2970

VAC 3.0 Patch:

Update 3 for Veeam Availability Console v3 (build 2762) was released last week and contains a number of important fixes and enhancements. The VeeamKB lists all the resolved issues, but I’ve summarized the main ones below. It is suggested that all VAC installations are updated as soon as possible. As a reminder, don’t forget to ensure you have a backup of the VAC server before applying the update.

  • UI – Site administrators can select Public IP Addresses belonging to a different site when creating a company.
  • UI – Under certain conditions, the “Used Storage” counter may display incorrect data on the “Overview” tab.
  • Server – Veeam.MBP.Service fails to start when managed backup agents have duplicate IDs (due to cloning operations) in the configuration database.
  • Usage Reporting – Under certain conditions, the usage report for managed Veeam Backup & Replication servers may not be created within the first ten days of a month.
  • vCloud Director – Under certain conditions, the management agent may connect to a VAC server without authentication.
  • Reseller – A reseller can change his or her backup quota to “unlimited” when creating a managed company with an “unlimited” quota.
  • RESTful APIs – Querying “v2/tenants/{id}” and “/v2/tenants/{id}/backupResources” information may take a considerable amount of time.

https://www.veeam.com/kb3003

Veeam Cloud Connect Replication Patch:

Probably one of the more important patches we have released of late has to do with a bug found in the stored procedure that generates automated monthly license usage reports for Cloud Connect Replication VMs. It displays an unexpected number of replicated VMs and licensed instances, which has been throwing off some VCSP license usage reporting. If VCSPs use the PowerShell command Get-VBRCloudTenant -Name “TenantName”, the correct information is returned.

To fix this right now, VCSPs offering Cloud Connect Replication services can visit this VeeamKB, download an SQL script and apply it to the MSSQL server as instructed. There will also be an automated patch released, and the fix will be baked into future Updates for Backup & Replication.

https://www.veeam.com/kb3004

Quick Round Up:

Along with a number of platform supportability announcements at VMworld 2019, it’s probably important to reiterate that we now have a patch available that supports restores into NSX-T for VMware Cloud on AWS SDDC environments. This also means that NSX-T is supported on all vSphere environments. The patch will be baked into the next major release of Backup & Replication.

Finally, the Dell EMC SC storage plug-in is now available, which I know will be popular among our VCSP community who leverage SC arrays in their Service Provider platforms. Being able to offload the data transfer of backup and replication jobs to the storage layer introduces a performance advantage: backups from storage array snapshots provide a fast and efficient way for the Veeam backup proxy to move data to a Veeam backup repository.

Quick Fix – OS Not Found Deploying Windows Template with Terraform

During the first plan execution of a new VM based on a Windows Server Core VM Template, my Terraform plan timed out on Guest Customizations. The same plan had worked without issue previously with an existing Windows Template, so I was a little confused as to what had gone wrong. When I checked the console of the cloned VM in vSphere, I found that it was stuck at the boot screen, unable to find the Operating System.

Operating System not found – Obviously having issues booting into the templated disk.

After a little digging around, I came across this post, which describes the error as being related to the VM Template being configured with EFI firmware, now the default for vSphere 6.7 VMs. Upon cloning, Terraform deploys the new VM with BIOS firmware, resulting in a disk that is unable to boot.

Checking the VM Template, it did in fact have EFI set.

One option was to reconfigure the Template to default to BIOS; however, the Terraform vSphere provider was updated last year to include an option to set the firmware on deployment.

In the instance declaration file we can set the firmware as shown below.
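
A sketch of what that looks like, assuming a vsphere_virtual_machine resource named "vm" (the resource and variable names here are illustrative, and the interpolation syntax matches the 0.11-era provider):

    resource "vsphere_virtual_machine" "vm" {
      # ... name, resource pool, disks, clone block and so on ...

      # "efi" or "bios"; read from a variable so it is defined in one place
      firmware = "${var.vm_firmware}"
    }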

If we set it up so that it reads that value from a variable, we only have to configure the efi or bios setting once, in the terraform.tfvars file.
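
In terraform.tfvars that could be as simple as (again, vm_firmware is my own illustrative name):

    # terraform.tfvars: set once per environment
    vm_firmware = "efi"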

In the variables.tf file the variable is declared with a default value of bios.
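
Something like the following, under the same naming assumption:

    variable "vm_firmware" {
      description = "Firmware type for deployed VMs: bios or efi"
      default     = "bios"
    }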

Once this was configured, the plan was able to successfully deploy the new VM from the Windows Template without issue, and Guest Customizations were able to continue.

Terraform Version: 0.11.7

Resources:

https://github.com/terraform-providers/terraform-provider-vsphere/issues/441

https://github.com/terraform-providers/terraform-provider-vsphere/pull/485

https://www.terraform.io/docs/providers/vsphere/r/virtual_machine.html#firmware

VMworld 2019 – Still Time To Go for FREE*!

VMworld is rapidly approaching, and for those who have not secured their place at the event in San Francisco, and for whatever reason have been hindered from purchasing an event ticket… there is still time and there is still a way!

We (Veeam) have been running a competition over the past few months that gives away three FULL conference passes, but it ends on the 19th of August, so time is running out!

Head here to register for the chance to win a FULL conference pass.

For a quick summary of what is happening at VMworld from a Veeam perspective including sessions, parties and more, click here to head to the main event page that contains details on what Veeam is doing at VMworld 2019.

*The Prize does not include any associated costs including but not limited to expenses, insurance, travel, food or additional accommodation costs unless otherwise specified above.

I went on Holiday (Vacation) and Managed to Switch Off!!

The jet lag has almost passed… I’ve nearly caught up with the backlog of messages on my various social platforms… expenses are almost done… and I’m about to hit Outlook and clear my email inbox. Yes, I’ve just come back from nearly 4.5 weeks away on holiday (vacation), and barring a few conversations on Teams with my team, I managed to switch off almost 100%… In fact this is my first blog post in more than a month!

To be honest, this is something that I thought I would never do, and on one side I feel somewhat ok about it… while on the other I have a case of mild agita over needing to catch up and get back into the groove of work life.

“Don’t F*cking Tell me to Switch Off!!!”

This was the original title of the post (modified to soften the blow) and mainly relates to the bucket load of messages and comments telling me to switch off while on leave as I started this trip. This is something I have experienced throughout my career and I see it all the time when people make the “mistake” of checking into Twitter, or posting in Slack when they are on vacation.

There is almost nothing I find more frustrating than people telling me (or others) how to spend my time… be it on holiday or otherwise, and especially when it comes to work-related matters. I’ve written before about work/life balance and how I have struggled to achieve it over the years; in fact, work/life balance in IT has become a real topic since then, and many people have written about their own personal struggles.

To that end, when people tell me to switch off, I tend to respond with what is stated above, and the immediate thought that resonates in my mind is that I’ll switch off when and if I damn well please! And if I don’t, then that is ok as well! If I feel balanced and am ok in myself, then it’s something that is in my control and not a place for others to try to dictate to me.

Regardless… I Did Switch Off

When it comes to my thoughts around switching off… it comes down to the fact that my hobby is also my job and my career. Tinkering is how I learn and an important component of learning is staying connected and engaged with the various online communities and content sources. This is why I find it hard to completely switch off. I don’t deny that there is a physiological side to this which equates to an addiction… it’s well documented that we thrive on the hits of dopamine that come from social reward.

For us as techies, that social reward is linked to emails, messages, Tweets, likes, hits, views etc. I’ll be honest and admit that I do crave all those things as well as social interaction with my workmates. However as I settled into my holiday I began to replace the need for technical reward with that of personal and family rewards that generated different types of dopamine hits.


The max hit came while at a local village feast in Gozo, where memories of my childhood trips to Malta came flooding back… and as I ate my third Imaret I was at max switch-off level and knew that I had succeeded in doing something I thought not possible! Total disconnect!

I captured that moment below in the fourth picture… this is for me as a reminder of where I can get to if I ever feel the need to switch off again.

Ultimately I was able to not touch my MBP for work all holiday, and I let myself drift away from my connected world without much thought or fear of missing out… for the most part 🙂

I still did a bit here and there, but not nearly as much as I had thought. Now that I am back, it’s time to get into the connected world and get back to what I do… stay engaged, stay connected and stay switched on!

Kubernetes Everywhere…Time to Take off the Blinkers!

This is more or less a follow-up post to the one I wrote back in 2015 about the state of containers in the IT world as I saw it at the time. I started off that post talking about the freight train that was containerization, along with a cheeky meme… fast forward four years and the narrative around containers has changed significantly, and now there is new cargo on that freight train… and it’s all about Kubernetes!

In my previous role working at a Cloud Provider, shortly after writing that 2015 post I started looking at ways to offer containers as a service. At the time there wasn’t much, but I dabbled a bit in Docker and, if you remember, VMware’s AppCatalyst… which I used to deploy basic Docker images on my MBP (I think it’s still installed, actually), with the biggest highlight for me at the time being able to play Docker Doom!

I was also involved in some of the very early alphas for what was at the time vSphere Integrated Containers (Docker containers as VMs on vCenter), which didn’t catch on compared to what is currently out there for the mass deployment and management of containers. VMware did evolve its container strategy with Pivotal Container Service; however, those outside the VMware world were already looking elsewhere as the reality of containerized development, along with serverless and cloud, took hold and became accepted as mainstream IT practice.

Even four or five years ago I was hearing the word Kubernetes often. I remember sitting in my last VMware vChampion session, where Kit Colbert was talking about Kuuuuuuuurbenites (the American pronunciation stuck in my mind) and how we all should be ready to understand how it works, as it was about to take over the tech world. I didn’t listen… and now I have the realisation that I should have started looking into Kubernetes and container management in general more seriously, sooner.

Not because it’s fundamental to my career path… not because I feel like I was lagging technically, and not because there have been those saying for years that Kubernetes will win the race. There is an opportunity to take off the blinkers and learn something that is being widely adopted, by understanding the fundamentals of what makes it tick. In terms of discovery and learning, I see this much like what I have done over the past eighteen months with automation and orchestration.

From a backup and recovery point of view, we have been seeing an increase in the field of customers and partners asking how they back up containers and Kubernetes. For a long time the standard response was “why?”. But it’s becoming more obvious that the initial stateless nature of containers is making way for more stateful, persistent workloads. So now it’s not only about backing up the management plane… but also understanding that we need to protect the data that sits within the persistent volumes.

What I’ll Be Doing:

I’ve been superficially interested in Kubernetes for a long time, reading blogs here and there and trying to absorb information where possible. But as with most things in life, you learn best by doing! My intention is to create a series of blog posts that describe my experiences with different Kubernetes platforms, ultimately deploying a simple web application with persistent storage.

These posts will not be how-tos on setting up a Kubernetes cluster etc. Rather, I’ll look at general config, application deployment, usability, cost and whatever else becomes relevant as I go through the process of getting the web application online.

Off the top of my head, I’ll look to work with these platforms:

  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Container Service for Kubernetes (EKS)
  • Azure Kubernetes Service (AKS)
  • Docker
  • Pivotal Container Service (PKS)
  • vCloud Director CSE
  • Platform9

The usual suspects are there in terms of the major public cloud providers. From a Cloud and Service Provider point of view, the ability to offer Kubernetes via vCloud Director is very exciting, and if I was still in my previous role I would be looking to productize that ASAP. For a different approach, I have always liked what Platform9 has done; I was also an early tester of their initial managed vSphere support, which has now evolved into managed OpenStack and Kubernetes. They also recently announced Managed Applications through the platform, which I’ve been playing with today.

Wrapping Up:

This follow-up post isn’t really about the state of containers today, or what I think about how and where they are being used in IT. The reality is that we live in a hybrid world, and workloads are created as-is for specific platforms on a need-by-need basis. At the moment there is nothing to say that virtualization, in the form of Virtual Machines running on hypervisors on-premises, is being replaced by containers. The reality is that between on-premises, public clouds and everything in between, workloads are being deployed in a variety of fashions… Kubernetes seems to have come to the fore and has reached a level of maturity that makes it a viable option… something that could not be said four years ago!

It’s time for me (maybe you) to dig underneath the surface!

Link:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Kubernetes is mentioned 18 times in this post and on this page.

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I’ve written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to what is fundamentally cheaper remote Object Storage. As with most innovative technologies, if you dig a little deeper, different use cases start to present themselves and unintended use cases find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used to allow on demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale-out Backup Repository (SOBR) which has a Capacity Tier backed by an Object Storage repository in Amazon S3. A minimal operational restore window is set, which means data is offloaded to the Capacity Tier sooner.

Once the data is there, if disaster happens on-premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. From there, a SOBR is configured with the same Amazon S3 credentials, connecting to the same Object Storage bucket; this detects the existing backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.

The diagram above has been published on the AWS Reference Architecture page, and while this post has been brief, there is more to come by way of an official AWS Blog Post co-authored by myself and Frank Fan from AWS around this solution. We will also look to automate the process as much as possible to make this a truly on demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to leverage VMware Cloud on AWS as a target for disaster recovery will look to Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture

Quick Fix: Unable to Login to WordPress Site

I’ve just had a mild scare: I was unable to log into this WordPress site, even after trying a number of different ways to gain access by resetting the password via the methods listed on various WordPress help sites. The standard reset-my-password-via-email option was also not working. I have direct access to the web server and to the backend MySQL database via phpMyAdmin. Even with all that access, and having apparently changed the password value successfully, I was still getting failed logins.

I had recently enabled Two-Factor Authentication with Google Authenticator, using the WordPress plugin of the same name. I suspected that this might be the issue, as one of the suggestions on the troubleshooting pages was to disable all plugins.

Luckily, I remembered that through the WordPress website you have administrative access back to your blog site. So rather than go down a more complex and intrusive route, I went in and remotely disabled the plugin in question.

Disabling that plugin worked and I was able to log in. I’m not sure yet if there were general issues with Google Authenticator, or if the plugin had some sort of issue; however, the end result was that I could log in and my slight panic was over.

An interesting note is that most things can be done through the WordPress website, including publishing blog posts and general site administration. In this case it saved me a lot of time trying to work out why I was unable to log in. So if you do have issues with your login, and you suspect it’s a plugin, make sure you have access to WordPress.com so you can remotely handle the activation status of the plugin.
