Category Archives: Backup

Tech Field Day Recap #TFD20

Tech Field Day 20 has come and gone, and it was an honour to play a small part in the 10th anniversary Tech Field Day event. This was my second TFD event of the year, having attended Cloud Field Day 5 back in April. It’s always a privilege to present to the delegates and to those tuning in on the livestream. The impact that TFD has had on people’s careers is not lost on me. In an indirect way, it helped me land this role at Veeam, as @RickVanover got his break having attended the first TFD. If Rick hadn’t gone to that, he wouldn’t have been hired by Veeam, and further down the track I might not have had the opportunity to join… possibly.

In any case, well done to Stephen Foskett and GestaltIT on 10 years and on the impact you have had on many people’s careers in our extended tech community.

Veeam Recap:

We had the second slot on the Wednesday, from 10am to 12pm, and presented on three main topics, as well as giving a very quick re-introduction to Veeam and how we are doing in the market today.

Rick then took everyone through a Scale-Out Backup Repository (SOBR) 101 and a quick recap of the Cloud Tier, which was released as part of Veeam Backup & Replication 9.5 Update 4. We could have level-set a bit more at this point, but time was already short. With that in mind, I put together a quick post last week to further demystify some of the terminology we use when talking about SOBR and the Cloud Tier.

Veeam Cloud Tier Glossary

Following that, I went through two of the most anticipated features in our upcoming Veeam Backup & Replication v10 release: enhancements to the Cloud Tier in the form of Copy Mode and Immutability for Amazon S3.

Michael Cade then took us through the v10 Enhanced NAS feature, which is probably our most eagerly awaited (and overdue) feature in years. Michael does a great job of going through the differences between us and our competitors, and also why we have waited this long to release backup for NAS… even though it is now much, much more than that.

As an extra, Michael put out this video the next day, further explaining how we have implemented CRC into the feature for more efficient backup performance.

Finally, I had 15 minutes to race through a feature that is not coming as part of v10, but is coming in 2020… CDP! It’s taken us a while but, as I said in the video, I believe we will have the most reliable and stable implementation of CDP. This isn’t something you want to mess around with, and I know all too well from experience the impact problematic CDP implementations can have.

#TFD20 Follow Up – Veeam Cloud Tier Glossary

Yesterday I presented at Tech Field Day 20. My first topic was the enhancements we are bringing to Cloud Tier in our Backup & Replication v10 release. Rick Vanover set up the v10 enhancement session by doing some groundwork on what a Scale-Out Backup Repository is, and briefly went over the initial Cloud Tier features released in Backup & Replication 9.5 Update 4.

We had a few questions around some of the terminology used with regards to the Cloud Tier, so I thought as a follow-up I would list out the glossary of terminology I’ve been building since the Update 4 release, with the addition of the new v10 enhancements.

  • Cloud Tier – The name given to the overall feature, introduced in Veeam Backup & Replication 9.5 Update 4.
  • Object Storage Repository – A repository backed by Amazon S3, Azure Blob Storage or IBM Cloud Object Storage.
  • Scale-Out Backup Repository (SOBR) – A Veeam feature first introduced in Veeam Backup & Replication v9. It consists of one or more Performance Tier extents and exactly one Capacity Tier extent.
  • Capacity Tier – The extent of a SOBR backed by an Object Storage Repository.
  • Performance Tier – The one or more extents of a SOBR backed by standard backup repositories.
  • Move Mode – The policy introduced in Update 4 that offloads data from sealed backup chains so that it resides in either the Performance Tier or the Capacity Tier.
  • Copy Mode – The policy coming in v10 that duplicates backup files from the Performance Tier to the Capacity Tier as soon as a backup job completes.
  • Offload Job – The process that moves data from the Performance Tier to the Capacity Tier.
  • Immutability Period – A new feature coming in v10 that sets an Amazon S3 (or S3-compatible) Object Lock on blocks copied or moved to the Capacity Tier, protecting them against accidental or malicious deletion.

In addition to that, I have pasted a link to the official Deep Dive Veeam Whitepaper for Cloud Tier, which goes into the why, the what and the how of the Cloud Tier and dives into the innovative technologies we have built into the feature.

White Paper Link: https://www.veeam.com/wp-cloud-tier-deep-dive.html

If you want to catch the Cloud Field Day 5 presentation on Cloud Tier, as well as the most recent one yesterday at Tech Field Day 20, I have embedded them below.

Heading to Tech Field Day 20 #TFD20

I’m currently sitting in my hotel room in sunny San Jose. Today and tomorrow will be spent finishing off preparations for Tech Field Day 20. This will be my second Tech Field Day event, following on from Cloud Field Day 5 in April. #TFD20 is the 10th anniversary of Tech Field Day, and with that there is special significance being placed on this event, which adds extra excitement to the fact that I’m presenting with my fellow Product Strategy team members, Michael Cade and Rick Vanover.

Veeam at Tech Field Day 20

Once again, this is an important event for us at Veeam as we are given the #TFD stage for two hours on the Wednesday morning as we look to talk about what’s coming in our v10 release… and beyond. We have crafted the content with the delegates in mind and focused on core data protection functionality that looks to protect organizations from modern attacks and data loss.

Michael, Rick and I will be focusing on reiterating what Veeam has done in leading the industry in innovation for a number of years, while also looking at the progress we have made in recent times in transitioning to a true software-defined, hardware-agnostic platform that offers customers absolute choice.

The Veeam difference also lies in the way in which we release ready, stable and reliable solutions, developed in innovative ways that sit outside the norm in the backup industry. What we will be showing on Wednesday will, I believe, highlight that as one of our strongest selling points.

Veeam are presenting at 10am (Pacific Time) Wednesday 13th November 2019

I am looking forward to presenting to all the delegates as well as those who join via the livestream.

v10 Enhancements – Downloading Object Storage Data per Tenant for SOBR

Version 10 of Veeam Backup & Replication isn’t too far away, and we are currently in the middle of a second private BETA for our customers and partners. There has been a fair bit of content released around v10 functionality and features from our Veeam Vanguards over the past couple of weeks, and as we move closer to GA, as part of the lead-up, I am doing a series on some of the cool new enhancements coming as part of the release. These will be short takes that give a glimpse into what’s coming in v10.

Downloading Tenant Data from SOBR Capacity Tier

Cloud Tier was by far the most significant feature of Update 4 for Backup & Replication 9.5, and we have seen the consumption of Object Storage in AWS, Azure and other platforms grow almost exponentially since its release. Our VCSPs have been looking to take advantage of the Move functionality that came in Update 4, but have also requested a way to pull offloaded data back from the Capacity Tier to the Performance Tier on a per-tenant basis.

The use case for this might be tenant off-boarding, or migration of backup data back on-site. In any case, our VCSPs needed a way to get the data back, rehydrate the VBK files and remove the data from Object Storage. In this quick post I’ll show how this is achieved through the UI.

First, looking at the image below, you can see a couple of dehydrated VBK files belonging to a specific tenant’s Cloud Connect Backup job; they are no bigger than 17MB as they sit next to ones that are about 1GB.

To start a Download job, we have the option to click the Download icon in the Tenant ribbon, or to right-click the tenant account and select Download.

An information box will appear, letting you know that there is a backup chain on the performance extent, along with the disk space required to download the backup data from the Capacity Tier back to the Performance Tier. The progress of the SOBR Download job can then be tracked.

When completed, we can see the details of the download from Object Storage to the Performance Tier. In the example below, many of the blocks already present in the Performance Tier were used to rehydrate the previously offloaded VBKs. This feature leverages Intelligent Block Recovery to save on egress costs and also reduce download time. Going back to the file view, the previously small 17MB VBKs have been rehydrated to their original size, and all the tenant’s data is back on the Performance Tier ready to be accessed.

Wrap Up:

That was a quick look at one of the cool smaller enhancements coming in v10. The ability to download data on a per-tenant basis from the Capacity Tier back to the Performance Tier is one that I know our VCSPs will be happy with.

Stay tuned over the next few weeks as I go through some more hidden gems.

Disclaimer: The information and screenshots in this post are based on BETA code and may be subject to change come final GA.

Public Beta – Backup for Microsoft Office 365 v4

Overnight at Microsoft Ignite, we announced the availability of the Public Beta for the next version of Veeam Backup for Microsoft Office 365. This is again a much-awaited update for VBO, with a ton of enhancements and the introduction of Object Storage support for Backup Repositories. The product has done extremely well and is one of the fastest growing in the Veeam Availability Platform. The reason for this is that organizations now understand the requirement to back up their Office 365 data.

Backup for Office 365 3.0 Yes! You Still Need to Backup your SaaS

While we have enhanced a number of features and added more reporting and user account management options, the biggest addition is the ability to leverage Object Storage to store longer-term backup data. This has been a huge request since around version 1.5 of VBO, mainly due to the amount of data required to back up Exchange, SharePoint and OneDrive for Office 365 organizations.

Similar to Cloud Tier in Backup & Replication 9.5 Update 4, the premise of the feature is to release pressure (be it cost, management or maintenance) on local Backup Repositories and offload the bulk of the data to cheaper Object Storage.

There is support in the beta for:

Though similar in name to the Cloud Tier in Backup & Replication, the way in which data is offloaded, stored and referenced in the VBO implementation is vastly different. As we get closer to GA for the v4 release, more information will be forthcoming about the underlying mechanics.

The Beta is available now and can be installed on a separate server for side-by-side testing against Office 365 organizations. For those looking to test the new functionality before the official GA later in the year, head to the Beta Download page and try it out!

There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of their operations, with the addition of three new hosting zones to go along with their existing four zones in Australia and Indonesia. They also announced the opening of operations in the US. Apart from the fact that I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That was more than three years ago now, and in that time the industry has pulled back significantly from the previously inferred position of total and complete hyper-scale dominance in the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as a holistic destination for workloads; more and more over the past couple of years, the move has been towards multi- and hybrid-cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year, when AWS Outposts was announced: the undisputed leader in the public cloud space driving a shift towards consuming on-premises resources in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might take the form of enhancing their IaaS platform by adding more avenues to consume services: examples include adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past number of years, even with the aggressively low pricing the hyper-scalers offer for their cloud object storage platforms.

All this leads me to believe that there is still a very significant role to be had for Service Providers in conjunction with other cloud platforms for a long time to come. The service providers that are succeeding and growing are not sitting on their hands and expecting what once worked to continue working. The successful service providers are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky and the likelihood is that you will keep them as a customer for a number of years. I don’t know the actual accuracy of that, but I’ve always carried that belief. It flies in the face of modern thinking around service mobility, which has been reinforced by improvements in underlying network technologies that allow the portability and movement of workloads. It also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platform, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk about the hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Veeam Vault #11 – VBR, Veeam ONE, VAC Releases plus Important Update for Service Providers

Welcome to the 11th edition of Veeam Vault and the first one for 2019! It’s been more than a year since the last edition; however, in light of some important updates that have been released over the past couple of weeks and months, I thought it was time to open up the Vault again! Getting stuck into this edition, I’ll cover the releases of Veeam Backup & Replication 9.5 Update 4b and Veeam ONE 9.5 Update 4a, as well as an update for Veeam Availability Console and some supportability announcements.

Backup & Replication 9.5 Update 4b and Veeam ONE 4a:

In July we released Update 4b for Veeam Backup & Replication 9.5. It brought with it a number of fixes for common support issues, as well as a number of important platform supportability milestones. If you haven’t moved to 4b yet, it’s worth getting there as soon as possible. You will need to be on at least 9.0 Update 2 (build 9.0.0.1715) or later prior to installing this update. After a successful upgrade, your build number will be 9.5.4.2866.

Veeam ONE 9.5 Update 4a was released in early September and contains similar platform supportability to Backup & Replication, as well as a number of fixes. Details can be found in this VeeamKB.

Backup & Replication Platform support

  • VMware vCloud Director 9.7 compatibility at the existing Update 4 feature levels.
  • VMware vSphere 6.5 U3 and 6.7 U3 – the GA releases of vSphere 6.5 U3 and 6.7 U3 are officially supported with Update 4b.
  • Microsoft Windows 10 May 2019 Update and Microsoft Windows Server version 1903 support as guest VMs, and for the installation of Veeam Backup & Replication and its components and Veeam Agent for Windows 3.0.2 (included in the update).
  • Linux Kernel version 5.0 support by the updated Veeam Agent for Linux 3.0.2 (included in the update)

For a full list of updates and bug fixes, head to the official VeeamKB. Update 4b is a cumulative update, meaning it includes all enhancements delivered as part of Update 4a. There are also a number of fixes specifically for Veeam Cloud & Service Providers that offer Cloud Connect services. For the full change log, please see this topic on the private VCSP forum.

https://www.veeam.com/kb2970

VAC 3.0 Patch:

Update 3 for Veeam Availability Console v3 (build 2762) was released last week and contains a number of important fixes and enhancements. The VeeamKB lists all the resolved issues, but I’ve summarized the main ones below. It is suggested that all VAC installations are updated as soon as possible. As a reminder, don’t forget to ensure you have a backup of the VAC server before applying the update.

  • UI – Site administrators can select Public IP Addresses belonging to a different site when creating a company. Under certain conditions, “Used Storage” counter may display incorrect data on the “Overview” tab.
  • Server – Veeam.MBP.Service fails to start when managed backup agents have duplicate IDs (due to cloning operation) in the configuration database.
  • Usage Reporting – Under certain conditions, usage report for managed Veeam Backup & Replication servers may not be created within the first ten days of a month.
  • vCloud Director – Under certain conditions, the management agent may connect to a VAC server without authentication.
  • Reseller – A reseller can change his or her backup quota to “unlimited” when creating a managed company with an “unlimited” quota.
  • RESTful APIs – Querying “v2/tenants/{id}” and “/v2/tenants/{id}/backupResources” information may take a considerable amount of time.

https://www.veeam.com/kb3003

Veeam Cloud Connect Replication Patch:

Probably one of the more important patches we have released of late addresses a bug found in the stored procedure that generates automated monthly license usage reports for Cloud Connect Replication VMs. The bug displays an unexpected number of replicated VMs and licensed instances, which has been throwing off some VCSP license usage reporting. If VCSPs were using the PowerShell command Get-VBRCloudTenant -Name “TenantName”, the correct information was returned.

To fix this right now, VCSPs offering Cloud Connect Replication can visit this VeeamKB, download the SQL script and apply it to the MSSQL server as instructed. An automated patch will also be released, and the fix will be baked into future updates for Backup & Replication.

https://www.veeam.com/kb3004

Quick Round Up:

Along with a number of platform supportability announcements at VMworld 2019, it’s probably important to reiterate that we now have a patch available that supports restores into NSX-T for VMware Cloud on AWS SDDC environments. This also means that NSX-T is supported in all vSphere environments. The patch will be baked into the next major release of Backup & Replication.

Finally, the Dell EMC SC storage plug-in is now available which I know will be popular among our VCSP community who leverage SCs in their Service Provider platforms. Being able to offload the data transfer of backup and replication jobs to the storage layer introduces a performance advantage. In this way, backups from storage array snapshots provide a fast and efficient way to allow the Veeam backup proxy to move data to a Veeam backup repository.

Assigning vSphere Tags with Terraform for Policy Based Backups

vSphere Tags are used to add attributes to VMs so that they can be categorised for further filtering or discovery. They have a number of use cases, and Melissa has a great blog post here on the power of vSphere Tags, their configuration and their application. Veeam fully supports the use of vSphere Tags when configuring Backup or Replication jobs; their use essentially transforms static jobs into dynamic, policy-based management for backup and replication.

Once a job is set to build its VM inventory from tags, there is almost no need to go back and amend the job settings to cater for VMs that are added to or removed from vCenter. Shown above, I have a Tag Category configured with two tags that are used to set whether a VM is included in or excluded from the backup job. Every time the job runs, it sources the VM list based on these policy elements, resulting in less management overhead and capturing changes to the VM inventory.

vSphere Tags with Terraform:

I’ve been deploying a lot of lab VMs using Terraform of late. The nature of these deployments means that VMs are created and destroyed often. I was finding that VMs that should be backed up were not being backed up, while VMs that shouldn’t be backed up were. This also leads to issues with the backup job itself… an example came this week, when I was working on my Kubernetes vSphere Terraform project.

The VMs were present at the start of the backup, but during the window the VMs had been destroyed, leaving the job in an error state. These VMs, being transient in nature, should never have been part of the job. With the help of the tags I created above, I was able to use Terraform to assign those tags to VMs created as part of the plan.

With Terraform you can create Tag Categories and Tags as part of the code. You can also leverage existing Tag Categories and Tags and feed that into the declarations as variables. For backup purposes, every VM that I create now has one of the two tags assigned to it. Outside of Terraform, I would apply this from the Web Client or via PowerShell, but the idea is to ensure a repeatable, declarative VM state where any VM created with Terraform has a tag applied.

Terraform vSphere Tag Configuration Example:

The first step is to declare two data sources somewhere in the TF code. I typically place these in a main.tf file.

We have the option to hard code the names of the Tag and Tag Category in the data source, but a better way is to use variables for maximum portability.
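As a sketch, the two data sources might look like this (the data source labels are illustrative, and the setup assumes the vSphere provider is already configured):

```hcl
# main.tf -- look up an existing Tag Category and Tag by name.
# The names come from variables so the code stays portable.
data "vsphere_tag_category" "backup" {
  name = var.tag_category
}

data "vsphere_tag" "backup_policy" {
  name        = var.tag_name
  category_id = data.vsphere_tag_category.backup.id
}
```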

The terraform.tfvars file is where we set the variable values.
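For example (the Tag Category name here is hypothetical; TPM03-NO-BACKUP is the exclusion tag used later in this post):

```hcl
# terraform.tfvars -- example values only
tag_category = "TPM03-Backup-Policy"
tag_name     = "TPM03-NO-BACKUP"
```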

We also need to create a corresponding entry in variables.tf.
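A minimal sketch of those declarations:

```hcl
# variables.tf -- declare the variables consumed by the data sources
variable "tag_category" {
  description = "Name of the existing vSphere Tag Category used for backup policy"
  type        = string
}

variable "tag_name" {
  description = "Name of the existing vSphere Tag that includes or excludes a VM from backup"
  type        = string
}
```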

Finally, we can set the tag information in the VM .tf file, which references the data sources, which in turn reference the variables that have been configured.
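In the VM resource itself, the only addition is the tags argument (the resource below is abbreviated; everything except the tags line is a placeholder for your existing configuration):

```hcl
resource "vsphere_virtual_machine" "vm" {
  # ...existing VM configuration (name, CPU, memory, disks, networks)...

  # Reference the tag data source so the backup policy tag
  # is applied to the VM at creation time.
  tags = [data.vsphere_tag.backup_policy.id]
}
```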

The Result:

Once the Terraform plan has been applied and the VMs created, the Terraform state file will contain references to the tags, and the output from the plan run will show them assigned to the VM.

The Tag will be assigned to the VM and visible as an attribute in vCenter.

Any Veeam backup job that is configured to use tags will now dynamically include or exclude VMs created by a Terraform plan. In the case above, the VM has the TPM03-NO-BACKUP tag assigned, which means it will be part of the exclusion list for the backup job.

Conclusion:

vSphere Tags are an excellent way to configure policy-based backup and replication jobs through Veeam, and Terraform is great for deploying infrastructure in a repeatable, declarative way. Having Terraform assign tags to VMs as they are deployed allows us to control whether a VM is included in or excluded from a backup policy. If deploying VMs from Terraform, take advantage of vSphere Tags and make them part of your deployments.

References:

https://www.terraform.io/docs/providers/vsphere/r/tag.html

Mapping vCloud Director Backup Jobs to Self Service Portal Tenants

Since version 7 of Backup & Replication, Veeam has led the way in the protection of workloads running in vCloud Director. With version 7, Veeam first released deep integration with vCD that talked directly to the vCD APIs to facilitate the backup and recovery of vCD workloads and their constructs. More recently, in version 9.5, the vCD Self Service Portal was released, which also taps into vCD for tenant authentication.

The portal leverages Enterprise Manager and allows service providers to grant their tenants self-service backup for their vCD workloads. More recently we have seen some VCSPs integrate the portal into the new vCD UI via the extensibility plugin which is a great example of the power that Veeam has with vCD today while we wait for deeper, native integration.

It’s possible that some providers don’t even know that this portal exists, let alone the value it offers. I’ve covered the basics of the portal here… but in this post, I am going to quickly mention an extension to a project I released last year for the vCD Self Service Portal, which automatically enables a tenant, creates default backup jobs based on policies, ties backup copy jobs to the default jobs for longer retention and finally imports the jobs into the vCD Self Service Portal ready for use.

Standalone Map and Unmap PowerShell Script:

From the above project, the job import part has been expanded into its own standalone PowerShell script that can also be used to map or unmap existing vCD Veeam Backup jobs to a tenant for management from the vCD Self Service Portal. This is done using the Set-VBRvCloudOrganizationJobMapping cmdlet.

As shown below, this tenant has already configured a number of jobs in the Portal.

There was another historical job that was created outside of the portal directly from the Veeam console. Seen below as TEST IMPORT.

To map the job, run the PowerShell script with the -map parameter. A list of all existing vCloud Director Backup jobs will be displayed. Once the corresponding number has been entered, the cmdlet within the script runs and the job is mapped to the tenant linked to it.

Once that has been run, the tenant now has that job listed in the vCD Self Service Portal.

There is a little bit of error checking built into the script, so that it exits nicely on an exception, as shown below.

Finally, if you want to unmap a job from the vCD Self Service Portal, run the PowerShell script with the -unmap parameter.

Conclusion:

Like most things I work on and then publish for general consumption, this started with a request from a service provider partner to wrap some logic around the Set-VBRvCloudOrganizationJobMapping cmdlet. The script can be taken and improved but, as-is, it provides an easy way to retrieve all vCloud jobs belonging to a Veeam server, select the desired job and have it mapped to a tenant using the vCD Self Service Portal.

References:

https://github.com/anthonyspiteri/powershell/blob/master/vCD-Create-SelfServiceTenantandPolicyJobs/vCD_job.ps1

https://helpcenter.veeam.com/docs/backup/powershell/set-vbrvcloudorganizationjobmapping.html?ver=95u4

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I’ve written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to fundamentally cheaper remote Object Storage. As with most innovative technologies, if you dig a little deeper, different use cases start to present themselves and unintended applications find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used to enable on-demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale-Out Backup Repository with a Capacity Tier backed by an Object Storage repository in Amazon S3. A minimal operational restore window is set, which means data is offloaded to the Capacity Tier sooner.

Once there, if disaster happens on premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. From there, a SOBR is configured with the same Amazon S3 credentials, connecting to the Object Storage bucket; Veeam detects the backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.

The diagram above has been published on the AWS Reference Architecture page and, while this post has been brief, there is more to come by way of an official AWS blog post around this solution, co-authored by myself and Frank Fan from AWS. We will also look to automate the process as much as possible to make this a truly on-demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to leverage VMware Cloud on AWS as a target for disaster recovery will look to Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture
