Category Archives: Veeam

The Reality of Disaster Recovery Planning and Testing

As recent events have shown, outages and disasters are a fact of life in this modern world. Given the number of different platforms that data sits on today, disasters can come in many shapes and sizes, lead to data loss and impact business continuity. Because major wide-scale disasters occur far less often than smaller disasters within a datacenter, it’s important to plan and test cloud disaster recovery models for the smaller disasters that can happen at different levels of the platform stack.

Because disasters can lead to revenue, productivity and reputation loss, it’s important to understand that having cloud-based backup is just one piece of the data protection puzzle. Here at Veeam, we empower our cloud and service providers to offer services based on Veeam Cloud Connect Backup and Replication. However, the planning and testing of what happens once disaster strikes is ultimately up to either the organizations purchasing the services or the services company offering Disaster Recovery as a Service (DRaaS) wrapped around backup and replication offerings.

Why it’s Important to Plan:

In theory, planning for a disaster should be completed before selecting a product or solution. In reality, it’s common for organizations to purchase cloud DR services without an understanding of what needs to be put in place before workloads are backed up or replicated to a cloud provider or platform. Concepts like recovery time and recovery point objectives (RTPO) need to be understood and planned so that, if a disaster strikes and failover occurs, applications are not only recovered within SLAs, but the data on those recovered workloads is also recent enough to be useful.

Smaller RTPO values go hand-in-hand with increased complexity and administrative overhead. When planning ahead, it’s important to size your cloud disaster recovery platform and build the right disaster recovery model that’s tailored to your needs. When designing your DR plan, you will want to target strategies that relate to your core line-of-business applications and data.

A staged approach to recovery means that you recover tier-one applications first so the business can still function. A common tier-one example is the mail server. Another is the payroll system, the loss of which could leave an organization unable to pay its suppliers. Once your key applications and services are recovered, you can move on to recovering data. Keep in mind that archival data generally doesn’t need to be recovered first. Again, being able to categorize the systems where your data sits and then working those categories into your recovery plan is important.

Planning should also include specific tasks and controls that need to be followed and adhered to during a disaster. It’s important to have specific run books executed by specific people for a smoother failover. Finally, it is critical to make sure that all IT staff know how to access applications and services after failover.

Why it’s Important to Test:

When talking about cloud-based disaster recovery models, there are a number of factors to consider before final sign-off and validation of the testing process. Once your plan is in place, test it regularly and make adjustments if issues arise from your tests. Partial failover testing should be treated with the same level of criticality as full failover testing.

Testing your DR plan ensures that business continuity can be achieved in a partial or full disaster. Beyond core backup and replication services testing, you should also test networking, server and application performance. Testing should even include situational testing with staff to be sure that they are able to efficiently access key business applications.

Cloud Disaster Recovery Models:

There are a number of different cloud disaster recovery models, which can be broken down into three main categories:

  • Private cloud
  • Hybrid cloud
  • Public cloud

Veeam Cloud Connect technology works for hybrid and public cloud models, while Veeam Backup & Replication works across all three models. The Veeam Cloud & Service Provider (VCSP) program offers Veeam Cloud Connect Backup and Replication through what are classified as hybrid clouds offering recovery-as-a-service (RaaS). Public clouds, such as AWS and Azure, can be used with Veeam Backup & Replication to restore VM workloads. Private clouds are generally internal to organizations and leverage Veeam Backup & Replication to replicate, back up or create backup copies of VMs between datacenter locations.

The ultimate goal here is to choose a cloud recovery model that best suits your organization. Each of the models above offers different technology choices and price points, and each is planned, tested and, ultimately, executed differently when a disaster plan is put into action.

When a partial or full disaster strikes, a thoroughly planned and well-tested DR plan, backed by the right disaster recovery model, will help you avoid a negative impact on your organization’s bottom line. Veeam and its cloud partners, service-provider partners and public cloud partners can help you build a solution that’s right for you.

First published on veeam.com by me – modified and updated for republishing today.

Veeam Powered Network v2 Azure Marketplace Deployment

Last month Veeam PN v2 went GA and became available for download and install from the veeam.com download page. As an update to that, we have published v2 to the Azure Marketplace and it is now available for deployment. As a quick refresher, Veeam PN was initially released as part of Direct Recovery to Azure and was made available through the Azure Marketplace. In addition to that, for the initial release I went through a number of use cases for Veeam PN, all of which are still relevant with the release of v2.

With WireGuard replacing OpenVPN for site-to-site connectivity, the list of use cases will be expanded and the use cases above enhanced. For most of my own use of Veeam PN, I have the hub living in an Azure region, which I connect into from wherever I am around the world.

Now that Veeam PN v2 is available from the Azure Marketplace, I have created a quick deployment video that can be viewed below. For those that want a more step-by-step guide as a working example, you can reference this post from v1… essentially the process is the same.

  • Deploy Veeam PN Appliance from Azure Marketplace (see the scripted sketch below this list)
  • Perform Initial Veeam PN Configuration to connect Azure
  • Configure SiteGateway and Clients
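
For those that prefer to script the first step rather than click through the portal, below is a minimal sketch using the Az PowerShell module. The image publisher, offer and SKU values are placeholders that I haven’t verified against the actual Veeam PN v2 marketplace listing, so look them up first, and keep in mind that some marketplace appliances also need their purchase plan details supplied (Set-AzVMPlan) when deploying outside the portal.

    # Minimal sketch only: deploy a marketplace appliance with the Az module.
    # The image URN below is a PLACEHOLDER, not the confirmed Veeam PN v2 listing.
    # Discover the real values first, e.g.:
    #   Get-AzVMImagePublisher -Location "eastus" | Where-Object PublisherName -like "*veeam*"
    #   Get-AzVMImageOffer -Location "eastus" -PublisherName "<publisher>"
    Connect-AzAccount

    $location = "eastus"
    $rgName   = "veeampn-rg"
    New-AzResourceGroup -Name $rgName -Location $location

    # Marketplace images generally require their terms to be accepted once per subscription
    $terms = Get-AzMarketplaceTerms -Publisher "<publisher>" -Product "<offer>" -Name "<sku>"
    Set-AzMarketplaceTerms -Publisher "<publisher>" -Product "<offer>" -Name "<sku>" -Terms $terms -Accept

    # Simplified VM creation; New-AzVM builds the vNet, NIC, NSG and public IP for you.
    # (URN format for -Image needs a recent Az.Compute; otherwise use the full
    #  New-AzVMConfig / Set-AzVMSourceImage workflow.)
    $cred = Get-Credential -Message "Local admin credentials for the appliance"
    New-AzVM -ResourceGroupName $rgName -Location $location -Name "veeampn-hub" `
             -Image "<publisher>:<offer>:<sku>:latest" -Size "Standard_B2s" -Credential $cred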

NOTE: One of the challenges we introduced by shifting over to WireGuard is that there is no direct upgrade path from v1 to v2. With that, v2 needs to be stood up side by side with v1 to enable a configuration migration… which at the moment is a manual process.

References:

https://anthonyspiteri.net/veeam-powered-network-azure-and-remote-site-configuration/

Cloud Tier Deep Dive Super Session On Demand!

Last week at VeeamON 2019, Dustin Albertson and I delivered a two-part deep dive session on Cloud Tier, which was released in Update 4 of Veeam Backup & Replication 9.5 in January. I’ve blogged about how Cloud Tier is one of the most innovative features I’ve seen in recent times, and I have been able to dig under the covers of the technology from early in the development cycle. I have presented everything from basic overviews to more complex deep dives over the past six or so months; however, at VeeamON 2019, Dustin and I took it a step further and went even deeper.

Part I:

The first part of the Deep Dive was presented as the first session of the event, just after the opening keynote. It was on the main stage and was all slide-driven content that introduces the Cloud Tier, talks about the architecture and then dives deeper into its inner workings, as well as covering some of the caveats.

Part II:

From the first session to the last session slot of the event… to finish up, Dustin and I presented a demo-only super session which, I have to admit, was one of the best sessions I’ve ever been a part of in terms of flow, audience participation and what we were able to actually show. We were even able to show off some of the new COPY functionality coming in v10.

There are a few scripts that we used in that session that I will look to release on GitHub over the next week or so… stay tuned for those! But for now, enjoy the session recordings embedded above.

VeeamON 2019 – Highlighting theCUBE Show Wrap

Hard to believe that another VeeamON has come and gone… for us in the Product Strategy Team the lead-up and the week itself are immensely busy… but this is what we live and breathe for! Everyone came away from the conference extremely pleased with how it panned out, and we believe it was also a success based on what we heard coming out of media, analysts and the general IT community through social media.

In this post, I want to comment on a great Show Wrap from theCUBE hosted by Dave Vellante and Peter Burris, which I think highlights exactly where Veeam is currently placed (Act I)… and where we are going in the industry (Act II).

Veeam is not about bragging rights and lots of flashy announcements…

This is a great quote from theCUBE Show Wrap (video embedded below) which speaks to what we at Veeam are trying to achieve. We are not restrained by the pressures of potential IPOs and we are confident enough to continue to be aggressive in the market while delivering on our core values of Simplicity, Reliability and Flexibility.

To comment a little more on what was talked about in theCUBE Show Wrap: it was interesting to hear the perspective from the hallways about how people were talking about solving problems… Veeam is creating opportunities to solve problems with the focus on the customer. That is what successful companies focus on!

The messaging that theCUBE took away from what they saw at the event was that Veeam is all about data protection wherever your data lives… Backup is where it starts! Veeam still believes this and is focused… while not over-rotating on the larger vision. Lots of our competitors are going hard after data management… modern architecture… Veeam is not legacy, but growing… if not flourishing, due to the focus it has.

It’s a big, complex market and everyone is going to fight hard for it. Focused R&D is a very important concept… Veeam isn’t looking to be everything to everyone, which can result in a wide but potentially shallow feature set. We see this with our newer competition… the concept of fast iterative development can have its flaws, and though at times we don’t release as often as others in the market, when we do release new features and enhancements they are focused and reliable… you only need to look at the Cloud Tier that came as part of Update 4 for Backup & Replication 9.5.

Veeam has done a great job of keeping its finger on the pulse… Veeam has done a good job of navigating what customers can really do (around data protection) and not getting too far ahead.

It’s all about our ecosystem and who we partner with… giving our customers the freedom of choice through our agnosticity. If we can nail the ecosystem partnerships and make them seamless, then Dave Vellante believes that Veeam has the advantage moving forward. This is where our Veeam Cloud Data Protection Platform, centred around Backup & Replication and our Storage APIs, will come into play.

Veeam is taking an almost Apple-like approach… give customers what they can handle… then give them a little bit more.

Some really interesting thoughts in the Show Wrap from beginning to end… it’s worth a watch, and I believe it backs up the general feeling of a VeeamON show well executed, which supports our shift into Act II.

This tweet sums it up well:

https://twitter.com/jpwarren/status/1130955342177685504

The whole stream of what was recorded at VeeamON 2019 by theCUBE can be found here:

VeeamON 2019 – Mainstage Technical Session Recap and Video

Hard to believe that another VeeamON has come and gone… for us in the Product Strategy Team the lead-up and the event itself are immensely busy, but this is what we live and breathe for! Everyone came away from the conference extremely pleased with how it panned out, and we believe it was also a success based on what we heard coming out of media, analysts and the general IT community through social media.

We did something a little different this year at VeeamON. Instead of having one long General Session Keynote, we split the general sessions into two parts… one being a Veeam Vision keynote delivered by Ratmir in the morning, and the second being a Technology General Session held later in the day.

The idea was to dedicate ninety minutes to showcasing what we had already released in 2019 and then take an advance look at what was to come later in the year. The other thing that we wanted to achieve was to bring live demos back to the VeeamON mainstage, as we saw in 2015 and 2017.

Session Breakdown:

It’s pretty rare in our industry for companies to attempt live demos during keynote presentations… the ghosts of Microsoft BSODs past seem to hinder the use of live demos these days, but that is not how Veeam and the Veeam Product Strategy Team roll. To pull off 8 live demos without a glitch (4 of which were running on Tech Preview code) is a testament to the confidence we have in ourselves and in the technology… it’s also a huge rush when everything comes off as expected.

That said, the Technology General Session is worth watching for those interested in what Veeam has delivered so far this year… and what is to come!

Released : Veeam PN v2…Making VPNs Simple, Reliable and Scalable

When it comes to connecting remote sites and branch offices, or extending on-premises networks to the cloud, the level of complexity has traditionally been high. Networking has always been the most complex part of any IT platform. There has also always been a high level of cost associated with connecting sites… both from a hardware and a software point of view. There are also the man-hours needed to ensure things are set up correctly and will continue to work. As well as that, security and performance are important factors in any networking solution.

Simplifying Networking with Veeam

At VeeamON in 2017, we announced the release candidate for Veeam Powered Network (Veeam PN), which in combination with our Restore to Azure functionality created a new solution to ease the complexities of extending an on-premises network to an Azure network to ensure connectivity during restoration scenarios. In December of that year, Veeam PN went generally available as a FREE solution.

What Veeam PN does well is present a simple and intuitive web-based user interface for the setup and configuration of site-to-site and point-to-site VPNs. Moving away from the intended use case, Veeam PN became popular in the IT enthusiast and home lab worlds as a simple and reliable way to remain connected while on the road, or to easily mesh together networks that were spread across disparate platforms.

By utilizing OpenVPN under the surface and automating and orchestrating the setup of site-to-site and point-to-site networks, we leveraged a mature Open Source tool that offered a level of reliability and performance that suited most use cases. However, we didn’t want to stop there and looked at ways in which we could continue to enhance Veeam PN to make it more useful for IT organizations and start to look to increase underlying performance to maximize potential use cases.

Introducing Veeam Powered Network v2 featuring WireGuard®

With the release of Veeam PN v2, we have enhanced what is possible for site-to-site connectivity by incorporating WireGuard into the solution (replacing OpenVPN for site-to-site) as well as enhancing usability. We also added the ability to better connect to remote devices with the support of DNS for site-to-site connectivity.

WireGuard has replaced OpenVPN for site-to-site connectivity in Veeam PN v2 due to its rise in the open-source world as a new standard in VPN technologies, offering a higher degree of security through enhanced cryptography while operating more efficiently, leading to increased performance. It achieves this by working in kernel and by using far fewer lines of code (roughly 4,000 compared to 600,000 in OpenVPN), and it offers greater reliability when connecting hundreds of sites… therefore increasing scalability.

For a deeper look at why we chose WireGuard… have a read of my official veeam.com blog. The story is very compelling!

Increased Security and Performance

By incorporating WireGuard into Veeam PN we have further simplified the already simple WireGuard setup, allowing users of Veeam PN to consume it for site-to-site connectivity even faster via the Veeam PN web console. Security is always a concern with any VPN, and WireGuard takes a simpler approach to security by relying on crypto versioning to deal with cryptographic attacks… in a nutshell, it is easier to move through versions of the underlying primitives than to negotiate cipher types and key lengths between client and server.

Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN, meaning that Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x depending on CPU configuration), which opens up the use cases to far more than just basic remote office or home lab use. Veeam PN can now be considered as a way to connect multiple sites together and sustain transfers of hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.

Other Enhancements

The addition of WireGuard is easily the biggest enhancement over Veeam PN v1, however there are a number of other enhancements, listed below:

  • DNS forwarding and configuring to resolve FQDNs in connected sites.
  • New deployment process report.
  • Microsoft Azure integration enhancements.
  • Easy manual product deployment.

Conclusion

Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process around the configuration, creation and management of site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications due to the enhancements that WireGuard offers.

#VeeamON 2019 – Top Session Picks, Live Tech Demos and VeeamOn Party

VeeamON is happening next week and the final push towards the event is in full swing. I can tell you that this year’s event is going to be slightly different for those that have attended VeeamONs in the past… however, that is a good thing! This is going to be my fourth VeeamON, and my third being involved with the preparation of elements of the event. Having been behind the scenes, and knowing what our customers and partners are in for in terms of content and event activities… I can’t wait for things to kick off in Miami.

This year we have 60+ breakout sessions with a number of high-profile speakers coming over to help deliver those sessions. We also have significant keynote speakers for the main stage sessions on each of the event days. One of the biggest differences this year is that we will have a dedicated Technical Mainstage Keynote happening on Tuesday afternoon, which will feature myself and other members of the Veeam Product Strategy and Product Management teams showing live demos of the latest Veeam technology and a look at what’s coming in our next major release.

Top Session Pick:

I’ve gone through all the breakouts and picked out my top sessions that you should consider attending…as usual there is a cloud slant to most of them, but there are also some core technology sessions that are not to be missed. The Veeam Product Strategy team are well represented in the session list so it’s also worth looking to attend talks from Rick Vanover, Michael Cade, Niels Engelen, David Hill, Kirsten Stoner, Dave Russell, Jason Buffington, Jeff Reichard and Danny Allan.

Secrets to Design an Availability Infrastructure for 25.000 VMs
Edwin Weijdema

Architecture, Installation and Design for Veeam Backup for Microsoft Office 365
Timothy Dewin and Niels Engelen

TOP SECRET: Session related to announcement
Mike Resseler and Kostya Yasyuk

The State of the Backup Market & Veeam 2019 Predictions
Dave Russell

Cumulonimbus – Cloud Tier Deep Dive & Best Practices *
Anthony Spiteri and Dustin Albertson

Veeam Availability Console Deployment Best Practices
Luca Dell’Oca and Vitaliy Safarov

Activate Your Data with Veeam DataLabs
Michael Cade

Technology General Session
Veeam Product Management and Strategy Teams

VeeamON Party
Flo Rida

You can download the VeeamON Mobile Application to register for sessions, organise and keep tabs on other parts of the event. Again, looking forward to seeing you all next week in Miami!

CrowdCompass Speaker Link

Cloud Tier Data Migration between AWS and Azure… or anywhere in between!

At the recent Cloud Field Day 5 (CFD#5) I presented a deep dive on the Veeam Cloud Tier, which was released as a feature extension of our Scale Out Backup Repository (SOBR) in Update 4 of Veeam Backup & Replication. Since going GA we have been able to track the success of this feature by looking at public cloud Object Storage consumption by Veeam customers using it. As of last week, Veeam customers have offloaded petabytes of backup data into Azure Blob and Amazon S3… not counting the data being offloaded to other Object Storage repositories.

During the Cloud Field Day 5 presentation, Michael Cade talked about the portability of Veeam’s data format and how we do not lock our customers into any specific hardware or format that requires a specific underlying file system. We offer complete flexibility and agnosticism as to where your data is stored, and the same is true when talking about which Object Storage platform to choose for the offloading of data with the Cloud Tier.

I had a need recently to set up a Capacity Tier extent that was backed by an Object Storage Repository on Azure Blob. I wanted to use the same backup data that I had in an existing Amazon S3-backed Capacity Tier while still keeping things clean in my Backup & Replication console… luckily, we have built in a way to migrate to a new Object Storage Repository, taking advantage of the innovative tech we have built into the Cloud Tier.

Cloud Tier Data Migration:

During the offload process, data is tiered from the Performance Tier to the Capacity Tier, effectively dehydrating the VBK files of all backup data and leaving only the metadata with an index that points to where the data blocks have been offloaded in the Object Storage.

This process can also be reversed and the VBK file can be rehydrated. The ability to bring the data back from Capacity Tier to the Performance Tier means that if there was ever a requirement to evacuate or migrate away from a particular Object Storage Provider, the ability to do so is built into Backup & Replication.

In this small example, as you can see below, the SOBR was configured with a Capacity Tier backed by Amazon S3 and using about 15GB of Object Storage.

The first step is to download the data back from the Object Storage and rehydrate the VBK files on the Performance Tier extents.

There are two ways to achieve the rehydration or download operation.

  1. Via the Backup & Replication Console
  2. Via a PowerShell Cmdlet

Rehydration via the Console:

From the Home menu, under Backups, right-click on the job name and select Backup Properties. From here there is a list of the files contained within the job and also the objects that they contain. Depending on where the data is stored (remembering that the data blocks are only ever in one location… the Performance Tier or the Capacity Tier), the icon against the file name will be slightly different, with offloaded files represented with a cloud.

Right-clicking on any of these files will give you the option to copy the data back to the Performance Tier. You have the choice to copy back the backup file on its own, or the backup file and all of its dependencies.

Once this is selected, a SOBR Download job is kicked off and the data is moved back to the Performance Tier. It’s important to note that our Intelligent Block Recovery will come into play here and look at the local data blocks to see if any match what is being downloaded from the Object Storage… if so, it will copy them from the Performance Tier, saving on egress charges and also speeding up the process.

In the image above you can see the Download job working, having downloaded only 95.5MB from Object Storage with 15.1GB copied from the Performance Tier… meaning that, for the most part, the local data blocks were able to be used for the rehydration.

The one caveat to this method is that you can’t select files in bulk or across multiple backup jobs, so the process to rehydrate everything from the Capacity Tier can be tedious.

Rehydration via PowerShell:

To solve that problem, we can use PowerShell to call the Start-VBRDownloadBackupFile cmdlet to do the bulk of the work for us. Below are the steps I used to get the backup job details, feed them through to a variable that contains all the file names, and then kick off the Download job.
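
A rough sketch of those steps looks like the following. The job name is a placeholder, and both the call used to enumerate the backup files and the cmdlet parameters are assumptions worth checking against the PowerShell reference linked in the References section below.

    # Rough sketch of a bulk rehydration. Assumes the Veeam PowerShell snap-in
    # is available on the backup server; "Backup Job 1" is a placeholder name.
    Add-PSSnapin VeeamPSSnapin

    # Get the backup job details
    $backup = Get-VBRBackup -Name "Backup Job 1"

    # Build a variable containing all of the backup files in the chain.
    # GetAllStorages() is an assumption here - verify how to enumerate backup
    # files against the cmdlet reference before running this.
    $files = $backup.GetAllStorages()

    # Kick off the SOBR Download job to pull the offloaded blocks back
    # down to the Performance Tier
    Start-VBRDownloadBackupFile -BackupFile $files

    # Later, once the Capacity Tier has been repointed at the Azure Blob backed
    # repository, push the sealed chains back out with an Offload job
    Start-VBROffloadBackupFile -BackupFile $files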

The PowerShell window will then show the Download job running.

Completing the Migration:

No matter which way the Download job is initiated, we can see the progress from the Backup & Replication console under the Jobs section.

And looking at the Disk and Network sections of Windows Resource Monitor we can see connections to Amazon S3 pulling the required blocks of data down.

Once the Download job has been completed and all VBKs have been rehydrated, the next step is to change the configuration of the SOBR Capacity Tier to point at the Object Storage Repository backed by Azure Blob.

The final step is to initiate an offload to the new Capacity Tier via an Offload job… this can be triggered via the console or via PowerShell (as shown in the last command of the PowerShell code above) and, because we already have a set of data that satisfies the conditions for offload (sealed chains and backups outside the operational restore window), data will be dehydrated once again… but this time up to Azure Blob.

The used space shown below in the Azure Blob Object Storage matches the used space initially consumed in Amazon S3. All recovery operations show restore points on the Performance Tier and on the Capacity Tier as dictated by the operational restore window policy.

Conclusion:

As mentioned in the intro, the ability for Veeam customers to have control of their data is an important principle revolving around data portability. With the Cloud Tier we have extended that by allowing you to choose the Object Storage Repository of your choice for cloud-based storage of Veeam backup data… but we have also given you the option to pull that data out and shift it when and where desired. Migrating data between AWS, Azure or any platform is easily achieved and can be done without too much hassle.

References:

https://helpcenter.veeam.com/docs/backup/powershell/object_storage_data_transfer.html?ver=95u4

Released: Backup for Office 365 3.0 …Yes! You Still Need to Backup your SaaS

A couple of weeks ago, Veeam Backup for Office 365 version 3.0 (build 3.0.0.422) went GA. This new version builds on the 2.0 release that offered support for SharePoint and OneDrive, as well as enhanced self-service capabilities for Service Providers. Version 3.0 is more about performance and scalability, as well as adding some highly requested features from our customers and partners.

Version 2.0 was released last July and was focused on expanding the feature set to include OneDrive and SharePoint. We also continued to enhance the automation capability of the platform through a RESTful API service, allowing our Cloud & Service Providers to tap into the APIs to create scalable and efficient service offerings. In version 3.0, there is also an extended set of PowerShell cmdlets that have been enhanced from version 2.0.

What’s New in 3.0:

When backing up SaaS-based services, a lot of what happens is outside the control of the backup vendor, and there were some challenges around performance when backing up and restoring SharePoint and OneDrive in version 2.0. With the release of version 3.0 we have managed to increase the performance of SharePoint and OneDrive incremental backups to up to 30 times what was previously seen in 2.0. We have also added support for multi-factor authentication, which was a big ask from our customers and partners.

Other key enhancements for me were the optimisations around the repository databases that improve space efficiency, and the auto-scaling of repository databases that enables easier storage management for larger environments by overcoming the ESE file size limit of 64 TB. When the limit is reached, a new database is created automatically in the repository, which removes the need for manual intervention.

Apart from the headline new features and enhancements there are also a number of additional ones that have been implemented into Backup for Microsoft Office 365 3.0.

  • Backup flexibility for SharePoint Online. Personal sites within organisations can now be excluded from or included in a backup in bulk.
  • Flexible protection of services within your Office 365 organization, including exclusive service accounts for Exchange Online and SharePoint Online.
  • Built-in Office 365 storage and licensing reports.
  • Snapshot-based retention, which extends the available retention types.
  • Extended search options in the backup job wizard that make it possible to search for objects by name, email alias and office location.
  • On-demand backup jobs to create backup jobs without a schedule and run them upon request (see the sketch below this list).
  • The ability to rename existing organizations to keep a cleaner view of multiple tenant organizations presented in the console.
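
As a quick illustration of the extended PowerShell support, here is a minimal sketch of running one of those on-demand jobs from the Veeam Backup for Microsoft Office 365 module. The organization and job names are placeholders, and the parameter names are worth confirming against the VBO 3.0 PowerShell reference.

    # Minimal sketch using the VBO PowerShell module (names are placeholders).
    Import-Module Veeam.Archiver.PowerShell

    # Grab the protected Office 365 organization
    $org = Get-VBOOrganization -Name "contoso.onmicrosoft.com"

    # Find an existing job for that organization...
    $job = Get-VBOJob -Organization $org -Name "SharePoint and OneDrive Backup"

    # ...and run it on demand (handy for jobs created without a schedule)
    Start-VBOJob -Job $job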

For another look at what’s new, Niels Engelen goes through his top new features in detail here, and for service providers out there, it’s worth looking at his Self-Service Portal, which has also been updated to support 3.0.

Architecture and Components:

 

There hasn’t been much of a change to the overall architecture of VBO and, like all things Veeam, you have the ability to go with an all-in-one design or scale out depending on sizing requirements. Everything is handled from the main VBO server and the components are configured/provisioned from here.

Proxies are the workhorses of VBO and can be scaled out depending on the size of the environment being backed up. Again, this could be Office 365 or on-premises Exchange or SharePoint instances.

Repositories must be configured on Windows-formatted volumes as we use the JetDB database format to store the data. The repositories can be mapped one-to-one to tenants, or have a many-to-one relationship.

Installation Notes:

You can download the latest version of Veeam Backup for Microsoft Office 365 from this location. The download contains three installers that cover the VBO platform and two new versions of the Explorers. Explorer for Microsoft OneDrive for Business is contained within the Explorer for Microsoft SharePoint package and is installed automatically.

  • 3.0.0.422.msi for Veeam Backup for Microsoft Office 365
  • 9.6.5.422.msi for Veeam Explorer for Microsoft Exchange
  • 9.6.5.422.msi for Veeam Explorer for Microsoft SharePoint

To finish off… it’s important to read the release notes here, as there are a number of known issues relating to specific situations and configurations.

Backup for Office 365 has been a huge success for Veeam, with a growing realisation that SaaS-based services require an availability strategy. The continuity of data on SaaS platforms like Office 365 is not guaranteed and it’s critical that a backup strategy is put into place.

Links and Downloads:

Disaster Recovery and Resiliency with Veeam Cloud Tier

Yesterday at Cloud Field Day 5, I presented a deep dive on our Cloud Tier feature that was released as a feature of the Scale Out Backup Repository (SOBR) in Veeam Backup & Replication Update 4. The section went through an overview of its value proposition as well as a deep dive into how we tier the backup data into Object Storage repositories via the Capacity Tier extent of a SOBR. I also covered the space-saving and cost-saving efficiencies we have built into the feature, as well as looking at the full suite of recoverability options still available with data sitting in an Object Storage Repository.

This included a live demo of a situation where a local Backup infrastructure had been lost and what the steps would be to leverage the Cloud Tier to bring that data back at a recovery site.

Quick Overview of Offload Job and VBK Dehydration:

Once a Capacity Tier Extent has been configured, the SOBR Offload Job is enabled. This job is responsible for validating what data is marked to move from the Performance Tier to the Capacity Tier based on two conditions.

  1. The policy defining the Operational Restore Window
  2. Whether the backup data is part of a sealed backup chain

The first condition is all about setting a policy for how many days you want to keep data locally on the SOBR Performance Tier extents, which effectively become your landing zone. This is often dictated by customer requirements and can now be used to design a more efficient approach to local storage, with the understanding that the majority of older data will be tiered to Object Storage.

The second is around the sealing of backup chains which means they are no longer under transformation. This is explained in this Veeam Help Document and I also go through it in the CFD#5 session video here.

Once those conditions are met, the job starts to dehydrate the local backup files and offload the data into Object Storage leaving a dehydrated shell with only the metadata.

The importance of this process is that, because we leave the shell locally with all the metadata contained within it, we are still able to perform every Veeam recovery option, including Instant VM Recovery and Restore to Azure or AWS.

Resiliency and Disaster Recovery with Cloud Tier:

Looking at the above image of the offload process, you can see that the metadata is replicated to the Object Storage as well as the Archive Index, which keeps track of which blocks are mapped to which backup file. In fact, for every extent we keep a resilient copy of the Archive Index, meaning that if an extent is lost, there is still a reference.

Why this is relevant is because it gives us disaster recovery options in the case of the loss of a whole backup site or the loss of an extent. During the synchronization, we download the backup files with metadata located in the object storage repository to the extents and rebuild the data locally before making it available in the backup console.

After the synchronization is complete, all the backups located in object storage will become available as imported jobs and will be displayed under Backups and Imported in the inventory pane. But what better way to see this in action than a live demo… Below, I have pasted in the Cloud Field Day video, which will start at the point where I show the demo. If the auto-start doesn’t kick in correctly, the demo starts at the 31:30 minute mark.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier_offload_job.html?ver=95u4
