Category Archives: Backup

The State of DRaaS…A Few Thoughts

Over the past week Gartner released the 2018 edition of the Magic Quadrant for DR as a Service. The first thing that I noticed was how sparse the quadrant was compared to the 2017 edition. Though many hold it in high regard, the Gartner Magic Quadrant isn’t the be-all and end-all source of information on who is offering DRaaS and succeeding at it. But it got me thinking about the state of the current DRaaS market.

Before I talk about that, what does it mean to see fewer vendors in the Magic Quadrant this year? Probably not much, apart from the fact that the ones that dropped out probably don’t see value in undertaking the process. Though, as mentioned in this post, it could also be due to the criteria changing. Comparing the past three years, you can see above that only ten participants remain, down from twenty three the previous year. There has also been a shift in position, and it’s great to see iLand leading the way, beating out global powerhouses like IBM and Microsoft.

But does the lack of participants in this year’s quadrant point to a declining market? Are companies skipping DRaaS for traditional workloads and looking to build availability and resilience into the application layer? Has network extension become so commonplace and reliable that companies are less inclined to use DRaaS providers and instead rely on inbuilt replication and mobility? There is an argument to be made that the push to cloud-native applications, the use of public cloud and evolving network technologies have the potential to kill DRaaS…but not yet…and not any time soon!

Hybrid cloud and multi-platform services are here to stay…and while the use of the hyper-scale public clouds, serverless and containerisation has increased, there is still an absolute play to be had in the business of ensuring availability for “traditional” workloads. Those workloads, whether they sit on-premises or in private or public cloud platforms, still use the VM as their base unit of measurement.

This is where DRaaS still has the long game.

Depending on region, there is still a smattering of physical servers running workloads (some regions like Asia are 5-10 years behind the rest of the world in virtualisation…let alone containerisation or public cloud). It’s true that most Service Providers who have been successful with Infrastructure as a Service have spent the last few years developing their Backup, Replication and Disaster Recovery as a Service offerings.

Underpinning these service offerings are availability vendors like Veeam, Zerto and VMware, whose software Service Providers can leverage to offer DR services from on-premises locations to their cloud platforms, or between their cloud platforms. Traditional backup vendors offer replication features that can also be used for DR. There is also the likes of Azure, which offers DRaaS through technologies like Azure Site Recovery that look to provide an end-to-end service.

DRaaS still predominantly focuses on the availability of Virtual Machines and the services and applications they run. The end goal is to have critical line-of-business applications identified, replicated and then made available in the event of a disaster. The definition of a disaster varies depending on who you speak to, and the industry loves to use geo-scale impact events when talking about disasters…but the reality is that the failure of a single instance or application is much more likely than a whole-system failure.

Disaster avoidance has become paramount with DRaaS. Businesses accept that outages will happen, but where possible the ramifications of downtime need to be kept to a minimum. Or better yet…not happen at all. In my experience, having worked in and with the service provider industry since 2002, all infrastructure/cloud providers will experience outages at some point…and as one of my work colleagues put it…

It’s an immutable truth that outages will occur! 

I’ve written about this topic before and even had a shirt for sale at one stage stating that outages are like assholes…everyone has one!

There are those that might challenge my thoughts on the subject, however as I talk to service providers around the world, the one thing they all believe is that DRaaS is worth investing in and will generate significant revenue streams. I would argue that DRaaS hasn’t even hit its inflection point yet, the point at which it’s seen as a critically necessary service for businesses to consume. It’s true to say that Backup as a Service has nearly become a commodity…but DRaaS has serious runway.

References:

https://www.gartner.com/doc/3881865

What’s Changed: 2018 Gartner Magic Quadrant for Disaster Recovery as a Service

First Look – Zenko, Multi-Platform Data Replication and Management

A couple of weeks ago I stumbled upon Zenko via a LinkedIn post. I was interested in what it had to offer and decided to have a deeper look. With Veeam launching our vision to be the leader in intelligent data management at VeeamON this year, I have been on the lookout for solutions that do smart things with data and address the need to control the accelerating spread and sprawl of that data. Zenko looks to be on the right track with its notion of freedom: avoiding lock-in to a specific cloud platform, whether private or public.

Having come from service provider land I have always been against the idea of a Hyper-Scaler Public Cloud monopoly that forces lock-in and diminishes choice. Because of that, I gravitated to Zenko’s mission statement:

We believe that everyone should be in control of their data. Zenko’s mission is to allow everyone to be in control of their data, while leveraging the efficiency of private and public clouds.

The platform looks to provide data mobility across multiple cloud platforms through common communication protocols and a shared set of APIs for managing its data sets. Zenko focuses on achieving this multi-cloud capability through a unified, AWS S3 API based service, with data management and federated search capabilities driving its use cases. Data mobility between clouds, whether private or public cloud services, is what Zenko is aimed at.

Zenko Orbit:

Zenko Orbit is the cloud portal for data placement, workflows and global search. Aimed at application developers and “DevOps” teams, the premise of Zenko Orbit is that they can spend less time learning multiple interfaces for different clouds while leveraging the power of cloud storage and data management services, without needing to be an expert across different platforms.

Orbit provides an easy way to create replication workflows between different cloud storage platforms…whether it be Amazon S3, Azure Blob, GCP Storage or others. You then have the ability to search across a global namespace for system and user-defined metadata.

Quick Walkthrough:

Given this is open source, you have the option to download and install a Zenko instance which is then registered against the Orbit cloud portal, or you can pull the whole stack from GitHub. There is also a hosted sandbox instance that can be used to take the system for a test drive.

Once done, you are presented with a Dashboard that gives you an overview of the amount of data and other metrics contained in your instance. The Settings area gives you details about the instance, account details and the endpoints to connect to. There is also the ability to download pre-generated Cyberduck profiles.

You need to create a storage management account to be able to browse your buckets in the Orbit portal.

Once that’s been done you can create a bucket and select a location which in the sandbox defaults to AWS us-east-1.

From here, you can add a new storage location and configure the replication policy. For this, I created a new Azure Blob Storage account as shown below.

From the Orbit menu, I then added a New Storage Location.

Once the location has been added you can configure the bucket replication. This is the cool part that is the premise of the platform: being able to set up policies to replicate data across multiple cloud platforms. In the sandbox, the policy is one way only, meaning there is no bi-directional replication. Simply select the source, the destination and the bucket from the menu.

Once that has been done you can connect to the endpoint and upload files. I tested this out with the setup above and it worked as advertised. Using the Cyberduck profile I connected in, uploaded some files and monitored the Azure Blob storage end for the files to replicate.
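If you would rather script that test than click through Cyberduck, a minimal boto3 sketch against the Zenko S3 endpoint is below. The endpoint URL, access key, secret key and bucket name are placeholders for the values shown in your Orbit settings page, not real values.

```python
import boto3

# Placeholder values - take the real endpoint and storage account credentials
# from the Orbit Settings page.
s3 = boto3.client(
    "s3",
    endpoint_url="https://zenko.example.com",   # assumed Zenko S3 endpoint
    aws_access_key_id="ORBIT_ACCESS_KEY",
    aws_secret_access_key="ORBIT_SECRET_KEY",
)

# Upload a test file into the bucket that has the replication policy applied,
# then watch the Azure Blob container for the replicated copy to appear.
s3.upload_file("testfile.bin", "my-replicated-bucket", "testfile.bin")

# Confirm the object landed on the Zenko side.
for obj in s3.list_objects_v2(Bucket="my-replicated-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```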

Conclusion: 

While you could say that Zenko feels like DFS-R for the multi-platform storage world, the solution has impressed me. Many would know that it’s not easy to orchestrate the replication of data between different platforms. They are also talking up their capabilities around extensibility of the platform as it relates to data management, backend storage plugins and search.

I think about this sort of technology and how it could be extended to cloud-based backups. Customers could have the option to tier into cheaper cloud-based storage and then further protect that data by replicating it to another cloud platform, which could be cheaper yet. This could achieve added resiliency while offering cost benefits. However, there is also the risk that the more spread out the data is, the harder it is to control. That’s where intelligent data management comes into play…interesting times!

References:

Zenko Orbit – Multi-Cloud Data Management Simplified

 

Veeam 9.5 Update 3a – What’s in it for Service Providers

Earlier this week Update 3a (Build 9.5.1922) for Veeam Backup & Replication was made generally available. This release doesn’t contain any major new features or enhancements but does add support for a number of key platforms. Importantly for our Cloud and Service Providers, Update 3a extends our support to vSphere 6.7, vSphere 6.5 Update 2 (with a small caveat) and vCloud Director 9.1. We also have support for the April update of Windows 10 and the 1803 versions of Windows Server and Hyper-V.

vSphere 6.7 support (VSAN 6.7 validation is pending) is something that our customers and partners have been asking for since it was released in late April, and it’s a credit to our R&D and QC teams to reach supportability within 90 days given the number of underlying changes that came with vSphere 6.7. The performance of DirectSAN and Hot-Add transport modes has been improved for backup infrastructure configurations through optimized system memory interaction.

As mentioned, the recently released vCloud Director 9.1 is supported, maintaining our lead in the availability of vCloud Director environments. Storage snapshot-only vCloud Director backup jobs are now supported for all storage integrations that support storage snapshot-only jobs. Update 3a also fully supports the VMware Cloud on AWS version 1.3 release without the requirement for the patch.

One of the new features in Update 3a is a new-look Veeam vSphere Client plug-in based on VMware’s Clarity UX. This is more of a port; however, with the announcement that the Flex-based Web Client will be retired, it was important to make the switch.

In terms of key fixes for Cloud and Service Providers, I’ve listed them below from the VeeamKB.

  • User interface performance has been improved for large environments, including faster VM search and lower CPU consumption while browsing through job sessions history.
  • Incremental backup runs should no longer keep setting ctkEnabled VM setting to “true”, resulting in unwanted events logged by vCenter Server.
  • Windows file level recovery (FLR) should now process large numbers of NTFS reparse points faster and more reliably.

Veeam Cloud Connect
Update 3a also includes enhancements and bug fixes for cloud and service providers who are offering Veeam Cloud Connect services. For more information relating to that, please head to this thread on the Veeam Cloud & Service Provider forum. A reminder as well that if you are running Cloud Connect Replication, you need to be aware that clients replicating in on higher VMware VM hardware versions will error out. This means you need to either let customers know what hardware version the replication cluster supports…or upgrade to the latest version…which is now vSphere 6.7, giving you Version 14.

For a full list check out the release notes below and download the update here. You can also download the update package without backup agents here.

References:

https://www.veeam.com/kb2646

Released: Veeam Availability Console Update 1

Today, Veeam Availability Console Update 1 (Build 2.0.2.1750) was released. This update improves on our multi-tenant service provider management and reporting platform that is provided free to VCSPs. VAC acts as a central portal for Veeam Cloud and Service Providers to remotely manage and monitor customer instances of Backup & Replication, including the ability to monitor Cloud Connect Backup and Replication jobs and failover plans. It is also the central mechanism to deploy and manage our Agent for Windows, which includes the ability to install agents onto on-premises machines and apply policies to those agents once deployed.

What’s new in Update 1:

If you want the full low-down, the What’s New document can be accessed here. I’ve summarised the new features and enhancements below and expanded on the key ones.

  • Enhanced support for Veeam Agents
  • New Operator Role
  • ConnectWise Manage Plugin
  • Improved Veeam Backup & Replication monitoring
  • New backup policy types
  • Sub-tenant Accounts and Sub-tenant Management
  • Alarm for tracking VMs stored in cloud repositories
  • RESTful APIs enhancements

RESTful APIs enhancements: VAC’s API-first approach gets a number of enhancements in Update 1, with more information stored in the VAC configuration database accessible via new RESTful API calls, including:

  • Managed backup server licenses
  • Tenant descriptions
  • References to the parent object for users, discovery rules and computers

As with the GA release, this is all accessible via the built-in Swagger interface.
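As a rough illustration of how those calls might be consumed programmatically, here is a Python sketch using the requests library. The host, port, token route and resource paths below are assumptions on my part for illustration only; the built-in Swagger interface on your own VAC server is the authoritative reference for the real routes and payloads.

```python
import requests

VAC = "https://vac.example.com:1281"    # assumed host and port - check your install
USER, PASSWORD = "vacadmin", "secret"   # placeholder credentials

# Hypothetical token request - confirm the real authentication route in Swagger.
token = requests.post(
    f"{VAC}/token",
    data={"grant_type": "password", "username": USER, "password": PASSWORD},
    verify=False,
).json()["access_token"]

headers = {"Authorization": f"Bearer {token}"}

# Hypothetical call pulling the tenants collection, which in Update 1 now
# carries the tenant descriptions mentioned above.
tenants = requests.get(f"{VAC}/v2/tenants", headers=headers, verify=False).json()
for tenant in tenants:
    print(tenant.get("name"), "-", tenant.get("description"))
```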

Enhanced support for Veeam Agents: VAC Update 1 introduces support for Veeam Agents that are managed by Veeam Backup & Replication. This adds monitoring and alarms for Veeam Agent for Microsoft Windows and Veeam Agent for Linux when they are managed by a Backup & Replication server. One of the great features of this is the search functionality, which allows you to more efficiently find agent instances that exist in Backup & Replication and see their statuses.

New Operator Role: While not the Reseller role most VCSPs are after, this new role allows VCSPs to delegate VAC access to their own IT staff without granting complete administrative access. The operator role allows access to everything essential to remotely monitor and manage customer environments, but restricts access to VAC configuration settings.

ConnectWise Manage Plugin: ConnectWise Manage is a very popular platform used by MSPs all over the world, and VAC Update 1 includes native integration with it. The integration allows VCSPs to synchronize and map company accounts between the two platforms, provides integrated billing so that ConnectWise Manage can generate tenant invoices based on usage, and lets you create tickets from triggered alarms in VAC. The integration is solid and built on VAC’s strong underlying API-driven approach. More importantly, this is the first extensibility feature of VAC delivered through a plugin framework…and the idea is for it to be just the start.

Alarm for tracking VMs stored in cloud repositories:  A smaller enhancement, but one that is important for those running Cloud Connect is the new alarm that allows you to be notified when the number of customer VMs stored in the cloud repository exceeds a certain threshold.

Scalability enhancements: Finally there has been a significant improvement in VAC scalability limits when it comes to the number of managed Backup & Replication servers for each VAC instance. This ensures stable operation and performance when managing up to 10,000 Veeam Agents and up to 600 Backup & Replication servers, protecting 150-200 VMs or Veeam Agents each.

References and Product Guides:

https://www.veeam.com/vac_2_0_u1_release_notes_rn.pdf

https://www.veeam.com/documentation-guides-datasheets.html

https://www.veeam.com/availability-console-service-providers-faq.html

https://www.veeam.com/vac_2_0_u1_whats_new_wn.pdf

Installing and Managing Veeam Agent for Linux with Backup & Replication

With the release of Update 3 of Veeam Backup & Replication we introduced the ability to manage agents from within the console. This covers both our Windows and Linux agents and aims to add increased levels of manageability and control when deploying agents in larger enterprise-type environments. For an overview of the features there is a veeam.com blog post here that goes through the different components, and the online help documentation also provides a detailed look at the ins and outs.

Scouring the web, there has been a lot written about the Windows Agent and how it’s managed from the Backup & Replication console, but not a lot written about managing Linux Agents. The theory is exactly the same…add a Protection Group, add the machines you want to include in the Protection Group, scan the group and then install the agent. From there you can add the agents to a new or existing backup job and manage licenses.

In terms of how that looks and the steps you need to take: head to the Inventory menu section and right-click on Physical & Cloud Infrastructure to Add Protection Group. Give the group a meaningful name, then to add Linux machines select the Individual or CSV method under Type. In my example I chose to add the Linux machines individually, adding the machines via their Host Name or IP Address with the right credentials.

Under Options, you can select the Distribution Server which is where the agent will be deployed from and choose to set a schedule to Rescan the Protection Group.

Once this part is complete the first Discovery is run and, all things being equal, the Linux Agent will be installed on the machines that were added as part of the first step. I actually ran into an issue on the first run where the agent didn’t install due to the error shown below.

The fix was as simple as installing the DKMS package on the servers via apt-get (apt-get install dkms). Asking around, this is not a normal occurrence and the agent should deploy and install without issue. Maybe it was due to my Linux servers being TurnKey Linux appliances…in any case, once the package was installed I re-triggered the install by right-clicking the machine and selecting Install Agent.

Once that job has finished we are able to assign the Linux agent machines to new or existing backup jobs.

As with the Windows Agent you have two different job modes. In my example I created a job of each type. The result is one agent that is in lockdown mode, meaning reduced functionality from the GUI or command line, while the other has more functionality but is still managed by the system administrator. The difference between the two GUIs is shown below.

From the Jobs list under the Home menu this is represented by the job type being Linux Agent Backup vs Linux Agent Policy.

Finally, when looking at the licensing aspect, once a license that contains agent licenses has been applied to a Backup & Replication server, an additional view will appear under the License view in the console where you can assign or remove agent licenses.

From within Enterprise Manager (if the VBR instance is managed), you also see additional tab views for the Windows and Linux Agents as shown below.

References:

https://helpcenter.veeam.com/docs/backup/agents/introduction.html?ver=95

https://helpcenter.veeam.com/docs/agentforlinux/userguide/license_vbr_revoke.html?ver=20

https://helpcenter.veeam.com/docs/backup/agents/agent_policy.html?ver=95

Using Terraform to Deploy and Configure a Ready to use Backup Repo into an AWS VPC

A month or so ago I wrote a post on deploying Veeam Powered Network into an AWS VPC as a way to extend the VPC network to a remote site to leverage a Veeam Linux Repository running as an EC2 instance. During the course of deploying that solution I came across a lot of little check boxes and settings that needed to be tweaked in order to get things working. After that, I set myself the goal of trying to automate and orchestrate the deployment end to end.

For an overview of the intended purpose behind the solution, head to the original blog post here. That post was mainly focused on the Veeam PN component; however, I was using it as a mechanism to create a site-to-site connection allowing Veeam Backup & Replication to talk to the other EC2 instance, which was the Veeam Linux Repository.

Terraform by HashiCorp:

In order to automate the deployment into AWS, I looked at CloudFormation first…but found the learning curve a little steep…so I went back to HashiCorp’s Terraform, which I had been familiar with for a number of years but never gotten my hands dirty with. HashiCorp specialise in cloud infrastructure automation, and their provisioning product is called Terraform.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).

Terraform supports a host of providers, and once you wrap your head around the basics and view some example code, provisioning Infrastructure as Code can be achieved with very little coding experience…however, as I found out, you need to be careful in this world and not make the same initial mistake I did, as explained in this post.

Going from Manual to Orchestrated with Automation:

The Terraform AWS provider is what I used to create the code required to deploy the components. Like everything that’s automated, you need to understand the manual process first, and that is where the previous experience came in handy. I knew what the end result was…I just needed to work backwards and make sure that the Terraform provider had all the instructions it needed to orchestrate the build.

The basic flow is as follows, with a rough programmatic sketch of the first few steps after the list:

  • Fetch AWS Access Key and Secret
  • Fetch AWS Key Pair
  • Create AWS VPC
    • Configure Networking and Routing for VPC
  • Create CentOS EC2 Instance for Veeam Linux Repo
    • Add new disk and set size
    • Execute configuration script
      • Install PERL modules
  • Create Ubuntu EC2 Instance for Veeam PN
    • Execute configuration script
      • Install VeeamPN modules from repo
  • Login to Veeam PN Web Console and Import Site Configuration.
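To make those first few steps concrete before codifying them in Terraform, here is a rough boto3 equivalent of the VPC and repository-instance portion of the flow. The region, CIDR ranges, AMI ID, key pair name and volume size are all placeholders; the actual Terraform plan in the GitHub project is the source of truth for the full build.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # placeholder region

# Create the VPC and a subnet for the repo and Veeam PN instances.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Internet gateway and a default route so the instances can reach the world.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])

# CentOS instance for the Veeam Linux repo, with an extra EBS volume for backup data.
ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder CentOS AMI ID
    InstanceType="t2.medium",
    KeyName="my-keypair",              # existing AWS key pair
    MinCount=1, MaxCount=1,
    SubnetId=subnet["SubnetId"],
    BlockDeviceMappings=[{"DeviceName": "/dev/sdf",
                          "Ebs": {"VolumeSize": 500, "VolumeType": "gp2"}}],
)
```

The Terraform plan expresses the same resources declaratively and adds the configuration scripts, which is what makes the end-to-end build repeatable.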

I’ve uploaded the code to a GitHub project. An overview and instructions for the project can be found here. I’ve also posted a video to YouTube showing the end-to-end process, which I’ve embedded below (best watched at 2x speed):

In order to get the Terraform plan to work there are some variables that need modifying in the GitHub project, and you will need to download, install and initialise Terraform. I’m intending to continue tweaking the project to complete the provisioning end to end, including the Veeam PN site configuration part at the end. The remote execution feature of Terraform allows some pretty cool things by way of script initiation.

References:

https://github.com/anthonyspiteri/automation/aws_create_veeamrepo_veeampn

https://www.terraform.io/intro/getting-started/install.html

 

Quick Look – Backing up AWS Workloads with Cloud Protection Manager from N2WS

Earlier this year Veeam acquired N2WS, after the announcement of a technology partnership at VeeamON 2017. The more I tinker with Cloud Protection Manager the more I understand why we made the acquisition. N2WS was founded in 2012, with their first product shipping in 2013. It is purpose-built for AWS, supporting all types of EC2 instances, EBS volumes, RDS, DynamoDB & Redshift and AMI creation, and is distributed as an AMI through the AWS Marketplace. The product is easy to deploy and has extended its feature set with the release of 2.3d, announced during VeeamON 2018 a couple of weeks ago.

From the datasheet:

Cloud Protection Manager (CPM) is an enterprise-class backup, recovery, and disaster recovery solution purpose-built for Amazon Web Services EC2 environments. CPM enhances AWS data protection with automated and flexible backup policies, application consistent backups, 1-click instant recovery, and disaster recovery to other AWS region or AWS accounts ensuring cloud resiliency for the largest production AWS environment. By extending and enhancing native AWS capabilities, CPM protects the valuable data and mission-critical applications in the AWS cloud.

In this post, I wanted to show how easy it is to deploy and install Cloud Protection Manager as well as look at some of the new features in the 2.3d release. I will do a follow up post going into more detail about how to protect AWS Instances and services with CPM.

What’s new with CPM 2.3:

  • Automated backup for Amazon DynamoDB: CPM now provides backup and recovery for Amazon DynamoDB, so you can apply existing policies and schedules to back up and restore DynamoDB tables and their metadata (a sketch of the kind of native call this builds on follows the list).
  • RESTful API:  Completely automate backup and recovery operations with the new Cloud Protection Manager API. This feature provides seamless integration between CPM and other applications.
  • Enhanced reporting features: Enhancements include the ability to gather all reports in one tab, export them as CSV, view both protected and unprotected resources, and new filtering options as well.
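For context on the DynamoDB item above: the datasheet notes that CPM extends native AWS capabilities, and the native on-demand backup of a DynamoDB table is a single API call. The boto3 sketch below shows that underlying call with placeholder table and backup names; CPM’s value is wrapping this kind of operation in policies, schedules and retention, and its own documentation is the authoritative source for how it implements this.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")  # placeholder region

# Take a native on-demand backup of a table - the sort of AWS capability
# that CPM automates with policies, schedules and retention.
resp = dynamodb.create_backup(
    TableName="my-table",           # placeholder table name
    BackupName="my-table-adhoc",    # placeholder backup name
)
print(resp["BackupDetails"]["BackupArn"])
```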

Other new features that come as part of the CPM 2.3 release include full cross-region and cross-account disaster recovery for Aurora databases, enhanced permissions for users and a fast, efficient onboarding process using CloudFormation’s 1-click template.

Installing, Configuring and Managing CPM:

The process to install Cloud Protection Manager from the AWS Marketplace is seamless and can be done via a couple of different methods, including a 1-Click deployment. The official install guide can be read here. The CPM EC2 instance is deployed into a new or existing VPC configured with a subnet, and must be placed into a new or existing Security Group.

Once deployed you are given the details of the installation.

And you can see it from the AWS Console under the EC2 instances. I’ve added a name for the instance just for clarity’s sake.

One thing to note is that there is no public IP assigned to the instance as part of the deployment. You can create a new Elastic IP and attach it to the instance, or you can access the configuration website via its internal IP if you have access to the subnet via some form of VPN or network extension.
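If you do want to reach the configuration site externally, allocating and attaching an Elastic IP is only a couple of API calls. Here is a boto3 sketch; the region and instance ID are placeholders for the CPM instance deployed above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Allocate a new Elastic IP in the VPC and attach it to the CPM instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",   # placeholder CPM instance ID
    AllocationId=eip["AllocationId"],
)
print("CPM configuration site now reachable on", eip["PublicIp"])
```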

There is an initial configuration wizard that guides you through the registration and setup of CPM. Note that you do need internet connectivity to complete the process otherwise you will get this error.

The final step will allow you to configure a volume for CPM use. With that the wizard finalises the setup and you can log into the Cloud Protection Manager.

Conclusion: 

The ability to back up AWS services natively has its advantages over traditional methods such as agents. Cloud Protection Manager from N2WS can be installed and ready to go within five minutes. In the next post, I’ll walk through the CPM interface and show how you back up and recover AWS instances and services.

References:

https://n2ws.com/cpm-install-guide

https://support.n2ws.com/portal/kb/articles/release-notes-for-the-latest-v2-3-x-cpm-release

Quick Post – Configuring Key Based Authentication for AWS based Veeam Linux Repository

I’ve been doing a little more within AWS over the past month or so, related to my work with VMware Cloud on AWS and the setting up of EC2 instances to use as Veeam Linux Repositories. When deploying a Linux-based instance in AWS you assign a key pair to the instance at the time of deployment. You then download the private key pem file and use it to remotely connect to the instance when desired.
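Before handing the key to Veeam, it’s worth confirming it actually gets you into the instance. A quick Python/paramiko sketch is below; the hostname and pem path are placeholders, and I’m assuming a CentOS AMI, so the default user is centos (it would be ubuntu or ec2-user on other images).

```python
import paramiko

# Placeholder values - swap in your instance's public DNS name and pem file path.
HOST = "ec2-x-x-x-x.ap-southeast-2.compute.amazonaws.com"
KEY_FILE = "my-keypair.pem"

key = paramiko.RSAKey.from_private_key_file(KEY_FILE)   # no passphrase needed
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username="centos", pkey=key)           # centos for CentOS AMIs

_, stdout, _ = ssh.exec_command("uname -a")
print(stdout.read().decode())
ssh.close()
```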

In my testing, I wanted to configure this EC2 instance as a Linux Repository. When creating a new repository you need to set up the Linux server with the key pair. To do this you need to select the Add Linux Private Key drop down in the new Linux Server window.

Next you need to enter the username of the EC2 instance, which in this case is centos (best practice here is to create a new repository user and elevate to root, but for my testing I used the provided account), and then load up the pem file that contains the private key. You don’t need to enter a passphrase.

The check box to Elevate specified account to root is also selected. Accept the server thumbprint as shown below.

Once accepted the Veeam Linux components will be installed and all things being equal you will have a Veeam Linux based repository ready for action that lives remotely on an EC2 instance.

Once complete you can tag the location against the repository and now use it as a backup target.

So there you go, a quick post on how to get an EC2 Linux instance up and running in Veeam Backup & Replication as a Linux Repository.

VeeamON 2018 Recap

VeeamON has come and gone for another year and it is an exciting time to be in the (hyper) availability industry. There has been a significant shift in the way that backup and recovery is thought about in the IT industry, and Veeam is without question leading the way in this space. We have been the driving force of change for an industry that was once seen as mundane yet necessary. This year we did not announce any new products or features but, more importantly, laid the groundwork for what is to come with our new vision and strategy: to be the leading provider of intelligent data management solutions for a world where data is highly distributed, growing at exponential rates and where hyper-availability is desired.

What does that exactly mean?

Well, for me it is an evolution of the messaging that was presented in August of 2016, when the Veeam Availability Platform was first launched. The platform itself has evolved over the past eighteen months with the release of Veeam Availability Orchestrator, Veeam Availability Console, Backup for Office 365, both the Windows and Linux agents and, more recently, the pending releases of our Nutanix AHV backup and support for AIX and Solaris. Put that together with the acquisition of N2WS for AWS availability and you can see that we are serious about fulfilling the promise of the vision laid out during the event.

2018 Highlights:

Apart from delivering three sessions, my highlights revolve around discussions with customers and partners and getting face-to-face feedback on how we are doing. This is critical to our function in the Product Strategy team, but for me personally it allows me to interact with some of the best innovators in the service provider landscape. On that note, another highlight was the inaugural Veeam Innovation Awards, for which I was a voting panel member along with Michael Cade and Jason Buffington. It was great to see four VCSPs win recognition and awesome to have Probax (a local Perth company) included as part of the initial group of winners.

From the Show Floor:

I have copied in a number of media interviews and daily wraps below that go into more detail about the event, its announcements and the messaging that we are putting forward as a leader in the space. Enjoy the discussions below; I am already looking forward to VeeamON 2019…I have a feeling it’s going to be massive!

 

Veeam Cloud Announcements:

Veeam expands multi-cloud solutions at VeeamON 2018

Cloud Connect Subtenants, Veeam Availability Console and Agents!

Cloud Connect subtenants have gone under the radar for the most part, but can play an important role in how Service Provider customers consume Cloud Connect services. In a previous post, I described how subtenants work in the context of Cloud Connect Backup.

Subtenants can be configured by either the VCSP or by the tenant consuming a Cloud Connect Backup service. Subtenants are used to carve up and assign a subset of the parent tenant storage quota. This allows individual agents to authenticate against the Cloud Connect service with a unique login allowing backups to Cloud Repositories that can be managed and monitored from the Backup & Replication console.

In this post I’m going to dive into how subtenants are created by the Veeam Availability Console and how they are then used by agents that are managed by VAC. For those that may not know what VAC does, head to this post for a primer.

Automatic Creation of Subtenant Users:

Veeam Availability Console automatically creates subtenant users if a backup policy that is configured to use a cloud repository as a backup target is chosen. When such a backup policy is assigned to an agent, VAC creates a subtenant account on the Cloud Connect Server for each backup agent.

Looking below you can see a list of the Backup Agents under the Discovery Menu.

Looking at the Backup Policy you can see that the Backup Target is a Cloud Repository, which results in the corresponding subtenant account being created.

The backup agents use these subtenant accounts to connect and send data to a Cloud Connect endpoint that is backed by a cloud repository. The name of each subtenant account is created according to the following naming convention:

companyname_computername

At the Cloud Provider end, from within the Backup & Replication console under the Cloud Connect menu, clicking on Manage Subtenants for a tenant will show you the corresponding list of subtenant accounts.

The view above is the same as that seen at the tenant end. A tenant can modify the quota details from the Veeam Backup & Replication console. This will result in a Custom Policy status, as shown below. The original policy can be reapplied from VAC to bring it back into line.

The folder structure on the Cloud Repository maps to what’s seen above. As you can also see, if you have Backup Protection enabled you will also have _RecycleBin objects there.

NOTE: When a new policy is applied to an agent, the old subtenant account and its data are retained on the Cloud Connect repository. The new policy gets applied and a subtenant account with an _n suffix gets created. Service Providers will need to purge old data manually.

Finally if we look at the endpoint where the agent is installed and managed by VAC you will see the subtenant account configured.

Conclusion:

So there is a deeper look at how subtenants are used as part of the Veeam Availability Console and how they are created, managed and used by the Agent for Windows.

References:

https://helpcenter.veeam.com/docs/vac/provider_admin/create_subtenant_user.html?ver=20
