Category Archives: AWS

#TFD20 Follow Up – Veeam Cloud Tier Glossary

Yesterday I presented at Tech Field Day 20. My first topic covered the enhancements we are bringing to Cloud Tier in our Backup & Replication v10 release. Rick Vanover set up the v10 enhancement session by doing some groundwork on what a Scale-Out Backup Repository is, and briefly went over the initial features of Cloud Tier released in Backup & Replication 9.5 Update 4.

We had a few questions around some of the terminology being used with regards to the Cloud Tier, so as a follow-up I thought I would list out the glossary of terminology I’ve been building since the Update 4 release, with the additions of the new v10 enhancements.

  • Cloud Tier – The name given to this feature set, first released in Veeam Backup & Replication 9.5 Update 4.
  • Object Storage Repository – The name given to a repository backed by Amazon S3, S3-compatible storage, Azure Blob Storage or IBM Cloud Object Storage.
  • Scale-Out Backup Repository (SOBR) – A Veeam feature first introduced in Backup & Replication v9. It consists of one or more Performance Tier extents and can be extended with a single Capacity Tier extent (see the sketch after this list).
  • Capacity Tier – The name given to the extent on a SOBR that uses an Object Storage Repository.
  • Performance Tier – The name given to the one or more extents on a SOBR that use standard backup repositories.
  • Move Mode – The name given to the policy introduced in Update 4 that offloads data from sealed backup chains, so that any given block lives in either the Performance Tier or the Capacity Tier, never both.
  • Copy Mode – The name given to the policy coming in v10 that duplicates backup files from the Performance Tier to the Capacity Tier as soon as a backup job completes.
  • Offload Job – The name given to the process that moves data from the Performance Tier to the Capacity Tier.
  • Immutability Period – A new feature coming in v10 that sets an Amazon S3 or S3-compatible Object Lock on blocks copied or moved to the Capacity Tier, protecting them against accidental or malicious deletion.
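To make the relationships between these terms concrete, here is a minimal PowerShell sketch of wiring a SOBR together with a Capacity Tier. The repository names and the 14-day operational restore window are hypothetical, and the exact parameter set may differ between versions, so treat this as an illustration rather than a copy-paste recipe.

```powershell
# Load the Veeam snap-in on the Backup & Replication server
Add-PSSnapin VeeamPSSnapin

# Performance Tier: one or more standard backup repositories (names are hypothetical)
$extents = Get-VBRBackupRepository -Name "Perf-Extent-01", "Perf-Extent-02"

# Capacity Tier: an existing Object Storage Repository, e.g. backed by Amazon S3
$objectStorage = Get-VBRObjectStorageRepository -Name "Amazon-S3-Repo"

# Tie both together in a SOBR; sealed chains outside the operational
# restore window become candidates for the Offload Job (Move Mode)
Add-VBRScaleOutBackupRepository -Name "SOBR-01" `
    -Extent $extents `
    -PolicyType DataLocality `
    -EnableCapacityTier `
    -ObjectStorageRepository $objectStorage `
    -OperationalRestorePeriod 14
```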

In addition to that, I have pasted a link to the official Deep Dive Veeam whitepaper for Cloud Tier, which goes into the why, the what and the how of the Cloud Tier and dives into the innovative technologies we have built into the feature.

White Paper Link: https://www.veeam.com/wp-cloud-tier-deep-dive.html

If you want to catch the Cloud Field Day 5 presentation on Cloud Tier, as well as the most recent one yesterday at Tech Field Day 20, I have embedded them below.

The Separation of Dev and Ops is Upon Us!

Apart from the K word, there was one other enduring message that I think a lot of people took home from VMworld 2019: that Dev and Ops should be considered as separate entities again. For the best part of the last five or so years, the concept of DevOps, SecOps and other X-Ops has been perpetuated mainly due to the rise of consumable platforms outside the traditional control of IT operations people.

The pressure to DevOp has become very real in the IT communities that I am involved with. These circles are mainly made up of traditional infrastructure guys. I’ve written a few pieces around how the industry trend of trying to turn everyone into developers isn’t one that needs to be followed. Automation doesn’t equal development, and there are a number of Infrastructure as Code tools that look to bridge the gap between the developer and the infrastructure guy.

That isn’t to say that traditional IT guys shouldn’t be looking to push themselves to learn new things, improve and evolve. In fact, IT Ops needs to be able to code in slightly abstracted ways to work with APIs or leverage IaC tooling. However, my view is that IT Ops’ number one role is to understand fundamentally what is happening within a platform, and to be able to support infrastructure that developers can consume.

I had a bit of an aha moment this week while working on some Kubernetes (that word again!) automation work with Terraform, which I’ll release later this week. The moment came when I was trying to get the Sock Shop demo working on my fresh Kubernetes cluster. I finally understood why Kubernetes had been created. Everything about the application was defined in the manifest files and deployed holistically through one command, as shown below. It’s actually rather elegant compared to how I worked with developers back in the early days of web hosting on Windows and Linux web servers with their database backends and whatnot.
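For context, the deployment amounted to little more than the following (the manifest name is the one the Sock Shop demo ships with; consider this a sketch of the commands rather than a transcript from my lab):

```powershell
# Create the namespace the demo expects, then deploy the entire application
# stack (deployments, services, dependencies) from a single manifest
kubectl create namespace sock-shop
kubectl apply -f complete-demo.yaml
```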

Regardless of the ease of deployment, I still had to understand the underlying networking and get the application to listen on external IPs and different ports. At this point I was doing dev and doing IT Ops in one. However this is all contained within my lab environment that has no bearing on the availability of the application, security or otherwise. This is where separation is required.

Developers want to consume services and take advantage of the constructs of a containerised platform like Docker, paired with the orchestration and management of those resources that Kubernetes provides. They don’t care what’s under the hood and shouldn’t be concerned with what their application runs on.

IT Operations wants to be able to manage the supporting platforms as it did previously. The compute, the networking, the storage…this is all still relevant in a hybrid world. They should absolutely still care about what’s under the hood and the impact applications can have on infrastructure.

VMware has introduced that (re)split of Dev and Ops with the introduction of Project Pacific, and I applaud them for going against the grain and endorsing the separation of roles and responsibilities. Kubernetes and ESXi in one vSphere platform is where that vision lies. Outside of vSphere, it is still very true that devs can consume public clouds without a care about underlying infrastructure…but for me…it all comes back down to this…

Let devs be devs… and let IT Ops be IT Ops! They need to work together in this hybrid, multi-cloud world!

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I’ve written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to what is fundamentally cheaper remote Object Storage. As with most innovative technologies, if you dig a little deeper…different use cases start to present themselves and unintended use cases find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used to allow on-demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale-Out Backup Repository which has a Capacity Tier backed by an Object Storage Repository in Amazon S3. A minimal operational restore window is set, which means data is offloaded to the Capacity Tier sooner.

Once the data is there, if disaster happens on premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. From there, a SOBR is configured with the same Amazon S3 credentials and connects to the Object Storage bucket, which detects the backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.
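A rough PowerShell sketch of that reconnection step is below. The account, bucket and folder names are hypothetical, and while the cmdlets follow the Update 4 snap-in as I have used it elsewhere, verify the syntax against your own environment.

```powershell
# Register the same AWS credentials the original SOBR used (keys are placeholders)
$account = Add-VBRAmazonAccount -AccessKey "AKIA..." -SecretKey "<secret-key>"

# Connect to Amazon S3 and locate the existing Cloud Tier bucket and folder
$connection = Connect-VBRAmazonS3Service -Account $account -RegionType Global -ServiceType CapacityTier
$bucket = Get-VBRAmazonS3Bucket -Connection $connection -Name "veeam-capacity-tier"
$folder = Get-VBRAmazonS3Folder -Connection $connection -Bucket $bucket -Name "Veeam"

# Adding an Object Storage Repository over the existing data is what kicks off
# the metadata resync back to the local Performance Tier
Add-VBRAmazonS3Repository -Connection $connection -AmazonS3Folder $folder -Name "Amazon-S3-Repo"
```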

The architecture diagram has been published on the AWS Reference Architecture page, and while this post has been brief, there is more to come by way of an official AWS blog post co-authored by myself and Frank Fan from AWS around this solution. We will also look to automate the process as much as possible to make this a truly on-demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to leverage VMware Cloud on AWS as a target for disaster recovery look to leverage Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture

Cloud Tier Data Migration between AWS and Azure… or anywhere in between!

At the recent Cloud Field Day 5 (CFD#5) I presented a deep dive on the Veeam Cloud Tier, which was released as a feature extension of our Scale-Out Backup Repository (SOBR) in Update 4 of Veeam Backup & Replication. Since we went GA, we have been able to track the success of this feature by looking at public cloud Object Storage consumption by Veeam customers using the feature. As of last week, Veeam customers have offloaded petabytes of backup data into Azure Blob and Amazon S3…not counting the data being offloaded to other Object Storage repositories.

During the Cloud Field Day 5 presentation, Michael Cade talked about the portability of Veeam’s data format, and how we do not lock our customers into any specific hardware or format that requires a specific underlying file system. We offer complete flexibility and agnosticism as to where your data is stored, and the same is true when talking about which Object Storage platform to choose for offloading data with the Cloud Tier.

I recently had a need to set up a Capacity Tier extent backed by an Object Storage Repository on Azure Blob. I wanted to use the same backup data that I had in an existing Amazon S3-backed Capacity Tier while still keeping things clean in my Backup & Replication console…luckily we have built in a way to migrate to a new Object Storage Repository, taking advantage of the innovative tech we have built into the Cloud Tier.

Cloud Tier Data Migration:

During the offload process, data is tiered from the Performance Tier to the Capacity Tier, effectively dehydrating the VBK files of all backup data and leaving only the metadata with an index that points to where the data blocks have been offloaded in Object Storage.

This process can also be reversed and the VBK files rehydrated. The ability to bring the data back from the Capacity Tier to the Performance Tier means that if there is ever a requirement to evacuate or migrate away from a particular Object Storage provider, the ability to do so is built into Backup & Replication.

In this small example, as you can see below, the SOBR was configured with a Capacity Tier backed by Amazon S3 and using about 15GB of Object Storage.

The first step is to download the data back from the Object Storage and rehydrate the VBK files on the Performance Tier extents.

There are two ways to achieve the rehydration or download operation.

  1. Via the Backup & Replication Console
  2. Via a PowerShell cmdlet

Rehydration via the Console:

From the Home menu under Backups, right-click on the job name and select Backup Properties. From here there is a list of the files contained within the job and the objects they contain. Depending on where the data is stored (remembering that the data blocks are only ever in one location…the Performance Tier or the Capacity Tier), the icon against the file name will differ slightly, with offloaded files represented by a cloud.

Right-clicking on any of these files gives you the option to copy the data back to the Performance Tier. You have the choice to copy back the backup file alone, or the backup file and all its dependencies.

Once this is selected, a SOBR Download job is kicked off and the data is moved back to the Performance Tier. It’s important to note that our Intelligent Block Recovery comes into play here, looking at the local data blocks to see if any match what is being downloaded from Object Storage…if so, it copies them from the Performance Tier, saving on egress charges and also speeding up the process.

In the image above you can see the Download job at work: only 95.5MB was downloaded from Object Storage, with 15.1GB copied from the Performance Tier…meaning that, for the most part, local data blocks could be used for the rehydration.

The one caveat to this method is that you can’t bulk-select files or multiple backup jobs, so rehydrating everything from the Capacity Tier this way can be tedious.

Rehydration via PowerShell:

To solve that problem, we can use PowerShell and the Start-VBRDownloadBackupFile cmdlet to do the bulk of the work for us. Below are the steps I used to get the backup job details, feed them through to a variable that contains all the file names, and then kick off the Download job.
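A minimal sketch of that flow is below. The job name is hypothetical, and GetAllStorages() is an unofficial method for enumerating the storage objects (VBK/VIB files) behind a backup, so check the Help Center reference at the end of this post for the documented syntax.

```powershell
# Get the backup job details (job name is hypothetical)
$backup = Get-VBRBackup -Name "Tenant Backups"

# Feed every backup file behind that job into a variable;
# GetAllStorages() enumerates the backup's storage objects (unofficial method)
$files = $backup.GetAllStorages()

# Kick off the SOBR Download job to rehydrate the files on the Performance Tier
Start-VBRDownloadBackupFile -BackupFile $files -RunAsync
```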

The PowerShell window will then show the Download Job running

Completing the Migration:

No matter which way the Download job is initiated, we can see the progress from the Backup & Replication console under the Jobs section.

And looking at the Disk and Network sections of Windows Resource Monitor we can see connections to Amazon S3 pulling the required blocks of data down.

Once the Download job has been completed and all VBKs have been rehydrated, the next step is to change the configuration of the SOBR Capacity Tier to point at the Object Storage Repository backed by Azure Blob.

The final step is to initiate an offload to the new Capacity Tier via an Offload job…this can be triggered via the console or via PowerShell, and because we already have a set of data that satisfies the conditions for offload (sealed chains and backups outside the operational restore window), data will be dehydrated once again…but this time up to Azure Blob.

The used space shown below in the Azure Blob Object Storage matches the used space initially consumed in Amazon S3. All recovery operations show Restore Points on the Performance Tier and on the Capacity Tier, as dictated by the operational restore window policy.

Conclusion:

As mentioned in the intro, the ability for Veeam customers to have control of their data is an important principle revolving around data portability. With the Cloud Tier we have extended that by allowing you to choose the Object Storage Repository of your choice for cloud-based storage of Veeam backup data…but we have also given you the option to pull that data out and shift it when and where desired. Migrating data between AWS, Azure or any platform is easily achieved and can be done without too much hassle.

References:

https://helpcenter.veeam.com/docs/backup/powershell/object_storage_data_transfer.html?ver=95u4

Update 4 for Service Providers – Cloud Mobility and External Repository for N2WS

When Veeam Backup & Replication 9.5 Update 4 went generally available a couple of weeks ago, I posted a What’s in it for Service Providers blog. In that post I briefly outlined all the new features and enhancements in Update 4 as they relate to our Veeam Cloud and Service Providers. As mentioned, each new major feature deserves its own separate post. I’ve covered off three features so far, and today I’m going to talk about two features that are more aligned to managed service providers, but could still have a place in the pure IaaS world.

As a reminder, the full rundown of the top new features and enhancements in Update 4 for VCSPs is in that post.

Cloud Mobility:

The Cloud Mobility feature is actually the new umbrella name for our Restore To functionality. Prior to Update 4 we had the ability to restore to Microsoft Azure only. With the release of Update 4 we have added the ability to restore to Microsoft Azure Stack and Amazon EC2. It’s important to point out what Cloud Mobility isn’t…that is, a disaster recovery feature set. You can’t rely on this feature in the same way that Cloud Connect Replication allows you to power on VM replicas on demand for DR.

Though you could configure restore tasks to run on demand via PowerShell commands and have systems in a ready state after recovery, it is difficult to attach an RTO or RPO to the recovery process, and therefore Cloud Mobility should be used for migrations and testing. In essence this is why it is called Cloud Mobility…to give users and service providers the flexibility to shift workloads from one platform to another with ease.

Restore to EC2:

The ability to restore directly to EC2 is something that is in demand these days, and the addition of this feature in Update 4 was one of the most highly anticipated. In enabling the restoration of workloads into EC2, we have given our customers and partners the option to backup workloads from the following:

These backups, once stored in the Veeam backup file format, ensure absolute portability of those workloads. In terms of restoring to EC2, the process is straightforward and can be done via the Backup & Replication console or via PowerShell, as sketched below.
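As a hedged example, a scripted Restore to EC2 looks roughly like the following. All names, regions and sizes are hypothetical, and the parameter set follows the Update 4 restore_amazon reference linked at the end of this post, so double-check it against your version before relying on it.

```powershell
# Pick the most recent restore point for the workload (names are hypothetical)
$restorePoint = Get-VBRBackup -Name "Web Server Backup" |
    Get-VBRRestorePoint |
    Sort-Object -Property CreationTime -Descending |
    Select-Object -First 1

# Resolve the target region, instance type and networking from the stored AWS account
$account = Get-VBRAmazonAccount | Select-Object -First 1
$region = Get-VBRAmazonEC2Region -Account $account -RegionType Global -Name "us-west-2"
$instanceType = Get-VBRAmazonEC2InstanceType -Region $region | Where-Object { $_.Name -eq "t3.large" }
$vpc = Get-VBRAmazonEC2VPC -Region $region | Select-Object -First 1
$securityGroup = Get-VBRAmazonEC2SecurityGroup -VPC $vpc | Select-Object -First 1
$subnet = Get-VBRAmazonEC2Subnet -VPC $vpc | Select-Object -First 1

# Kick off the restore into EC2
Start-VBRVMRestoreToAmazon -RestorePoint $restorePoint `
    -Region $region `
    -LicenseType BYOL `
    -InstanceType $instanceType `
    -VMName "webserver01-restored" `
    -VPC $vpc `
    -SecurityGroup $securityGroup `
    -Subnet $subnet `
    -Reason "Migration test"
```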

Again, the focus of this feature is to enable migrations and testing. However, when put together with the External Repository, we also close the loop by providing a way to restore EC2 instances that were initially backed up with N2WS Backup & Recovery and archived to an Amazon S3 bucket.

It should also be noted that to perform a recovery, only the most recent restore point can be used.

External Repository:

The External Repository allows you to add an Amazon S3 bucket that contains backups created by N2WS Backup & Recovery for AWS environments. Backup & Recovery for AWS creates backups of the Elastic Block Store disk volumes of EC2 instances. As part of the 2.4 release, these backups were able to be placed directly into Amazon S3 object storage repositories. This is what is added to the Veeam Backup & Replication console as an External Repository.

Backup & Recovery for AWS uses the Veeam Backup API to preserve the backup structure in the native Veeam format, housed in the Amazon S3 bucket as VBKs. The External Repository cannot be used as a target for backup or backup copy jobs. Once the External Repository is configured, N2WS-backed VMs can be manipulated through the Backup & Replication console as usual. This allows all the restore capabilities, including Restore to EC2, and more importantly the ability to perform Backup Copy jobs against the backed-up data to enable even longer-term retention outside of Amazon S3.

Wrap Up:

The addition of Restore to EC2, Restore to Azure Stack and the External Repository can be used by managed service providers and service providers to offer true Cloud Mobility to their customers. Also, while a lot of organizations are moving to the public cloud…this is not a fait accompli, and they do sometimes want to get workloads out of those platforms and back on-premises or to service provider clouds. It shouldn’t be a Hotel California situation, and with these new Update 4 features Veeam customers have more choice than ever.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/restore_amazon.html?ver=95u4

https://helpcenter.veeam.com/docs/backup/vsphere/external_repository.html?ver=95u4

Automatic restore of multiple machines from Veeam to AWS

 

Quick Look – New Cloud Credentials Manager in Update 4

With the release of Update 4 for Veeam Backup & Replication 9.5 we further enhanced our overall cloud capabilities by adding a number of new features and enhancements that focus on tenants being able to leverage Veeam Cloud and Service Providers as well as public cloud services. With the addition of Cloud Mobility, the External Repository, and Cloud Connect Replication supporting vCloud Director, we decided to break out the existing credentials manager and create a new manager dedicated to the configuration and management of cloud-specific credentials.

The manager can be accessed by clicking on the top left dropdown menu from the Backup & Replication Console and then choosing Manage Cloud Credentials.

You can use the Cloud Credentials Manager to create and manage all credentials that you plan to use to connect to cloud services.

The following types of credentials can be configured and managed:

  • Veeam Cloud Connect (Backup and Replication for both Hardware Plans and vCD)
  • Amazon AWS (Storage and Compute)
  • Microsoft Azure Storage (Azure Blob)
  • Microsoft Azure Compute (Azure and Azure Stack)

The Cloud Connect credentials are straightforward in terms of what they are used for. There is even a way for non-vCloud Director authenticated tenants to change their own default passwords directly.

When it comes to AWS and Azure credentials, the manager allows you to configure accounts that can be used with Object Storage Repositories, Restore to AWS (new in Update 4), Restore to Azure, and Restore to Azure Stack (new in Update 4).

PowerShell is still an Option:

For those that would like to configure these accounts outside of the Backup & Replication Console, there is a full complement of PowerShell commands available via the Veeam PowerShell Snap-in.

As an example, as part of my Configure-Veeam GitHub project I have a section that configures a new Scale-Out Backup Repository with an Object Storage Repository Capacity Tier backed by Amazon S3. The initial part of that code creates a new Amazon storage account.
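That account-creation step is a one-liner (the keys below are obviously placeholders):

```powershell
# Add AWS access keys to Veeam as a new cloud credentials record (placeholders shown)
$account = Add-VBRAmazonAccount -AccessKey "AKIAIOSFODNN7EXAMPLE" `
    -SecretKey "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" `
    -Description "Amazon S3 Object Storage Account"
```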

For a full list of PowerShell capabilities related to this, click here.

So there you go…a very quick look at another new enhancement in Update 4 for Backup & Replication 9.5 that might have gone under the radar.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/cloud_credentials.html?ver=95u4

Update 4 for Service Providers – Tape as a Service

When Veeam Backup & Replication 9.5 Update 4 went generally available a couple of weeks ago, I posted a What’s in it for Service Providers blog. In that post I briefly outlined all the new features and enhancements in Update 4 that pertain to our Veeam Cloud and Service Providers. As mentioned, each new major feature deserves its own separate post, and today I’m kicking off the series with what I feel was probably the least talked about new feature in Update 4…Tape as a Service for Cloud Connect Backup.

As a reminder, the full rundown of the top new features and enhancements in Update 4 for VCSPs is in that post.

Tape as a Service for Cloud Connect Backup:

When we introduced Cloud Connect Backup in version 8 of Backup & Replication, we offered the ability for VCSPs to provide a secure, remote offsite repository for their tenants. When thinking about air-gapped backups…though protected at the VCSP end, ultimate control over what is backed up to the Cloud Repository sits in the hands of the tenant. From the tenant’s server they could manipulate the backups stored via policy, or a malicious user could gain access to the server and delete the offsite copies.

In Update 3 of Backup & Replication 9.5 we added Insider Protection to Cloud Connect Backup, which allowed the VCSP to put a policy on the tenant’s Cloud Repository that would protect backups from a malicious attack. With this option enabled, when a backup or a specific restore point in the backup chain is deleted or aged out from the cloud repository, the actual backup files are not deleted immediately; instead, they are moved to a _RecycleBin folder on the repository.

In Update 4 we have taken that a step further and added a true air-gapped backup option that VCSPs can create services around for longer-term retention: the Tenant to Tape feature. This allows a VCSP to offer an additional level of data protection for their tenants. The tenant sends a copy of the backup data to their cloud repository, and the VCSP then configures backup to tape to send another copy to tape media. If data in the cloud repository becomes unavailable, the VCSP can initiate a restore from tape.

VCSPs can also offer tape-out services to help their tenants achieve compliance and internal policies without maintaining their own tape infrastructure. Tapes can be stored by the service provider, or shipped back to the tenant as shown in the diagram below.

To take advantage of this new Update 4 feature, VCSPs will need to configure tape infrastructure on the Cloud Connect server. What’s great about Veeam is that we have the option to use traditional tape infrastructure or take advantage of Virtual Tape Libraries (VTLs), which can be backed by Object Storage such as Amazon S3. I am not going to walk through that process in this post; there are a number of blogs and whitepapers available that guide you through the setup of an AWS Storage Gateway to use as a VTL.

Once the tape infrastructure is in place, a VCSP with a Cloud Connect license that upgrades to Update 4 will see a new option under Tape Infrastructure called Tenant to Tape.

A tenant backup to tape job is a variant of the backup to tape job targeted at a GFS media pool, which is available to Veeam customers with regular licensing. What’s interesting about this feature is that there are a number of options that allow flexibility in how the jobs are created, which also leads to a change of use case for the feature depending on which option is chosen.

Choosing Backup Jobs allows VCSPs to add any jobs that may be registered on the Cloud Connect server…though in reality there shouldn’t be any configured, due to licensing constraints. The other two options provide the different use cases.

Backup Repositories:

This allows the VCSP to back up to tape one or more cloud repositories that can contain one or multiple tenants. This can allow the VCSP to back up the Cloud Connect repository in whole to an offsite location for longer-term retention.

The ability to archive tenant Cloud Connect backups to tape can help VCSPs protect their own infrastructure against disasters that may result in loss of tenant data. It can also be used as another level of revenue-generating service. As an example, there could be two service offerings for Cloud Connect Backup…one with a basic SLA which has only one copy of the backup data stored…and another with an advanced SLA that has data saved in two locations…the Cloud Connect repository and the tape media.

Tenants:

This option offers a lot more granularity and gives the VCSP the ability to offer an additional level of protection at a per-tenant level. In fact, you can also drill down to the tenant repository level and select individual repositories if tenants have more than one configured.

Again, this can be done per tenant, or there can be one master job for all tenants.

It’s important to understand that all tasks within the tenant backup to tape feature are performed by the VCSP. Unless the VCSP has created a portal that surfaces information about the jobs, the tenant is generally unaware of the tape infrastructure; the tenant can’t view or manage backup to tape jobs or perform operations with the backups created by these jobs. There is scope for VCSPs to integrate such jobs and actions into their automation portals for self-service.

Restores:

VCSPs can restore tenant data from tape for one or more tenants at the same time. The restore can go to the original location, to a new location, or be exported to backup files on local disk.

Wrap Up:

Tenant to Tape, or Tape as a Service for Cloud Connect Backup, was a feature that didn’t get much airplay in the lead-up to the Update 4 launch; however, it gives VCSPs more options to protect tenant data and truly offer an air-gapped solution to better protect that data.

References:

https://www.veeam.com/wp-using-aws-vtl-gateway-deployment-guide.html

https://aws.amazon.com/about-aws/whats-new/2016/08/backup-and-archive-to-aws-storage-gateway-vtl-with-veeam-backup-and-replication-v9/

Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking for interactions between a VMware Cloud on AWS SDDC and an Amazon VPC, there is a little bit to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, if you want to connect to S3, the default configuration is to go through the Amazon ENI (Elastic Network Interface), which means that unless configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on networking and security groups when working on VMware Cloud on AWS, which are worth a read to get a deeper understanding of VMC-to-AWS networking.

There is a way to change this behaviour and make connectivity to Amazon S3 go via the SDDC’s Internet Gateway. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low-latency connection between the VPC and the SDDC, which also provides free egress. In the case of S3 and the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI, the following needs to be actioned.

Create Endpoint:

The first step is to go into the AWS Console, go to the VPC that’s connected to the VMC service, and create a new endpoint for S3 as shown below, making sure you select the correct route table.
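If you prefer the AWS CLI to the console, the equivalent call is something like the following; the VPC and route table IDs are placeholders, and the service name shown assumes the us-west-2 region.

```powershell
# Create a gateway endpoint for S3 in the VPC connected to the SDDC,
# attached to the route table used by the connected VPC (IDs are placeholders)
aws ec2 create-vpc-endpoint `
    --vpc-id vpc-0123456789abcdef0 `
    --service-name com.amazonaws.us-west-2.s3 `
    --route-table-ids rtb-0123456789abcdef0
```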

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic to the logical network or networks. It’s a basic HTTPS inbound rule where the source is the SDDC network or networks you want access from.

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks you want to access Amazon S3 from.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

How to Copy Amazon S3 Buckets with AWS CLI

I am doing some work on validated restore scenarios using the new Veeam Cloud Tier, which is backed by an Object Storage Repository pointing at an Amazon S3 bucket. So that I wasn’t messing with the live data, I wanted a way to copy and access the objects from another bucket or folder. There is no option at the moment to achieve this via the AWS Console; however, it can be done via the AWS CLI.

The first step was to ensure I had the AWS CLI installed on my MBP and that it was at the latest version:
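Something like the following does the trick; the pip-based upgrade assumes the CLI was installed per the macOS guide referenced at the end of this post.

```powershell
# Confirm the CLI is present and up to date
aws --version
pip3 install awscli --upgrade --user
```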

For the first part of the copy process, I cheated and created a new Bucket from the AWS Console that was based on the one I wanted to copy.

The next step is to make sure that the AWS CLI is configured with the correct AWS access and secret keys. Once done, the command to copy/sync buckets is a simple one.
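The sync command itself is a one-liner (bucket names are placeholders):

```powershell
# Recursively copy/sync all objects from the source bucket to the new bucket
aws s3 sync s3://source-bucket s3://destination-bucket
```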

Obviously the time to complete the operation will depend on the number of objects in the bucket and whether it’s cross-region or local. It took about 4 hours to copy across ~50GB of data from us-east-2 to us-west-2, going at about 4MB/s. By default, progress is shown on the screen.

Once the first pass was complete, I ran the same command again; this time it looks for differences between the source and destination and only syncs the differences. You can run the command below to view the total objects and total size of both buckets for comparison.
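That comparison is the ls variant with --summarize (again, bucket names are placeholders):

```powershell
# The last two lines of output show Total Objects and Total Size for each bucket
aws s3 ls s3://source-bucket --recursive --summarize
aws s3 ls s3://destination-bucket --recursive --summarize
```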

That is it! Pretty simple process. I’ll blog about the actual reason behind the Veeam Cloud Tier requirement and put this into action at a later date!

References:

https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html

https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed, the biggest industry shift to come out of the event from my point of view was the fact that AWS is going all guns blazing into the on-premises world. With the announcement of AWS Outposts, the long-held belief that the public cloud is the panacea of all things became blurred. No one company has pushed such a hard cloud-only message as AWS…no one company had the power to change the definition of what it is to run cloud services…AWS did that last week at re:Invent.

Yes, Microsoft has had the Azure Stack concept for a number of years now; however, they have not yet executed on the promise of it. Azure Stack is seen by many as a white elephant, even though it’s now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point, though, is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and on-premises platform was the path to hybridity.

AWS is a juggernaut, and it’s my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world. This resulted in VMware Cloud on AWS. It seems AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware has and still dominates…at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It’s catchy, and straight away you can make sense of what it is…however, I decided to look up the official meaning of the word…and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition as per the Oxford Dictionary fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate from AWS regions, while also offering the ability to consume that compute platform as if it were an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market created by shifting IT sentiment.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble; however, AWS is not going anywhere fast. This move does, however, indicate a shift in tactics and means that AWS can penetrate the on-premises market quicker to extend their empire.

The third definition is also pertinent in the context of what AWS is looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However, my feeling is that they are not guarding against an attack…they are the attack!

Where does VMware fit in all this?

Given my thoughts above…where does VMware fit into all this? When the announcement was first made on stage, I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a platform directly competitive with on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering…though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud on AWS Outposts is VMware’s new As-a-Service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don’t care about where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice…the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mind shift

The big shift here from VMware, which began with VMware Cloud on AWS, is a shift towards managed services: a fundamental change in the mindset of the customer in the way they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up…for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises as well. Overall, Outposts is a win-win for both AWS and VMware…however, the proof will be in the execution and uptake. We won’t know how it all pans out until the product becomes available…apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it plays out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center
