Tag Archives: AWS

v10 Enhancements – Mounting Object Storage Repository for DR

Version 10 of Veeam Backup & Replication isn’t too far away and we are currently at the end of a second private BETA for our customers and partners. There has been a fair bit of content released around v10 functionality and features from our Veeam Vanguards over the past couple of weeks, and as we move closer to GA I am doing a series on some of the cool new enhancements coming as part of the release. These will be quick, short takes that give a glimpse into what’s coming in v10.

Mounting Object Storage Repository for Streaming Disaster Recovery

The Cloud Tier was introduced in Veeam Backup & Replication 9.5 Update 4 and focused on the offloading of data from local repositories to Object Storage repositories, essentially looking to reduce the cost and overhead of ever-growing local primary repositories. Due to the smarts we built into the feature, the use cases for Cloud Tier expanded beyond the offloading of data and into recovery options.

Because we hold a replicated copy of the VBK metadata as well as the actual backup data that is indexed as blocks in Object Storage, we have the ability to leverage the data sitting there for recovery purposes. I’ve already shown this a number of times this year, and presented on the recovery and resiliency of the Cloud Tier at Cloud Field Day 5.

With v10, we have made this process even easier by introducing a Mount function that will enable users to import backup restore points for recovery purposes in the case of disaster. This can even be done with the Community Edition, which means that the Cloud Tier now becomes a mechanism for recovery from any device to almost any platform.

Quickly going over how this works, the first step is to recreate the Object Storage Repository with the same settings as the one that existed in the original location.

At this point we can leverage the new v10 feature that allows you to Import the backup data contained on the Object Storage Repository by right clicking on the repository and selecting Import Backups.

This will store the available restore points in the Backup & Replication database and have them appear under Imported Backups in the console.

It’s important and cool to note that at this stage we haven’t downloaded the metadata shells that constitute the dehydrated VBK. One of the extra smart things we have built into this feature is that the metadata and VBK shells are only downloaded once a restore operation has been started, meaning quicker setup and more specific re-syncing of the metadata shells.

On that note, all existing restore operations are available at this point.

Once a restore operation is triggered, only then is the required metadata downloaded and reconstructed into the required shell chain to a temp directory. The example below shows the shells of a full and an incremental triggered by an Instant VM Recovery (IVMR) Operation.

The data required to perform the IVMR is streamed from the Object Storage Repository (Capacity Tier Extent).

Once restore operations have been completed you can go back to the Object Storage Repository, right click and select Detach.

This unmounts the Object Storage, removes the restore points from the Imported Backups view and deletes the downloaded contents of the temp folder where the metadata shells were staged.

Wrap Up:

That was a quick look at one of my personal favourite new enhancements in v10. We have taken an operation that was already being leveraged in 9.5 Update 4, thanks to the smarts built into the Cloud Tier for recovery operations, and made it quicker and more efficient. This also allows users to effectively restore to any platform from any device that has Veeam Backup & Replication installed and has access to the Object Storage platform!

When put together with the new Copy mode being introduced into v10, we all of a sudden have a solution that can achieve very low RPO and RTO for disaster recovery… more to come on that aspect when v10 launches.

Stay tuned over the next few weeks as I go through some more hidden gems.

Disclaimer: The information and screenshots in this post are based on BETA code and may be subject to change come final GA.

Released: Backup for AWS …Free for up to 10 EC2 Instances!

This week at AWS re:Invent, exciting news for a lot of us at Veeam was announced as Veeam Backup for AWS (Build 1.0.0.1345) went GA. Available through the AWS Marketplace, the product can be deployed within five minutes and be ready to back up EC2 instances. From my point of view, apart from the technical aspects, probably the biggest news of this release is that the FREE edition will protect up to 10 EC2 instances out of the box and is fully featured.

Apart from being free there are a number of capabilities in this initial release which are innovative and will prove valuable for our customers consuming the product.

  • Automates Amazon EBS snapshots for frequent backup and fast restores
  • Backup to Amazon S3 Repository for long-term retention
  • Policy-based protection and job creation
  • On-Demand Worker Nodes that work as data movers
  • Web-based management UI
  • Built-in cost estimation
  • Support for IAM role separation, cross-region and cross-account configuration, and multi-factor authentication
  • Restore to the original instance
  • Restore to a new instance
  • File-level recovery

The Veeam Product Strategy Team has already produced a few blog posts around the release which go into greater detail on the core components and features of Backup for AWS.

As we look to quickly iterate on this initial v1 release we will look to support even more AWS services. For now, this is a significant moment in Veeam’s history as we broaden our own in-house capabilities and extend the Veeam Backup Platform to cover even more workloads and make them portable as they land in our repositories in our portable data format.


There is still a Sting in the Tail for Cloud Service Providers

This week it gave me great pleasure to see my former employer, Zettagrid, announce a significant expansion of their operations, with the addition of three new hosting zones to go along with their existing four zones in Australia and Indonesia. They also announced the opening of operations in the US. Apart from the fact I still have a lot of good friends working at Zettagrid, the announcement vindicates the position and role of the boutique Cloud Service Provider in the era of the hyper-scale public cloud providers.

When I decided to leave Zettagrid, I’ll be honest and say that one of the reasons was that I wasn’t sure where the IaaS industry would be placed in five years. That was more than three years ago, and in that time the industry has pulled back significantly from the previously inferred position of total and complete hyper-scale dominance in the cloud and hosting market.

Cloud is not a Panacea:

The industry no longer talks about the cloud as a holistic destination for workloads, and more and more over the past couple of years the move has been towards multi and hybrid cloud platforms. VMware has (in my eyes) been the leader of this push, but the inflection point came at AWS re:Invent last year when AWS Outposts was announced: a shift in mindset by the undisputed leader in the public cloud space towards consuming an on-premises resource in a cloud way.

I’ve always been a big supporter of boutique Service Providers and Managed Service Providers… it’s in my blood, and my role at Veeam allows me to continue to work with top innovative service providers around the world. Over the past three years, I’ve seen the really successful ones thrive by pivoting to offer their partners and tenants differentiated services… going beyond just traditional IaaS.

These might be in the form of enhancing their IaaS platform by adding more avenues to consume services. Examples of this are adding APIs, or the ability for the new wave of Infrastructure as Code tools to provision and manage workloads. vCloud Director is a great example of continued enhancement that, with every release, offers something new to the service provider tenant. The Pluggable Extension Architecture now allows service providers to offer new services for backup, Kubernetes and Object Storage.

Backup and Disaster Recovery is Driving Revenue:

A lot of service providers have also transitioned to offering Backup and Disaster Recovery solutions, which in many cases has been their biggest growth area over the past number of years, even with the aggressively low pricing the hyper-scalers offer for their cloud object storage platforms.

All this leads me to believe that there is still a very significant role to be had for Service Providers in conjunction with other cloud platforms for a long time to come. The service providers that are succeeding and growing are not sitting on their hands and expecting what once worked to continue working. The successful service providers are looking at ways to offer more services and continue to be that trusted provider of IT.

I was once told in the early days of my career that if a client has 2.3 products with you, then they are sticky and the likelihood is that you will have them as a customer for a number of years. I don’t know the actual accuracy of that, but I’ve always carried that belief. This flies in the face of modern thinking around service mobility, which has been reinforced by the improvement in underlying network technologies that allow the portability and movement of workloads. This also extends to the ease with which a modern application can be provisioned, managed and ultimately migrated. That said, all service providers want their tenants to be sticky and not move.

There is a Future!

Whether it be through continuing to evolve existing service offerings, adding more ways to consume their platform, becoming a broker for public cloud services or being a trusted final destination for backup and Disaster Recovery, the talk about the hyper-scalers dominating the market is currently not a true reflection of the industry… and that is a good thing!

Cloud Tier Data Migration between AWS and Azure… or anywhere in between!

At the recent Cloud Field Day 5 (CFD#5) I presented a deep dive on the Veeam Cloud Tier, which was released as a feature extension of our Scale Out Backup Repository (SOBR) in Update 4 of Veeam Backup & Replication. Since we went GA we have been able to track the success of the feature by looking at Public Cloud Object Storage consumption by Veeam customers using it. As of last week, Veeam customers had offloaded petabytes of backup data into Azure Blob and Amazon S3… not counting the data being offloaded to other Object Storage repositories.

During the Cloud Field Day 5 presentation, Michael Cade talked about the portability of Veeam’s data format and how we do not lock our customers into any specific hardware or format that requires a specific underlying file system. We offer complete flexibility and agnosticism in where your data is stored, and the same is true when talking about which Object Storage platform to choose for the offloading of data with the Cloud Tier.

I had a need recently to set up a Capacity Tier extent that was backed by an Object Storage Repository on Azure Blob. I wanted to use the same backup data that I had in an existing Amazon S3 backed Capacity Tier while still keeping things clean in my Backup & Replication console… luckily we have built in a way to migrate to a new Object Storage Repository, taking advantage of the innovative tech we have built into the Cloud Tier.

Cloud Tier Data Migration:

During the offload process data is tiered from the Performance Tier to the Capacity Tier, effectively dehydrating the VBK files of all backup data and leaving only the metadata with an index that points to where the data blocks have been offloaded in the Object Storage.

This process can also be reversed and the VBK file can be rehydrated. The ability to bring the data back from Capacity Tier to the Performance Tier means that if there was ever a requirement to evacuate or migrate away from a particular Object Storage Provider, the ability to do so is built into Backup & Replication.

In this small example, as you can see below, the SOBR was configured with a Capacity Tier backed by Amazon S3 and using about 15GB of Object Storage.

The first step is to download the data back from the Object Storage and rehydrate the VBK files on the Performance Tier extents.

There are two ways to achieve the rehydration or download operation.

  1. Via the Backup & Replication Console
  2. Via a PowerShell Cmdlet

Rehydration via the Console:

From the Home menu under Backups, right click on the Job Name and select Backup Properties. From here there is a list of the files contained within the job and also the objects that they contain. Depending on where the data is stored (remembering that the data blocks are only ever in one location… the Performance Tier or the Capacity Tier) the icon against the file name will be slightly different, with offloaded files represented with a cloud.

Right clicking on any of these files will give you the option to copy the data back to the Performance Tier. You have the choice to copy back the backup file alone, or the backup file and all its dependencies.

Once this is selected, a SOBR Download job is kicked off and the data is moved back to the Performance Tier. It’s important to note that our Intelligent Block Recovery comes into play here and looks at the local data blocks to see if any match what is being downloaded from the Object Storage… if so it will copy them from the Performance Tier, saving on egress charges and also speeding up the process.

In the image above you can see the Download job working, having downloaded only 95.5MB from Object Storage with 15.1GB copied from the Performance Tier… meaning that for the most part the local data blocks were able to be used for the rehydration.

The one caveat to this method is that you can’t select files in bulk or across multiple backup jobs, so the process to rehydrate everything from the Capacity Tier can be tedious.

Rehydration via PowerShell:

To solve that problem we can use PowerShell to call the Start-VBRDownloadBackupFile cmdlet to do the bulk of the work for us. Below are the steps I used to get the backup job details, feed them through to a variable that contains all the file names, and then kick off the Download job.
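
The script itself was originally captured as a screenshot, so below is a minimal sketch of the approach. The job name is a placeholder, and using the GetAllStorages() method to enumerate the backup files is an assumption on my part… check the PowerShell reference in the References section below for the exact syntax.

```powershell
# Get the backup job details (replace the name with your own job)
$Backup = Get-VBRBackup -Name "Backup Job 1"

# Enumerate all backup files (VBK/VIB) contained in the backup
# (GetAllStorages() is one way to do this; an assumption on my part)
$Files = $Backup.GetAllStorages()

# Kick off the SOBR Download job to rehydrate the files back to the Performance Tier
Start-VBRDownloadBackupFile -BackupFile $Files

# After the SOBR Capacity Tier has been re-pointed at the new Object Storage
# Repository, trigger the Offload job to dehydrate the data up to Azure Blob
Start-VBROffloadBackupFile -Backup $Backup
```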

The PowerShell window will then show the Download job running.

Completing the Migration:

No matter which way the Download job is initiated, we can see the progress from the Backup & Replication console under the Jobs section.

And looking at the Disk and Network sections of Windows Resource Monitor we can see connections to Amazon S3 pulling the required blocks of data down.

Once the Download job has been completed and all VBKs have been rehydrated, the next step is to change the configuration of the SOBR Capacity Tier to point at the Object Storage Repository backed by Azure Blob.

The final step is to initiate an offload to the new Capacity Tier via an Offload job… this can be triggered via the console or via PowerShell (as shown in the last command of the PowerShell code above), and because we already have a set of data that satisfies the conditions for offload (sealed chains and backups outside the operational restore window) the data will be dehydrated once again… but this time up to Azure Blob.

The used space shown below in the Azure Blob Object Storage matches the used space initially in Amazon S3. All recovery operations show Restore Points on the Performance Tier and on the Capacity Tier as dictated by the operational restore window policy.

Conclusion:

As mentioned in the intro, the ability for Veeam customers to have control of their data is an important principle revolving around data portability. With the Cloud Tier we have extended that by allowing you to choose the Object Storage Repository of your choice for cloud-based storage of Veeam backup data… but we have also given you the option to pull that data out and shift it when and where desired. Migrating data between AWS, Azure or any platform is easily achieved and can be done without too much hassle.

References:

https://helpcenter.veeam.com/docs/backup/powershell/object_storage_data_transfer.html?ver=95u4

Update 4 for Service Providers – Cloud Mobility and External Repository for N2WS

When Veeam Backup & Replication 9.5 Update 4 went Generally Available a couple of weeks ago I posted a What’s in it for Service Providers blog. In that post I briefly outlined all the new features and enhancements in Update 4 as they relate to our Veeam Cloud and Service Providers. As mentioned, each new major feature deserves its own separate post. I’ve covered off three features so far, and today I’m going to talk about two features that are more aligned to Managed Service Providers, but could still have a place in the pure IaaS world.

As a reminder here are the top new features and enhancements in Update 4 for VCSPs.

Cloud Mobility:

The Cloud Mobility feature is actually the new umbrella name for our Restore to functionality. Prior to Update 4 we had the ability to Restore to Microsoft Azure only. With the release of Update 4 we have added the ability to Restore to Microsoft Azure Stack and Amazon EC2. It’s important to point out what Cloud Mobility isn’t… a disaster recovery feature set, in that you can’t rely on this feature in the same way that Cloud Connect Replication allows you to power on VM replicas on demand for DR.

Though you could configure restore tasks to run on demand via PowerShell commands and have systems in a ready state after recovery, it is difficult to attach an RPO or RTO to the recovery process, and therefore Cloud Mobility should be used for migrations and testing. In essence this is why it is called Cloud Mobility… to give users and Service Providers the flexibility to shift workloads from one platform to another with ease.

Restore to EC2:

The ability to restore direct to EC2 is something that is in demand these days, and the addition of this feature to Update 4 was one of the most highly anticipated. In enabling the restoration of workloads into EC2 we have given our customers and partners the option to back up workloads from the following:

These backups, once stored in the Veeam Backup File format, ensure absolute portability of those workloads. In terms of restoring to EC2, the process is straightforward and can be done via the Backup & Replication console or via PowerShell.
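
As a rough illustration of the PowerShell route, a restore could be scripted along the lines of the sketch below. All names are placeholders and the exact parameter set is an assumption on my part (additional parameters such as license and disk type may be required)… treat it as a starting point and check the Restore to Amazon EC2 help page in the References section for the definitive syntax.

```powershell
# Grab the most recent restore point for the workload (names are placeholders)
$RestorePoint = Get-VBRRestorePoint -Name "VM01" |
    Sort-Object -Property CreationTime | Select-Object -Last 1

# Resolve the target EC2 constructs from the AWS account added to the console
$Account = Get-VBRAmazonAccount | Select-Object -First 1
$Region = Get-VBRAmazonEC2Region -Account $Account -RegionType Global |
    Where-Object { $_.Name -eq "us-east-2" }
$InstanceType = Get-VBRAmazonEC2InstanceType -Region $Region |
    Where-Object { $_.Name -eq "t2.medium" }
$VPC = Get-VBRAmazonEC2VPC -Region $Region | Select-Object -First 1
$SecurityGroup = Get-VBRAmazonEC2SecurityGroup -VPC $VPC | Select-Object -First 1
$Subnet = Get-VBRAmazonEC2Subnet -VPC $VPC | Select-Object -First 1

# Kick off the restore of the workload as a new EC2 instance
Start-VBRVMRestoreToAmazon -RestorePoint $RestorePoint -Region $Region `
    -InstanceType $InstanceType -VMName "VM01-restored" -VPC $VPC `
    -SecurityGroup $SecurityGroup -Subnet $Subnet -Reason "Migration testing"
```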

Again, the focus of this feature is to enable migrations and testing. However, when put together with the External Repository, we also close the loop by having a way to restore EC2 instances that were initially backed up with N2WS Backup & Recovery and archived to an Amazon S3 bucket.

It should also be noted that to perform a recovery, only the most recent restore point can be used.

External Repository:

The External Repository allows you to add an Amazon S3 bucket that contains backups created by N2WS Backup & Recovery for AWS environments. Backup & Recovery for AWS creates backups of Elastic Block Store disk volumes of EC2 instances. As part of the 2.4 release these backups were able to be placed directly into Amazon S3 object storage repositories, and this is what is added to the Veeam Backup & Replication console as an External Repository.

Backup & Recovery for AWS uses the Veeam Backup API to preserve the backup structure in the native Veeam format, housed in the Amazon S3 bucket as VBKs. The External Repository cannot be used as a target for backup or backup copy jobs. Once the External Repository is configured, N2WS-backed VMs can be manipulated through the Backup & Replication console as per usual. This allows all the restore capabilities, including Restore to EC2, and more importantly the ability to perform Backup Copy Jobs against the backed up data to enable even longer term retention outside of Amazon S3.

Wrap Up:

The addition of Restore to EC2, Restore to Azure Stack and the External Repository can be used by managed service providers and service providers to offer true Cloud Mobility to their customers. Also, while a lot of organizations are moving to the Public Cloud… this is not a fait accompli, and they do sometimes want to get workloads out of those platforms and back on-premises or to Service Provider clouds. It shouldn’t be a Hotel California situation, and with these new Update 4 features Veeam customers have more choice than ever.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/restore_amazon.html?ver=95u4

https://helpcenter.veeam.com/docs/backup/vsphere/external_repository.html?ver=95u4

Automatic restore of multiple machines from Veeam to AWS


Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking for interactions between a VMware Cloud on AWS SDDC and an Amazon VPC, there is a little bit to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, by default if you want to connect to S3 the configuration is to go through the Amazon ENI (Elastic Network Interface), which means that unless configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on Networking and Security Groups when working on VMware Cloud on AWS which are worth a read to get a deeper understanding of VMC to AWS networking.

There is a way to change this behaviour so that Amazon S3 connectivity goes via the SDDC’s Internet Gateway. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low latency connection between the VPC and the SDDC which also provides free egress. In the case of S3 and utilising the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI the following needs to be actioned.

Create Endpoint:

First step is to go into the AWS Console, go to the VPC that’s connected to the VMC service and create a new Endpoint for S3 as shown below, making sure you select the correct Route Table.
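
If you prefer the AWS CLI over the console, the equivalent call looks something like the below… the VPC and route table IDs are placeholders for your own environment.

```powershell
# Create a Gateway endpoint for S3 in the VPC connected to the SDDC
# (S3 endpoints are Gateway type by default, so no endpoint type is specified)
aws ec2 create-vpc-endpoint `
    --vpc-id vpc-0123456789abcdef0 `
    --service-name com.amazonaws.us-west-2.s3 `
    --route-table-ids rtb-0123456789abcdef0
```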

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic to the logical network or networks. It’s a basic HTTPS Inbound rule where your source is the SDDC network or networks you want access from.
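
Again, as a CLI alternative to clicking through the console, the rule would look something like this… the security group ID and source CIDR are placeholders.

```powershell
# Allow inbound HTTPS (443) from the SDDC logical network into the VPC
aws ec2 authorize-security-group-ingress `
    --group-id sg-0123456789abcdef0 `
    --protocol tcp --port 443 `
    --cidr 192.168.100.0/24
```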

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks that need access to Amazon S3.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

How to Copy Amazon S3 Buckets with AWS CLI

I am doing some work on validated restore scenarios using the new Veeam Cloud Tier that is backed by an Object Storage Repository pointing at an Amazon S3 bucket. So that I am not messing with the live data, I wanted a way to copy and access the objects from another bucket or folder. There is no option at the moment to achieve this via the AWS Console, however it can be done via the AWS CLI.

First step was to ensure I had the AWS CLI installed on my MBP and that it was at the latest version:
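
The check itself is a one-liner, and the upgrade path below assumes a pip-based install as per the macOS install guide linked in the References section.

```powershell
# Show the installed AWS CLI version
aws --version

# Upgrade to the latest version (assumes the CLI was installed via pip)
pip3 install --upgrade --user awscli
```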

For the first part of the copy process I cheated and created a new bucket from the AWS Console, configured the same as the one I wanted to copy.
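
For what it’s worth, the bucket creation can also be done from the CLI… the bucket name and region below are placeholders.

```powershell
# Create the destination bucket in the target region
aws s3 mb s3://destination-bucket --region us-west-2
```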

Next step is to make sure that the AWS CLI is configured with the correct AWS Access and Secret keys. Once done, the command to copy/sync buckets is a simple one.
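
A sketch of both steps is below… the bucket names and regions are placeholders standing in for the ones used in this example.

```powershell
# Set the access key, secret key, default region and output format interactively
aws configure

# Copy/sync all objects from the source bucket to the destination bucket
aws s3 sync s3://source-bucket s3://destination-bucket --source-region us-east-2 --region us-west-2
```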

Obviously the time to complete the operation will depend on the number of objects in the bucket and whether it’s cross-region or local. It took about 4 hours to copy across ~50GB of data from us-east-2 to us-west-2, going at about 4MB/s. By default the progress is shown on the screen.

Once the first pass was complete I ran the same command again, which this time looks for differences between the source and destination and only syncs the differences. You can run the command below to view the Total Objects and Total Size of both buckets for comparison.
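
The summary comes from the ls command’s --summarize flag, run against each bucket in turn:

```powershell
# Recursively list each bucket, printing Total Objects and Total Size at the end
aws s3 ls s3://source-bucket --recursive --summarize --human-readable
aws s3 ls s3://destination-bucket --recursive --summarize --human-readable
```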

That is it! Pretty simple process. I’ll blog about the actual reason behind the Veeam Cloud Tier requirement and put this into action at a later date!

References:

https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html

https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket

What Services Providers Need to Think About in 2019 and Beyond…

We are entering interesting times in the cloud space! We should no longer be talking about the cloud as a destination, and we shouldn’t be talking about how cloud can transform business… those days are over! We have entered the next level of adoption, whereby the cloud as a delivery framework has become mainstream. You only have to look at what AWS announced last year at re:Invent with its Outposts offering. The rise of automation and orchestration in mainstream IT has also meant that cloud can be consumed in a more structured and repeatable way.

To that end… where does this leave traditional Service Providers who have for years offered Infrastructure as a Service as the core of their offerings?

Last year I wrote a post on how the VM shouldn’t be the base unit of measurement for cloud… and even with some of the happenings since then, I remain convinced that Service Providers can continue to exist and thrive by offering value around the VM construct. Backup and DR as a service remains core to this, and there is ample thirst out there in the market from customers wanting to consume services from cloud providers that are not the giant hyper-scalers.

Almost all technology vendors are succumbing to the reality that they need to extend their own offerings to include public cloud services. It is what the market is demanding… and it’s what the likes of AWS, Azure, IBM and GCP are pushing for. The backup vendor space especially has had to extend technologies to consume public cloud services such as Amazon S3, Glacier or Azure Blob as targets for offsite backups. Veeam is upping the ante with our Update 4 release of Veeam Backup & Replication 9.5, which includes Cloud Tier to object storage and additional Direct Restore capabilities to Azure Stack and Amazon EC2.

With these additional public cloud features, Service Providers have a right to feel somewhat under threat. However, we have seen this before (Office 365 versus Hosted Exchange as an example) and the direction Service Providers need to take is to continue to develop offerings based on vendor technologies and continue to add value to the relationship they have with their clients. I wrote a long time ago, when VMware first announced vCloud Air, that people tend to buy based on relationship… and there is no more trusted relationship than that of the Service Provider.

With that, there is no doubt that clients will want to look at using a combination of services from a number of different providers. From where I stand, the days of clients going all in with one provider for all services are gone. This is an opportunity for Service Providers to be the broker. This isn’t a new concept, and plenty of Service Providers have thought about how they themselves leverage the Public Cloud to not only augment their own backend services, but make them consumable for their clients via their own portals or systems.

With all that in mind…in my opinion, there are five main areas where Service Providers need to be looking in 2019 and beyond:

  1. Networking is central to this, and the most successful Service Providers have already worked this out and offer a number of different networking services. It’s imperative that Service Providers offer a way for clients to go beyond their own networks and have the option to connect out to other cloud networks. Telcos and other carriers have built amazing technology frameworks based on APIs to consume networking in ways that mean extending a network shouldn’t be thought of as a complex undertaking anymore.
  2. Backup, Replication and Recovery is something that Service Providers have offered for a long time now, however there is more and more competition in this area today in the form of built-in protection at the application and hardware level. Where providers have traditionally excelled is at the VM level. Again, that will remain the base unit of measurement for cloud moving forward, but Service Providers need to enhance their BaaS and R/DRaaS offerings to remain competitive. Leveraging public cloud to gain economies of scale is one way to enhance those offerings.
  3. Gateway Services are a great way to lock in customers. Gateway services are typically those which are low effort for both the Service Provider and client alike. Take the example of Veeam’s Cloud Connect Backup. It’s a simple service to set up at both ends and works without too much hassle… but there is power for the Service Provider in the data that’s being transferred into their network. From there, auxiliary services can be offered such as recovery or other business continuity services. It also leads into discussions about Replication services, which can be worked into the total service offering as well.
  4. Managed Services is the one area where the hyper-scalers can’t match Service Providers, and it’s the one thing that will keep all Service Providers relevant. I’ve already mentioned the trusted advisor thought process in the sales cycle. This is all about continuing to offer value around great vendor technologies, with the aim of securing the Service Provider to client relationship.
  5. Developing a Channel is central to being able to scale without the need to add resources to the business. Again, the most successful Service Providers all have a Channel/Partner program in place, and it’s the best way to extend that managed service, trusted provider reach. I’ve seen a number of providers unable to make a channel play work due to poor execution, however if done right it’s one way to extend that reach to more clients… staying relevant in the wake of the hyper-scalers.

This isn’t a new “Differentiate or Die” message… it’s one of ensuring that Service Providers continue to evolve with the market and with industry expectations. That is the only way to thrive and survive!

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed, the biggest industry shift to come out of the event from my point of view was the fact that AWS are going all guns blazing into the on-premises world. With the announcement of AWS Outposts, the long held belief that the public cloud is the panacea of all things became blurred. No one company has pushed such a hard cloud-only message as AWS… no one company had the power to change the definition of what it is to run cloud services… AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now, however they have not executed on the promise of it yet. Azure Stack is seen by many as a white elephant even though it’s now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point though is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and on-premises platform was the path to hybridity.

AWS is a juggernaut, and it’s my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world, resulting in VMware Cloud on AWS. It seems AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware still dominates… at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It’s catchy and straight away you can make sense of what it is… however, I decided to look up the official meaning of the word, and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition as per the Oxford Dictionary fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate to AWS regions, while also offering the ability to consume that compute platform like it was an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market that is being craved for by shifting IT sentiment.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble, however AWS is not going anywhere fast. This move does however indicate a shift in tactics and means that AWS can penetrate the on-premises market quicker to extend their empire.

The third definition is also pertinent in the context of what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However, my feeling is that they are not guarding against an attack… they are the attack!

Where does VMware fit in all this?

Given my thoughts above… where does VMware fit into all this? At first, when the announcement was made on stage, I was confused. With Pat Gelsinger on stage next to Andy Jassy my first impression was that VMware had given in. Here was AWS announcing a directly competitive platform to on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering… though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud on AWS Outposts is VMware’s new as-a-service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don’t care where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice… the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mindset Shift

The big shift here from VMware, which began with VMware Cloud on AWS, is a shift towards managed services: a fundamental change in the mindset of the customer in the way in which they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up… for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises as well. Overall, Outposts is a win-win for both AWS and VMware… however the proof will be in the execution and uptake. We won’t know how it all pans out until the product becomes available… apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

AWS re:Invent 2018 – Veeam and N2WS Recap and Thoughts

There was so much to take away from AWS re:Invent last week. In my opinion, having attended a lot of industry events over the past ten or so years, this year’s re:Invent has left the industry with a lot to think about! AWS vigorously defended their position as the number one Public Cloud destination (in their eyes) while trying to lay a path for future growth by expanding into the true enterprise space. Also, the announcement of Outposts set a path to try and dominate the hybrid world with an on-premises offering.

Instead of writing down my extended thoughts, it’s more consumable to hear Rick Vanover and myself talk about the event from a Veeam perspective in the short embedded video below. I’ve also embedded a video with David Hill and Sebastian Straub covering things from an N2WS perspective, as well as talking about the N2WS-related announcements at re:Invent 2018.

I’ve also posted the Veeam session video here:
