Tag Archives: AWS

Cloud Tier Data Migration between AWS and Azure… or anywhere in between!

At the recent Cloud Field Day 5 (CFD#5) I presented a deep dive on the Veeam Cloud Tier, which was released as a feature extension of our Scale Out Backup Repository (SOBR) in Update 4 of Veeam Backup & Replication. Since the feature went GA we have been able to track its success by looking at Public Cloud Object Storage consumption by Veeam customers using the feature. As of last week, Veeam customers have offloaded petabytes of backup data into Azure Blob and Amazon S3…and that's not counting the data being offloaded to other Object Storage repositories.

During the Cloud Field Day 5 presentation, Michael Cade talked about the portability of Veeam's data format and how we do not lock our customers into any specific hardware or a format that requires a specific underlying file system. We offer complete flexibility in where your data is stored, and the same is true when choosing which Object Storage platform to use for offloading data with the Cloud Tier.

I recently needed to set up a Capacity Tier extent backed by an Object Storage Repository on Azure Blob. I wanted to use the same backup data that I had in an existing Amazon S3 backed Capacity Tier while still keeping things clean in my Backup & Replication console…luckily we have built in a way to migrate to a new Object Storage Repository, taking advantage of the innovative tech we have built into the Cloud Tier.

Cloud Tier Data Migration:

During the offload process, data is tiered from the Performance Tier to the Capacity Tier, effectively dehydrating the VBK files of all backup data and leaving only the metadata with an index that points to where the data blocks have been offloaded in the Object Storage.

This process can also be reversed and the VBK file can be rehydrated. The ability to bring the data back from Capacity Tier to the Performance Tier means that if there was ever a requirement to evacuate or migrate away from a particular Object Storage Provider, the ability to do so is built into Backup & Replication.

In this small example, as you can see below, the SOBR was configured with a Capacity Tier backed by Amazon S3, using about 15GB of Object Storage.

The first step is to download the data back from the Object Storage and rehydrate the VBK files on the Performance Tier extents.

There are two ways to achieve the rehydration or download operation.

  1. Via the Backup & Replication Console
  2. Via a PowerShell Cmdlet

Rehydration via the Console:

From the Home menu, under Backups, right-click on the Job Name and select Backup Properties. From here there is a list of the files contained within the job and also the objects that they contain. Depending on where the data is stored (remembering that the data blocks are only ever in one location… the Performance Tier or the Capacity Tier), the icon against the file name will be slightly different, with offloaded files represented by a Cloud.

Right-clicking on any of these files will give you the option to copy the data back to the Performance Tier. You have the choice to copy back the backup file alone, or the backup file and all its dependencies.

Once this is selected, a SOBR Download job is kicked off and the data is moved back to the Performance Tier. It's important to note that our Intelligent Block Recovery comes into play here, looking at the local data blocks to see if any match what is being downloaded from the Object Storage… if so, it copies them from the Performance Tier, saving on egress charges and also speeding up the process.

In the image above you can see the Download job at work: only 95.5MB was downloaded from Object Storage, with 15.1GB copied from the Performance Tier… meaning most of the data blocks needed for the rehydration were already available locally.

The one caveat to this method is that you can't bulk-select files or multiple backup jobs, so rehydrating everything from the Capacity Tier can be tedious.

Rehydration via PowerShell:

To solve that problem we can use PowerShell and the Start-VBRDownloadBackupFile cmdlet to do the bulk of the work for us. Below are the steps I used to get the backup job details, feed those through to a variable that contains all the backup files, and then kick off the Download job.
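A minimal sketch of those steps looks something like this. Start-VBRDownloadBackupFile is the cmdlet named above; the job name is a placeholder, and Get-VBRBackupFile and Start-VBROffloadBackupFile are my assumptions of the supporting cmdlets, so confirm the exact names and parameter sets against the Veeam PowerShell reference linked in the References section:

    # Get the backup job details
    $backup = Get-VBRBackup -Name "SOBR Backup Job"
    # Feed the backup through to a variable containing all of its backup files
    $files = Get-VBRBackupFile -Backup $backup
    # Kick off the SOBR Download job to rehydrate those files on the Performance Tier
    Start-VBRDownloadBackupFile -BackupFile $files
    # Once the Capacity Tier points at the new Object Storage Repository, the
    # offload can be kicked off the same way (the last command referenced below)
    Start-VBROffloadBackupFile -Backup $backup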

The PowerShell window will then show the Download job running.

Completing the Migration:

No matter which way the Download job is initiated, we can see the progress from the Backup & Replication Console under the Jobs section.

And looking at the Disk and Network sections of Windows Resource Monitor we can see connections to Amazon S3 pulling the required blocks of data down.

Once the Download job has been completed and all VBKs have been rehydrated, the next step is to change the configuration of the SOBR Capacity Tier to point at the Object Storage Repository backed by Azure Blob.

The final step is to initiate an offload to the new Capacity Tier via an Offload job…this can be triggered via the console or via PowerShell (as shown in the last command of the PowerShell code above) and, because we already have a set of data that satisfies the conditions for offload (sealed chains and backups outside the operational restore window), data will be dehydrated once again…but this time up to Azure Blob.

The used space shown below in the Azure Blob Object Storage matches the used space initially consumed in Amazon S3. All recovery operations show Restore Points on the Performance Tier and on the Capacity Tier, as dictated by the operational restore window policy.

Conclusion:

As mentioned in the intro, the ability for Veeam customers to have control of their data is an important principle revolving around data portability. With the Cloud Tier we have extended that by allowing you to choose the Object Storage Repository of your choice for cloud-based storage of Veeam backup data…but we have also given you the option to pull that data out and shift it when and where desired. Migrating data between AWS, Azure or any other platform is easily achieved and can be done without too much hassle.

References:

https://helpcenter.veeam.com/docs/backup/powershell/object_storage_data_transfer.html?ver=95u4

Update 4 for Service Providers – Cloud Mobility and External Repository for N2WS

When Veeam Backup & Replication 9.5 Update 4 went Generally Available a couple of weeks ago I posted a What's in it for Service Providers blog. In that post I briefly outlined all the new features and enhancements in Update 4 as they relate to our Veeam Cloud and Service Providers. As mentioned, each new major feature deserves its own separate post. I've covered three features so far, and today I'm going to talk about two features that are more aligned to Managed Service Providers, but that could still have a place in the pure IaaS world.

As a reminder, my earlier What's in it for Service Providers post covers the full list of new features and enhancements in Update 4 for VCSPs.

Cloud Mobility:

The Cloud Mobility feature is actually the new umbrella name for our Restore to functionality. Prior to Update 4 we had the ability to restore to Microsoft Azure only. With the release of Update 4 we have added the ability to restore to Microsoft Azure Stack and Amazon EC2. It's important to point out what Cloud Mobility isn't…it is not a disaster recovery feature set, in that you can't rely on this feature in the same way that Cloud Connect Replication allows you to power on VM replicas on demand for DR.

Though you could configure restore tasks to run on demand via PowerShell commands and have systems in a ready state after recovery, it is difficult to attach an RPO/RTO to the recovery process, and therefore Cloud Mobility should be used for migrations and testing. In essence this is why it is called Cloud Mobility…to give users and Service Providers the flexibility to shift workloads from one platform to another with ease.

Restore to EC2:

The ability to restore directly to EC2 is something that is in demand these days, and the addition of this feature to Update 4 was one of the most highly anticipated. In enabling the restoration of workloads into EC2, we have given our customers and partners the option to recover workloads backed up from any of the platforms Veeam protects.

These backups, once stored in the Veeam backup file format, ensure absolute portability of those workloads. In terms of restoring to EC2, the process is straightforward and can be done via the Backup & Replication console or via PowerShell.

Again, the focus of this feature is to enable migrations and testing. However, when put together with the External Repository, we also close the loop by providing a way to restore EC2 instances that were initially backed up with N2WS Backup & Recovery and archived to an Amazon S3 bucket.

It should also be noted that to perform a recovery, only the most recent restore point can be used.

External Repository:

The External Repository allows you to add an Amazon S3 bucket that contains backups created by N2WS Backup & Recovery for AWS environments. Backup & Recovery for AWS creates backups of the Elastic Block Store (EBS) volumes of EC2 instances. As part of the 2.4 release, these backups were able to be placed directly into Amazon S3 object storage, and this is what is added to the Veeam Backup & Replication console as an External Repository.

Backup & Recovery for AWS uses the Veeam Backup API to preserve the backup structure in the native Veeam format, housed in the Amazon S3 bucket as oVBKs. The External Repository cannot be used as a target for backup or backup copy jobs. Once the External Repository is configured, the N2WS-protected VMs can be manipulated through the Backup & Replication Console as usual. This allows all the restore capabilities, including Restore to EC2, and, more importantly, the ability to run Backup Copy jobs against the backed-up data to enable even longer-term retention outside of Amazon S3.

Wrap Up:

The addition of Restore to EC2, Restore to Azure Stack and the External Repository can be used by managed service providers and service providers to offer true Cloud Mobility to their customers. Also, while a lot of organizations are moving to the Public Cloud…this is not a fait accompli, and they do sometimes want to get workloads out of those platforms and back on-premises or to Service Provider clouds. It shouldn't be a Hotel California situation, and with these new Update 4 features Veeam customers have more choice than ever.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/restore_amazon.html?ver=95u4

https://helpcenter.veeam.com/docs/backup/vsphere/external_repository.html?ver=95u4

Automatic restore of multiple machines from Veeam to AWS

Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking for interactions between a VMware Cloud on AWS SDDC and an Amazon VPC there is a little bit to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, if you want to connect to S3, the default configuration is to go through the Amazon ENI (Elastic Network Interface), which means that unless configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on Networking and Security Groups when working on VMware Cloud on AWS, which are worth a read to get a deeper understanding of VMC to AWS networking.

There is a way to change this behaviour to make connectivity to Amazon S3 connect via the SDDCs Internet Gateway. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low-latency connection between the VPC and the SDDC that also provides free egress. In the case of S3 and the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI the following needs to be actioned.

Create Endpoint:

The first step is to go into the AWS Console, go to the VPC that's connected to the VMC service and create a new Endpoint for S3 as shown below, making sure you select the correct Route Table.
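For those who prefer to script it, the same gateway endpoint can be created via the AWS CLI along these lines (the VPC ID, region and Route Table ID are placeholders for your own values):

    # create a gateway endpoint for S3 and associate it with the VMC-connected route table
    aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-west-2.s3 --route-table-ids rtb-0123456789abcdef0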

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic to the logical network or networks. It's a basic HTTPS Inbound rule where the source is the SDDC network or networks you want access from.
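A CLI sketch of that rule, assuming a Security Group ID and an SDDC network CIDR of your own (both are placeholders here):

    # allow inbound HTTPS (443) from the SDDC logical network
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 192.168.1.0/24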

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks that need access to Amazon S3.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

How to Copy Amazon S3 Buckets with AWS CLI

I am doing some work on validated restore scenarios using the new Veeam Cloud Tier, backed by an Object Storage Repository pointing at an Amazon S3 bucket. So that I am not messing with the live data, I wanted a way to copy and access the objects from another bucket or folder. There is no option at the moment to achieve this via the AWS Console, however it can be done via the AWS CLI.

The first step was to ensure I had the AWS CLI installed on my MBP and that it was at the latest version:
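Something along these lines, as per the AWS CLI install guide for macOS referenced below:

    # check the currently installed version
    aws --version
    # upgrade to the latest release via pip
    pip3 install awscli --upgrade --user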

For the first part of the copy process, I cheated and created a new Bucket from the AWS Console that was based on the one I wanted to copy.

The next step is to make sure that the AWS CLI is configured with the correct AWS Access and Secret Keys. Once done, the command to copy/sync buckets is a simple one.
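The source and destination bucket names below are placeholders for your own:

    # one-off configuration of the Access Key, Secret Key and default region
    aws configure
    # copy/sync all objects from the source bucket to the new bucket
    aws s3 sync s3://source-bucket-name s3://destination-bucket-name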

Obviously the time to complete the operation will depend on the number of objects in the bucket and whether it's cross-region or local. It took about 4 hours to copy across ~50GB of data from US-EAST-2 to US-WEST-2, going at about 4MB/s. By default, progress is shown on the screen.

Once the first pass was complete I ran the same command again, which this time looks for differences between the source and destination and only syncs the differences. You can run the command below against both buckets to view the Total Objects and Total Size for comparison.
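The bucket name is again a placeholder…the summary totals appear at the end of the output:

    # list all objects recursively and summarize Total Objects and Total Size
    aws s3 ls s3://bucket-name --recursive --human-readable --summarize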

That is it! A pretty simple process. I'll blog about the actual reason behind the Veeam Cloud Tier requirement and put this into action at a later date!

References:

https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html

https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket

What Services Providers Need to Think About in 2019 and Beyond…

We are entering interesting times in the cloud space! We should no longer be talking about the cloud as a destination and we shouldn't be talking about how cloud can transform business…those days are over! We have entered the next level of adoption whereby the cloud as a delivery framework has become mainstream. You only have to look at what AWS announced last year at re:Invent with its Outposts offering. The rise of automation and orchestration in mainstream IT has also meant that cloud can be consumed in a more structured and repeatable way.

To that end…where does it leave traditional Service Providers who have for years offered Infrastructure as a Service as the core of their offerings?

Last year I wrote a post on how the VM shouldn't be the base unit of measurement for cloud…and even with some of the happenings since then, I remain convinced that Service Providers can continue to exist and thrive by offering value around the VM construct. Backup and DR as a service remain core to this, and there is ample thirst out there in the market from customers wanting to consume services from cloud providers that are not the giant hyper-scalers.

Almost all technology vendors are succumbing to the reality that they need to extend their own offerings to include public cloud services. It is what the market is demanding…and it's what the likes of AWS, Azure, IBM and GCP are pushing for. The backup vendor space especially has had to extend technologies to consume public cloud services such as Amazon S3, Glacier or Azure Blob as targets for offsite backups. Veeam is upping the ante with our Update 4 release of Veeam Backup & Replication 9.5, which includes the Cloud Tier to object storage and additional Direct Restore capabilities to Azure Stack and Amazon EC2.

With these additional public cloud features, Service Providers have a right to feel somewhat under threat. However we have seen this before (Office 365 for Hosted Exchange as an example) and the direction that Service Providers need to take is to continue to develop offerings based on vendor technologies and continue to add value to the relationship that they have with their clients. I wrote a long time ago when VMware first announced vCloud Air that people tend to buy based on relationship…and there is no more trusted relationship than that of the Service Provider.

With that, there is no doubting that clients will want to look at using a combination of services from a number of different providers. From where I stand, the days of clients going all in with one provider for all services are gone. This is an opportunity for Service Providers to be the broker. This isn't a new concept, and plenty of Service Providers have thought about how they themselves leverage the Public Cloud to not only augment their own backend services, but make them consumable for their clients via their own portals or systems.

With all that in mind…in my opinion, there are five main areas where Service Providers need to be looking in 2019 and beyond:

  1. Networking is central to this, and the most successful Service Providers have already worked this out and offer a number of different networking services. It's imperative that Service Providers offer a way for clients to go beyond their own networks and have the option to connect out to other cloud networks. Telcos and other carriers have built amazing technology frameworks based on APIs to consume networking in ways that mean extending a network shouldn't be thought of as a complex undertaking anymore.
  2. Backup, Replication and Recovery is something that Service Providers have offered for a long time now, however there is more and more competition in this area today in the form of built-in protection at the application and hardware level. Where providers have traditionally excelled is at the VM level. Again, that will remain the base unit of measurement for cloud moving forward, but Service Providers need to enhance their BaaS and R/DRaaS offerings for them to remain competitive. Leveraging public cloud to gain economies of scale is one way to enhance those offerings.
  3. Gateway Services are a great way to lock in customers. Gateway services are typically those which are low effort for both the Service Provider and client alike. Take the example of Veeam's Cloud Connect Backup: it's a simple service to set up at both ends and works without too much hassle…but there is power for the Service Provider in the data that's being transferred into their network. From there, auxiliary services can be offered such as recovery or other business continuity services. It also leads into discussions about Replication services, which can be worked into the total service offering as well.
  4. Managed Services is the one thing that the hyper-scalers can't match Service Providers on, and it's the one thing that will keep all Service Providers relevant. I've mentioned already the trusted advisor thought process in the sales cycle. This is all about continuing to offer value around great vendor technologies, with the aim of securing the Service Provider to client relationship.
  5. Developing a Channel is central to being able to scale without the need to add resources to the business. Again, the most successful Service Providers all have a Channel/Partner program in place and it's the best way to extend that managed service, trusted provider reach. I've seen a number of providers unable to execute on a successful channel play due to poor execution, however if done right it's one way to extend that reach to more clients…staying relevant in the wake of the hyper-scalers.

This isn’t a new Differentiate or Die!? message…it’s one of ensuring that Service Providers continue to evolve with the market and with industry expectation. That is the only way to thrive and survive!

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed…the biggest industry shift to come out of the event from my point of view was the fact that AWS are going full guns blazing into the on-premises world. With the announcement of AWS Outposts the long held belief that the public cloud is the panacea of all things became blurred. No one company has pushed such a hard cloud only message as AWS…no one company had the power to change the definition of what it is to run cloud services…AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now, however they have not executed on the promise of it yet. Azure Stack is seen by many as a white elephant, even though it's now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point though is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and on-premises platform was the path to hybridity.

AWS is a Juggernaut and it’s my belief that they now have reached an inflection point in mindshare and can now dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world. This resulted in VMware Cloud on AWS. It seems like AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware have and still dominate…at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It's catchy and you can straight away make sense of what it is…however, I decided to look up the official meaning of the word…and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition as per the Oxford Dictionary fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate from AWS regions, while also offering the ability to consume that compute platform as if it were an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market created by shifting IT sentiment.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble, however AWS is not going anywhere fast. This move does however indicate a shift in tactics and means that AWS can penetrate the on-premises market quicker to extend their empire.

The third definition is also pertinent in context to what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However my feeling is that they are not guarding against an attack…they are the attack!

Where does VMware fit in all this?

Given my thoughts above…where does VMware fit into all this? At first, when the announcement was made on stage, I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a platform directly competitive with on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering…though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud in AWS Outposts is VMware’s new As-a-Service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don't care about where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice…the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mind shift

The big shift here from VMware that began with VMware Cloud on AWS is a shift towards managed services. A fundamental change in the mindset of the customer in the way in which they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up…for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises as well. Overall, Outposts is a win-win for both AWS and VMware…however the proof will be in the execution and uptake. We won't know how it all pans out until the product becomes available…apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

AWS re:Invent 2018 – Veeam and N2WS Recap and Thoughts

There was so much to take away from AWS re:Invent last week. In my opinion, having attended a lot of industry events over the past ten or so years, this year's re:Invent has left the industry with a lot to think about! AWS vigorously defended their position as the number one Public Cloud destination (in their eyes) while trying to lay a path for future growth by expanding into the true enterprise space. Also, the announcement of Outposts set a path to try and dominate the hybrid world with an on-premises offering.

Instead of writing down my extended thoughts, it's more consumable to hear Rick Vanover and myself talk about the event from a Veeam perspective in the short embedded video below. I've also embedded a video with David Hill and Sebastian Straub covering things from an N2WS perspective, as well as talking about the N2WS-related announcements at re:Invent 2018.

I’ve also posted the Veeam session video here:

AWS re:Invent 2018 Recap – Times…they a̶r̶e̶ have a̶ Changi̶n̶g̶ed!

I wrote this sitting in the Qantas Lounge in Melbourne, waiting for the last leg back to Perth after spending the week in Las Vegas at AWS re:Invent 2018. I had fifteen hours on the LAX to MEL leg and, before that flight took off, I struck up a conversation (something I never usually do on flights) with the guy in the seat next to me. He noticed my 2017 AWS re:Invent jumper (which is 100x better than the 2018 version) and asked me if I had attended re:Invent.

It ended up that he worked for a San Francisco based company that wrote middleware integration for Salesforce. After a little bit of small talk, we got into some deep technical discussions about the announcements and around what we did in our day to day roles. Though I shouldn’t have been surprised, just as I had never heard of his company, he had never heard of Veeam…ironically he was from Russia and now working in Melbourne.

The fact he hadn’t heard of Veeam in its self wasn’t the most surprising part…it was the fact that he claimed to be a DevOps engineer. But had never touched any piece of VMware software or virtualisation infrastructure. His day to day was exclusively working with AWS web technologies. He wasn’t young…maybe early 40s…this to me seemed strange in itself.

He worked exclusively with APIs using AWS API Gateway, CloudFormation and other technologies, but also used Nginx for reverse proxy purposes. That got me thinking that the web application developers of today are far, far different to those I used to work with in the early 2000s and 2010s. I come from the world of LAMP and .NET application platforms…I stopped working on web and hosting technologies around the time Nginx was becoming popular.

I can still hold a conversation (and we did have a great exchange around how he DevOp'ed his applications) about the base frameworks of applications and the components that go into making a web application work…but they are very, very different from the web applications I used to architect and support on Windows and Linux.

All In on AWS!

The other interesting thing from the conversation was that his Technical Director mandates the exclusive use of AWS services. Nothing outside of the service catalog on the AWS Console. That to me was amazing in itself. I started to talk to him about automation and orchestration tools and I mentioned that I'd been using Terraform of late…he had never used it himself. He asked me about it and in this case I was the one telling him how it worked! That at least made me feel somewhat not totally dated and past it!

My takeaway from the conversation, plus what I experienced at re:Invent, was that there is a strong, established sector of the IT industry that AWS has created, nurtured and is now helping to flourish. This isn't a change or die message…this is simply my own realisation that the times have changed, and as a technologist in the industry I owe it to myself to make sure I am aware of how AWS has shifted web and application development from what I (and, I assume, the majority of those reading this post) perceive to be mainstream.

That said, just as a hybrid approach to infrastructure has solidified as the accepted hosting model for applications, so too will the application world remain a combination of the old and new. The biggest difference is that, more than ever…these worlds are colliding…and that is something that shouldn't be ignored!

Veeam’s AWS re:Invent 2018 Session Posted

This week, David Hill and I presented at AWS re:Invent 2018 on what Veeam is offering by way of data protection and availability for native AWS workloads and VMware Cloud on AWS workloads, and how we are leveraging AWS technologies to offer new features in the upcoming Update 4 release of Backup & Replication 9.5.

For those who were not at AWS re:Invent this week, or who could not attend the session on Wednesday, the video recording has been posted on the official AWS YouTube page.

We had some audio issues at the start which made for some interesting banter between David and myself…but once we got into it we talked about the following:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I wanted to highlight the Cloud Tier section, where I give an overview and quick deep dive into the smarts behind the new repository feature coming in Update 4. The live demo of me using our patented Instant VM Recovery feature to bring up a VM with data residing in Amazon S3 is a great example of the power of this upcoming feature. Not only does it allow local storage efficiencies by offloading old data to Object Storage for long-term retention, but it is also intelligent enough to recover quickly and efficiently with its Intelligent Block Recovery.

Veeam at AWS re:Invent 2018

AWS re:Invent 2018 is happening next week, and for the first time Veeam is at the event in a big way! Last year we effectively tested the waters with a small booth, no main session and without the usual event presence that you would expect of Veeam at a VMworld or Microsoft Ignite. This year is a little different: we will be there as Diamond Sponsors of the event and with a lot to share in regards to how Veeam is leveraging AWS technologies to enhance our availability messaging.

We bolstered our native AWS capabilities earlier this year with the acquisition of N2WS, who were already a leader in the protection of AWS workloads, and with the upcoming release of Backup & Replication 9.5 Update 4 we will be further enhancing our ability to not only back up AWS workloads, but also leverage AWS technologies such as S3 to facilitate a change in mindset as to what it is to have a local backup repository. We will also be talking about migration into AWS and how we are the best data protection choice for VMware Cloud on AWS.

Breakout Session:

At the event we will have a breakout session which David Hill and I will be presenting. This will be on Wednesday at 5:30pm in the Aria Casino and we are looking forward to deep diving into what's coming in Update 4, as well as showing off what's coming in the next release of N2WS as we start to jointly develop solutions between the two companies.

STG206-S – A Deeper Look at How Veeam is Evolving Availability on AWS

Wednesday, Nov 28, 5:30 PM – 6:30 PM – Aria East, Level 1, Joshua 6

Veeam has made significant enhancements to its platform, focusing on the availability of AWS workloads over the past year. Join this technical deep dive where representatives from Veeam demonstrate how the company protects cloud-native workloads on AWS as well as how they back up to and from on-premises environments. They also discuss data protection for VMware Cloud on AWS. Finally, they review the enhancements to Veeam’s Backup and Replication feature set, which now includes cloud mobility to AWS and a cloud archive that leverages Amazon S3 for long-term data retention of backed-up workloads.

In terms of the technologies and solutions that we will be diving into and showing off via some live demos…we will be looking at:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I will also be giving a booth presentation at the Cloudcheckr booth on Tuesday at 10am, which will effectively be a slimmed-down version of the main session happening on the Wednesday.

Booth and Show Floor:

As mentioned, this year we will have a significant presence on the show floor, with two areas to come and see Veeam technologies as well as chat to us about how we are protecting and leveraging AWS and AWS workloads. On the main show floor we will be at booth #1011, which is well positioned next to the GitHub booth, and we will also have a second location at the Mirage called the Data Protection Lounge, which will be a place to relax, enjoy a snack and engage in technical discussions with our experts…including myself!

Social Events:

This year we are jointly sponsoring a location for the re:Invent Pub Crawl, which is happening on Tuesday night. Details are below.

Pub Crawl – Veeam | N2WS and VMware
Date & Time: Tuesday, November 27, 6pm – 8pm
Location: Mercato della Pescheria – The Venetian Shoppes

Wrapping Up:

I’m looking forward to the event and, being more than a spectator this year, I’m expecting big things from it. Make sure you come visit us at our booth or at the lounge to check out what has been brewing in Veeam and N2WS R&D over the past twelve months…and don’t forget to attend the session on Wednesday afternoon. I’m excited about some of the new features we will release as part of Update 4…and this session is a chance to see them working and get an understanding of what they will deliver.

If you would like to schedule a meeting with myself or any other member of the Veeam Product Strategy team attending, please reach out.
