Tag Archives: Cloud

Cloud Field Day 5 – Recap and Videos #CFD5

Last week I had the pleasure of presenting at Cloud Field Day 5 (a Tech Field Day event). Joined by Michael Cade and David Hill, we took the delegates through Veeam’s cloud vision by showcasing current products and features in the Veeam platform, including specific technology that both leverages and protects Public Cloud workloads and services. We also touched on where Veeam is at in terms of market success and dug into how Veeam enables Service Providers to build services off our Cloud Connect technology.

First off, I would like to thank Stephen Foskett and the guys at Gestalt IT for putting together the event. Believe me, there is a lot that goes on behind the scenes, and it is impressive how the team is able to set up, tear down and set up again in different venues while handling the delegates themselves. Thanks also to all the delegates; it was extremely valuable being able to not only present to the group, but also have a chance to talk shop at the official reception dinner…some great thought-provoking conversations were had and I look forward to seeing where your IT journey takes you all next!

Getting back to the recap, I’ve pasted in the YouTube links to the Veeam session below. Michael Cade has a great recap here, where he gives his overview of what was presented and some thoughts about the event.

In terms of what was covered:

We tried to focus on core features relating to cloud and then show a relatable live demo to reinforce the slide decks. No smoke and mirrors when the Veeam Product Strategy Team is doing demos… they are always live!

For those who might not be up to speed with what Veeam has done over the past couple of years, it is a great opportunity to learn how we have been innovating in the Data Protection space, while also looking at the progress we have made in recent times transitioning to a true software-defined, hardware-agnostic platform that offers customers absolute choice. We like to say that Veeam was born in the virtual world…but is evolving in the Cloud!

Summary:

Once again, being part of Cloud Field Day 5 was a fantastic experience, and the team executed the event well. In terms of what Veeam set out to achieve, Michael, David and I were happy with what we were able to present and demo, and with the level of questions being asked by the delegates. We are looking forward to attending Tech Field Day 20 later in the year where, as well as continuing to show what Veeam can do today, we might also take a look at where we are going in future releases!

References:

Cloud Field Day 5

Quick Look – New Cloud Credentials Manager in Update 4

With the release of Update 4 for Veeam Backup & Replication 9.5 we further extended our overall cloud capabilities by adding a number of new features and enhancements that focus on tenants being able to leverage Veeam Cloud and Service Providers as well as Public Cloud services. With the addition of Cloud Mobility, the External Repository and Cloud Connect Replication supporting vCloud Director, we decided to break out the existing credentials manager and create a new manager dedicated to the configuration and management of cloud-specific credentials.

The manager can be accessed by clicking on the top left dropdown menu from the Backup & Replication Console and then choosing Manage Cloud Credentials.

You can use the Cloud Credentials Manager to create and manage all credentials that you plan to use to connect to cloud services.

The following types of credentials can be configured and managed:

  • Veeam Cloud Connect (Backup and Replication for both Hardware Plans and vCD)
  • Amazon AWS (Storage and Compute)
  • Microsoft Azure Storage (Azure Blob)
  • Microsoft Azure Compute (Azure and Azure Stack)

The Cloud Connect credentials are straightforward in terms of what they are used for. There is even a way for non-vCloud Director authenticated tenants to change their own default passwords directly.

When it comes to AWS and Azure credentials the manager will allow you to configure accounts that can be used with Object Storage Repositories, Restore to AWS (new in Update 4), Restore to Azure and Restore to Azure Stack (new in Update 4).

PowerShell is still an Option:

For those that would like to configure these accounts outside of the Backup & Replication Console, there is a full complement of PowerShell commands available via the Veeam PowerShell Snap-in.

As an example, as part of my Configure-Veeam GitHub Project I have a section that configures a new Scale Out Backup Repository with an Object Storage Repository Capacity Tier backed by Amazon S3. The initial part of that code is to create a new Amazon Storage Account.
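If you want to script that initial step yourself, here is a minimal sketch of what it looks like, assuming the Update 4 Amazon account cmdlets (Add-VBRAmazonAccount / Get-VBRAmazonAccount); the description text and key prompts are placeholders rather than the exact code from the project:

```powershell
# Minimal sketch: register an Amazon storage account in the Cloud Credentials
# Manager so it can later be referenced by the Object Storage Repository and
# Scale Out Backup Repository steps. Assumes B&R 9.5 U4 and its PowerShell snap-in.
Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server "localhost"

$accessKey = Read-Host "AWS Access Key"
$secretKey = Read-Host "AWS Secret Key"
Add-VBRAmazonAccount -AccessKey $accessKey -SecretKey $secretKey -Description "S3 Capacity Tier account"

# The stored credential can then be retrieved for the follow-on repository cmdlets
$account = Get-VBRAmazonAccount | Where-Object { $_.Description -eq "S3 Capacity Tier account" }
```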

For a full list of PowerShell capabilities related to this, click here.

So there you go…a very quick look at another new enhancement in Update 4 for Backup & Replication 9.5 that might have gone under the radar.

References:

https://helpcenter.veeam.com/docs/backup/vsphere/cloud_credentials.html?ver=95u4

What Services Providers Need to Think About in 2019 and Beyond…

We are entering interesting times in the cloud space! We should no longer be talking about the cloud as a destination and we shouldn’t be talking about how cloud can transform business…those days are over! We have entered the next level of adoption, whereby the cloud as a delivery framework has become mainstream. You only have to look at what AWS announced last year at re:Invent with its Outposts offering. The rise of automation and orchestration in mainstream IT has also meant that cloud can be consumed in a more structured and repeatable way.

To that end…where does that leave traditional Service Providers who have for years offered Infrastructure as a Service as the core of their offerings?

Last year I wrote a post on how the VM shouldn’t be the base unit of measurement for cloud…and even with some of the happenings since then, I remain convinced that Service Providers can continue to exist and thrive by offering value around the VM construct. Backup and DR as a service remain core to this, however, and there is ample appetite out there in the market from customers wanting to consume services from cloud providers that are not the giant hyper-scalers.

Almost all technology vendors are succumbing to the reality that they need to extend their own offerings to include public cloud services. It is what the market is demanding…and it’s what the likes of AWS, Azure, IBM and GCP are pushing for. The backup vendor space especially has had to extend technologies to consume public cloud services such as Amazon S3, Glacier or Azure Blob as targets for offsite backups. Veeam is upping the ante with our Update 4 release of Veeam Backup & Replication 9.5, which includes Cloud Tier to object storage and additional Direct Restore capabilities to Azure Stack and Amazon EC2.

With these additional public cloud features, Service Providers have a right to feel somewhat under threat. However, we have seen this before (Office 365 for Hosted Exchange as an example) and the direction that Service Providers need to take is to continue to develop offerings based on vendor technologies and continue to add value to the relationship that they have with their clients. I wrote a long time ago, when VMware first announced vCloud Air, that people tend to buy based on relationships…and there is no more trusted relationship than that of the Service Provider.

With that, there is no doubting that clients will want to look at using a combination of services from a number of different providers. From where I stand, the days of clients going all in with one provider for all services are gone. This is an opportunity for Service Providers to be the broker. This isn’t a new concept, and plenty of Service Providers have thought about how they themselves leverage the Public Cloud to not only augment their own backend services, but make them consumable for their clients via their own portals or systems.

With all that in mind…in my opinion, there are five main areas where Service Providers need to be looking in 2019 and beyond:

  1. Networking is central to this, and the most successful Service Providers have already worked this out and offer a number of different networking services. It’s imperative that Service Providers offer a way for clients to go beyond their own networks and have the option to connect out to other cloud networks. Telcos and other carriers have built amazing technology frameworks based on APIs to consume networking in ways that mean extending a network shouldn’t be thought of as a complex undertaking anymore.
  2. Backup, Replication and Recovery is something that Service Providers have offered for a long time now; however, there is more and more competition in this area today in the form of built-in protection at the application and hardware level. Where providers have traditionally excelled is at the VM level. Again, that will remain the base unit of measurement for cloud moving forward, but Service Providers need to enhance their BaaS and R/DRaaS offerings for them to remain competitive. Leveraging public cloud to gain economies of scale is one way to enhance those offerings.
  3. Gateway Services are a great way to lock in customers. Gateway services are typically those which are low effort for both the Service Provider and client alike. Take the example of Veeam’s Cloud Connect Backup. It’s a simple service to set up at both ends and works without too much hassle…but there is power for the Service Provider in the data that’s being transferred into their network. From there, auxiliary services can be offered such as recovery or other business continuity services. It also leads into discussions about Replication services, which can be worked into the total service offering as well.
  4. Managed Services are the one thing that the hyper-scalers can’t match Service Providers on, and the one thing that will keep all Service Providers relevant. I’ve mentioned already the trusted advisor thought process in the sales cycle. This is all about continuing to offer value around great vendor technologies in a way that secures the Service Provider-to-client relationship.
  5. Developing a Channel is central to being able to scale without the need to add resources to the business. Again, the most successful Service Providers all have a Channel/Partner program in place, and it’s the best way to extend that managed service, trusted provider reach. I’ve seen a number of providers unable to deliver a successful channel play due to poor execution; however, if done right it’s one way to extend that reach to more clients…staying relevant in the wake of the hyper-scalers.

This isn’t a new "Differentiate or Die!" message…it’s one of ensuring that Service Providers continue to evolve with the market and with industry expectations. That is the only way to thrive and survive!

Hybrid World… Why IBM buying RedHat makes sense!

As Red October came to a close…at a time when US tech stocks were taking their biggest battering in a long time, the news came out over the weekend that IBM had acquired RedHat for 34 billion dollars! This seems to have taken the tech world by surprise…the all-cash deal represents a massive 63% premium on the previous close of RedHat’s stock price…all in all it seems ludicrous.

Most people I’ve talked to about it, along with comments on social media and blog sites, suggest that the deal is horrible for the industry…but I feel this is more a reaction to IBM than anything. IBM has a reputation for swallowing up companies whole and spitting them out the other side of the merger process a shell of what they once were. There has also been a lot of empathy for the employees of RedHat, especially from ex-IBM employees who have experience inside the Big Blue machine.

I’m no expert on M&A and I don’t pretend to understand the mechanics behind the deal and what is involved…but when I look at what RedHat has in its stable, I can see why IBM have made such an aggressive play for them. On the surface it seems like IBM are in trouble with their stock price and market capitalization falling nearly 20% this year and more than 30% in the last five years…they had to make a big move!

IBM’s previous 2013 acquisition of SoftLayer (for a measly 2 billion USD) helped them remain competitive in the Infrastructure as a Service space and, if you believe the stories, they have done very well out of integrating the SoftLayer platform into what was Bluemix and is now IBM Cloud. This 2013 Forbes article on the acquisition sheds some light as to why this RedHat acquisition makes sense and is true to form for IBM.

IBM sees the shift of big companies moving to the cloud as a 20-year trend…

That was five years ago…and since then a lot has happened in the Cloud world. Hybrid cloud is now the accepted route to market with a mix of on-premises, IaaS and PaaS hosted and hyper-scale public cloud services being the norm. There is no one cloud to rule them all! And even though AWS and Azure continue to dominate and be front of mind there is still a lot of choice out there when it comes to how companies want to consume their cloud services.

Looking at RedHat’s stable and taking away the obvious Linux distros that are both enterprise and open source, the real sweet spot of the deal lies in RedHat’s products that contribute to hybrid cloud.

I’ve heard a lot more noise of late about RedHat OpenStack becoming the platform of choice as companies look to transform away from more traditional VMware/Hyper-V based platforms. RedHat OpenShift is also being considered as an enterprise-ready platform for containerization of workloads. Some sectors of the industry (government and universities) have already decided on their move to platforms that are backed by RedHat…the one thing I would comment here is that there was an upside to that which might now be clouded by IBM being in the mix.

Rounding out the stable, RedHat have a Cloud Suite which encompasses most of the products mentioned above: CloudForms for Infrastructure as Code, Ansible for orchestration, and RedHat Virtualization together with OpenStack and OpenShift…it’s a decent proposition!

Put all that together with the current services of IBM Cloud and you start to have a compelling portfolio covering almost all desired aspects of hybrid and multi-cloud service offerings. If the acquisition of SoftLayer was the start of a 20-year trend, then IBM are trying to keep themselves positioned ahead of the curve and very much in step with the next evolution of that trend. That isn’t to say that they are not playing catchup with the likes of VMware, Microsoft, Amazon, Google and the like, but I truly believe that if they don’t butcher this deal they will come out a lot stronger and, more importantly, offer valid competition in the market…that can only be a good thing!

As for what it means for RedHat itself, their employees and culture…that I don’t know.

References:

https://www.redhat.com/en/about/press-releases/ibm-acquire-red-hat-completely-changing-cloud-landscape-and-becoming-world%E2%80%99s-1-hybrid-cloud-provider

IBM sees the shift of big companies moving to the cloud as a 20-year trend

First Look – Zenko, Multi-Platform Data Replication and Management

A couple of weeks ago I stumbled upon Zenko via a LinkedIn post. I was interested in what it had to offer and decided to go and have a deeper look. With Veeam launching our vision to be the leader of intelligent data management at VeeamON this year, I have been on the lookout for solutions that do smart things with data and address the need to control the accelerated spread and sprawl of that data. Zenko looks to be on the right track with its notion of freedom to avoid being locked into a specific cloud platform, whether it’s private or public.

Having come from service provider land, I have always been against the idea of a Hyper-Scaler Public Cloud monopoly that forces lock-in and diminishes choice. Because of that, I gravitated to Zenko’s mission statement:

We believe that everyone should be in control of their data. Zenko’s mission is to allow everyone to be in control of their data, while leveraging the efficiency of private and public clouds.

This platform looks to provide data mobility across multiple cloud platforms through common communication protocols and by sharing a common set of APIs to manage its data sets. Zenko is focused on achieving this multi-cloud capability through a unified AWS S3 API based service, with data management and federated search capabilities driving its use cases. Data mobility between clouds, whether private or public cloud services, is what Zenko is aimed at.

Zenko Orbit:

Zenko Orbit is the cloud portal for data placement, workflows and global search. Aimed at application developers and "DevOps", the premise of Zenko Orbit is that they can spend less time learning multiple interfaces for different clouds while leveraging the power of cloud storage and data management services, without needing to be an expert across different platforms.

Orbit provides an easy way to create replication workflows between different cloud storage platforms…whether it be Amazon S3, Azure Blob, GCP Storage or others. You then have the ability to search across a global namespace for system and user-defined metadata.

Quick Walkthrough:

Given this is open source, you have the option to download and install a Zenko instance which will then be registered against the Orbit cloud portal, or you can pull the whole stack from GitHub. They also have a hosted sandbox instance that can be used to take the system for a test drive.

Once done, you are presented with a Dashboard that gives you an overview of the amount of data and other metrics contained in your instance. Looking at the Settings area, you are given details about the instance, account details and endpoints to use to connect up into it. They also offer the ability to download pre-generated Cyberduck profiles.

You need to create a storage management account to be able to browse your buckets in the Orbit portal.

Once that’s been done you can create a bucket and select a location which in the sandbox defaults to AWS us-east-1.

From here, you can add a new storage location and configure the replication policy. For this, I created a new Azure Blob Storage account as shown below.

From the Orbit menu, I then added a New Storage Location.

Once the location has been added you can configure the bucket replication. This is the cool part that is the premise of the platform: being able to set up policies to replicate data across multiple cloud platforms. In the sandbox, the policy is one way only, meaning there is no bi-directional replication. Simply select the source and destination and the bucket from the menu.

Once that has been done you can connect to the endpoint and upload files. I tested this out with the setup above and it worked as advertised. Using the Cyberduck profile I connected in, uploaded some files and monitored the Azure Blob storage end for the files to replicate.
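Cyberduck is the quickest way to test, but because Orbit exposes a standard S3 endpoint you can script the same upload with any S3 tooling. Below is a minimal sketch using the AWS Tools for PowerShell module pointed at the Zenko endpoint; the endpoint URL, keys and bucket name are placeholders taken from your own instance’s Settings page:

```powershell
# Minimal sketch: upload an object to a Zenko bucket over its S3-compatible
# endpoint and list the objects back. Replication to the Azure Blob target then
# happens asynchronously according to the bucket replication policy.
Import-Module AWSPowerShell

Set-AWSCredential -AccessKey "ZENKO-ACCESS-KEY" -SecretKey "ZENKO-SECRET-KEY" -StoreAs ZenkoSandbox

Write-S3Object -BucketName "my-replicated-bucket" `
               -File "C:\temp\test-file.iso" `
               -Key "test-file.iso" `
               -EndpointUrl "https://<your-zenko-endpoint>" `
               -ProfileName ZenkoSandbox

# Confirm the object landed on the Zenko side
Get-S3Object -BucketName "my-replicated-bucket" -EndpointUrl "https://<your-zenko-endpoint>" -ProfileName ZenkoSandbox
```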

Conclusion: 

While you could say that Zenko feels like DFS-R for the multi-platform storage world, the solution has impressed me. Many would know that it’s not easy to orchestrate the replication of data between different platforms. They are also talking up their capabilities around extensibility of the platform as it relates to data management, backend storage plugins and search.

I think about this sort of technology and how it could be extended to cloud-based backups. Customers could have the option to tier into cheaper cloud-based storage and then further protect that data by replicating it to another cloud platform, which could be cheaper yet. This could achieve added resiliency while offering cost benefits. However, there is also the risk that the more spread out the data is, the harder it is to control. That’s where intelligent data management comes into play…interesting times!

References:

Zenko Orbit – Multi-Cloud Data Management Simplified

 

Quick Look – Backing up AWS Workloads with Cloud Protection Manager from N2WS

Earlier this year Veeam acquired N2WS, following the announcement of a technology partnership at VeeamON 2017. The more I tinker with Cloud Protection Manager, the more I understand why we made the acquisition. N2WS was founded in 2012, with their first product shipping in 2013. The product is purpose-built for AWS, supporting all types of EC2 instances, EBS volumes, RDS, DynamoDB & Redshift and AMI creation, and is distributed as an AMI through the AWS Marketplace. It is easy to deploy and has extended its feature set with the release of 2.3d, announced during VeeamON 2018 a couple of weeks ago.

From the datasheet:

Cloud Protection Manager (CPM) is an enterprise-class backup, recovery, and disaster recovery solution purpose-built for Amazon Web Services EC2 environments. CPM enhances AWS data protection with automated and flexible backup policies, application consistent backups, 1-click instant recovery, and disaster recovery to other AWS region or AWS accounts ensuring cloud resiliency for the largest production AWS environment. By extending and enhancing native AWS capabilities, CPM protects the valuable data and mission-critical applications in the AWS cloud.

In this post, I wanted to show how easy it is to deploy and install Cloud Protection Manager as well as look at some of the new features in the 2.3d release. I will do a follow up post going into more detail about how to protect AWS Instances and services with CPM.

What’s new with CPM 2.3:

  • Automated backup for Amazon DynamoDB: CPM now provides backup and recovery for Amazon DynamoDB; you can apply existing policies and schedules to back up and restore DynamoDB tables and metadata.
  • RESTful API:  Completely automate backup and recovery operations with the new Cloud Protection Manager API. This feature provides seamless integration between CPM and other applications.
  • Enhanced reporting features: Enhancements include the ability to gather all reports in one tab, run as a CSV, view both protected and unprotected resources and include new filtering options as well.

Other new features that come as part of the CPM 2.3 release include full cross-region and cross-account disaster recovery for Aurora databases, enhanced permissions for users, and a fast and efficient onboarding process using CloudFormation’s 1-click template.

Installing, Configuring and Managing CPM:

The process to install Cloud Protection Manager from the AWS Marketplace is seamless and can be done via a couple of different methods, including a 1-Click deployment. The official install guide can be read here. The CPM EC2 instance is deployed into a new or existing VPC, configured with a subnet, and must be placed into a new or existing Security Group.

Once deployed you are given the details of the installation.

And you can see it from the AWS Console under the EC2 instances. I’ve added a name for the instance just for clarity’s sake.

One thing to note is that there is no public IP assigned to the instance as part of the deployment. You can create a new Elastic IP and attach it to the instance, or you can access the configuration website via its internal IP if you have access to the subnet via some form of VPN or network extension.
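If you do want to front the instance with an Elastic IP, the sketch below shows one way to do it using the AWS Tools for PowerShell; the instance ID and region are placeholders for your own deployment:

```powershell
# Minimal sketch: allocate an Elastic IP and associate it with the CPM instance
# so the configuration website is reachable publicly. Instance ID and region
# are placeholders.
Import-Module AWSPowerShell

$eip = New-EC2Address -Domain vpc -Region ap-southeast-2
Register-EC2Address -InstanceId "i-0123456789abcdef0" -AllocationId $eip.AllocationId -Region ap-southeast-2

# The public IP to browse to for the initial configuration wizard
$eip.PublicIp
```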

There is an initial configuration wizard that guides you through the registration and setup of CPM. Note that you do need internet connectivity to complete the process otherwise you will get this error.

The final step will allow you to configure a volume for CPM use. With that the wizard finalises the setup and you can log into the Cloud Protection Manager.

Conclusion: 

The ability to back up AWS services natively has its advantages over traditional methods such as agents. Cloud Protection Manager from N2WS can be installed and ready to go within 5 minutes. In the next post, I’ll walk through the CPM interface and show how you back up and recover AWS instances and services.

References:

https://n2ws.com/cpm-install-guide

https://support.n2ws.com/portal/kb/articles/release-notes-for-the-latest-v2-3-x-cpm-release

Public Cloud and Infrastructure as Code…The Good and the Bad all in One Day!

I’m ok admitting that I am still learning as I progress through my career, and I’m ok admitting when things go wrong. Learning from mistakes is a crucial part of learning…and I learnt a harsh lesson today! Infrastructure as Code is as dangerous as it is awesome…and the public cloud is an unforgiving place!

Earlier today I created a new GitHub repository for a project I’ve been working on. Before I realised my mistake, I had uploaded a Terraform variables file containing my AWS Access and Secret Key. I picked up on this probably two minutes after I pushed the contents up to the public repository. Roughly five minutes later I deleted the repository and was about to start fresh without the credentials, but then realised that my Terraform plan was failing with a credential error.

I logged into the AWS Console and saw that my main VPC and EC2 instances had been terminated and that there were 20 new instances in their place. I knew exactly at that point what had happened! I’d been compromised and I had handed over the keys on a silver web scraper platter.

My access key had been deleted and new ones created, along with VPCs and Key Pairs in every single AWS region across the world. I deleted the new access key the malicious user created, locking them out from doing any more damage; however, in the space of ten minutes 240 EC2 instances in total were spun up. This was a little more than the twenty I thought I had dealt with initially…costing only $4.50…amazing!

I contacted AWS support and let them know what happened. To their credit (and to my surprise) I had a call back within a few hours. Meanwhile, they automatically restricted my account until I had satisfied a series of clean-up steps so as to limit any more potential damage. The billing will be reversed as well, so I am a little less in a panic when I see my current month’s breakdown.

The Bad Side of Infrastructure as Code and Public Cloud:

This example shows how dangerous the world we are living in can be. With AWS and the like providing brilliant API access into their provisioning platforms, malicious users have seen an opportunity to use Infrastructure as Code as a way to spin up cloud resources in a matter of seconds. All they need is an in. And in my case, that in was a moment of stupidity…and even though I realised what I had done, all it took was less than five minutes for them to take advantage of my lack of concentration and exploit my security lapse. They also exploited the fact that I am new to this space and had not learnt best practice for storing credentials.
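For anyone wanting to avoid the same mistake, one simple approach is sketched below: keep the keys out of the .tfvars file entirely and let the Terraform AWS provider pick them up from environment variables (or the shared credentials file) instead, with a .gitignore entry as an extra safety net. The prompts and paths are placeholders:

```powershell
# Minimal sketch: keep AWS keys out of Terraform variable files. The Terraform
# AWS provider reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment (or ~/.aws/credentials), so nothing sensitive lands in the repo.
$env:AWS_ACCESS_KEY_ID     = Read-Host "AWS Access Key"
$env:AWS_SECRET_ACCESS_KEY = Read-Host "AWS Secret Key"

# Belt and braces: make sure variable files can never be committed by accident
Add-Content -Path .gitignore -Value "*.tfvars"

# Terraform now authenticates without any keys in the plan files
terraform plan
```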

I was lucky that everything I had in AWS was just there for demo purposes and I had nothing of real importance there. However, if this had happened to someone running business critical applications they would be in for a very, very bad day. Everything was wiped! Even the backup software I had running in there using local snapshots…as ever, a case for offsite copies if there was one! (Ergo – Veeam Agents and N2WS)

The Good Side of Infrastructure as Code and Public Cloud:

What good could come of this? Well, apart from learning a little more about Terraform and how to store credentials, the awesome part was that, thanks to all the work I had put in over the past couple of weeks getting a start with Infrastructure as Code and Terraform, I was able to reprovision everything that I lost within 5 minutes…once my account restriction was lifted.

That’s the power of APIs and the applications that take advantage of them. And even though I copped a slap in the face today…I’m converted. This stuff is cool! We just need to be aware of the dangers that come with it, and the fact that the coolness can be used and exploited in the wrong way as well.

VMware Cloud Briefing Roundup – VMware Cloud on AWS and other Updates

VMware has held its first ever VMware Cloud Briefing today. This is an online, global event with an agenda featuring a keynote from Pat Gelsinger, new announcements and demos relating to VMware Cloud, as well as discussions on cloud trends and market momentum. Key to the messaging is the fact that applications are driving cloud initiatives, whether that be via delivering new SaaS or cloud applications, extending networks beyond traditional barriers, or modernizing the datacenter.

The VMware Cloud is looking like a complete vision at this point and the graphic below highlights that fact. There are multiple partners offering VMware based Cloud Infrastructure along with the Public Cloud and SaaS providers. On top of that, VMware now talks about a complete cloud management layer underpinned by vSphere and NSX technologies.

VMware Cloud on AWS Updates:

The big news on the VMware Cloud on AWS front is that there is a new UK based service offering and continued expansion into Germany. This will extend into the APAC region later in the year.

VMware Cloud on AWS will also have support for stretch clusters using the same vSAN and NSX technologies used on-premises on top of the underlying AWS compute and networking platform. This looks to extend application uptime across AWS Availability Zones within AWS regions.

This will feature:

  • Zero RPO high Availability across AZs
  • Built into the infrastructure layer with synchronous replication
  • Stretched Cluster with common logical networks with vSphere HA/DRS
  • If an AZ goes down it’s treated as an HA event and impacted VMs are brought back up in the other AZ

They are also adding vSAN Compression and Deduplication for VMware Cloud on AWS services which in theory will save 40% in storage.

VMware Cloud Services Updates:

Hybrid Cloud Extension (HCX), first announced at VMworld last year, has a new on-premises offering and is expanding availability through VMware Cloud Provider Partners. This includes VMware Cloud on AWS, IBM Cloud and OVH. The promise here is any-to-any vSphere migration that works across versions while still being secure. We are talking about Hybridity here!

Log Intelligence is an interesting one…it looks like Log Insight delivered as a SaaS application. It is a real-time big data log management platform for VMware Cloud on AWS, adding real-time visibility into infrastructure and application logs for faster troubleshooting. It supports any syslog source and will, in theory, ingest over the internet.

Cost Insight is an assessment tool for private cloud to VMware Cloud on AWS migration. It calculates the VMware Cloud on AWS capacity required to migrate from on-premises to VMC. It also has integration with Network Insight to calculate networking costs during migration.

Finally there is an update to Wavefront that expands inputs and integrations to enhance visibility and monitoring. There are 45 new integrations, monitoring of native AWS services and integration into vRealize Operations.

You can watch the whole event here.

AWS re:Invent – Expectations from a VM Hugger…

Today is the first official day of AWS re:Invent 2017 and things are kicking off with the Global Partner Summit. Today is also my first day of AWS re:Invent and I am looking forward to experiencing a different type of big IT conference, with all previous experiences being at VMworld or the old Microsoft TechEds. Just by looking at the agenda, schedule and content catalog I can already tell re:Invent is a very, very different type of IT conference.

As you may or may not know, I started this blog as Hosting is Life! and the first half of my career was spent around hosting applications and web services…in that time I gravitated towards AWS solutions to help complement the hosting platforms I looked after, and I was actively using a few AWS services in 2011 and 2012 and attended a couple of AWS courses. After joining Zettagrid my use of AWS decreased, and it wasn’t until Veeam announced supportability for AWS storage as part of our v10 announcements that I decided to get back into the swing of things.

Subsequently we announced Veeam Availability for AWS, which leverages EBS snapshots to perform agentless backups of AWS instances, and more recently we were announced as a launch partner for VMware Cloud on AWS data availability solutions. For me, the fact that VMware have jumped into bed with AWS has obviously raised AWS’s profile in the VMware community and it’s certainly being seen as the cool thing to know (or claim to know) within the ecosystem.

Veeam isn’t the only backup vendor looking to leverage what AWS has to offer by way of extending availability into the hyper-scale cloud; every leading vendor is rushing to claim features that offload backups to AWS cloud storage as well as offering services to protect native AWS workloads…as with IT Pros, this is also the in thing!

Apart from backup and availability, my sessions are focused on storage, compute, scalability and scale, as well as some sessions on home automation with Alexa and the like. This year’s re:Invent is 100% a learning experience and I am looking forward to attending a lot of sessions and taking a lot of notes. I might even come out taking the whole serverless thing a little more seriously!

Moving away from the tech, the AWS world is one that I am currently removed from…unlike the VMware ecosystem and VMworld, I wouldn’t know 95% of the people delivering sessions and I certainly don’t know much about the AWS community. While I can’t fix that by just being here this week, I can certainly use this week as a launching pad to get myself more entrenched in the technology, the ecosystem and the community.

Looking forward to the week and please reach out if you are around.

VMware Cloud on AWS Availability with Veeam

It’s been exactly a year since VMware announced their partnership with AWS and it’s no surprise that at this year’s VMworld the solution is front and center and will feature heavily in Monday’s keynote. Earlier today Veeam was announced as an officially supported backup, recovery and replication platform for VMware Cloud on AWS. This is an exciting announcement for existing customers of Veeam who currently use vSphere and are interested in consuming VMware Cloud on AWS.

In terms of what Veeam has been able to achieve, there is little noticeable difference in the process to configure and run backup or replication jobs from within Veeam Backup & Replication. The VMware Cloud on AWS resources are treated as just another cluster so most actions and features of the core platform work as if the cloud based cluster was local or otherwise.

Below you can see a screenshot of a VMC vCenter from the AWS-based HTML5 Web Client. What you can see is the minimum spec for a VMC customer, which includes four hosts with 36 cores and 512GB of RAM each, plus vSAN and NSX.

In terms of Veeam making this work, there were a few limitations that VMware have placed on the solution, which means that our NFS-based features such as Instant VM Recovery, Virtual Labs or SureBackup won’t work at this stage. HotAdd mode is the only supported backup transport mode (which isn’t a bad thing as it’s my preferred transport mode), and it talks to a new VDDK library that is part of the VMC platform.

With that the following features work out of the box:

  • Backup with In Guest Processing
  • Restores to original or new locations
  • Backup Copy Jobs
  • Replication
  • Cloud Connect Backup
  • Windows File Level Recovery
  • Veeam Explorers

With the above there are a lot of options for VMC customers to stick to the 3-2-1 rule of backups…remembering that just because the compute resources are in AWS doesn’t mean that they are highly available from a workload and application availability standpoint. Customers can also take advantage of the fact that VMC is just another cluster from their on-premises deployments and use Veeam Backup & Replication to replicate VMs into the VMC vCenter, meaning it could be used as a DR site.
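As a rough illustration of the "just another cluster" point, the sketch below creates a standard backup job against a VM running in the VMC vCenter using the regular Veeam PowerShell cmdlets; the VM, job and repository names are placeholders:

```powershell
# Minimal sketch: because the VMC cluster is just another vCenter to Veeam, a
# backup job for a VM running in VMware Cloud on AWS is created the same way
# as for an on-premises VM. Names below are placeholders.
Add-PSSnapin VeeamPSSnapin
Connect-VBRServer -Server "localhost"

$vm   = Find-VBRViEntity -Name "vmc-app-server01"
$repo = Get-VBRBackupRepository -Name "On-Prem Repository"

Add-VBRViBackupJob -Name "VMC - App Server" -Entity $vm -BackupRepository $repo
```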

For more information and the official blog post from Veeam co-CEO Peter McKay, click here.
