Category Archives: AWS

First Look – Zenko, Multi-Platform Data Replication and Management

A couple of weeks ago I stumbled upon Zenko via a LinkedIn post. I was interested in what it had to offer and decided to have a deeper look. With Veeam launching our vision to be the leader in intelligent data management at VeeamON this year, I have been on the lookout for solutions that do smart things with data and address the need to control the accelerated spread and sprawl of that data. Zenko looks to be on the right track with its notion of freedom to avoid being locked into a specific cloud platform, whether private or public.

Having come from service provider land I have always been against the idea of a Hyper-Scaler Public Cloud monopoly that forces lock-in and diminishes choice. Because of that, I gravitated to Zenko’s mission statement:

We believe that everyone should be in control of their data. Zenko’s mission is to allow everyone to be in control of their data, while leveraging the efficiency of private and public clouds.

The platform provides data mobility across multiple cloud platforms through common communication protocols and a shared set of APIs to manage its data sets. Zenko focuses on achieving this multi-cloud capability through a unified AWS S3 API based service, with data management and federated search capabilities driving its use cases. Data mobility between clouds, whether private or public cloud services, is what Zenko is aimed at.
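
Because Zenko presents a standard S3-compatible API, existing S3 tooling should only need an endpoint change to talk to it. As a rough, hypothetical sketch (the endpoint, keys and bucket name below are placeholders, not values from the sandbox), this is how something like the Terraform AWS provider can be pointed at an S3-compatible service:

```hcl
# Hypothetical sketch: using the Terraform AWS provider against an
# S3-compatible Zenko endpoint instead of AWS itself. All values are placeholders.
provider "aws" {
  region     = "us-east-1"
  access_key = "ZENKO_STORAGE_ACCOUNT_KEY"    # placeholder storage account key
  secret_key = "ZENKO_STORAGE_ACCOUNT_SECRET" # placeholder storage account secret

  # Skip AWS-specific validation, since this is not a real AWS account
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true

  endpoints {
    s3 = "https://zenko.example.com:8000" # placeholder Zenko endpoint
  }
}

# Buckets are then created via the S3 API, just as they would be against AWS
resource "aws_s3_bucket" "demo" {
  bucket = "zenko-demo-bucket"
}
```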

Zenko Orbit:

Zenko Orbit is the cloud portal for data placement, workflows and global search. Aimed at application developers and “DevOps”, the premise of Zenko Orbit is that they can spend less time learning multiple interfaces for different clouds while leveraging the power of cloud storage and data management services, without needing to be an expert across different platforms.

Orbit provides an easy way to create replication workflows between different cloud storage platforms…whether it be Amazon S3, Azure Blob, GCP Storage or others. You then have the ability to search across a global namespace for system and user-defined metadata.

Quick Walkthrough:

Given this is open source, you have the option to download and install a Zenko instance, which is then registered against the Orbit cloud portal, or you can pull the whole stack from GitHub. They also offer a hosted sandbox instance that can be used to take the system for a test drive.

Once done, you are presented with a Dashboard that gives you an overview of the amount of data and other metrics contained in your instance. Looking at the Settings area you are given details about the instance, account details and endpoints to use to connect into. They also offer the ability to download pre-generated Cyberduck profiles.

You need to create a storage management account to be able to browse your buckets in the Orbit portal.

Once that’s been done you can create a bucket and select a location which in the sandbox defaults to AWS us-east-1.

From here, you can add a new storage location and configure the replication policy. For this, I created a new Azure Blob Storage account as shown below.

From the Orbit menu, I then added a New Storage Location.

Once the location has been added you can configure the bucket replication. This is the cool part and the premise of the platform: being able to set up policies to replicate data across multiple cloud platforms. In the sandbox, the policy is one way only, meaning there is no bi-directional replication. Simply select the source, destination and bucket from the menu.

Once that has been done you can connect to the endpoint and upload files. I tested this out with the setup above and it worked as advertised. Using the Cyberduck profile I connected in, uploaded some files and monitored the Azure Blob storage end for the files to replicate.

Conclusion: 

While you could say that Zenko feels like DFS-R for the multi-platform storage world, the solution has impressed me. Many would know that it’s not easy to orchestrate the replication of data between different platforms. They are also talking up the extensibility of the platform as it relates to data management, backend storage plugins and search.

I think about this sort of technology and how it could be extended to cloud based backups. Customers could have the option to tier into cheaper cloud based storage and then further protect that data by replicating it to another cloud platform which could be cheaper yet. This could achieve added resiliency while offering cost benefits. However there is also the risk that the more spread out the data is, the harder it is to control. That’s where intelligent data management comes into play…interesting times!

References:

Zenko Orbit – Multi-Cloud Data Management Simplified

 

Using Terraform to Deploy and Configure a Ready to use Backup Repo into an AWS VPC

A month or so ago I wrote a post on deploying Veeam Powered Network into an AWS VPC as a way to extend the VPC network to a remote site and leverage a Veeam Linux Repository running as an EC2 instance. During the course of deploying that solution I came across a lot of little check boxes and settings that needed to be tweaked in order to get things working. After that, I set myself the goal of trying to automate and orchestrate the deployment end to end.

For an overview of the intended purpose behind the solution head to the original blog post here. That post was mainly focused around the Veeam PN component, however I was using that as a mechanism to create a site-to-site connection to allow Veeam Backup & Replication to talk to the other EC2 instance which was the Veeam Linux Repository.

Terraform by HashiCorp:

In order to automate the deployment into AWS, I looked at CloudFormation first…but found the learning curve to be a little steep…so I went back to HashiCorp’s Terraform, which I have been aware of for a number of years but had never gotten my hands dirty with. HashiCorp specialise in Cloud Infrastructure Automation and their provisioning product is called Terraform.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).

Terraform supports a host of providers and once you wrap your head around the basics and view some example code, provisioning Infrastructure as Code can be achieved with relatively little coding experience…however, as I did find out, you need to be careful in this world and not make the same initial mistake I did, as explained in this post.

Going from Manual to Orchestrated with Automation:

The Terraform AWS provider is what I used to create the code required to deploy the required components. Like everything that’s automated, you need to understand the manual process first and that is where the previous experience came in handy. I knew what the end result was…I just needed to work backwards and make sure that the Terraform provider had all the instructions it needed to orchestrate the build.

The basic flow is as follows (a trimmed-down sketch of what this looks like in Terraform follows the list):

  • Fetch AWS Access Key and Secret
  • Fetch AWS Key Pair
  • Create AWS VPC
    • Configure Networking and Routing for VPC
  • Create CentOS EC2 Instance for Veeam Linux Repo
    • Add new disk and set size
    • Execute configuration script
      • Install PERL modules
  • Create Ubuntu EC2 Instance for Veeam PN
    • Execute configuration script
      • Install VeeamPN modules from repo
  • Login to Veeam PN Web Console and Import Site Configuration.
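
To give a feel for the pattern, below is a heavily trimmed-down sketch of that flow in Terraform terms. Resource names, AMI IDs and script paths are placeholders; the real, complete plan lives in the GitHub project referenced below.

```hcl
# Trimmed-down sketch only: the full plan (networking, routing, both EC2
# instances and their configuration scripts) is in the GitHub project.
variable "access_key" {}
variable "secret_key" {}
variable "key_pair_name" {}
variable "private_key_path" {}
variable "centos_ami" {}

provider "aws" {
  region     = "ap-southeast-2"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_vpc" "veeam" {
  cidr_block = "10.0.100.0/24"
}

resource "aws_subnet" "veeam" {
  vpc_id                  = aws_vpc.veeam.id
  cidr_block              = "10.0.100.0/24"
  map_public_ip_on_launch = true
}

resource "aws_instance" "veeam_repo" {
  ami           = var.centos_ami # placeholder CentOS AMI ID
  instance_type = "t2.medium"
  subnet_id     = aws_subnet.veeam.id
  key_name      = var.key_pair_name

  # Second disk that becomes the backup repository volume
  ebs_block_device {
    device_name = "/dev/sdb"
    volume_size = 250
  }

  # Post-deploy configuration (PERL modules, format and mount the repo disk)
  provisioner "remote-exec" {
    script = "scripts/configure_centos_repo.sh" # placeholder script path

    connection {
      type        = "ssh"
      user        = "centos"
      private_key = file(var.private_key_path)
      host        = self.public_ip
    }
  }
}
```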

I’ve uploaded the code to a GitHub project. An overview and instructions for the project can be found here. I’ve also posted a video to YouTube showing the end to end process, which I’ve embedded below (best watched at 2x speed):

In order to get the Terraform plan to work there are some variables that need modifying in the GitHub project, and you will need to download, install and initialise Terraform. I’m intending to continue to tweak the project and complete the provisioning end to end, including the Veeam PN site configuration part at the end. The remote execution feature of Terraform allows some pretty cool things by way of script initiation.
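
The variable names below are purely illustrative (check the project README for the real ones), but the values you supply end up in a terraform.tfvars file along these lines, which should be kept out of version control:

```hcl
# Illustrative terraform.tfvars: variable names in the actual project may
# differ. Never commit this file to a public repository.
access_key       = "YOUR_AWS_ACCESS_KEY"
secret_key       = "YOUR_AWS_SECRET_KEY"
key_pair_name    = "my-keypair"
private_key_path = "~/.ssh/my-keypair.pem"
centos_ami       = "ami-0123456789abcdef0"
```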

References:

https://github.com/anthonyspiteri/automation/aws_create_veeamrepo_veeampn

https://www.terraform.io/intro/getting-started/install.html

 

Quick Look – Backing up AWS Workloads with Cloud Protection Manager from N2WS

Earlier this year Veeam acquired N2WS after announcements last year of a technology partnership at VeeamON 2017. The more I tinker with Cloud Protection Manager the more I understand why we made the acquisition. N2WS was founded in 2012 with their first product shipping in 2013. Purpose built for AWS, it supports all types of EC2 instances, EBS volumes, RDS, DynamoDB and Redshift as well as AMI creation, and is distributed as an AMI through the AWS Marketplace. The product is easy to deploy and has extended its feature set with the release of 2.3d, announced during VeeamON 2018 a couple of weeks ago.

From the datasheet:

Cloud Protection Manager (CPM) is an enterprise-class backup, recovery, and disaster recovery solution purpose-built for Amazon Web Services EC2 environments. CPM enhances AWS data protection with automated and flexible backup policies, application consistent backups, 1-click instant recovery, and disaster recovery to other AWS region or AWS accounts ensuring cloud resiliency for the largest production AWS environment. By extending and enhancing native AWS capabilities, CPM protects the valuable data and mission-critical applications in the AWS cloud.

In this post, I wanted to show how easy it is to deploy and install Cloud Protection Manager as well as look at some of the new features in the 2.3d release. I will do a follow up post going into more detail about how to protect AWS Instances and services with CPM.

What’s new with CPM 2.3:

  • Automated backup for Amazon DynamoDB: CPM now provides backup and recovery for Amazon DynamoDB. You can apply existing policies and schedules to back up and restore DynamoDB tables and metadata.
  • RESTful API:  Completely automate backup and recovery operations with the new Cloud Protection Manager API. This feature provides seamless integration between CPM and other applications.
  • Enhanced reporting features: Enhancements include the ability to gather all reports in one tab, run as a CSV, view both protected and unprotected resources and include new filtering options as well.

Other new features that come as part of the CPM 2.3 release include full cross-region and cross-account disaster recovery for Aurora databases, enhanced permissions for users and a fast and efficient onboarding process using CloudFormation’s 1-click template.

Installing, Configuring and Managing CPM:

The process to install Cloud Protection Manager from the AWS Marketplace is seamless and can be done via a couple of different methods, including a 1-Click deployment. The official install guide can be read here. The CPM EC2 instance is deployed into a new or existing VPC configured with a subnet and must be placed into an existing or new Security Group.

Once deployed you are given the details of the installation.

And you can see it from the AWS Console under the EC2 instances. I’ve added a name for the instance just for clarity’s sake.

One thing to note is that there is no public IP assigned to the instance as part of the deployment. You can create a new Elastic IP and attach it to the instance, or you can access the configuration website via its internal IP if you have access to the subnet via some form of VPN or network extension.
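
If you do go down the Elastic IP route and happen to be managing the environment with something like Terraform, the attachment is a single resource (the instance ID below is a placeholder):

```hcl
# Hypothetical sketch: allocate an Elastic IP and attach it to the CPM instance
# so the configuration site is reachable over the internet.
resource "aws_eip" "cpm" {
  vpc      = true
  instance = "i-0123456789abcdef0" # placeholder CPM instance ID
}
```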

There is an initial configuration wizard that guides you through the registration and setup of CPM. Note that you do need internet connectivity to complete the process otherwise you will get this error.

The final step will allow you to configure a volume for CPM use. With that the wizard finalises the setup and you can log into the Cloud Protection Manager.

Conclusion: 

The ability to back up AWS services natively has its advantages over traditional methods such as agents. Cloud Protection Manager from N2WS can be installed and ready to go within 5 minutes. In the next post, I’ll walk through the CPM interface and show how you back up and recover AWS instances and services.

References:

https://n2ws.com/cpm-install-guide

https://support.n2ws.com/portal/kb/articles/release-notes-for-the-latest-v2-3-x-cpm-release

Public Cloud and Infrastructure as Code…The Good and the Bad all in One Day!

I’m ok admitting that I am still learning as I progress through my career and I’m ok to admit when things go wrong. Learning from mistakes is a crucial part of learning…and I learnt a harsh lesson today! That Infrastructure as Code is as dangerous as it is awesome…and that the public cloud is an unforgiving place!

Earlier today I created a new GitHub repository for a project I’ve been working on. Before I realised my mistake I had uploaded a Terraform variables file with my AWS Access and Secret Key. I picked up on this probably two minutes after I pushed the contents up to the public repository. Roughly five minutes later I deleted the repository and was about to start fresh without the credentials, but then realised that my Terraform plan was failing with a credential error.

I logged into the AWS Console and saw that my main VPC and EC2 instances had been terminated and that there were 20 new instances in their place. I knew exactly at that point what had happened! I’d been compromised and I had handed over the keys on a silver web scraper platter.

My access key had been deleted and new ones created, along with VPCs and Key Pairs in every single AWS region across the world. I deleted the new access key the malicious user had created, locking them out from doing any more damage; however, in the space of ten minutes 240 EC2 instances in total were spun up. This was a little more than the twenty I thought I had dealt with initially…costing only $4.50…Amazing!

I contacted AWS support and let them know what happened. To their credit (and to my surprise) I had a call back within a few hours. Meanwhile they automatically restricted my account until I had satisfied a series of clean up steps so as to limit any more potential damage. The billing will be reversed as well so I am a little less in a panic when I see my current month breakdown.

The Bad Side of Infrastructure as Code and Public Cloud:

This example shows how dangerous the world we are living in can be. With AWS and the like providing brilliant API access into their provisioning platforms, malicious users have seen an opportunity to use Infrastructure as Code as a way to spin up cloud resources in a matter of seconds. All they need is an in. And in my case, that in was a moment of stupidity…and even though I realised what I had done, all it took was less than five minutes for them to take advantage of my lack of concentration and exploit the security lapse. They also exploited the fact that I am new to this space and had not learnt best practice for storing credentials.

I was lucky that everything I had in AWS was just there for demo purposes and I had nothing of real importance there. However, if this happened to someone running business critical applications they would be in for a very, very bad day. Everything was wiped! Even the backup software I had running in there using local snapshots…if ever there was a case for offsite copies, this was it! (Ergo – Veeam Agents and N2WS)

The Good Side of Infrastructure as Code and Public Cloud:

What good could come of this? Well, apart from learning a little more about Terraform and how to store credentials, the awesome part was that all the work I had put in over the past couple of weeks getting started with Infrastructure as Code and Terraform meant I was able to reprovision everything I lost within 5 minutes…once my account restriction was lifted.
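
For what it’s worth, the simple mitigation I’ve since adopted is to keep credentials out of the Terraform files entirely and let the AWS provider pick them up from environment variables or the shared credentials file. A minimal sketch (the profile name is a placeholder):

```hcl
# No access_key/secret_key in the .tf files or any tfvars that get committed.
# The AWS provider falls back to the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables or a named profile in ~/.aws/credentials.
provider "aws" {
  region  = "ap-southeast-2"
  profile = "terraform-demo" # placeholder profile in ~/.aws/credentials
}
```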

That’s the power of APIs and the applications that take advantage of them. And even though I copped a slap in the face today…I’m converted. This stuff is cool! We just need to be aware of the dangers that come with it, and the fact that the coolness can be used and exploited in the wrong way as well.

Quick Post – Configuring Key Based Authentication for AWS based Veeam Linux Repository

I’ve been doing a little more within AWS over the past month or so related to my work with VMware Cloud on AWS and the setting up of EC2 instances to use as Veeam Linux Repositories. When deploying a Linux-based instance in AWS you assign a key pair to the instance at the time of deployment. You then download the private key pem file and use that to remotely connect to the instance when desired.
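
If the instance is being stood up with Terraform rather than the console, the same key pair assignment looks something like the sketch below (the key path and AMI ID are placeholders):

```hcl
# Hypothetical sketch: register an existing public key and launch the repo
# instance with it, so the matching .pem private key can be used later by Veeam.
resource "aws_key_pair" "veeam_repo" {
  key_name   = "veeam-repo-key"
  public_key = file("~/.ssh/veeam-repo-key.pub") # placeholder public key path
}

resource "aws_instance" "veeam_repo" {
  ami           = "ami-0123456789abcdef0" # placeholder CentOS AMI ID
  instance_type = "t2.medium"
  key_name      = aws_key_pair.veeam_repo.key_name
}
```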

In my testing, I wanted to configure this EC2 instance as a Linux Repository. When creating a new repository you need to set up the Linux server with the key pair. To do this you need to select the Add Linux Private Key drop down in the new Linux Server window.

Next you need to enter the username of the EC2 instance, which in this case is centos (best practice here is to create a new repository user and elevate to root, but for my testing I used the provided account), and then load up the pem file that contains the private key. You don’t need to enter a passphrase.

The check box to Elevate specified account to root is also selected. Accept the server thumbprint as shown below.

Once accepted the Veeam Linux components will be installed and all things being equal you will have a Veeam Linux based repository ready for action that lives remotely on an EC2 instance.

Once complete you can tag the location against the repository and now use it as a backup target.

So there you go, a quick post on how to get an EC2 Linux instance up and running in Veeam Backup & Replication as a Linux Repository.

Deploying Veeam Powered Network into an AWS VPC

Veeam PN is a very cool product that has been GA for about four months now. Initially we combined the free product together with Veeam Direct Restore to Microsoft Azure to create Veeam Recovery to Microsoft Azure. Of late there has been a push to get Veeam PN out in the community as a standalone product that’s capable of simplifying the orchestration of site-to-site and point-to-site VPNs.

I’ve written a few posts on some of the use cases of Veeam PN as a standalone product. This post will focus on getting Veeam PN installed into an AWS VPC to be used as the VPN gateway. Given that AWS has VPN solutions built in, why would you look to use Veeam PN? The answer to that is one of the core reasons why I believe Veeam PN is a solid networking tool…The simplicity of the setup and ease of use for those looking to connect or extend on-premises or cloud networks quickly and efficiently.

Overview of Use Case and Solution:

My main use case for wanting to extend the AWS VPC network into an existing Veeam PN Hub connected to my Homelab and Veeam Product Strategy Lab was to test out using an EC2 instance as a remote Veeam Linux Repository. Having a look at the diagram below you can see the basics of the design, with the blue dotted line representing the traffic flow.

 

The traffic flows between the Linux Repository EC2 instance and the Veeam Backup & Replication server in my Homelab through the Veeam PN EC2 instance. That is via the Veeam PN Hub that lives in Azure and the Veeam PN Site Gateway in the Homelab.

The configuration for this includes the following:

  • A virtual private cloud with a public subnet with a size /24 IPv4 CIDR (10.0.100.0/24). The public subnet is associated with the main route table that routes to the Internet gateway.
  • An Internet gateway that connects the VPC to the Internet and to other AWS products.
  • The VPN connection between the VPC network and the Homelab network. The VPN connection consists of a Veeam PN Site Gateway located in the AWS VPC and the Veeam PN Hub and Site Gateway located at the Homelab side of the VPN connection.
  • Instances in the External subnet with Elastic IP addresses that enable them to be reached from the Internet for management.
  • The main route table associated with the public subnet. The route table contains an entry that enables instances in the subnet to communicate with other instances in the VPC, and two entries that enable instances in the subnet to communicate with the remote subnets (172.17.0.0/24 and 10.0.30.0/24).

AWS has a lot of knobs that need adjusting even for what would normally be assumed functionality. With that I had to work out which knobs to turn to make things work as expected and get the traffic flowing between sites.

Veeam PN Site Gateway Configuration:

To get a Veeam PN instance working within AWS you need to deploy an Ubuntu 16.04 LTS instance from the Instance Wizard or Marketplace into the VPC (see below for specific configuration items). In this scenario a t2.small instance works well, with a 16GB SSD hard drive as provided by the instance wizard. To install the Veeam PN services onto the EC2 instance, follow my previous blog post on Installing Veeam Powered Network Direct from a Linux Repo.

Once deployed, along with the EC2 instance that I am using as a Veeam Linux Repository, I have two EC2 instances in the AWS Console that are part of the VPC.

From here you can configure the Veeam PN instance as a Site Gateway. This can be done via the exposed HTTP/S Web Console of the deployed VM. First you need to create a new Entire Site Client from the HUB Veeam PN Web Console with the network address of the VPC as shown below.

Once the configuration file is imported into the AWS Veeam PN instance it should connect up automatically.

Jumping on the Veeam PN instance to view the routing table, you can see what networks the Veeam HUB has connected to.

The last two entries there are referenced in the design diagram and are the subnets that have the static routes configured in the VPC. You can see the path the traffic takes, which is reflected in the diagram as well.

Looking at the same info from the Linux Repository instance you can see standard routing for a locally connected server without any specific routes to the 172.17.0.0/24 or 10.0.30.0/24 subnets.

Notice though that the traffic path to get to the 172.17.0.0/24 subnet now goes through an extra hop, which is the Veeam PN instance.

Amazon VPC Configuration:

For the most part this was a straightforward VPC creation with an IPv4 CIDR block of 10.0.100.0/24 configured. However, to make the routing work and get the traffic flowing as desired you need to tweak some settings. After the initial deployment of the Veeam PN EC2 instance I had some issues resolving both forward and reverse DNS entries, which meant I couldn’t update the servers or install anything off the Veeam Linux software repositories.

By default there are a couple of VPC options that are turned off for some reason, and these need to be enabled to make all of that work.

Enable both DNS Resolution and DNS Hostnames via the menu options highlighted above.
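
For reference, if the VPC is being defined in Terraform rather than the console, those two options map directly to attributes on the VPC resource (a sketch using the CIDR from the design above):

```hcl
# Sketch: VPC with both DNS options enabled, matching the console settings above.
resource "aws_vpc" "veeampn" {
  cidr_block           = "10.0.100.0/24"
  enable_dns_support   = true # "DNS Resolution" in the console
  enable_dns_hostnames = true # "DNS Hostnames" in the console
}
```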

For the Network ACLs the default Allow ALL/ALL rules for inbound and outbound can be left as is. In terms of Security Groups, I created a new one and added both the Veeam PN and Linux Repository instances into the group. Inbound, we are catering for SSH access to connect to and configure the instances externally, and as shown below there are also rules in there to allow HTTP and HTTPS traffic to access the Veeam PN Web Console.
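
A rough Terraform equivalent of that security group is sketched below (open to the world as per the lab setup; tighten the CIDRs for anything real):

```hcl
# Sketch: SSH for management plus HTTP/HTTPS for the Veeam PN Web Console.
resource "aws_security_group" "veeampn" {
  name   = "veeampn-sg"
  vpc_id = aws_vpc.veeampn.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```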

These, along with the Network ACLs are pretty open rules so feel free to get more granular if you like.

From the Route Table menu, I added the static routes for the remote subnets so that anything on the 10.0.100.0/24 network trying to get to 172.17.0.0/24 or 10.0.30.0/24 will use the Veeam PN EC2 instance as its next hop target.
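
In Terraform terms those static routes look something like this (the instance and route table references are placeholders tied to the sketches above):

```hcl
# Sketch: point the remote subnets at the Veeam PN instance as the next hop.
resource "aws_route" "homelab" {
  route_table_id         = aws_vpc.veeampn.main_route_table_id
  destination_cidr_block = "172.17.0.0/24"
  instance_id            = aws_instance.veeampn.id
}

resource "aws_route" "strategy_lab" {
  route_table_id         = aws_vpc.veeampn.main_route_table_id
  destination_cidr_block = "10.0.30.0/24"
  instance_id            = aws_instance.veeampn.id
}
```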

EC2 Configuration Gotchya:

A big shout out to James Kilby who helped me diagnose an initial static routing issue by discovering that you need to adjust the Source/Destination Check attribute which controls whether source/destination checking is enabled on the instance. This can be done either against the EC2 instance right click menu, or on the Network Interfaces menu as shown below.

Disabling this attribute enables an instance to handle network traffic that isn’t specifically destined for the instance. For example, instances running services such as network address translation, routing, or a firewall should set this value to disabled. The default value is enabled.
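
If you’re scripting the deployment, the same toggle is exposed as an attribute on the instance resource; a sketch (AMI and subnet values are placeholders):

```hcl
# Sketch: disable the Source/Destination Check so the Veeam PN instance can
# forward traffic that isn't addressed to itself.
resource "aws_instance" "veeampn" {
  ami               = "ami-0123456789abcdef0" # placeholder Ubuntu 16.04 AMI
  instance_type     = "t2.small"
  subnet_id         = aws_subnet.veeampn.id   # placeholder subnet reference
  source_dest_check = false
}
```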

Conclusion:

The end result of all that was the ability to configure my Veeam Backup & Replication server in my Homelab to add the EC2 Veeam Linux instance as a repository, which allowed me to back up to AWS from home through the Veeam PN site-to-site connectivity.

Bear in mind this is a POC, however the ability to consider Veeam PN as another option for extending AWS VPCs to other networks in a quick and easy fashion should make you think of the possibilities. Once the VPC/EC2 knobs were turned and the correct settings put in place, the end to end deployment, setup and connection into the extended Veeam PN Hub network took no more than 10 minutes.

That is the true power of the Veeam Powered Network!

References:

https://docs.aws.amazon.com/glue/latest/dg/set-up-vpc-dns.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#change_source_dest_check

9.5 Update 3 Officially Compatible with VMware Cloud on AWS

At VMworld 2017 Veeam was announced as one of only two foundation Data Protection partners for VMware Cloud on AWS. This functionality was dependent on the release of Veeam Backup & Replication 9.5 Update 3, which contained the enhancements required to interoperate with the locked-down vCenter in VMware Cloud on AWS.

This week 9.5 Update 3 has been listed on the VMware Compatibility Guide (VCG) for Data Protection.

In terms of what you now get in Update 3, there is little noticeable difference in the process to configure and run backup or replication jobs from within Veeam Backup & Replication. The VMware Cloud on AWS resources are treated as just another cluster, so most actions and features of the core platform work as if the cloud based cluster were local.

There are a few limitations that VMware have placed on the solution, which means that our NFS based features such as Instant VM Recovery, Virtual Labs or SureBackup won’t work at this stage. HotAdd is the only supported backup transport mode (which isn’t a bad thing as it’s my preferred transport mode) and it talks to a new VDDK library that is part of the VMC platform.

With that the following features work out of the box:

  • Backup with In Guest Processing
  • Restores to original or new locations
  • Backup Copy Jobs
  • Replication
  • Cloud Connect Backup
  • Windows File Level Recovery
  • Veeam Explorers

I’m really excited to see where VMware takes VMware Cloud on AWS and I see a lot of opportunities for the platform to be used as an availability resource. Over the next couple of months I’m hoping to dive a little more into how Veeam can offer both backup and replication solutions for VMware Cloud on AWS.

Resources:

https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsanps&details=1&partner=594&releases=282&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc

AWS re:Invent Thursday Keynote – Evolution of the Voice UI

Given this was my first AWS re:Invent I didn’t know what to expect from the keynotes, and while Wednesday’s keynote focused on new release announcements, Thursday’s keynote with Werner Vogels was more geared towards thought leadership on where AWS wants to take the industry it has enabled over the next two to five years. He titled this 21st Century Architecture and talked about how AWS don’t go about building their platforms by themselves in an isolated environment…they take feedback from clients, which allows them to radically change the way they build their systems.

The goal is for them to design very nimble and fast tools from which their customers can decide exactly how to use them. The sheer number of new tools and services I’ve seen AWS release since I first used them back in 2011 is actually quite daunting. As someone who is not a developer but has come from a hosting and virtualization background I sometimes look at AWS as offering complex simplicity. In fact I wrote about that very thing in this post from 2015. In that post I was a little cynical of AWS, and while I still don’t have the opinion that AWS is the be all and end all of all things cloud, I have come around to understanding the way they go about things…..

Treating the machine as Human:

I wanted to take some time to comment on Vogels’ thoughts on voice and speech recognition. The premise was that all past and current interactions with computers have been driven by the machinery…screen, keyboard, mouse and fingers are all common, however up to this point it could be argued that it’s not the way in which we naturally interact with other people. Because this interaction is driven by the machine, we know how to not only interact with machines, but also manipulate the inputs so we get what we want as efficiently as possible.

If I look at the example of Siri or Alexa today…when I ask them to answer a query I have, I know to fashion the question in such a way that will allow the technology to respond…this works most of the time because I know how to structure the questions to get the right answer. I treat the machine as a machine! If I look at how my kids interact with the same devices, their way of asking questions is not crafted as if they were talking to a computer…they ask Alexa a question as if she was real. They treat the machine as a person.

This is where Vogels started talking about his vision for interfaces of the future to be more human centric, all based around advances in neural network technology that allow for near realtime responses, which will drive the future of interfaces to these digital systems. The first step in that is going to be voice, and Amazon has looked to lead the way in how home users interact with Amazon.com through Alexa. With the release of Alexa for Business this will look to extend beyond the home.

For IT pros there is a future in voice interfaces that allow you to not only get feedback on the current status of systems, but also (like in many SciFi movies of the last 30 to 40 years) command functions and dictate through voice the configuration, setup and management of core systems. This is already happening today with a few projects that I’ve seen using Alexa to interact with VMware vCenter, or like the video below showing Alexa interacting with a Veeam API to get the status of backups.

There are negatives to voice interfaces, with the potential for voice-triggered mistakes high; however, as these systems become more human centric, voice should allow us a more normal and natural way of interacting with systems…at that point we may stop being able to manipulate the machine because the interaction will have become natural. AWS is trying to lead the way with products like Alexa, but almost every leading computer software company is toying with voice and AI, which means we are quickly nearing an inflection point from which we will see an acceleration of the technology, leading to it becoming a viable alternative to today’s more commonly used interfaces.

AWS re:Invent – Expectations from a VM Hugger…

Today is the first official day of AWS re:Invent 2017 and things are kicking off with the global partner summit. Today is also my first day at AWS re:Invent and I am looking forward to experiencing a different type of big IT conference, with all previous experiences being at VMworld or the old Microsoft TechEds. Just by looking at the agenda, schedule and content catalog I can already tell re:Invent is a very, very different type of IT conference.

As you may or may not know I started this blog as Hosting is Life! and the first half of my career was spent around hosting applications and web services…in that I gravitated towards looking at AWS solutions to help complement the hosting platforms I looked after, and I was actively using a few AWS services in 2011 and 2012 and attended a couple of AWS courses. After joining Zettagrid my use of AWS decreased, and it wasn’t until Veeam announced supportability for AWS storage as part of our v10 announcements that I decided to get back into the swing of things.

Subsequently we announced Veeam Availability for AWS, which leverages EBS snapshots to perform agentless backups of AWS instances, and more recently we were announced as a launch partner for VMware Cloud on AWS data availability solutions. For me, the fact that VMware have jumped into bed with AWS has obviously raised AWS’s profile in the VMware community and it’s certainly being seen as the cool thing to know (or claim to know) within the ecosystem.

Veeam isn’t the only backup vendor looking to leverage what AWS has to offer by way of extending availability into the hyper-scale cloud and every leading vendor is rushing to claim features that offload backups to AWS cloud storage as well as offering services to protect native AWS workloads…as with IT Pros this is also the in thing!

Apart from backup and availability, my sessions are focused on storage, compute, scalability and scale, as well as some sessions on home automation with Alexa and the like. This year’s re:Invent is 100% a learning experience and I am looking forward to attending a lot of sessions and taking a lot of notes. I might even come out taking the whole serverless thing a little more seriously!

Moving away from the tech the AWS world is one that I am currently removed from…unlike the VMware ecosystem and VMworld I wouldn’t know 95% of the people delivering sessions and I certainly don’t know much about the AWS community. While I can’t fix that by just being here this week, I can certainly use this week as a launching pad to get myself more entrenched with the technology, the ecosystem and the community.

Looking forward to the week and please reach out if you are around.

VMware Cloud on AWS: Thoughts One Year On

Last week at VMworld 2017 in the US, VMware announced the initial availability of VMware Cloud on AWS. It was the focal point for VMware at the event and probably the most important strategic play that VMware has undertaken in its history. This partnership was officially announced at last year’s VMworld, and at the time I wrote a couple of blog posts commenting on the potential impact to the then vCloud Air Network (now VCPP) and what needed to be done to empower the network.

As you can imagine, at the time I was a little skeptical about the announcement, but since then we have seen the fall of vCloud Air to OVH and a doubling down of the efforts around enhancing vCloud Director and general support for the VMware Cloud Provider Program. Put that together with me stepping out of my role within the VCPP to one that is on the outside supporting it, and I feel that VMware Cloud on AWS is good for VMware and also good for service providers.

What It Looks Like:

This time last year we didn’t know exactly what VMC would look like, apart from it using vSphere, NSX and vSAN as its compute, networking and storage platforms, or how exactly it would work on top of AWS’s infrastructure. For a detailed look under the hood, Frank Denneman has published a Technical Overview which is worth a read. A lot of credit needs to go to the engineering teams at both ends for achieving what they have within a relatively small period of time.

The key thing to point out is the default compute and storage that’s included as part of the service. The four ESXi hosts will each have dual E5-2686 v4 CPUs @2.3GHz with 18 cores and 512GB of RAM. Storage wise there will be 10TB raw of All Flash vSAN per host, meaning, depending on the vSAN FTT setting, a usable minimum of around 20TB. The scale-out model enables expansion to up to 16 hosts, resulting in 576 CPU cores and 8TB of memory, which is insane!

What does it Cost:

Here is where it starts to get interesting for me. Pricing wasn’t discussed during the keynotes or in the announcements, but looking at the pricing page here you can see what this base cluster will cost you. It’s going to cost $8.37 USD per host per hour for the on-demand option, which is the only option until VMware launches one year and three year reserved instances in the future, where there looks to be a thirty and fifty percent saving respectively.

Upon first glance this seems expensive…however it’s only expensive in relative terms because of the default resources that come with the service. You can’t get anything less than the four hosts with all the trimmings at the moment, which, when taken into consideration, might lock out non-enterprise companies from taking up the service.

Unless pricing changes by way of offering a smaller resource footprint I can see this not being attractive in other regions like ANZ or EMEA where small to medium size enterprises are more common. This is where VCPP service providers can still remain competitive and continue to offer services around the same building blocks as VMC on their own platforms.

CloudPhysics have an interesting blog post here, on some cost analytics that they ran.

How Can it be Leveraged:

With Veeam being a launch partner for VMware Cloud on AWS offering availability services, it got me thinking about how the service could be leveraged by service providers. A few things need to fall into place from a technology point of view, but I believe that one of the best potential use cases for VMC is for service providers to leverage it for failover, replication and disaster recovery scenarios.

The fact that this service possesses auto-scaling of hosts means that it has the potential to be used as a resource cluster for disaster recovery services. If I think about Cloud Connect Replication, one of the hardest things to get right as a provider is sizing the failover resources and the procurement of the compute and storage to deal with customer requirements. The auto scaling capabilities mean that service providers only need to cover the base resources, and pay additional costs only if a failover event happens and exceeds the default cluster resources.

It must be pointed out that Cloud Connect can’t use a VMC cluster as a target at the moment due to the networking used…that is VXLAN on top of AWS VPN networking.

As I wrote last year, I feel like there is a great opportunity for service providers to leverage VMC as vCloud Director provider clusters, however I know that this currently isn’t supported by VMware. I honestly feel that service providers would love the ability to have cloud based Provider vDCs available across the world, and I’m hoping that VMware realise the potential and allow vCloud Director to connect to and consume VMC.

VMworld End of Show Report on VMware Cloud on AWS:

References:

https://www.vmware.com/company/news/releases/vmw-newsfeed.VMware-and-AWS-Announce-Initial-Availability-of-VMware-Cloud-on-AWS.2184706.html

https://cloud.vmware.com/vmc-aws

https://www.crn.com.au/news/pricing-revealed-for-vmware-cloud-on-aws-472011