Tag Archives: SDDC

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I've written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to fundamentally cheaper remote Object Storage. As with most innovative technologies, if you dig a little deeper, different use cases start to present themselves and unintended applications find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used to allow on demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale Out Backup Repository (SOBR) which has a Capacity Tier backed by an Object Storage repository in Amazon S3. A minimal operational restore window is set, which means data is offloaded to the Capacity Tier sooner.
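The Object Storage repository side of this is just a standard Amazon S3 bucket. As a minimal sketch, assuming boto3 with a placeholder bucket name and region (neither is from the original setup), provisioning the bucket that backs the Capacity Tier might look like this:

```python
# Minimal sketch: create the S3 bucket that will back the Capacity Tier.
# Bucket name and region are placeholders, not from the original post.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(
    Bucket="veeam-capacity-tier-demo",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Block public access; backup data should never be publicly readable.
s3.put_public_access_block(
    Bucket="veeam-capacity-tier-demo",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```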

Once the data is there, if disaster strikes on premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. From there, a SOBR is configured with the same Amazon S3 credentials and connects to the Object Storage bucket, which detects the backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.

The diagram above has been published on the AWS Reference Architecture page, and while this post has been brief, there is more to come by way of an official AWS blog post co-authored by myself and Frank Fan from AWS around this solution. We will also look to automate the process as much as possible to make this a truly on demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to leverage VMware Cloud on AWS as a disaster recovery target will look to Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture

Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking between a VMware Cloud on AWS SDDC and an Amazon VPC, there is a little to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, connections to S3 go through the Amazon ENI (Elastic Network Interface) by default, which means that unless it is configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on networking and Security Groups when working with VMware Cloud on AWS, which are worth a read to get a deeper understanding of VMC to AWS networking.

There is a way to change this behaviour so that connectivity to Amazon S3 goes via the SDDC's Internet Gateway instead. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low-latency connection between the VPC and the SDDC, with free egress. In the case of S3 and the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI, the following needs to be actioned.

Create Endpoint:

The first step is to go into the AWS Console, open the VPC that's connected to the VMC service, and create a new Endpoint for S3 as shown below, making sure you select the correct Route Table.
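If you prefer to script it, a minimal sketch using boto3 might look like the following; the VPC ID, route table ID and region are placeholders:

```python
# Minimal sketch: create a Gateway VPC Endpoint for S3 on the VMC-connected VPC.
# All IDs and the region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # the VPC connected to the SDDC
    ServiceName="com.amazonaws.us-west-2.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # the route table the ENI subnet uses
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```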

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic to the logical network or networks. It's a basic HTTPS inbound rule where the source is the SDDC network or networks you want access from.
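Scripted, the equivalent rule might look like this minimal boto3 sketch; the security group ID and the SDDC network CIDR are placeholders:

```python
# Minimal sketch: allow HTTPS inbound from the SDDC logical network.
# Security group ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "192.168.1.0/24", "Description": "SDDC logical network"}
            ],
        }
    ],
)
```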

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks that need access to Amazon S3.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.
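A quick way to validate this from a VM inside the SDDC is a simple boto3 call against your bucket; the bucket name below is a placeholder, and head_bucket raises an exception if the bucket is unreachable or access is denied:

```python
# Minimal sketch: verify S3 is reachable over the ENI from inside the SDDC.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

s3 = boto3.client("s3", region_name="us-west-2")
try:
    s3.head_bucket(Bucket="my-capacity-tier-bucket")  # placeholder bucket name
    print("Amazon S3 reachable over the ENI")
except (ClientError, EndpointConnectionError) as err:
    print(f"S3 not reachable: {err}")
```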

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

Creating a Single Host SDDC for VMware Cloud on AWS

While preparing for my VMworld session with Michael Cade on automating and orchestrating the deployment of Veeam into VMware Cloud on AWS, we have been testing against the Single Host SDDC that's been made available for on demand POCs for those looking to test the waters with VMware Cloud on AWS. The great thing about the Single Host SDDC is that it's obviously cheaper to run than the four-host production version, and that you can spin up and destroy the instance as many times as you like.

Single Host SDDC is our low-cost gateway into the VMware Cloud on AWS hybrid cloud solution. Typically purchased as a 4-host service, it is the perfect way to test your first workload and leverage the additional capability and flexibility of VMware Cloud on AWS for 30 days. You can seamlessly scale-up to Production SDDC, a 4-host service, at any time during the 30-days and get even more from the world’s leading private cloud provider running on the most popular public cloud platform.

To get started with the Single Host SDDC, you need to head to this page and sign up. You will get an activation email and from there be able to go through the account setup. The big thing to note at the moment is that a US-based credit card is required.

There are a few prerequisites before getting an SDDC spun up, mainly around VPC networking within AWS. There is a brilliant blog post here that describes the networking that needs to be considered before kicking off a fresh deployment. The official help files are a little less clear on what needs to be put in place from an AWS VPC perspective, but in a nutshell you need the following (a scripted sketch follows the list):

  • An AWS Account
  • A fresh VPC with VPC networking configured
  • At least three VPC Subnets configured
  • A Management Subnet for the VMware Objects to sit on
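As a rough sketch of those prerequisites in code, assuming boto3 with placeholder CIDRs, region and Availability Zones, setting up the VPC and subnets might look like this:

```python
# Minimal sketch: a fresh VPC with one subnet in each of three AZs.
# CIDRs, region and AZ names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

vpc_id = ec2.create_vpc(CidrBlock="172.20.0.0/16")["Vpc"]["VpcId"]

for i, az in enumerate(["us-west-2a", "us-west-2b", "us-west-2c"]):
    subnet = ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock=f"172.20.{i}.0/24",
        AvailabilityZone=az,
    )
    print(subnet["Subnet"]["SubnetId"], az)
```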

Once this has been configured in the AWS Region the SDDC will be deployed into, the process can be started. The first step is to select a region (dictated by the choices made at account creation), then select a deployment type, followed by a name for the SDDC.

The next step is to link an existing AWS account. This is not required at the time of setup; however, it is required to get the most out of the solution. This will go off and launch an AWS CloudFormation template to connect the SDDC to the AWS account. It creates an IAM role to allow communication between the SDDC and AWS.

[Note] I ran into an issue initially where the default region the CloudFormation template was run out of was not the region the SDDC was to be deployed into. Make sure that when you click on the Launch button you take note of the AWS region and, where appropriate, change the URL to the correct region.

After a minute or so, the VMware Cloud on AWS Create an SDDC page will automatically refresh as shown below.

The next step is to select the VPC and the VPC subnets for the raw SDDC components to be deployed into. I ran into a few gotchas here initially. You need the subnets sized as listed in the user guide and in the networking post linked above, but you also need at least three subnets configured across different AWS Availability Zones within the region. This was not clear in the documentation, but support confirmed it is required.
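An easy way to check this requirement ahead of time is to query the VPC's subnets and count the distinct Availability Zones; a minimal boto3 sketch with a placeholder VPC ID:

```python
# Minimal sketch: confirm the VPC's subnets span at least three AZs.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]  # placeholder
)["Subnets"]

azs = {s["AvailabilityZone"] for s in subnets}
print(f"Subnets span {len(azs)} AZs: {sorted(azs)}")
if len(azs) < 3:
    print("Add subnets: VMware Cloud on AWS expects at least three AZs")
```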

If the AWS side of things is not configured correctly you will see this error.

What you should see, all things being equal, is this.

Finally, you need to set the Management Subnet, which is used for the vCenter, hosts, NSX Manager and other VMware components being deployed into the SDDC. There is a default, but it's important that this does not overlap with any existing networks that you may look to extend the SDDC into.
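To sanity check a candidate management subnet against networks you already use, Python's standard ipaddress module can flag overlaps; the CIDRs below are placeholders:

```python
# Minimal sketch: check a candidate management subnet for overlaps.
import ipaddress

management = ipaddress.ip_network("10.2.0.0/16")  # candidate management subnet
existing = ["192.168.1.0/24", "10.0.0.0/16", "172.16.0.0/12"]  # placeholder networks

overlaps = [c for c in existing if management.overlaps(ipaddress.ip_network(c))]
if overlaps:
    print(f"Pick a different management subnet; overlaps with: {overlaps}")
else:
    print("No overlap detected")
```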

From here, the SDDC can be deployed by clicking on the Deploy SDDC button.

[Note] Even for the Single Host SDDC it will take about 120 minutes to deploy, and you cannot cancel the process once it's started.

Once completed, you can click into the details of the SDDC, which shows all the relevant information relating to it and also allows you to configure the networking.

Finally, to access the vCenter you need to configure a Firewall rule to allow web access through the management gateway.

Once completed, you can log in to the vCenter that's hosted on the VMware Cloud on AWS instance and start to create VMs and have a play around with the environment.
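With the firewall rule in place you can also connect programmatically. A minimal sketch assuming pyVmomi; the host and password are placeholders, while cloudadmin@vmc.local is the default cloud admin account on VMware Cloud on AWS:

```python
# Minimal sketch: connect to the SDDC vCenter with pyVmomi.
# Host and password are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(
    host="vcenter.sddc-x-x-x-x.vmwarevmc.com",  # placeholder vCenter FQDN
    user="cloudadmin@vmc.local",
    pwd="your-password",
    sslContext=ssl.create_default_context(),
)
print(si.content.about.fullName)  # e.g. the vCenter Server version string
Disconnect(si)
```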

There is a way to automate a lot of what I've stepped through above. For that, I'll go through the tools in another blog post later this week.

References:

https://cloud.vmware.com/community/2018/04/24/selecting-ip-subnets-sddc/

VMware PEX ANZ 2013 Thoughts – Software Defined Storage

I was lucky to attend PEX at Australia Technology Park this week and thought I would share some of my takeaways. The venue was a little different to what you would come to expect from a tech event in Sydney. Usually we are in and around Darling Harbour at the Convention Centre, and even if there were whispers of VMware being late to book an event in the city, the surroundings of the old rail works in Redfern, refurbished and transformed into a spectacular centre for technology and innovation, fit the occasion.

There is a fundamental shift happening in how we consume IT, and pretty much all leading technology vendors are in the process of embracing that change. After a few years of letting the dust settle, VMware have chosen three main pillars of focus:

Software Defined Datacenter
Hybrid Cloud
End User Computing

I've written about EUC and their Hybrid Cloud offerings in the past, so I'm not going to focus on those in this post. The one thing I will say is that VMware still have a solid understanding of where their partners sit in the ecosystem and still see them as central to their offerings. As a Service Provider guy working for a vCloud Powered provider, there is some concern around the vHPC platform that will be deployed globally over the next few years, but we need to understand that there has to be something significant in the public cloud space in order to compete with AWS, Google and maybe Microsoft's Azure. AWS is a massive beast and will only be slowed by its own success. Will it get too big and product heavy, and therefore lose focus on the basics? There has been evidence in recent weeks of increasing issues with instance performance due to capacity constraints.

With regards to the SDDC push, last year was the year of network virtualisation, but what excites me more at this point are the upcoming features around software defined storage. There has been an explosion of software based storage solutions coming onto the market over the past 18 months, and VMware see this as a key piece of the SDDC.

vVOLs and vSANs represent a massive shift in how vSphere/vCloud environments are architected and engineered. Storage is the biggest pain point for most providers, and traditional SANs may well have run their race. There is no doubt that storage arrays are still relevant, but with the new technology behind virtual SANs on the horizon, direct attached storage will start to feature. Where we previously had limitations around availability and redundancy, the introduction of technology that can take DAS and create a distributed virtual SAN across multiple hosts excites me.

Why tier and put performance on a device that’s removed from the compute resource? It’s logical to start bringing it back closer to the compute.

Not only do you solve the HA/DRS issue but, given the right choices in DAS/flash/embedded storage, there is potential to offer service levels based on low-latency/high-IOPS datastore design that takes away the common issues with shared LUNs presented as VMFS or NFS mounts for datastores. Traditional SANs can certainly still exist and in fact will still be critical as lower-tier, high-volume storage options.

For a technical overview of VMware Distributed Storage, check out Duncan Epping's (@DuncanYB) post here. There is also a slightly dated VMware KB overview by Cormac Hogan (@VMwareStorage) that I have embedded below. Note that it only covers the tech preview, but if it's any indication of what's coming later in the year, it can't come soon enough.

Being able to control the minimum and maximum IOPS guaranteed to a VM/VMDK, similar to the way you can select IOPS performance on AWS instances, is worth the price of admission and solves a current limitation of vSphere, where you can only set maximum values to block out noisy neighbours.

To the vendors already pushing out solutions around storage virtualisation: continue the great work. Anything that sits on top of this technology and complements, improves or enhances it can only be a good thing.

It’s the year of storage virtualization…

Additional Reading:

http://www.yellow-bricks.com/2013/03/06/why-the-world-needs-software-defined-storage/
http://www.yellow-bricks.com/2013/04/05/software-defined-storage-just-some-random-thought/
http://www.nexenta.com/corp/products/what-is-openstorage/what-is-software-defined-storage
http://cto.vmware.com/2013-predictions-the-year-of-software-defined-storage/
http://virsto.com/blog/the-missing-link-in-software-defined-storage
http://www.nutanix.com/evolution-of-the-data-center/