Tag Archives: AWS

Configuring Amazon S3 Access from VMware Cloud on AWS through an S3 Endpoint

When looking at how to configure networking between a VMware Cloud on AWS SDDC and an Amazon VPC, there is a little bit to grasp in terms of what needs to be done to achieve traffic flow between the SDDC and the rest of the world.

As an example, if you want to connect to S3, the default configuration routes traffic through the Amazon ENI (Elastic Network Interface), which means that unless it's configured correctly, connectivity to Amazon S3 will fail. Brian Gaff has a really good series of posts on Networking and Security Groups when working on VMware Cloud on AWS that are worth a read to get a deeper understanding of VMC to AWS networking.

There is a way to change this behaviour to make connectivity to Amazon S3 connect via the SDDCs Internet Gateway. This is done through the VMware Cloud Portal by going to the Networking section of the relevant SDDC.

Doing this, while easy enough, means that you lose a lot of the benefits that passing traffic through the ENI provides: a high-bandwidth, low-latency connection between the VPC and the SDDC which also provides free egress. In the case of S3 and the Veeam Cloud Tier, it means more optimal connectivity between a Veeam Backup & Replication instance hosted in the SDDC and Amazon S3.

To allow communication between the SDDC and Amazon S3 over the ENI the following needs to be actioned.

Create Endpoint:

The first step is to go into the AWS Console, open the VPC that's connected to the VMC service and create a new Endpoint for S3 as shown below, making sure you select the correct Route Table.
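If you prefer to script this step, the same Gateway endpoint can be created with the AWS CLI. Below is a minimal sketch; the VPC ID, route table ID and region are placeholder values to swap for your own.

    # Create a Gateway endpoint for S3 in the VPC connected to the SDDC.
    # vpc-0abc123, rtb-0abc123 and us-west-2 are placeholders.
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0abc123 \
      --vpc-endpoint-type Gateway \
      --service-name com.amazonaws.us-west-2.s3 \
      --route-table-ids rtb-0abc123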

Configure Security Group:

Next is to configure the Security Group associated with your VPC to allow traffic to the logical network or networks. It's a basic HTTPS inbound rule where the source is the SDDC network or networks you want access from.
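This can also be done from the CLI if preferred. A hedged example, assuming a Security Group ID of sg-0abc123 and an SDDC logical network of 192.168.1.0/24 (both placeholders):

    # Allow HTTPS in from the SDDC logical network.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0abc123 \
      --protocol tcp \
      --port 443 \
      --cidr 192.168.1.0/24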

Create Compute Gateway Firewall Rule:

The final step is to configure a firewall rule on the SDDC Compute Gateway to allow HTTPS traffic to the Amazon VPC from the network or networks you want access to Amazon S3 from.

That’s pretty much it! After that, you should be able to access Amazon S3 over the ENI and get all the benefits that delivers.

References:

https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-B501FA3C-EAF9-4005-AC72-155C3F592281.html

How to Copy Amazon S3 Buckets with AWS CLI

I am doing some work on validated restore scenarios using the new Veeam Cloud Tier that is backed by an Object Storage Repository pointing at an Amazon S3 Bucket. So that I was not messing with the live data, I wanted a way to copy and access the objects from another bucket or folder. There is no option at the moment to achieve this via the AWS Console; however, it can be done via the AWS CLI.

The first step was to ensure I had the AWS CLI installed on my MBP and that it was at the latest version:
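As a rough sketch, assuming a pip-based install as per the macOS install guide linked in the references (a Homebrew install would use brew upgrade awscli instead):

    # Check the currently installed version.
    aws --version

    # Upgrade to the latest release (pip-based install).
    pip3 install --upgrade awscli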

For the first part of the copy process, I cheated and created a new Bucket from the AWS Console that was based on the one I wanted to copy.

The next step is to make sure that the AWS CLI is configured with the correct AWS Access and Secret Keys. Once done, the command to copy/sync buckets is a simple one.
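Something like the following, with the bucket names as placeholders:

    # One-time setup of the access key, secret key and default region.
    aws configure

    # Copy/sync everything from the source bucket to the destination bucket.
    aws s3 sync s3://source-bucket s3://destination-bucket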

Obviously the time to complete the operation will depend on the number of Objects in the Bucket and whether it's cross-region or local. It took about 4 hours to copy across ~50GB of data from US-EAST-2 to US-WEST-2 going at about 4MB/s. By default, progress is shown on the screen.

Once the first pass was complete, I ran the same command again, which this time looks for differences between the source and destination and only syncs the differences. You can run the command below to view the Total Objects and Total Size of both buckets for comparison.
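Run against each bucket in turn, something like this prints the summary totals (bucket name again a placeholder):

    # List all objects and print Total Objects / Total Size at the end.
    aws s3 ls s3://source-bucket --recursive --human-readable --summarize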

That is it! Pretty simple process. I'll blog about the actual reason behind the Veeam Cloud Tier requirement and put this into action at a later date!

References:

https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html

https://aws.amazon.com/premiumsupport/knowledge-center/move-objects-s3-bucket

What Service Providers Need to Think About in 2019 and Beyond…

We are entering interesting times in the cloud space! We should no longer be talking about the cloud as a destination and we shouldn't be talking about how cloud can transform business…those days are over! We have entered the next level of adoption whereby the cloud as a delivery framework has become mainstream. You only have to look at what AWS announced last year at re:Invent with its Outposts offering. The rise of automation and orchestration in mainstream IT has also meant that cloud can be consumed in a more structured and repeatable way.

To that end…where does it leave traditional Service Providers who have for years offered Infrastructure as a Service as the core of their offerings?

Last year I wrote a post on how the VM shouldn't be the base unit of measurement for cloud…and even with some of the happenings since then, I remain convinced that Service Providers can continue to exist and thrive through offering value around the VM construct. Backup and DR as a service remain core to this, and there is ample thirst out there in the market for customers wanting to consume services from cloud providers that are not the giant hyper-scalers.

Almost all technology vendors are succumbing to the reality that they need to extend their own offerings to include public cloud services. It is what the market is demanding…and it's what the likes of AWS, Azure, IBM and GCP are pushing for. The backup vendor space especially has had to extend technologies to consume public cloud services such as Amazon S3, Glacier or Azure Blob as targets for offsite backups. Veeam is upping the ante with our Update 4 release of Veeam Backup & Replication 9.5, which includes Cloud Tier to object storage and additional Direct Restore capabilities to Azure Stack and Amazon EC2.

With these additional public cloud features, Service Providers have a right to feel somewhat under threat. However we have seen this before (Office 365 for Hosted Exchange as an example) and the direction that Service Providers need to take is to continue to develop offerings based on vendor technologies and continue to add value to the relationship that they have with their clients. I wrote a long time ago when VMware first announced vCloud Air that people tend to buy based on relationship…and there is no more trusted relationship than that of the Service Provider.

With that, there is no doubting that clients will want to look at using a combination of services from a number of different providers. From where I stand, the days of clients going all in with one provider for all services are gone. This is an opportunity for Service Providers to be the broker. This isn't a new concept, and plenty of Service Providers have thought about how they themselves leverage the Public Cloud to not only augment their own backend services, but make them consumable for their clients via their own portals or systems.

With all that in mind…in my opinion, there are five main areas where Service Providers need to be looking in 2019 and beyond:

  1. Networking is central to this, and the most successful Service Providers have already worked this out and offer a number of different networking services. It's imperative that Service Providers offer a way for clients to go beyond their own networks and have the option to connect out to other cloud networks. Telcos and other carriers have built amazing technology frameworks based on APIs to consume networking in ways that mean extending a network shouldn't be thought of as a complex undertaking anymore.
  2. Backup, Replication and Recovery is something that Service Providers have offered for a long time now; however, there is more and more competition in this area today in the form of built-in protection at the application and hardware level. Where providers have traditionally excelled is at the VM level. Again, that will remain the base unit of measurement for cloud moving forward, but Service Providers need to enhance their BaaS and R/DRaaS offerings for them to remain competitive. Leveraging public cloud to gain economies of scale is one way to enhance those offerings.
  3. Gateway Services are a great way to lock in customers. Gateway services are typically those which are low effort for both the Service Provider and client alike. Take the example of Veeam's Cloud Connect Backup. It's a simple service to set up at both ends and works without too much hassle…but there is power for the Service Provider in the data that's being transferred into their network. From there, auxiliary services can be offered such as recovery or other business continuity services. It also leads into discussions about Replication services, which can be worked into the total service offering as well.
  4. Managed Services is the one thing that the hyper-scalers can't match Service Providers in, and it's the one thing that will keep all Service Providers relevant. I've mentioned already the trusted advisor thought process in the sales cycle. This is all about continuing to offer value around great vendor technologies, with the aim of securing the Service Provider to client relationship.
  5. Developing a Channel is central to being able to scale without the need to add resources to the business. Again, the most successful Service Providers all have a Channel/Partner program in place, and it's the best way to extend that managed service, trusted provider reach. I've seen a number of providers unable to execute a successful channel play due to poor execution; however, if done right it's one way to extend that reach to more clients…staying relevant in the wake of the hyper-scalers.

This isn't a new Differentiate or Die message…it's one of ensuring that Service Providers continue to evolve with the market and with industry expectations. That is the only way to thrive and survive!

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed…the biggest industry shift to come out of the event from my point of view was the fact that AWS are going full guns blazing into the on-premises world. With the announcement of AWS Outposts, the long-held belief that the public cloud is the panacea of all things became blurred. No one company has pushed such a hard cloud-only message as AWS…no one company had the power to change the definition of what it is to run cloud services…AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now; however, they have not executed on the promise of it yet. Azure Stack is seen by many as a white elephant even though it's now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point though is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and on-premises platform was the path to hybridity.

AWS is a Juggernaut, and it's my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world. This resulted in VMware Cloud on AWS. It seems like AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware still dominates…at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It's catchy and straight away you can make sense of what it is…however, I decided to look up the official meaning of the word…and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition as per the Oxford Dictionary fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate from AWS Regions, while also offering the ability to consume that compute platform like it was an AWS Region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market that shifting IT sentiment has been craving.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble, however AWS is not going anywhere fast. This move does however indicate a shift in tactics and means that AWS can penetrate the on-premises market quicker to extend their empire.

The third definition is also pertinent in context to what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However my feeling is that they are not guarding against an attack…they are the attack!

Where does VMware fit in all this?

Given my thoughts above…where does VMware fit into all this? At first, when the announcement was made on stage, I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a directly competitive platform to on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering…though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud in AWS Outposts is VMware’s new As-a-Service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don't care about where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice…the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mind Shift

The big shift here from VMware that began with VMware Cloud on AWS is a shift towards managed services. A fundamental change in the mindset of the customer in the way in which they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up…for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises as well. Overall, Outposts is a win-win for both AWS and VMware…however, the proof will be in the execution and uptake. We won't know how it all pans out until the product becomes available…apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

AWS re:Invent 2018 – Veeam and N2WS Recap and Thoughts

There was so much to take away from AWS re:Invent last week. In my opinion, having attended a lot of industry events over the past ten or so years, this year's re:Invent has left the industry with a lot to think about! AWS vigorously defended their position as the number one Public Cloud destination (in their eyes) while trying to lay a path for future growth by expanding into the true enterprise space. Also, the announcement of Outposts set a path to try and dominate the hybrid world with an on-premises offering.

Instead of writing down my extended thoughts, it's more consumable to hear Rick Vanover and myself talk about the event from a Veeam perspective in the short embedded video below. I've also embedded a video with David Hill and Sebastian Straub covering things from an N2WS perspective, as well as talking about the N2WS-related announcements at re:Invent 2018.

I’ve also posted the Veeam session video here:

AWS re:Invent 2018 Recap – Times…they a̶r̶e̶ have a̶ Changi̶n̶g̶ed!

I wrote this sitting in the Qantas Lounge in Melbourne waiting for the last leg back to Perth after spending the week in Las Vegas at AWS re:Invent 2018. I had fifteen hours on the LAX to MEL leg, and before that flight took off I struck up a conversation (something I never usually do on flights) with a guy in the seat next to me. He noticed my 2017 AWS re:Invent jumper (which is 100x better than the 2018 version) and asked me if I had attended re:Invent.

It ended up that he worked for a San Francisco-based company that wrote middleware integration for Salesforce. After a little bit of small talk, we got into some deep technical discussions about the announcements and around what we did in our day to day roles. Though I shouldn't have been surprised, just as I had never heard of his company, he had never heard of Veeam…ironically he was from Russia and now working in Melbourne.

The fact he hadn't heard of Veeam wasn't in itself the most surprising part…it was the fact that he claimed to be a DevOps engineer but had never touched any piece of VMware software or virtualisation infrastructure. His day to day was exclusively working with AWS web technologies. He wasn't young…maybe early 40s…this to me seemed strange in itself.

He worked exclusively around APIs using AWS API Gateway, CloudFormation and other technologies, but also used Nginx for reverse proxy purposes. That got me thinking that the web application developers of today are far, far different to those that I used to work with in the early 2000s and 2010s. I come from the world of LAMP and .NET application platforms…I stopped working on web and hosting technologies around the time Nginx was becoming popular.

I can still hold a conversation (and we did have a great exchange around how he DevOp'ed his applications) around the base frameworks of applications and the components that go into making a web application work…but they are very, very different from the web applications I used to architect and support on Windows and Linux.

All In on AWS!

The other interesting thing from the conversation was that his Technical Director mandates the exclusive use of AWS services. Nothing outside of the service catalog on the AWS Console. That to me was amazing in itself. I started to talk to him about automation and orchestration tools and I mentioned that I'd been using Terraform of late…he had never used it himself. He asked me about it, and in this case I was the one telling him how it worked! That at least made me feel somewhat not totally dated and past it!

My takeaway from the conversation, plus what I experienced at re:Invent, was that there is a strong, established sector of the IT industry that AWS has created, nurtured and is now helping to flourish. This isn't a change or die message…this is simply my own realisation that the times have changed, and as a technologist in the industry I owe it to myself to make sure I am aware of how AWS has shifted web and application development from what I (and, I assume, the majority of those reading this post) perceive to be mainstream.

That said, just like a hybrid approach to infrastructure has solidified as the accepted hosting model for applications, so too in the application world will there be a combination of the old and new. The biggest difference is that more than ever…these worlds are colliding…and that is something that shouldn't be ignored!

Veeam’s AWS re:Invent 2018 Session Posted

This week, myself and David Hill presented at AWS re:Invent 2018 around what Veeam is offering by way of data protection and availability for native AWS workloads and VMware Cloud on AWS workloads, and how we are leveraging AWS technologies to offer new features in the upcoming Update 4 release of Backup & Replication 9.5.

For those that were not at AWS re:Invent this week, or for those who could not attend the session on Wednesday, the video recording has been posted on the official AWS YouTube page.

We had some audio issues at the start which made for some interesting banter between David and myself…but once we got into it we talked about the following:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I wanted to highlight the Cloud Tier section, where I give an overview and a quick deep dive into the smarts behind the new repository feature coming in Update 4. The live demo of me using our patented Instant VM Recovery feature to bring up a VM with data residing in Amazon S3 is a great example of the power of this upcoming feature. Not only does it allow storage efficiencies locally by offloading old data to Object Storage for long-term retention, but it is also intelligent enough to recover quickly and efficiently with its Intelligent Block Recovery.

Veeam at AWS re:Invent 2018

AWS re:Invent 2018 is happening next week, and for the first time Veeam is at the event in a big way! Last year, we effectively tested the waters with a small booth, no main session and without the usual event presence that you would expect of Veeam at a VMworld or Microsoft Ignite. This year is a little different: we will be there as Diamond Sponsors of the event and with a lot to share in regards to how Veeam is leveraging AWS technologies to enhance our availability messaging.

We bolstered our native AWS capabilities earlier this year with the acquisition of N2WS, who were already a leader in the protection of AWS workloads, and with the upcoming release of Backup & Replication 9.5 Update 4 we will be further enhancing our ability to not only back up AWS workloads, but also leverage AWS technologies such as S3 to facilitate a change in mindset as to what it is to have a local backup repository. We will also be talking about migration into AWS and how we are the best data protection choice for VMware Cloud on AWS.

Breakout Session:

At the event we will have a breakout session which myself and David Hill will be presenting. This will be on Wednesday at 5:30pm in the Aria Casino and we are looking forward to deep diving into what’s coming in Update 4 as well as showing off what’s coming in the next release of N2WS as we start to jointly develop solutions between the two companies.

STG206-S – A Deeper Look at How Veeam is Evolving Availability on AWS

Wednesday, Nov 28, 5:30 PM – 6:30 PM – Aria East, Level 1, Joshua 6

Veeam has made significant enhancements to its platform, focusing on the availability of AWS workloads over the past year. Join this technical deep dive where representatives from Veeam demonstrate how the company protects cloud-native workloads on AWS as well as how they back up to and from on-premises environments. They also discuss data protection for VMware Cloud on AWS. Finally, they review the enhancements to Veeam’s Backup and Replication feature set, which now includes cloud mobility to AWS and a cloud archive that leverages Amazon S3 for long-term data retention of backed-up workloads.

In terms of the technologies and solutions that we will be diving into and showing off via some live demos…we will be looking at:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I will also be giving a Booth Presentation at the CloudCheckr booth on Tuesday at 10am, which will effectively be a slimmed-down version of the main session happening on the Wednesday.

Booth and Show Floor:

As mentioned, this year we will have a significant presence on the show floor with two areas to come and see Veeam technologies, as well as chat to us about how we are protecting and leveraging AWS and AWS workloads. On the main show floor we will be at booth #1011, which is well positioned next to the GitHub booth, and we will also have a second location at the Mirage called the Data Protection Lounge, which will be a place to relax, enjoy a snack and engage in technical discussions with our experts…including myself!

Social Events:

This year we are jointly sponsoring a location for the re:Invent Pub Crawl, which is happening on Tuesday night. Details are below:

Pub Crawl – Veeam | N2WS and VMware
Date & Time: Tuesday, November 27, 6pm – 8pm
Location: Mercato della Pescheria – The Venetian Shoppes

Wrapping Up:

I'm looking forward to the event, and being more than a spectator this year, I'm expecting big things from it. Make sure you come visit us at our booth or at the lounge to check out what has been brewing from Veeam and N2WS R&D over the past twelve months…and also don't forget to attend the session on Wednesday afternoon. I'm excited about some of the new features we will release as part of Update 4…and this session is a chance to see them working and get an understanding as to what they will be delivering.

If you would like to schedule a meeting with myself or any other member of the Veeam Product Strategy team attending, please reach out.

Automating the Creation of AWS VPC and Subnets for VMware Cloud on AWS

Yesterday I wrote about how to deploy a Single Host SDDC through the VMware Cloud on AWS web console. I mentioned some pre-requisites that were required in order for the deployment to be successful. Part of that is to set up an AWS VPC with networking in place so that the VMC components can be deployed. While it's not too hard a task to perform through the AWS console, in the spirit of the work I'm doing around automation I have gotten this done via a Terraform plan.

The max lifetime for a Single Instance deployment is 30 days from creation, but the reality is most people will/should be using this to test the waters and may only want to spin the SDDC up for a couple of hours a day, run some tests and then destroy it. That obviously has its disadvantages as well, the main one being that you have to start from scratch every time. Given the nature of the VMworld session around the automation and orchestration of Veeam and VMC, starting from scratch is not an issue; however, it was desirable to look for efficiencies during the re-deployment.

For those looking to save time and automate parts of the deployment beyond the AWS VPC, there are a number of PowerShell code examples and modules available that, along with the Terraform plan, reduce the time to get a new SDDC firing.

I'm using a combination of the above scripts to deploy a new SDDC once the AWS VPC has been created. The first one actually deploys the SDDC through PowerShell, while the second one is a module that allows some interactivity via cmdlets to do things such as export and import firewall rules.

Using Terraform to Create AWS VPC for VMware Cloud on AWS:

The Terraform plan linked here on GitHub does a few things:

  • Creates a new VPC
  • Creates a VPC Network
  • Creates three VPC subnets across different Availability Zones
  • Associates the three VPC subnets to the main route table
  • Creates desired security group rules

https://github.com/anthonyspiteri/vmc_vpc_subnet_create
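Running the plan is the standard Terraform workflow. A quick sketch, assuming AWS credentials are already configured in the environment:

    # Clone the plan and initialise the provider.
    git clone https://github.com/anthonyspiteri/vmc_vpc_subnet_create.git
    cd vmc_vpc_subnet_create
    terraform init

    # Review what will be created, then build the VPC and subnets.
    terraform plan
    terraform apply

    # Tear everything down again between test runs.
    terraform destroy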

[Note] Even for the Single Instance Node SDDC it will take about 120 minutes to deploy…so that needs to be factored into the window you have to work on the instance.

Creating a Single Host SDDC for VMware Cloud on AWS

While preparing for my VMworld session with Michael Cade on automating and orchestrating the deployment of Veeam into VMware Cloud on AWS, we have been testing against the Single Host SDDC that's been made available for on-demand POCs for those looking to test the waters on VMware Cloud on AWS. The great thing about using the Single Host SDDC is that it's obviously cheaper to run than the four-host production version, but also that you can spin it up and destroy the instance as many times as you like.

Single Host SDDC is our low-cost gateway into the VMware Cloud on AWS hybrid cloud solution. Typically purchased as a 4-host service, it is the perfect way to test your first workload and leverage the additional capability and flexibility of VMware Cloud on AWS for 30 days. You can seamlessly scale-up to Production SDDC, a 4-host service, at any time during the 30-days and get even more from the world’s leading private cloud provider running on the most popular public cloud platform.

To get started with the Single Host SDDC, you need to head to this page and sign up…you will get an Activation email and from there be able to go through the account setup. The big thing to note at the moment is that a US-based credit card is required.

There are a few pre-requisites before getting an SDDC spun up…mainly around VPC networking within AWS. There is a brilliant blog post here that describes the networking that needs to be considered before kicking off a fresh deployment. The official help files are a little less clear on what needs to be put into place from an AWS VPC perspective, but in a nutshell you need:

  • An AWS Account
  • A fresh VPC with VPC networking configured
  • At least three VPC Subnets configured
  • A Management Subnet for the VMware Objects to sit on
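As a rough CLI sketch of those prerequisites (the CIDR ranges, region and VPC ID below are placeholder assumptions; the Terraform plan from the automation post does the same thing in a repeatable way):

    # Create the VPC (CIDR sized per the VMC guidance).
    aws ec2 create-vpc --cidr-block 10.0.0.0/16

    # Create three subnets across different Availability Zones,
    # using the VPC ID returned above (vpc-0abc123 is a placeholder).
    aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24 --availability-zone us-west-2a
    aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.2.0/24 --availability-zone us-west-2b
    aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.3.0/24 --availability-zone us-west-2c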

Once this has been configured in the AWS Region the SDDC will be deployed into, the process can be started. The first step is to select a region (this is dictated by the choices made at account creation), then select a deployment type, followed by a name for the SDDC.

The next step is to link an existing AWS account. This is not required at the time of setup; however, it is required to get the most out of the solution. This will go off and launch an AWS CloudFormation template to connect the SDDC to the AWS account. It creates an IAM role to allow communication between the SDDC and AWS.

[Note] I ran into an issue initially where the default region the CloudFormation template was to be run out of was not set to the region where the SDDC was to be deployed into. Make sure that when you click on the Launch button you take note of the AWS Region and, where appropriate, change the URL to the correct region.

After a minute or so, the VMware Cloud on AWS Create an SDDC page will automatically refresh, as shown below.

The next step is to select the VPC and the VPC subnets for the raw SDDC components to be deployed into. I ran into a few gotchas on this initially. You need to have the subnets sized as listed in the user guides and the networking post I linked to above, but you also need to make sure you have at least three subnets configured across different AWS Availability Zones within the region. This was not clear, but I was told by support that it was required.

If the AWS side of things is not configured correctly you will see this error.

What you should see, all things being equal, is this.

Finally you need to set the Management Subnet which is used for the vCenter, Hosts, NSX Manager and other VMware components being deployed into the SDDC. There is a default, but it’s important to consider that this should not overlap with any existing networks that you may look to extend the SDDC into.

From here, the SDDC can be deployed by clicking on the Deploy SDDC button.

[Note] Even for the Single Instance Node SDDC it will take about 120 minutes to deploy, and you cannot cancel the process once it's started.

Once completed, you can click into the details of the SDDC, which allows you to see all the relevant information relating to it and also allows you to configure the networking.

Finally, to access the vCenter you need to configure a Firewall rule to allow web access through the management gateway.

Once completed, you can log in to the vCenter that's hosted on the VMware Cloud on AWS instance and start to create VMs and have a play around with the environment.

There is a way to automate a lot of what I've stepped through above…for that, I'll go through the tools in another blog post later this week.

References:

Selecting IP Subnets for your SDDC
