Tag Archives: AWS

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed, the biggest industry shift to come out of the event from my point of view is that AWS are going in guns blazing on the on-premises world. With the announcement of AWS Outposts, the long-held belief that the public cloud is the panacea for all things became blurred. No other company has pushed such a hard cloud-only message as AWS…and no other company had the power to change the definition of what it is to run cloud services. AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now; however, they have not yet executed on its promise. Azure Stack is seen by many as a white elephant, even though it’s now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point, though, is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and an on-premises platform was the path to hybridity.

AWS is a juggernaut, and it’s my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world, which resulted in VMware Cloud on AWS. It seems AWS have realised that, with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware still dominates…at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It’s catchy, and you can straight away make sense of what it is…however, I decided to look up the official meaning of the word, and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition, as per the Oxford Dictionary, fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate to AWS regions, while also offering the ability to consume that platform as if it was an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market created by shifting IT sentiment.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble; however, AWS is not going anywhere fast. This move does indicate a shift in tactics and means that AWS can penetrate the on-premises market more quickly to extend their empire.

The third definition is also pertinent in the context of what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However, my feeling is that they are not guarding against an attack…they are the attack!

Where does VMware fit in all this?

Given my thoughts above…where does VMware fit into all this? When the announcement was first made on stage I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a platform directly competitive with on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering…though the wording around that was for edge rather than on-premises.

With the dust settled, and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud on AWS Outposts is VMware’s new As-a-Service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don’t care about where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice…the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mind shift

The big shift here from VMware, which began with VMware Cloud on AWS, is a shift towards managed services: a fundamental change in the mindset of the customer and the way in which they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up…for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises infrastructure as well. Overall, Outposts is a win-win for both AWS and VMware…however, the proof will be in the execution and uptake. We won’t know how it all pans out until the product becomes available…apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

AWS re:Invent 2018 – Veeam and N2WS Recap and Thoughts

There was so much to take away from AWS re:Invent last week. In my opinion, having attended a lot of industry events over the past ten or so years, this year’s re:Invent has left the industry with a lot to think about! AWS vigorously defended their position as the number one public cloud destination (in their eyes) while trying to lay a path for future growth by expanding into the true enterprise space. Also, with the announcement of Outposts, they set a path to try and dominate the hybrid world with an on-premises offering.

Instead of writing down my extended thoughts, it’s more consumable to hear Rick Vanover and myself talk about the event from a Veeam perspective in the short embedded video below. I’ve also embedded a video with David Hill and Sebastian Straub covering things from an N2WS perspective, as well as talking about the N2WS-related announcements at re:Invent 2018.

I’ve also posted the Veeam session video here:

AWS re:Invent 2018 Recap – Times…they a̶r̶e̶ have a̶ Changi̶n̶g̶ed!

I wrote this sitting in the Qantas Lounge in Melbourne, waiting for the last leg back to Perth after spending the week in Las Vegas at AWS re:Invent 2018. I had fifteen hours on the LAX to MEL leg, and before that flight took off I struck up a conversation (something I never usually do on flights) with the guy in the seat next to me. He noticed my 2017 AWS re:Invent jumper (which is 100x better than the 2018 version) and asked me if I had attended re:Invent.

It turned out that he worked for a San Francisco based company that wrote middleware integration for Salesforce. After a little bit of small talk, we got into some deep technical discussion about the announcements and about what we did in our day-to-day roles. Though I shouldn’t have been surprised, just as I had never heard of his company, he had never heard of Veeam…ironically, he was from Russia and now working in Melbourne.

The fact he hadn’t heard of Veeam wasn’t in itself the most surprising part…it was the fact that he claimed to be a DevOps engineer, yet had never touched any piece of VMware software or virtualisation infrastructure. His day-to-day was exclusively working with AWS web technologies. He wasn’t young either…maybe early 40s…which to me seemed strange in itself.

He worked exclusively around APIs using AWS API Gateway, CloudFormation and other technologies, but also used Nginx for reverse proxy purposes. That got me thinking that the web application developers of today are far different from those I used to work with in the early 2000s and 2010s. I come from the world of LAMP and .NET application platforms…I stopped working on web and hosting technologies around the time Nginx was becoming popular.

I can still hold a conversation (and we did have a great exchange around how he DevOps’ed his applications) around the base frameworks of applications and the components that go into making a web application work…but they are very different from the web applications I used to architect and support on Windows and Linux.

All In on AWS!

The other interesting thing from the conversation was that his Technical Director mandates the exclusive use of AWS services: nothing outside the service catalog in the AWS Console. That to me was amazing in itself. I started to talk to him about automation and orchestration tools and mentioned that I’d been using Terraform of late…he had never used it himself. He asked me about it, and in this case I was the one telling him how it worked! That at least made me feel somewhat not totally dated and past it!

My takeaway from the conversation, plus what I experienced at re:Invent, was that there is a strong, established sector of the IT industry that AWS has created, nurtured and is now helping to flourish. This isn’t a change-or-die message…this is simply my own realisation that the times have changed, and as a technologist in the industry I owe it to myself to make sure I am aware of how AWS has shifted web and application development away from what I (and, I assume, the majority of those reading this post) perceive to be mainstream.

That said, just as a hybrid approach to infrastructure has solidified as the accepted hosting model for applications, so too will the application world retain a combination of the old and the new. The biggest difference is that, more than ever, these worlds are colliding…and that is something that shouldn’t be ignored!

Veeam’s AWS re:Invent 2018 Session Posted

This week, David Hill and I presented at AWS re:Invent 2018 on what Veeam is offering by way of data protection and availability for native AWS workloads and VMware Cloud on AWS workloads, and on how we are leveraging AWS technologies to offer new features in the upcoming Update 4 release of Backup & Replication 9.5.

For those who were not at AWS re:Invent this week, or who could not attend the session on Wednesday, the video recording has been posted on the official AWS YouTube page.

We had some audio issues at the start which made for some interesting banter between David and myself…but once we got into it we talked about the following:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I wanted to highlight the Cloud Tier section, where I give an overview and quick deep dive into the smarts behind the new repository feature coming in Update 4. The live demo of me using our patented Instant VM Recovery feature to bring up a VM with data residing in Amazon S3 is a great example of the power of this upcoming feature. Not only does it deliver storage efficiencies locally by offloading old data to Object Storage for long-term retention, but it is also intelligent enough to recover quickly and efficiently with its Intelligent Block Recovery.

Veeam at AWS re:Invent 2018

AWS re:Invent 2018 is happening next week, and for the first time Veeam is at the event in a big way! Last year we effectively tested the waters with a small booth, no main session and without the usual event presence that you would expect of Veeam at a VMworld or Microsoft Ignite. This year is a little different: we will be there as Diamond Sponsors of the event, with a lot to share about how Veeam is leveraging AWS technologies to enhance our availability messaging.

We bolstered our native AWS capabilities earlier this year with the acquisition of N2WS, who were already a leader in the protection of AWS workloads, and with the upcoming release of Backup & Replication 9.5 Update 4 we will be further enhancing our ability to not only back up AWS workloads, but also leverage AWS technologies such as S3 to facilitate a change in mindset as to what it is to have a local backup repository. We will also be talking about migration into AWS, and how we are the best data protection choice for VMware Cloud on AWS.

Breakout Session:

At the event, David Hill and I will be presenting a breakout session. This will be on Wednesday at 5:30pm in the Aria Casino, and we are looking forward to deep diving into what’s coming in Update 4, as well as showing off what’s coming in the next release of N2WS as we start to jointly develop solutions between the two companies.

STG206-S – A Deeper Look at How Veeam is Evolving Availability on AWS

Wednesday, Nov 28, 5:30 PM – 6:30 PM – Aria East, Level 1, Joshua 6

Veeam has made significant enhancements to its platform, focusing on the availability of AWS workloads over the past year. Join this technical deep dive where representatives from Veeam demonstrate how the company protects cloud-native workloads on AWS as well as how they back up to and from on-premises environments. They also discuss data protection for VMware Cloud on AWS. Finally, they review the enhancements to Veeam’s Backup and Replication feature set, which now includes cloud mobility to AWS and a cloud archive that leverages Amazon S3 for long-term data retention of backed-up workloads.

In terms of the technologies and solutions that we will be diving into and showing off via some live demos…we will be looking at:

  • The N2WS 2.4 Release
  • Veeam VTL and AWS Storage Gateway
  • Update 4 Cloud Tier
  • Update 4 Cloud Mobility
  • Data Protection for VMware Cloud on AWS

I will also be giving a booth presentation at the Cloudcheckr booth on Tuesday at 10am, which will effectively be a slimmed-down version of the main session happening on the Wednesday.

Booth and Show Floor:

As mentioned, this year we will have a significant presence on the show floor, with two areas to come and see Veeam technologies as well as chat to us about how we are protecting and leveraging AWS and AWS workloads. On the main show floor we will be at booth #1011, which is well positioned next to the GitHub booth. We will also have a second location at the Mirage called the Data Protection Lounge, which will be a place to relax, enjoy a snack and engage in technical discussions with our experts…including myself!

Social Events:

This year we are jointly sponsoring a location for the re:Invent Pub Crawl, which is happening on Tuesday night. Details are below.

Pub Crawl – Veeam | N2WS and VMware
Date & Time: Tuesday, November 27, 6pm – 8pm
Location: Mercato della Pescheria – The Venetian Shoppes

Wrapping Up:

I’m looking forward to the event and, being more than a spectator this year, I’m expecting big things from it. Make sure you come visit us at our booth or at the lounge to check out what has been brewing in Veeam and N2WS R&D over the past twelve months…and don’t forget to attend the session on Wednesday afternoon. I’m excited about some of the new features we will release as part of Update 4…and this session is a chance to see them working and get an understanding of what they will deliver.

If you would like to schedule a meeting with myself or any other member of the Veeam Product Strategy team attending, please reach out.

Automating the Creation of AWS VPC and Subnets for VMware Cloud on AWS

Yesterday I wrote about how to deploy a Single Host SDDC through the VMware Cloud on AWS web console. I mentioned some pre-requisites that were required in order for the deployment to be successful. Part of that is setting up an AWS VPC with networking in place so that the VMC components can be deployed. While it’s not too hard a task to perform through the AWS console, in the spirit of the work I’m doing around automation, I have gotten this done via a Terraform plan.

The max lifetime for a Single Instance deployment is 30 days from creation, but the reality is most people will (or should) be using this to test the waters, and may only want to spin the SDDC up for a couple of hours a day, run some tests and then destroy it. That obviously has its disadvantages as well, the main one being that you have to start from scratch every time. Given the nature of the VMworld session around the automation and orchestration of Veeam and VMC, starting from scratch is not an issue; however, it was desirable to look for efficiencies during the re-deployment.

For those looking to save time and automate parts of the deployment beyond the AWS VPC, there are a number of PowerShell code examples and modules available that, along with the Terraform plan, reduce the time to get a new SDDC firing.

I’m using a combination of the above scripts to deploy a new SDDC once the AWS VPC has been created. The first one actually deploys the SDDC through PowerShell, while the second is a module that allows some interactivity via cmdlets to do things such as export and import firewall rules.

Using Terraform to Create AWS VPC for VMware Cloud on AWS:

The Terraform plan linked here on GitHub does a couple of things:

  • Creates a new VPC
  • Creates a VPC Network
  • Creates three VPC subnets across different Availability Zones
  • Associates the three VPC subnets with the main route table
  • Creates desired security group rules

https://github.com/anthonyspiteri/vmc_vpc_subnet_create
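For reference, here’s a minimal sketch of the sort of HCL involved (resource names, region and CIDRs are placeholders of my own…refer to the GitHub project above for the actual working plan):

    # Minimal sketch: a VPC plus three subnets spread across Availability
    # Zones, each associated with the VPC's main route table.
    provider "aws" {
      region = "us-west-2" # placeholder region
    }

    resource "aws_vpc" "vmc" {
      cidr_block = "10.2.0.0/16" # placeholder CIDR

      tags = {
        Name = "VMC-VPC"
      }
    }

    data "aws_availability_zones" "available" {}

    # One subnet per Availability Zone, since VMC wants at least three.
    resource "aws_subnet" "vmc" {
      count             = 3
      vpc_id            = aws_vpc.vmc.id
      cidr_block        = cidrsubnet(aws_vpc.vmc.cidr_block, 8, count.index)
      availability_zone = data.aws_availability_zones.available.names[count.index]
    }

    # Associate each subnet with the VPC's main route table.
    resource "aws_route_table_association" "vmc" {
      count          = 3
      subnet_id      = aws_subnet.vmc[count.index].id
      route_table_id = aws_vpc.vmc.main_route_table_id
    }

The security group rules are in the full plan…I’ve left them out here for brevity.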

[Note] Even for the Single Instance Node SDDC, it will take about 120 minutes to deploy…so that needs to be factored in when planning the window to work on the instance.

Creating a Single Host SDDC for VMware Cloud on AWS

While preparing for my VMworld session with Michael Cade on automating and orchestrating the deployment of Veeam into VMware Cloud on AWS, we have been testing against the Single Host SDDC that’s been made available for on-demand POCs for those looking to test the waters with VMware Cloud on AWS. The great thing about using the Single Host SDDC is that it’s obviously cheaper to run than the four-node production version, and you can spin up and destroy the instance as many times as you like.

Single Host SDDC is our low-cost gateway into the VMware Cloud on AWS hybrid cloud solution. Typically purchased as a 4-host service, it is the perfect way to test your first workload and leverage the additional capability and flexibility of VMware Cloud on AWS for 30 days. You can seamlessly scale-up to Production SDDC, a 4-host service, at any time during the 30-days and get even more from the world’s leading private cloud provider running on the most popular public cloud platform.

To get started with the Single Host SDDC, you need to head to this page and sign up…you will get an activation email and from there be able to go through the account setup. The big thing to note at the moment is that a US-based credit card is required.

There are a few pre-requisites to getting an SDDC spun up…mainly around VPC networking within AWS. There is a brilliant blog post here that describes the networking that needs to be considered before kicking off a fresh deployment. The official help files are a little less clear on what needs to be put in place from an AWS VPC perspective, but in a nutshell you need:

  • An AWS Account
  • A fresh VPC with VPC networking configured
  • At least three VPC Subnets configured
  • A Management Subnet for the VMware Objects to sit on

Once this has been configured in the AWS Region the SDDC will be deployed into, the process can be started. The first step is to select a region (dictated by the choices made at account creation), then select a deployment type, followed by a name for the SDDC.

The next step is to link an existing AWS account. This is not required at the time of setup; however, it is required to get the most out of the solution. This will go off and launch an AWS CloudFormation template to connect the SDDC to the AWS account. It creates an IAM role to allow communication between the SDDC and AWS.

[Note] I ran into an issue initially where the default region the CloudFormation template was run out of was not set to the region where the SDDC was to be deployed. Make sure that when you click on the Launch button you take note of the AWS region and, where appropriate, change the URL to the correct region.

After a minute or so, the VMware Cloud on AWS Create an SDDC page will automatically refresh, as shown below.

The next step is to select the VPC and the VPC subnets for the raw SDDC components to be deployed into. I ran into a few gotchas with this initially: you need to have the subnets sized as listed in the user guides and the networking post I linked to above, and you also need to make sure you have at least three subnets configured across different AWS Availability Zones within the region. The latter was not clear, but I was told by support that it was required.

If the AWS side of things is not configured correctly you will see this error.

What you should see…all things being equal…is this.

Finally, you need to set the Management Subnet, which is used for the vCenter, hosts, NSX Manager and other VMware components being deployed into the SDDC. There is a default, but it’s important that this does not overlap with any existing networks that you may look to extend the SDDC into.

From here, the SDDC can be deployed by clicking on the Deploy SDDC button.

[Note] Even for the Single Instance Node SDDC, it will take about 120 minutes to deploy, and you cannot cancel the process once it’s started.

Once completed, you can click into the details of the SDDC, which allows you to see all the relevant information relating to it and also configure the networking.

Finally, to access the vCenter you need to configure a Firewall rule to allow web access through the management gateway.

Once completed, you can log in to the vCenter that’s hosted on the VMware Cloud on AWS instance, start to create VMs and have a play around with the environment.

There is a way to automate a lot of what I’ve stepped through above…for that, I’ll go through the tools in another blog post later this week.

References:

Selecting IP Subnets for your SDDC

Video – Protecting AWS and Hybrid Workloads with Veeam and N2WS

Back in April, I was lucky enough to present at the AWS Summit in Singapore. The session was a joint one with Alex Thomson from N2WS on how Veeam and N2WS are protecting native workloads within AWS, and also extending that out to protecting hybrid workloads sitting on-premises, back to AWS or within VMware Cloud on AWS. The session video is embedded below and goes for about 30 minutes.

Alex and I talk about Veeam’s vision for Intelligent Data Management, give an introduction to N2WS, look at VTL solutions for offsite backups, and finish with an introduction to how Veeam works natively with VMware Cloud on AWS.

Veeam has pioneered the market of Availability for the Always On Enterprise by helping enterprises meet recovery time and point objectives (RTPO) of less than 15 minutes on any cloud or hybrid platform. Veeam recently acquired N2WS, a leading provider of cloud native backup and DR solutions providing backup automation and instant recovery for AWS workloads. Come and hear how N2WS is leading the backup and recovery of EC2 instances and native AWS workloads, how Veeam VTL technology leveraging the AWS Storage Gateway offers offsite cloud repositories as well as how Veeam is offering leading availability solutions for VMware Cloud on AWS.

Speakers

– Anthony Spiteri, Global Technologist, Product Strategy, Veeam
– Alexander Thomson, Sales Director EMEA & APAC, N2WS

Using Terraform to Deploy and Configure a Ready to use Backup Repo into an AWS VPC

A month or so ago I wrote a post on deploying Veeam Powered Network into an AWS VPC as a way to extend the VPC network to a remote site, to leverage a Veeam Linux Repository running as an EC2 instance. During the course of deploying that solution I came across a lot of little check boxes and settings that needed to be tweaked in order to get things working. After that, I set myself the goal of trying to automate and orchestrate the deployment end to end.

For an overview of the intended purpose behind the solution, head to the original blog post here. That post was mainly focused on the Veeam PN component; however, I was using that as a mechanism to create a site-to-site connection to allow Veeam Backup & Replication to talk to the other EC2 instance, which was the Veeam Linux Repository.

Terraform by HashiCorp:

In order to automate the deployment into AWS I looked at CloudFormation first…but found the learning curve to be a little steep…so I turned to HashiCorp’s Terraform, which I have been aware of for a number of years but never gotten my hands dirty with. HashiCorp specialise in cloud infrastructure automation, and their provisioning product is called Terraform.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).

Terraform supports a host of providers, and once you wrap your head around the basics and view some example code, provisioning Infrastructure as Code can be achieved with relatively little coding experience…however, as I found out, you need to be careful in this world and not make the same initial mistake I did, as explained in this post.
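To give a feel for what that looks like, here’s about the smallest useful Terraform configuration against the AWS provider (the region and AMI ID are placeholder values of my own):

    # A provider block plus a single resource block is a complete,
    # runnable Terraform configuration.
    provider "aws" {
      region = "ap-southeast-2" # placeholder region
    }

    resource "aws_instance" "example" {
      ami           = "ami-0123456789abcdef0" # placeholder AMI ID
      instance_type = "t2.micro"

      tags = {
        Name = "terraform-example"
      }
    }

From there, terraform init, terraform plan and terraform apply do the work of figuring out what needs to change and making it so.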

Going from Manual to Orchestrated with Automation:

The Terraform AWS provider is what I used to write the code that deploys the required components. Like everything that’s automated, you need to understand the manual process first, and that is where the previous experience came in handy. I knew what the end result was…I just needed to work backwards and make sure that the Terraform provider had all the instructions it needed to orchestrate the build.

The basic flow is below, with a sketch of the repository instance step after the list:

  • Fetch AWS Access Key and Secret
  • Fetch AWS Key Pair
  • Create AWS VPC
    • Configure Networking and Routing for VPC
  • Create CentOS EC2 Instance for Veeam Linux Repo
    • Add new disk and set size
    • Execute configuration script
      • Install PERL modules
  • Create Ubuntu EC2 Instance for Veeam PN
    • Execute configuration script
      • Install VeeamPN modules from repo
  • Login to Veeam PN Web Console and Import Site Configuration.
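As a taste of how those steps translate into HCL, here’s a cut-down sketch of the Linux repository instance (the AMI ID, volume size and script path are placeholders of my own…the real plan is in the GitHub project below):

    # Variables supplied at plan time (see the project's variables file).
    variable "key_pair_name" {}
    variable "private_key_path" {}

    # Sketch: CentOS repo instance with an extra EBS data volume and a
    # remote-exec provisioner that runs the configuration script.
    resource "aws_instance" "veeam_repo" {
      ami           = "ami-0123456789abcdef0" # placeholder CentOS AMI
      instance_type = "t2.medium"
      key_name      = var.key_pair_name

      # Additional disk to hold the backup repository data.
      ebs_block_device {
        device_name = "/dev/sdb"
        volume_size = 250 # GB, placeholder size
      }

      # Run the configuration script (installs the PERL modules the
      # Veeam Linux Repository requires).
      provisioner "remote-exec" {
        script = "scripts/configure_repo.sh" # placeholder path

        connection {
          type        = "ssh"
          user        = "centos"
          private_key = file(var.private_key_path)
          host        = self.public_ip
        }
      }
    }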

I’ve uploaded the code to a GitHub project. An overview and instructions for the project can be found here. I’ve also posted a video to YouTube showing the end-to-end process, which I’ve embedded below (best watched at 2x speed):

In order to get the Terraform plan to work, there are some variables that need modifying in the GitHub project, and you will need to download, install and initialise Terraform. I’m intending to continue to tweak the project and complete the provisioning end to end, including the Veeam PN site configuration part at the end. The remote execution feature of Terraform allows some pretty cool things by way of script initiation.
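As a rough guide, the variables look something like this (the exact names in the project may differ…treat these as illustrative):

    # Illustrative: the AWS provider is wired up to variables you set
    # locally before running the plan.
    variable "access_key" {}
    variable "secret_key" {}
    variable "region" {
      default = "ap-southeast-2" # placeholder default
    }

    provider "aws" {
      region     = var.region
      access_key = var.access_key
      secret_key = var.secret_key
    }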

References:

https://github.com/anthonyspiteri/automation/aws_create_veeamrepo_veeampn

https://www.terraform.io/intro/getting-started/install.html


Quick Look – Backing up AWS Workloads with Cloud Protection Manager from N2WS

Earlier this year Veeam acquired N2WS, following the announcement last year of a technology partnership at VeeamON 2017. The more I tinker with Cloud Protection Manager, the more I understand why we made the acquisition. N2WS was founded in 2012, with their first product shipping in 2013. Purpose-built for AWS, it supports all types of EC2 instances, EBS volumes, RDS, DynamoDB and Redshift as well as AMI creation, and is distributed as an AMI through the AWS Marketplace. The product is easy to deploy and has extended its feature set with the release of 2.3d, announced during VeeamON 2018 a couple of weeks ago.

From the datasheet:

Cloud Protection Manager (CPM) is an enterprise-class backup, recovery, and disaster recovery solution purpose-built for Amazon Web Services EC2 environments. CPM enhances AWS data protection with automated and flexible backup policies, application consistent backups, 1-click instant recovery, and disaster recovery to other AWS region or AWS accounts ensuring cloud resiliency for the largest production AWS environment. By extending and enhancing native AWS capabilities, CPM protects the valuable data and mission-critical applications in the AWS cloud.

In this post, I wanted to show how easy it is to deploy and install Cloud Protection Manager, as well as look at some of the new features in the 2.3d release. I will do a follow-up post going into more detail about how to protect AWS instances and services with CPM.

What’s new with CPM 2.3:

  • Automated backup for Amazon DynamoDB: CPM provides backup and recovery for Amazon DynamoDB; you can now apply existing policies and schedules to back up and restore DynamoDB tables and metadata.
  • RESTful API:  Completely automate backup and recovery operations with the new Cloud Protection Manager API. This feature provides seamless integration between CPM and other applications.
  • Enhanced reporting features: Enhancements include the ability to gather all reports in one tab, export reports as CSV, view both protected and unprotected resources, and new filtering options.

Other new features that come as part of the CPM 2.3 release include full cross-region and cross-account disaster recovery for Aurora databases, enhanced permissions for users, and a fast and efficient onboarding process using CloudFormation’s 1-click template.

Installing, Configuring and Managing CPM:

The process to install Cloud Protection Manager from the AWS Marketplace is seamless and can be done via a couple of different methods, including a 1-Click deployment. The official install guide can be read here. The CPM EC2 instance is deployed into a new or existing VPC configured with a subnet, and must be put into an existing or new Security Group.
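If you’d rather create that Security Group as code than click through the console, a minimal sketch looks something like this (I’m assuming the web console listens on HTTPS/443…the VPC reference and CIDR are placeholders of my own):

    variable "vpc_id" {} # ID of the VPC that CPM is deployed into

    # Security group allowing HTTPS to the CPM web console; lock the
    # ingress CIDR down to your management network.
    resource "aws_security_group" "cpm" {
      name   = "cpm-access"
      vpc_id = var.vpc_id

      ingress {
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["10.0.0.0/16"] # placeholder management CIDR
      }

      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }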

Once deployed you are given the details of the installation.

And you can see it from the AWS Console under EC2 instances. I’ve added a name for the instance just for clarity’s sake.

One thing to note is that there is no public IP assigned to the instance as part of the deployment. You can create a new Elastic IP and attach it to the instance, or you can access the configuration website via its internal IP if you have access to the subnet via some form of VPN or network extension.
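If you go the Elastic IP route, it’s a one-resource job in Terraform (the instance ID below is a placeholder for the deployed CPM instance):

    # Allocate an Elastic IP and associate it with the CPM instance.
    resource "aws_eip" "cpm" {
      vpc      = true
      instance = "i-0123456789abcdef0" # placeholder CPM instance ID
    }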

There is an initial configuration wizard that guides you through the registration and setup of CPM. Note that you do need internet connectivity to complete the process, otherwise you will get this error.

The final step allows you to configure a volume for CPM use. With that, the wizard finalises the setup and you can log into Cloud Protection Manager.

Conclusion: 

The ability to back up AWS services natively has its advantages over traditional methods such as agents. Cloud Protection Manager from N2WS can be installed and ready to go within five minutes. In the next post, I’ll walk through the CPM interface and show how you back up and recover AWS instances and services.

References:

https://n2ws.com/cpm-install-guide

https://support.n2ws.com/portal/kb/articles/release-notes-for-the-latest-v2-3-x-cpm-release
