Author Archives: Anthony Spiteri

VMworld 2019 Review – Project Pacific is a Stroke of Kubernetes Genius… but not without a catch!

Kubernetes Kubernetes, Kubernetes… say Kubernetes one more time… I dare you!

If it wasn’t clear what the key takeaway from VMworld 2019 was last week in San Francisco then I’ll repeat it one more time… Kubernetes! It was something I predicted prior to the event in my session breakdown. And all jokes aside, given the number of times we heard Kubernetes mentioned last week, we know that VMware signalled their intent to jump on the Kubernetes freight train and ride it all the way.

When you think about it, the announcement of Project Pacific isn’t a surprise. Apart from it being an obvious path to take to ensure VMware remains viable with IT Operations (IT Ops) and Developers (Devs) holistically, the more I learned about what it actually does under the hood, the more I came to believe that it is a stroke of genius. If it delivers technically on its promise of full ESXi and Kubernetes integration into the one vSphere platform, then it will be a huge success.

The whole premise of Project Pacific is to use Kubernetes to manage workloads via declarative specifications, essentially allowing IT Ops and Devs to tell vSphere what they want and have it deploy and manage the infrastructure that ultimately serves as a platform for an application. This is all about the application! Abstracting all infrastructure and most of the platform to make the application work. We are now looking at a platform that controls all aspects of that lifecycle end to end.

By redesigning vSphere and embedding Kubernetes into its core, VMware are able to take advantage of the things that make Kubernetes popular in today’s cloud native world. A Kubernetes Namespace is effectively a tenancy in Kubernetes that manages applications holistically, and it’s at the namespace level where policies are applied. QoS, security, availability, storage, networking and access controls can all be applied top down from the Namespace. This gives IT Ops control, while still allowing Devs to be agile.
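This mirrors how policy already attaches to a Namespace in stock Kubernetes today. As a rough sketch (the names below are my own, for illustration only), a tenancy-style namespace with a resource quota and scoped access control looks like this:

```shell
# Create a tenancy-style namespace (stock Kubernetes; Project Pacific extends
# this same construct down to vSphere resources)
kubectl create namespace tenant-a

# Resource/QoS style policy: cap what the whole namespace can consume
kubectl create quota tenant-a-quota --namespace=tenant-a \
  --hard=requests.cpu=8,requests.memory=16Gi,pods=50

# Access control: give the dev team edit rights inside this namespace only
kubectl create rolebinding tenant-a-devs --namespace=tenant-a \
  --clusterrole=edit --group=tenant-a-developers
```

Everything created inside tenant-a then inherits those limits and permissions, which is exactly the top-down model being described.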

I see this construct as similar to what vCloud Director offers by way of a Virtual Datacenter, with vApps used as the container for the VM workloads… in truth, the way in which vCD abstracted vSphere resources into tenancies and had policies applied was maybe ahead of its time?

DevOps Separation:

DevOps has been a push in our industry for the last few years and the pressure to be a DevOps practitioner is huge. The reality is that the two disciplines have fundamentally different approaches to each other’s lines of work. This is why it was great to see VMware going out of their way to make the distinction between IT Ops and Devs.

Dev and IT Ops collaboration is paramount in today’s IT world, and with Project Pacific, when a Dev looks at the vSphere platform they see Kubernetes. When an IT Ops person looks at vSphere they still see vSphere and ESXi. This allows for integrated self service and allows more speed, with control, to deploy and manage the infrastructure and platforms that run applications.

Consuming Virtual Machines as Containers and Extensibility:

Kubernetes was described as a Platform Platform… meaning that you can run almost anything in Kubernetes as long as it’s declared. The above image shows a holistic application running in Project Pacific. The application is a mix of Kubernetes containers, VMs and other declared pieces… all of which can be controlled through vSphere and live under that single Namespace.

When you log into the vSphere Console you can see a Kubernetes cluster in vSphere, see the pods and action on them as first class citizens. vSphere Native Pods are an optimized runtime… apparently more optimized than bare metal… 8% faster than bare metal, as we saw in the keynote on Monday. This is achievable because CPU virtualization has almost zero cost today. VMware has taken advantage of the advanced ESXi scheduler, which handles operations across NUMA nodes, along with the ability to strip out what is not needed when running containers on VMs, so that there is an optimal runtime for workloads.

vSphere will have two APIs with Project Pacific. The traditional vSphere API that has been refined over the years will remain, and then there will be the Kubernetes API. There is also the ability to create infrastructure with kubectl. Each ESXi cluster becomes a Kubernetes cluster. The work done with vSphere Integrated Containers has not gone to waste and has been used in this new integrated platform.
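To picture what “creating infrastructure with kubectl” could look like, here is a purely hypothetical manifest. Project Pacific’s actual resource names and schema had not been published at the time of writing, so treat every field below as illustrative only:

```shell
# Hypothetical illustration only: the real Project Pacific apiVersion, kind
# and field names were not public at the time of writing.
kubectl apply -f - <<'EOF'
apiVersion: vmware.com/v1alpha1      # hypothetical
kind: VirtualMachine                 # hypothetical
metadata:
  name: app-db-vm
  namespace: tenant-a
spec:
  imageName: centos-7                # hypothetical fields
  className: small
  storageClass: vsan-default
EOF
```

The point is the workflow, not the schema: a VM becomes just another declared object that vSphere reconciles, the same way it would a pod.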

Pods and VMs live side by side and are declared through Kubernetes. All VMs can be stored in the container registry, and the vulnerability scanning, encryption and signing capabilities that exist in the container ecosystem can be leveraged at a container level and applied to VMs.

There is obviously a lot more to Project Pacific, and there is a great presentation up on YouTube from Tech Field Day Extra at VMworld 2019 which I have embedded below. In my opinion, they are a must watch for all working in and around the VMware ecosystem.

The Catch!

So what is the catch? With 70 million workloads across 500,000+ customers, VMware is betting that with this functionality in place, the current movement of refactoring workloads to take advantage of cloud native constructs like containers, serverless or Kubernetes doesn’t need to happen… existing workloads instantly become first class citizens on Kubernetes. Interesting theory.

Having been digging into the complex and very broad container world for a while now, and only just realising how high it has become on most IT agendas, my current belief is that the world of Kubernetes and containers is better placed to be consumed on public clouds. The scale and immediacy of Kubernetes platforms on Google, Azure or AWS, without the need to still procure hardware and install software, means that that model of consumption will retain an advantage over something like Project Pacific.

The stroke of genius, as mentioned, is that by combining “traditional” workloads with Kubernetes as the control plane within vSphere, the single, declarative, self service experience it potentially offers might stop IT Operations from moving to public clouds… but is that enough to stop the developers forcing their hands?

It is going to be very interesting to see this in action and how well it is ultimately received!

More on Project Pacific

The videos below give a good level of technical background into Project Pacific, Frank also has a good introductory post here, and Kit Colbert’s VMworld session is linked in the references.

References:

https://videos.vmworld.com/global/2019/videoplayer/28407

VMworld 2019 – Top Session Picks

VMworld 2019 kicks off tomorrow (it is already Saturday here) and as I am just about to embark on the 20+ hour journey from PER to SFO, I thought it was better late than never to share my top session picks. With sessions now available online, it doesn’t really matter that the actual sessions are fully booked. The theme this year is “Make Your Mark”, which falls in line with themes of past VMworld events. It’s all about VMware empowering its customers to do great things with its technology.

I’ve already given a session breakdown and analysis for this year’s event… and as a recap, here are some of the keyword numbers relating to what tech is in what session.

Out of all that, and the 1,348 sessions currently in the catalog, I have chosen the list below as my top session picks.

  • vSphere HA and DRS Hybrid Cloud Deep Dive [HBI2186BU]
  • 60 Minutes of Non-Uniform Memory Architecture [HBI2278BU]
  • vCloud Director.Next : Deliver Cloud Services on Any VMware Endpoint [HBI2452BU]
  • Why Cloud Providers Choose VMware vCloud Director as Their Cloud Platform [HBI1453PU]
  • VMware Cloud on AWS: Advanced Automation Techniques [HBI1463BU]
  • The Future of vSphere: What you Need to Know [HBI4937BU]
  • NSX-T for Service Providers [MTE6105U]
  • Kubernetes Networking with NSX-T [MTE6104U]
  • vSAN Best Practices [HCI3450BU]
  • Deconstructing vSAN: A Deep Dive into the internals of vSAN [HCI1342BU]
  • VMware in Any Cloud: Introducing Microsoft Azure and Google Cloud VMware Solutions [HBI4446BU]

I also wanted to again call out the Veeam sessions.

  • Backups are just the start! Enhanced Data Mobility with Veeam [HBI3535BUS]
  • Enhancing Data Protection for vSphere with What’s Coming from Veeam [HBI3532BUS]

A lot of the sessions above relate to my ongoing interest in the service provider world and my continued passion for vCloud Director, NSX and vSAN as core VMware technologies. I also think the first two on the list are important because, in this day of instant (gratification) services, we still need to be mindful of what is happening underneath the surface… it’s not just a case of some computer running some workload somewhere!

All in all it should be a great week in SFO and looking forward to the event… now to finish packing and get to the airport!

Veeam @VMworld 2019 Edition…

VMworld 2019 is now only a couple of days away, and I can’t wait to return to San Francisco for what will be my seventh VMworld and third with Veeam. It has again been an interesting year since the last VMworld, and the industry has shifted a little when it comes to the backup and recovery market. Data management and data analytics have become the new hot topics, and lots of vendors have jumped onto the messaging of data growth at more than exponential rates.

VMware still have a lot to say about where and how that data is processed and stored!

VMworld is still a destination event and Veeam recognises VMware’s continued influence in the IT industry by continuing to support VMworld. The ecosystem that VMware has built over the past ten to fifteen years is significant and has only been strengthened by the extension of their technologies to the public cloud.

Veeam continues to build out our own strong ecosystem backed by a software first, hardware agnostic platform, which results in the greatest flexibility in the backup and recovery market. We continue to support VMware as our number one technology partner, and this year we look to build on that with support for VMware Cloud on AWS and enhanced VMware feature sets built into our core Backup & Replication product as we look to release v10 later in the year.

Veeam Sessions @VMworld:

Officially we have two breakout sessions this year, with Danny Allan and myself presenting What’s Coming from Veeam, featuring our long awaited CDP feature (#HBI3532BUS), and Michael Cade and David Hill presenting a session around how backups are just the start… with a look at how we offer simplicity, reliability and portability as core differentiators (#HBI3535BUS).

There are also four vBrownBag Tech Talks in which Veeam features, including talks from Michael Cade, Tim Smith, Joe Hughes and myself. We are also doing a couple of partner lunch events focused on cloud and datacenter transformation.

https://my.vmworld.com/widget/vmware/vmworld19us/us19catalog?search=Veeam

Veeam @VMworld Solutions Exchange:

This year, as per usual, we will have a significant presence on the floor as the main sponsor of the Welcome Reception, with our booth (#627) area featuring demos, prizes and giveaways, as well as an Experts Bar. There will be a number of booth presentations throughout the event.

Veeam Community Support @VMworld:

Veeam still gets the community and has historically been a strong supporter of VMworld community based events. This year, again, we have come to the party and gone all-in in terms of being front and center in supporting community events. Special mention goes to Rick Vanover, who leads the charge in making sure Veeam is doing what it can to help make these events possible:

  • Opening Acts
  • VMunderground
  • vBrownBag
  • Spousetivities
  • vRockstar Party
  • LevelUp Career Cafe

Party with Veeam @VMworld:

Finally, it wouldn’t be VMworld without attending Veeam’s legendary party. While we are not in Vegas this year and can’t hold it at a super club, we have managed to book one of the best places in San Francisco… The Masonic. We have Andy Grammer performing, and while it won’t be a Vegas style Veeam event… it is already sold out, maxed at 2,000 people, so we know it’s going to be a success and will be one of the best parties of 2019!

While it’s officially sold out ticket wise, if people do want to attend we suggest they come to the venue in any case, as there are sure to be no shows.

Final Word:

Again, this year’s VMworld is going to be huge and Veeam will be right there, front and center of the awesomeness. Please stop by our sessions, visit our stand and attend our community sponsored events, and feel free to chase me down for a chat… I’m always keen to meet other members of this great community. Oh, and don’t forget to get to the party!

VMworld 2019 – Session Breakdown and Analysis

Everything to do with VMworld this year feels like it’s arrived at lightning speed. I actually thought the event was two weeks away at the start of the week… but here we are… only five days away from kicking off in San Francisco. The content catalog for the US event has been live for a while now and, as has recently been the case, a lot of sessions were full just hours after it went live! At the moment there are a huge 1,348 sessions listed, which include the #vBrownBag Tech Talks hosted by the VMTN Community.

As I do every year, I like to filter through the content catalog and work out which technologies are getting airplay at the event. It’s interesting going back to when I first started doing this to see the catalog evolve with the times… certain topics have faded away while others have grown and some dominate. This ebbs and flows with VMware’s strategies and makes for an interesting comparison.

What first struck me as being interesting was the track names compared to just two years ago at the 2017 event:

I see fewer buzzwords and more tracks that are tech specific. Yes, within those sub categories we have the usual elements of “digital transformation” and “disruption”, however VMware’s focus looks to be more around the application of technology and not the high level messaging that usually plagues tech conferences. VMworld has for the most part been, and remains, a technical conference for techs.

By digging into the sessions by searching on keywords alone, the list below shows you where most of the sessions are being targeted this year. If, in 2015, you were to take a guess at which particular technology would have the most coverage at VMworld in 2019… your list would look very different to what we actually see this year.

Looking back over previous years, there is a clear rise in the containers world, which is now dominated by Kubernetes. Thinking back to previous VMworlds, you would never see the big public cloud providers getting airtime. If you look at how that has changed, this year we have 231 sessions that mention AWS alone… not to mention the ones mentioning Azure or Google.

Strategy wise it’s clear that NSX, VMC and Kubernetes are front of mind for VMware and their ecosystem partners.

I take this as an indication of where the industry is… and is heading. VMware is still the main touch point for those who work in and around IT infrastructure support and services. They still own the ecosystem… and even with the rise of AWS, Azure, GCP and the like, VMware are working out ways to hook those platforms into their own technology and are moving with industry trends as to where workloads are being provisioned. Kubernetes and VMware Cloud on AWS are a big part of that, but underpinning it all is the network… and NSX is still heavily represented, with NSX-T becoming even more prominent.

One area that continues to warm my heart is the continued growth and support shown to the VMware Cloud Providers and vCloud Director. The numbers are well up from the dark days of vCD around the 2013 and 2014 VMworlds. For anyone working on cloud technologies, this year promises to be a bumper year for content and I’m looking forward to catching as many vCD and VCPP related sessions as I can.

It promises to be an interesting VMworld, with VMware hinting at a massive shift in direction… I think we all know, in a roundabout way, where that is heading… let’s see if we are right come next week.

https://my.vmworld.com/widget/vmware/vmworld19us/us19catalog

Quick Fix – Issues Upgrading VCSA due to Password Expiration

It seems an interesting “condition” has worked itself into recent VCSA builds where, upon completing upgrades, the process seems to reset the root account expiration flag. This blocked me from proceeding with an upgrade, which only worked once I followed the steps listed below.

The error I got is shown below:

“Appliance (OS) root password is expired or is going to expire soon. Please change the root password before installing an update.”

When this happened on the first vCenter I went to upgrade, I thought there was a chance I had forgotten to set the password to never expire… but by default I usually check that setting and set it to never expire… not the greatest security practice, but for my environments it’s something I set almost automatically during initial configuration. After reaching out on Twitter, I got some immediate feedback saying to reset the root password by going into single user mode… which did work.

When this happened a second time on a second VCSA, on which I had without question set the never expires flag to true, I took a slightly different approach to the problem and decided to try resetting the password from the VCSA console, however that process failed as well.

After going back through the Tweet responses, I did come across this VMware KB, which lays out the issue and offers the reason behind the errors.

This issue occurs when VAMI is not able to change an expired root password.

Fair enough… but I still don’t have a reason for the password never expires option not being honoured. Some feedback and conversations suggest that maybe this is a bug that’s worked its way into recent builds during upgrade procedures. In any case, the fix is simple and doesn’t need console access… you just need to SSH into the VCSA and reset the root password as shown below.
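For reference, the commands involved are standard Linux ones; a minimal sketch of the fix over SSH looks like this (run as root on the VCSA):

```shell
# If you land in the appliance shell after SSHing in, drop to bash first:
shell

# Set a new root password, which clears the expired state
passwd root

# Set maximum password age and account expiry back to "never"
chage -M -1 -E -1 root

# Confirm the change; "Password expires : never" should be listed
chage -l root
```

With that done, the VAMI pre-check passes and the update can be attempted again.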

Once done, the VCSA upgrade proceeds as expected. We have also confirmed that Password Expires is set to never. If anyone can confirm the behaviour regarding that flag being reset, feel free to comment below.

And there you have it… the quick fix!

References:

https://kb.vmware.com/s/article/67414

VMworld 2019 – Still Time To Go for FREE*!

VMworld is rapidly approaching, and for those who have not secured their place at the event in San Francisco, and for whatever reason have been hindered in purchasing an event ticket… there is still time and there is still a way!

We (Veeam) have been running a competition over the course of the last few months that gives away three FULL conference passes, but it ends on the 19th of August, so time is running out!

Head here to register for the chance to win a FULL conference pass.

For a quick summary of what is happening at VMworld from a Veeam perspective including sessions, parties and more, click here to head to the main event page that contains details on what Veeam is doing at VMworld 2019.

*The Prize does not include any associated costs including but not limited to expenses, insurance, travel, food or additional accommodation costs unless otherwise specified above.

I went on Holiday (Vacation) and Managed to Switch Off!!

The jet lag has almost passed… I’ve nearly caught up with the backlog of messages on my various social platforms… expenses are almost done… and I’m about to hit Outlook and clear my email inbox. Yes, I’ve just come back from nearly 4.5 weeks away on holiday (vacation) and, barring a few conversations on Teams with my team, I managed to switch off almost 100%… in fact this is my first blog post in more than a month!

To be honest, this is something that I thought I would never do, and on one side I feel somewhat ok about it… while on the other I have a case of mild agita over needing to catch up and get back into the groove of work life.

“Don’t F*cking Tell me to Switch Off!!!”

This was the original title of the post (modified to soften the blow) and mainly relates to the bucketload of messages and comments telling me to switch off while on leave as I started this trip. This is something I have experienced throughout my career, and I see it all the time when people make the “mistake” of checking into Twitter or posting in Slack when they are on vacation.

There is almost nothing I find more frustrating than people telling me (or others) how to spend my time… be it on holiday or otherwise, and especially when it comes to work related matters. I’ve written before about work/life balance and how I have struggled to achieve it over the years. In fact, the whole work/life balance question in IT has become a real topic since then, and many people have written about their own personal struggles.

To that end, when people tell me to switch off, I tend to respond with what is stated above, and the immediate thought that resonates in my mind is that I’ll switch off when and if I damn well please! And if I don’t, then that is ok as well! If I feel balanced and I am ok in myself, then it’s something that is in my control and not the place of others to try and dictate to me.

Regardless… I Did Switch Off

When it comes to my thoughts around switching off… it comes down to the fact that my hobby is also my job and my career. Tinkering is how I learn, and an important component of learning is staying connected and engaged with the various online communities and content sources. This is why I find it hard to completely switch off. I don’t deny that there is a psychological side to this which equates to an addiction… it’s well documented that we thrive on the hits of dopamine that come from social reward.

For us techies, that social reward is linked to emails, messages, Tweets, likes, hits, views etc. I’ll be honest and admit that I do crave all those things, as well as social interaction with my workmates. However, as I settled into my holiday I began to replace the need for technical reward with personal and family rewards that generated different types of dopamine hits.


The max hit came while at a local village feast in Gozo, where memories of my childhood trips to Malta came flooding back… and as I ate my third Imaret I was at max switch off level and knew that I had succeeded in doing something I thought not possible! Total disconnect!

I captured that moment below in the fourth picture… this is a reminder for me of where I can get to if I ever feel the need to switch off again.

Ultimately, I was able to not touch my MBP for work all holiday and I let myself drift away from my connected world without much thought or fear of missing out… for the most part 🙂

I still did a bit here and there, but not nearly as much as I had thought. Now that I am back, it’s time to get into the connected world and get back to what I do… stay engaged, stay connected and stay switched on!

Kubernetes Everywhere…Time to Take off the Blinkers!

This is more or less a follow up post to the one I wrote back in 2015 about the state of containers in the IT World as I saw it at the time. I started off that post talking about the freight train that was containerization along with a cheeky meme… fast forward four years and the narrative around containers has changed significantly, and now there is new cargo on that freight train… and it’s all about Kubernetes!

In my previous role at a cloud provider, shortly after writing that 2015 post I started looking at ways to offer containers as a service. At the time there wasn’t much around, but I dabbled a bit in Docker and, if you remember, VMware’s AppCatalyst… which I used to deploy basic Docker images on my MBP (I think it’s still installed, actually), with the biggest highlight for me at the time being able to play Docker Doom!

I was also involved in some of the very early alphas for what was at the time vSphere Integrated Containers (Docker containers as VMs on vCenter), which didn’t catch on compared to what is currently out there for the mass deployment and management of containers. VMware did evolve its container strategy with Pivotal Container Service, however those outside the VMware world were already looking elsewhere as the reality of containerised development, along with serverless and cloud, took hold and became accepted as mainstream IT practice.

Even four or five years ago I was hearing the word Kubernetes often. I remember sitting in my last VMware vChampion session, where Kit Colbert was talking about Kuuuuuuuurbenites (the American pronunciation stuck in my mind) and how we all should be ready to understand how it works, as it was about to take over the tech world. I didn’t listen… and now I realise that I should have started looking into Kubernetes and container management in general more seriously, sooner.

Not because it’s fundamental to my career path… not because I feel like I’m lagging technically, and not because there have been those saying for years that Kubernetes will win the race. There is an opportunity to take off the blinkers and learn something that is being widely adopted by understanding the fundamentals of what makes it tick. In terms of discovery and learning, I see this much like what I have done over the past eighteen months with automation and orchestration.

From a backup and recovery point of view, we have been seeing an increase in customers and partners in the field asking how they back up containers and Kubernetes. For a long time the standard response was “why?”. But it’s becoming more obvious that the initially stateless nature of containers is making way for more stateful, persistent workloads. So now, it’s not only about backing up the management plane… but also understanding that we need to protect the data that sits within the persistent volumes.

What I’ll Be Doing:

I’ve been superficially interested in Kubernetes for a long time, reading blogs here and there and trying to absorb information where possible. But as with most things in life, you learn best by doing! My intention is to create a series of blog posts that describe my experiences with different Kubernetes platforms, ultimately deploying a simple web application with persistent storage.

These posts will not be how-tos on setting up a Kubernetes cluster. Rather, I’ll look at general configuration, application deployment, usability, cost and whatever else becomes relevant as I go through the process of getting the web application online.

Off the top of my head, I’ll look to work with these platforms:

  • Google Kubernetes Engine (GKE)
  • Amazon Elastic Kubernetes Service (EKS)
  • Azure Kubernetes Service (AKS)
  • Docker
  • Pivotal Container Service (PKS)
  • vCloud Director CSE
  • Platform9

The usual suspects are there in terms of the major public cloud providers. From a cloud and service provider point of view, the ability to offer Kubernetes via vCloud Director is very exciting, and if I was still in my previous role I would be looking to productize it ASAP. For a different approach, I have always liked what Platform9 has done; I was also an early tester of their initial managed vSphere offering, which has now evolved into managed OpenStack and Kubernetes. They also recently announced Managed Applications through the platform, which I’ve been playing with today.

Wrapping Up:

This follow up post isn’t really about the state of containers today, or what I think about how and where they are being used. The reality is that we live in a hybrid world and workloads are created as-is for specific platforms on a need by need basis. At the moment there is nothing to say that virtualization in the form of virtual machines running on hypervisors on-premises is being replaced by containers. The reality is that between on-premises, public clouds and everything in between… workloads are being deployed in a variety of fashions… Kubernetes seems to have come to the fore and has reached a level of maturity that makes it a viable option… that could not be said four years ago!

It’s time for me (maybe you) to dig underneath the surface!

Link:

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Kubernetes is mentioned 18 times on this page.

Mapping vCloud Director Backup Jobs to Self Service Portal Tenants

Since version 7 of Backup & Replication, Veeam has led the way in the protection of workloads running in vCloud Director. In version 7, Veeam first released deep integration into vCD that talked directly to the vCD APIs to facilitate the backup and recovery of vCD workloads and their constructs. More recently, in version 9.5, the vCD Self Service Portal was released, which also taps into vCD for tenant authentication.

The portal leverages Enterprise Manager and allows service providers to grant their tenants self-service backup for their vCD workloads. More recently we have seen some VCSPs integrate the portal into the new vCD UI via the extensibility plugin, which is a great example of the power Veeam has with vCD today while we wait for deeper, native integration.

It’s possible that some providers don’t even know this portal exists, let alone the value it offers. I’ve covered the basics of the portal here… but in this post I am going to quickly mention an extension to a project I released last year for the vCD Self Service Portal, which automatically enables a tenant, creates default backup jobs based on policies, ties backup copy jobs to the default jobs for longer retention and finally imports the jobs into the vCD Self Service Portal ready for use.

Standalone Map and Unmap PowerShell Script:

From the above project, the job import part has been expanded into its own standalone PowerShell script that can also be used to map or unmap existing vCD Veeam backup jobs to a tenant to manage from the vCD Self Service Portal. This is done using the Set-VBRvCloudOrganizationJobMapping cmdlet.

As shown below, this tenant has already configured a number of jobs in the Portal.

There was another historical job that was created outside of the portal directly from the Veeam console. Seen below as TEST IMPORT.

To map the job, run the PowerShell script with the -map parameter. All existing vCloud Director backup jobs will be listed. Once the corresponding number has been entered, the cmdlet within the script will be run and the job mapped to the tenant linked to it.
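The core of that -map flow can be sketched roughly as follows. This is a simplified illustration, not the published script itself: the job filter is my own assumption, and the exact cmdlet syntax should be checked against the Veeam help page linked in the references.

```powershell
# Rough sketch of the -map flow. Assumes the Veeam PowerShell snap-in is
# available on the backup server; the TypeToString filter is an assumption.
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue

# List vCloud Director backup jobs with an index number
$jobs = @(Get-VBRJob | Where-Object { $_.TypeToString -like "*vCloud*" })
for ($i = 0; $i -lt $jobs.Count; $i++) { "{0}: {1}" -f $i, $jobs[$i].Name }

# Map the selected job to the tenant it is linked to
$choice = [int](Read-Host "Enter the number of the job to map")
Set-VBRvCloudOrganizationJobMapping -Job $jobs[$choice]

# To unmap instead, the same cmdlet is run with the -Unmap switch:
# Set-VBRvCloudOrganizationJobMapping -Job $jobs[$choice] -Unmap
```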

Once that has been run, the tenant now has that job listed in the vCD Self Service Portal.

There is a little bit of error checking built into the script, so that it exits nicely on an exception, as shown below.

Finally, if you want to unmap a job from the vCD Self Service Portal, run the PowerShell script with the -unmap parameter.

Conclusion:

Like most things I work on and then publish for general consumption, this came from a request from a service provider partner to wrap some logic around the Set-VBRvCloudOrganizationJobMapping cmdlet. The script can be taken and improved but, as-is, it provides an easy way to retrieve all vCloud jobs belonging to a Veeam server, select the desired job and have it mapped to a tenant using the vCD Self Service Portal.

References:

https://github.com/anthonyspiteri/powershell/blob/master/vCD-Create-SelfServiceTenantandPolicyJobs/vCD_job.ps1

https://helpcenter.veeam.com/docs/backup/powershell/set-vbrvcloudorganizationjobmapping.html?ver=95u4

First Look: On Demand Recovery with Cloud Tier and VMware Cloud on AWS

Since Veeam Cloud Tier was released as part of Backup & Replication 9.5 Update 4, I’ve written a lot about how it works and what it offers in terms of offloading data from more expensive local storage to fundamentally cheaper remote object storage. As with most innovative technologies, if you dig a little deeper… different use cases start to present themselves and unintended use cases find their way to the surface.

Such was the case when, together with AWS and VMware, we looked at how Cloud Tier could be used to allow on demand recovery into a cloud platform like VMware Cloud on AWS. By way of a quick overview, the solution shown below has Veeam backing up to a Scale-Out Backup Repository (SOBR) which has a Capacity Tier backed by an object storage repository in Amazon S3. A minimal operational restore window is set, which means data is offloaded to the Capacity Tier sooner.

From there, if disaster happens on premises, an SDDC is spun up and a Backup & Replication server is deployed and configured in that SDDC. A SOBR is then configured with the same Amazon S3 credentials; it connects to the object storage bucket, detects the backup data and starts a resync of the metadata back to the local Performance Tier (as described here). Once the resync has finished, workloads can be recovered, streamed directly from the Capacity Tier.

The diagram above has been published on the AWS Reference Architecture page, and while this post has been brief, there is more to come by way of an official AWS blog post co-authored by myself and Frank Fan from AWS around this solution. We will also look to automate the process as much as possible to make this a truly on demand solution that can be actioned with the click of a button.

For now, the concept has been validated, and the hope is that people looking to use VMware Cloud on AWS as a target for disaster recovery will leverage Veeam and the Cloud Tier to make that happen.

References: AWS Reference Architecture
