Tag Archives: VMware

The Separation of Dev and Ops is Upon Us!

Apart from the K word, there was one other enduring message that I think a lot of people took home from VMworld 2019: that Dev and Ops should be considered as separate entities again. For the best part of the last five or so years, the concept of DevOps, SecOps and other X-Ops has been perpetuated mainly due to the rise of consumable platforms outside the traditional control of IT operations people.

The pressure to DevOp has become very real in the IT communities that I am involved with. These circles are mainly made up of traditional infrastructure guys. I've written a few pieces on how the industry trend of trying to turn everyone into developers isn't one that needs to be followed. Automation doesn't equal development, and there are a number of Infrastructure as Code tools that look to bridge the gap between the developer and the infrastructure guy.

That isn't to say that traditional IT guys shouldn't be looking to push themselves to learn new things and improve and evolve. In fact, IT Ops needs to be able to code in slightly abstracted ways to work with APIs or leverage IaC tooling. However, my view is that IT Ops' number one role is to understand fundamentally what is happening within a platform, and to be able to support infrastructure that developers can consume.

I had a bit of an aha moment this week while working on some Kubernetes (that word again!) automation with Terraform, which I'll release later this week. The moment came when I was trying to get the Sock Shop demo working on my fresh Kubernetes cluster. I finally understood why Kubernetes had been created. Everything about the application was defined in the manifest files and deployed as is, holistically, through one command. It's actually rather elegant compared to how I worked with developers back in the early days of web hosting on Windows and Linux web servers with their database backends and whatnot.
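That one-command deployment looks roughly like this (a sketch; the namespace and manifest file name follow the Sock Shop demo's documented layout, so treat them as assumptions):

```shell
# Create a namespace for the demo, then deploy the whole application
# (every microservice plus its backing databases) from one declarative manifest
kubectl create namespace sock-shop
kubectl apply -n sock-shop -f complete-demo.yaml

# Watch all the pieces of the application come up together
kubectl get pods -n sock-shop --watch
```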

Regardless of the ease of deployment, I still had to understand the underlying networking and get the application to listen on external IPs and different ports. At that point I was doing dev and IT Ops in one. However, this was all contained within my lab environment, where the availability or security of the application doesn't matter. This is where separation is required.
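Getting the application to listen on an external IP and port came down to changing the service type (a sketch; the service name `front-end` and the namespace are assumptions based on the Sock Shop demo):

```shell
# Switch the front-end service to a NodePort so it listens
# on a port on every node's external IP
kubectl -n sock-shop patch service front-end \
  -p '{"spec": {"type": "NodePort"}}'

# Check which node port was allocated to the service
kubectl -n sock-shop get service front-end
```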

Developers want to consume services and take advantage of the constructs of a containerised platform like Docker, paired with the orchestration and management of those resources that Kubernetes provides. They don't care what's under the hood and shouldn't be concerned with what their application runs on.

IT Operations want to be able to manage the supporting platforms as they did previously. The compute, the networking, the storage… this is all still relevant in a hybrid world. They should absolutely still care about what's under the hood and the impact applications can have on infrastructure.

VMware has introduced that (re)split of dev and ops with Project Pacific, and I applaud them for going against the grain and endorsing the separation of roles and responsibilities. Kubernetes and ESXi in one vSphere platform is where that vision lies. Outside of vSphere, it is still very true that devs can consume public clouds without a care about the underlying infrastructure… but for me… it all comes back down to this…

Let devs be devs… and let IT Ops be IT Ops! They need to work together in this hybrid, multi-cloud world!

VMworld 2019 Review – Project Pacific is a Stroke of Kubernetes Genius… but not without a catch!

Kubernetes Kubernetes, Kubernetes… say Kubernetes one more time… I dare you!

If it wasn't clear what the key takeaway from VMworld 2019 in San Francisco was last week, then I'll repeat it one more time… Kubernetes! It was something I predicted prior to the event in my session breakdown. And all jokes aside, with the number of times we heard Kubernetes mentioned last week, we know that VMware signalled their intent to jump on the Kubernetes freight train and ride it all the way.

When you think about it, the announcement of Project Pacific isn't a surprise. Apart from it being an obvious path to take to ensure VMware remains viable with both IT Operations (IT Ops) and Developers (Devs), the more I learned about what it actually does under the hood, the more I came to believe that it is a stroke of genius. If it delivers technically on its promise of full ESXi and Kubernetes integration into the one vSphere platform, then it will be a huge success.

The whole premise of Project Pacific is to use Kubernetes to manage workloads via declarative specifications, essentially allowing IT Ops and Devs to tell vSphere what they want and have it deploy and manage the infrastructure that ultimately serves as a platform for an application. This is all about the application! It abstracts all of the infrastructure and most of the platform to make the application work. We are now looking at a "platform platform" that controls all aspects of that lifecycle end to end.

By redesigning vSphere and implanting Kubernetes into its core, VMware are able to take advantage of the things that make Kubernetes popular in today's cloud native world. A Kubernetes Namespace is effectively a tenancy that manages applications holistically, and it's at the namespace level where policies are applied. QoS, security, availability, storage, networking and access controls can all be applied top down from the Namespace. This gives IT Ops control, while still allowing devs to be agile.
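Stock Kubernetes already treats the namespace as the policy boundary, which is what Project Pacific builds on. A minimal sketch of applying top-down controls at the namespace level (the names, limits and group are illustrative assumptions):

```shell
# A namespace acts as the tenancy for an application
kubectl create namespace finance-app

# Cap the resources the whole tenancy can consume (QoS from the top down)
kubectl -n finance-app create quota finance-quota \
  --hard=requests.cpu=4,requests.memory=8Gi,pods=20

# Restrict who can act inside the namespace (access control from the top down)
kubectl -n finance-app create rolebinding finance-devs \
  --clusterrole=edit --group=finance-developers
```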

I see this construct as similar to what vCloud Director offers by way of a Virtual Datacenter, with vApps used as the container for VM workloads… in truth, the way in which vCD abstracted vSphere resources into tenancies with policies applied was maybe ahead of its time.

DevOps Separation:

DevOps has been a push in our industry for the last few years, and the pressure to be a DevOp is huge. The reality is that the two sets of disciplines have fundamentally different approaches to each other's lines of work. This is why it was great to see VMware going out of their way to make the distinction between IT Ops and Devs.

Dev and IT Ops collaboration is paramount in today's IT world, and with Project Pacific, when a Dev looks at the vSphere platform they see Kubernetes. When an IT Ops guy looks at vSphere, he still sees vSphere and ESXi. This allows for integrated self-service, and for more speed with control to deploy and manage the infrastructure and platforms that run applications.

Consuming Virtual Machines as Containers and Extensibility:

Kubernetes was described as a Platform Platform… meaning that you can run almost anything in Kubernetes as long as it's declared. The above image shows a holistic application running in Project Pacific. The application is a mix of Kubernetes containers, VMs and other declared pieces… all of which can be controlled through vSphere and live under that single Namespace.

When you log into the vSphere console you can see a Kubernetes cluster in vSphere, see the pods, and action them as first class citizens. vSphere Native Pods are an optimised runtime… apparently more optimised than bare metal… 8% faster, as we saw in the keynote on Monday. This is achievable because CPU virtualization has almost zero cost today. VMware has taken advantage of the advanced ESXi scheduler, which already handles advanced operations across NUMA nodes, along with the ability to strip out what is not needed when running containers on VMs, so that workloads get an optimal runtime.

vSphere will have two APIs with Project Pacific. The traditional vSphere API that has been refined over the years will remain, and then there will be the Kubernetes API. There is also the ability to create infrastructure with kubectl. Each ESXi cluster becomes a Kubernetes cluster. The work done with vSphere Integrated Containers has not gone to waste and has been used in this new integrated platform.

Pods and VMs live side by side and are declared through Kubernetes running in Kubernetes. All VMs can be stored in the container registry. Critical vulnerability scans, encryption and signing that exist in the container ecosystem can be leveraged at a container level and applied to VMs.

There is obviously a lot more to Project Pacific, and there are great presentations up on YouTube from Tech Field Day Extra at VMworld 2019, which I have embedded below. In my opinion, they are a must for anyone working in and around the VMware ecosystem.

The Catch!

So what is the catch? With 70 million workloads across 500,000+ customers, VMware is thinking that with this functionality in place, the current movement to refactor workloads to take advantage of cloud native constructs like containers, serverless or Kubernetes doesn't need to happen… those existing workloads instantly become first class citizens on Kubernetes. Interesting theory.

Having been digging into the complex and very broad container world for a while now, and having only just realised how high it sits on most IT agendas, my current belief is that the world of Kubernetes and containers is better placed to be consumed on public clouds. The scale and immediacy of the Kubernetes platforms on Google, Azure or AWS, without the need to procure hardware and install software, means that that model of consumption will still have an advantage over something like Project Pacific.

The one stroke of genius, as mentioned, is that by combining "traditional" workloads with Kubernetes as the control plane within vSphere, the single, declarative, self-service experience it potentially offers might stop IT Operations from moving to public clouds… but is that enough to stop the developers forcing their hands?

It is going to be very interesting to see this in action and how well it is ultimately received!

More on Project Pacific

The videos below give a good level of technical background into Project Pacific. Frank also has a good introductory post here, and Kit Colbert's VMworld session is linked in the references.

References:

https://videos.vmworld.com/global/2019/videoplayer/28407

VMworld 2019 – Session Breakdown and Analysis

Everything to do with VMworld this year feels like it's arrived at lightning speed. I actually thought the event was two weeks away at the start of the week… but here we are… only five days away from kicking off in San Francisco. The content catalog for the US event has been live for a while now and, as has recently been the case, a lot of sessions were full just hours after it went live! At the moment there are a huge 1,348 sessions listed, which include the #vBrownBag Tech Talks hosted by the VMTN Community.

As I do every year, I like to filter through the content catalog and work out which technologies are getting the airplay at the event. It's interesting going back to when I first started doing this and seeing the catalog evolve with the times… certain topics have faded away while others have grown and some dominate. This ebbs and flows with VMware's strategies and makes for an interesting comparison.

What first struck me as being interesting was the track names compared to just two years ago at the 2017 event:

I see fewer buzzwords and more tracks that are tech specific. Yes, within those sub-categories we have the usual elements of "digital transformation" and "disruption", however VMware's focus looks to be more on the application of technology and not the high level messaging that usually plagues tech conferences. VMworld has for the most part been, and remains, a technical conference for techs.

By digging into the sessions by searching on keywords alone, the list below shows you where most of the sessions are being targeted this year. If, back in 2015, you were to take a guess at which particular technology would get the most coverage at VMworld 2019, your list would look very different from what we actually see this year.

Looking back over previous years, there is a clear rise in the containers world, which is now dominated by Kubernetes. Thinking back to previous VMworlds, you would never see the big public cloud providers get airtime. Look at how that has changed: this year there are 231 sessions alone that mention AWS… not to mention the ones mentioning Azure or Google.

Strategy wise it’s clear that NSX, VMC and Kubernetes are front of mind for VMware and their ecosystem partners.

I take this as an indication of where the industry is… and where it is heading. VMware are still the main touch point for those who work in and around IT infrastructure support and services. They still own the ecosystem… and even with the rise of AWS, Azure, GCP and the like, they are still working out ways to hook those platforms into their own technology and are moving with industry trends as to where workloads are being provisioned. Kubernetes and VMware Cloud on AWS are a big part of that, but underpinning it all is the network… and NSX is still heavily represented, with NSX-T becoming even more prominent.

One area that continues to warm my heart is the continued growth and support shown to the VMware Cloud Providers and vCloud Director. The numbers are well up from the dark days of vCD around the 2013 and 2014 VMworlds. For anyone working on cloud technologies, this year promises to be a bumper year for content, and I'm looking forward to catching as many vCD and VCPP related sessions as I can.

It promises to be an interesting VMworld, with VMware hinting at a massive shift in direction… I think we all know in a roundabout way where that is heading… let's see if we are right come next week.

https://my.vmworld.com/widget/vmware/vmworld19us/us19catalog

Quick Fix – Issues Upgrading VCSA due to Password Expiration

It seems like an interesting "condition" has worked itself into recent VCSA builds where, upon completing upgrades, the process seems to reset the root account expiration flag. This blocked me from proceeding with an upgrade, which only worked after I followed the steps listed below.

The error I got is shown below:

“Appliance (OS) root password is expired or is going to expire soon. Please change the root password before installing an update.”

When this happened on the first vCenter I went to upgrade, I thought that maybe I had forgotten to set the root password to never expire… but I usually check that setting and set it to never expire during initial configuration. It's not the greatest security practice, but for my environments it's something I do almost automatically. After reaching out on Twitter, I got some immediate feedback saying to reset the root password by going into single user mode… which did work.

When this happened a second time on a second VCSA, on which I had without question set the never-expires flag to true, I took a slightly different approach to the problem and decided to try resetting the password from the VCSA console. However, that process failed as well.

Going back through the tweet responses, I did come across this VMware KB, which lays out the issue and offers the reason behind the errors.

This issue occurs when VAMI is not able to change an expired root password.

Fair enough… but I still don't have a reason for the password-never-expires option not being honoured. Some feedback and conversations suggest that maybe this is a bug that's worked its way into recent builds during upgrade procedures. In any case, the fix is simple and doesn't need console access… you just need to SSH into the VCSA and reset the root password as shown below.
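The fix over SSH comes down to standard Linux commands on the appliance shell (a sketch; the hostname is from my lab, and the `-1` maximum-age value is how I set "never expires" from the command line):

```shell
# SSH to the VCSA and drop from the appliance shell into bash
ssh root@vcsa.lab.local
shell

# Reset the expired root password
passwd root

# Set the root password to never expire (maximum age of -1 days)
chage -M -1 root

# Confirm the expiry settings for the root account
chage -l root
```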

Once done, the VCSA upgrade proceeds as expected, and we have also confirmed that Password Expires is set to never. If anyone can confirm the behaviour regarding that flag being reset, feel free to comment below.

Apart from that, there is the quick fix!

References:

https://kb.vmware.com/s/article/67414

First Look – Runecast Adding Support for VMware HCL

Two years ago at the 2017 Sydney and Melbourne UserCons, I spent time with a couple of the founders of Runecast, Stanimir Markov and Ched Smokovic, and got to know a little more about their real-time analytics platform for VMware based infrastructure. Fast forward to today, and Runecast have continued to build on their initial release, adding features and enhancements. The most recent of those, the ability to check ESXi hosts against the VMware Hardware Compatibility List (HCL), is currently in beta and will be released shortly.

Currently, Runecast checks hardware versions, drivers and firmware against existing VMware KB articles and provides proactive findings for known issues that could impact your servers. With this addition Runecast will now show the compliance status of hardware against the VMware HCL.

This feature alone replaces hours of work extracting the needed data and matching each server in your environment against the HCL. Critically, it can inform you if, where, and why your vSphere environment is not supported by VMware because of hardware compatibility issues.

In terms of what it looks like, as you can see from the screenshot above, there is a new menu item that gives you the Compatibility Overview. Your hosts are listed in the main window pane and are shown as green or red depending on their status against the HCL.

Clicking on the details shows you how the host compares against the HCL data. If the host is out of whack with the HCL, you will get an explanation similar to what is seen below (note: in the beta I have installed, this was not yet available).

With this feature you can identify which components are incompatible and unsupported. From there, it will also indicate what your supportability options are.

Runecast keep adding great features to their platform… and most of them are features any vSphere admin would find very helpful. That is the essence of what they are trying to achieve.

For more information and to apply for the beta head here:

References:

https://www.runecast.com/blog/announcements/runecast-analyzer-support-for-vmware-hcl-beta

 

NSX Bytes – What’s New in NSX-T 2.4

A little over two years ago, in February 2017, VMware released NSX-T 2.0, and with it came a variety of updates that continued to push NSX-T beyond NSX-v while catching up in areas where NSX-v was ahead. The NSBU has had big plans for NSX beyond vSphere for as long as I can remember, and during the NSX vExpert session we saw how this is becoming more of a reality with NSX-T 2.4. NSX-T is targeted at more cloud native workloads, which also leads to a more DevOps-focused marketing effort on VMware's end.

NSX-T's main drivers are the new data centre and cloud architectures, whose greater heterogeneity drives a different set of requirements to those of vSphere: multi-domain environments leading to a multi-hypervisor NSX platform. NSX-T is highly extensible and will address more endpoint heterogeneity in future releases, including containers, public clouds and other hypervisors.

What’s new in NSX-T 2.4:

[Update] The official release notes for NSX-T 2.4 have been released and can be found here. As mentioned by Anthony Burke, this is a huge release… I don't think I've seen a larger set of release notes from VMware, so I only touch on the main features below. There are also a lot of resolved issues in the release, which are worth a look for those who have already deployed NSX-T in anger. [/Update]

While there are a heap of new features in NSX-T 2.4, for me one of the standout enhancements is the migration path that now exists to take NSX-v platforms and migrate them to NSX-T. While there will be ongoing support for both platforms, and in my opinion NSX-v still holds court in more traditional scenarios, there is clear direction on the migration options.

In terms of the full list of what’s new:

  • Policy Management
    • Simplified UI with rich visualisations
    • Declarative Policy API to configure networking, security and services
  • Advanced Network Services
    • IPv6 (L2, L3, BGP, FW)
    • ENS Support for Edge and DFW
    • VPN (L2, L3)
    • BGP Enhancements (allow-as in, multi-path-asn relax, iBGP support, Inter-SR routing)
  • Intrinsic Security
    • Identity Based FW
    • FQDN/URL whitelisting for DFW
    • L7 based application signatures for DFW
    • DFW operational enhancements
  • Cloud and Container Updates
    • NSX Containers (Scale, CentOS support, NCP 2.4 updates)
    • NSX Cloud (Shared NSX gateway placement in Transit VPC/VNET, VPN, N/S Service Insertion, Hybrid Overlay support, Horizon Cloud on Azure integration)
  • Platform Enhancements
    • Converged NSX Manager appliance with 3 node clustering support
    • Profile based installs, Reboot-less maintenance mode upgrades, in-place mode upgrades for vSphere Compute Clusters, n-VDS visualization, Traceflow support for centralized services like Edge Firewall, NAT, LB, VPN
    • v2T Migration: In-built UI wizards for “vDS to N-vDS” as well as “NSX-v to NSX-T” in-place migrations
    • Edge Platform: Proxy ARP support, Bare Metal: Multi-TEP support, In-band management, 25G Intel NIC support
Infrastructure as Code and NSX-T:

As mentioned in the introduction, VMware is targeting cloud native and DevOps with NSX-T, and there is a big push for being able to deploy and consume networking services across multiple platforms with multiple tools via the NSX API. At its heart, we see here the core of what was Nicira back in the day. NSX (even NSX-v) has always been underpinned by APIs and, as you can see below, the idea of consuming those APIs with IaC, no matter what the tool, is central to NSX-T's appeal.
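As a sketch of what consuming that API can look like, the declarative Policy API introduced in NSX-T 2.4 lets a tool push desired state in a single call (the manager hostname and segment name here are assumptions from a lab setup, not a definitive recipe):

```shell
# Declare a logical segment through the NSX-T Policy API;
# the platform converges the network on the declared state
curl -k -u admin -X PUT \
  "https://nsx-manager.lab.local/policy/api/v1/infra/segments/web-segment" \
  -H "Content-Type: application/json" \
  -d '{"display_name": "web-segment"}'
```

Whether that call comes from Terraform, Ansible or plain curl, the declarative model is the same, which is exactly what makes the API the natural integration point for IaC tooling.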

Conclusion:

It's time to get into NSX-T! Lots of people who work in and around the NSBU have been preaching this for the last three to four years, but it's now apparent that this is the way of the future, and anyone working on virtualization and cloud platforms needs to get familiar with NSX-T. There has been no better time to set it up in the lab and get things rolling.

For a more in depth look at the 2.4 release, head to the official launch blog post here.

References:

vExpert NSX Briefing

https://blogs.vmware.com/networkvirtualization/2019/02/introducing-nsx-t-2-4-a-landmark-release-in-the-history-of-nsx.html/

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed, the biggest industry shift to come out of the event, from my point of view, was the fact that AWS are going all guns blazing into the on-premises world. With the announcement of AWS Outposts, the long-held belief that the public cloud is the panacea of all things became blurred. No one company has pushed such a hard cloud-only message as AWS… no one company had the power to change the definition of what it is to run cloud services… AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now, however they have not yet executed on its promise. Azure Stack is seen by many as a white elephant, even though it's now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and an on-premises platform was the path to hybridity.

AWS is a juggernaut, and it's my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world, which resulted in VMware Cloud on AWS. It seems AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware still dominate… at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It's catchy, and straight away you can make sense of what it is… however, I decided to look up the official meaning of the word, and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition, as per the Oxford Dictionary, fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office, separate from AWS regions, while also offering the ability to consume it as if it were an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market created by shifting IT sentiment.

The second definition is an interesting one in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble; however, AWS is not going anywhere fast. This move does indicate a shift in tactics and means that AWS can penetrate the on-premises market more quickly to extend their empire.

The third definition is also pertinent in the context of what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However, my feeling is that they are not guarding against an attack… they are the attack!

Where does VMware fit in all this?

Given my thoughts above, where does VMware fit into all this? When the announcement was first made on stage, I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a platform directly competitive with on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering… though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud in AWS Outposts is VMware’s new As-a-Service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don't care where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers this by enabling convenience and choice… the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering means a Mind shift

The big shift from VMware here, which began with VMware Cloud on AWS, is a shift towards managed services: a fundamental change in the mindset of the customer and the way they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and their availability. For VMware this means from the VM up… for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts and boomerang the management of on-premises back as well. Overall, Outposts is a win-win for both AWS and VMware… however, the proof will be in the execution and uptake. We won't know how it all pans out until the product becomes available… apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

Quick Fix – VCSA 6.7.0.10000 Can’t Update via URL from Management Interface

I had an issue with my VCSA today trying to upgrade to vCenter 6.7 Update 1, whereby the Management Interface's Update option was not detecting the 6.7 Update 1 release for the appliance. It was a similar issue to this VMware KB, however the URL mentioned in that instance was already in the VCSA settings.

My first instinct was to check the disk space and see if there were any pressures in that area. I did find that the /dev/sda3 partition was low on space, so I expanded the disk following advice from Mark Ukotic. After a reboot and resize I had plenty of storage left, but still couldn't trigger an update from the URL. At this point I downloaded the update patch ISO from the VMware Patch Center and loaded it up manually… however, the issue of it not appearing automatically was annoying me.

As mentioned, the settings of the VCSA Update window had the following URL listed:

https://vapp-updates.vmware.com/vai-catalog/valm/vmw/8d167796-34d5-4899-be0a-6daade4005a3/6.7.0.10000.latest/

Having asked around a little, the quick fix was provided by Matt Allford, who gave me the URL that was present in his VCSA after he upgraded successfully via the CLI.

https://vapp-updates.vmware.com/vai-catalog/valm/vmw/8d167796-34d5-4899-be0a-6daade4005a3/6.7.0.20000.latest/

I added that as a custom repository as shown below…

I was then able to rescan and choose from the list of updates for the VCSA.

And perform the upgrade from the Management Interface as first desired.

Interestingly enough, after the upgrade, the default update repository was set to the one Matt provided.

This is the first time I've seen this behaviour from the VCSA, but I had reports of people being able to upgrade without issue. I'm wondering if it might be the particular build I was on, though that in itself was not picking up any patches either. If anyone has any ideas, feel free to comment below.

Quick Fix: Terraform Plan Fails on Guest Customizations and VMware Tools

Last week I was looking to add the deployment of a local CentOS virtual machine to the Deploy Veeam SDDC Toolkit project, so that it included the option to deploy and configure a local Linux repository. This could then be added to the Backup & Replication server. As part of the deployment, I call the Terraform vSphere provider to clone and configure the virtual machine from a preloaded CentOS template.

As shown below, I am using Terraform's customization options to configure the VM name and domain details, as well as the network configuration.

In configuring the CentOS template, I did my usual install of Open VM Tools. When the Terraform plan was applied, the VM was cloned without issue, but it failed at the guest customization part.

The error is pretty clear, and to test it, I tried applying the plan without any VMware Tools installed. In fact, without VMware Tools the VM will not finish the initial deployment after the clone, and it gets deleted by Terraform. I next installed open-vm-tools, but ended up with the same scenario of the plan failing and the VM not being deployed. For some reason it does not like this version of the package.

The next test was to deploy open-vm-tools-deploypkg, as described in this VMware KB. Now the Terraform plan executed to the point of cloning the VM and setting up the desired VM hardware and virtual network port group settings, but it still failed on the custom IP and hostname components of the customization, this time with a slightly different error.

The final requirement is to pre-install the perl package in the template, which allows the in-guest customizations to take place together with VMware Tools. Once I added that to the template, the Terraform plan succeeded without issue.
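Based on the steps above, the template preparation boils down to a few package installs before converting the VM back to a template (a sketch for CentOS; the deploypkg package comes from VMware's own package repository as per the KB, so treat the exact package source as an assumption):

```shell
# Inside the CentOS template VM: install VMware Tools, the deployPkg
# plugin and the perl interpreter that guest customization relies on
sudo yum install -y open-vm-tools open-vm-tools-deploypkg perl

# Make sure the tools daemon starts with the template's clones
sudo systemctl enable vmtoolsd
```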

References:

https://kb.vmware.com/s/article/2075048

 

 

Quick Fix – Backing up vCenter Content Library Content with Veeam

A question came up in the Veeam Forums this week about how you would back up the contents of a Content Library. As a refresher, content libraries are container objects for VM templates, vApp templates, and other types of files. Administrators can use the templates in the library to deploy virtual machines and vApps via vCenter. Using content libraries results in consistency, compliance, efficiency, and automation when deploying workloads at scale.

Content libraries are created and managed from a single vCenter, but can be shared with other vCenter Server instances. VM and vApp templates are stored in OVF format in the content library. You can also upload other file types, such as ISO images and text files. It's even possible to create content libraries that are third-party hosted, such as the example here by William Lam looking at how to create and manage an AWS S3 based content library.

For those looking to store content locally on an ESXi datastore, there is a way to back up the contents of the content library with a Veeam Backup & Replication File Copy job. This is a basic solution to the question posed in the Veeam Forums, however it does work. With a File Copy job, you can choose any file or folder contained in any connected infrastructure in Backup & Replication. For a content library stored on an ESXi datastore, you just need to browse to the location as shown below.

The one caveat is that the destination can't be a Veeam repository. There is no versioning or incremental copy, so every time the job is executed a full copy of the files is performed.

One way to work around this is to set the destination to a location that is being backed up by a Veeam job or an agent job. However, if the intention is just to protect the immediate contents of the library, then having a full one-off backup shouldn't be an issue.

You can also create/add to a File Copy job from the Files view as shown above.

In terms of recovery, the File Copy job is doing a basic file copy and doesn't know that the files are part of a content library, and as you can see, the folder structure that vCenter creates uses UIDs for identification. Because of this, if a whole content library was lost, it would have to be recreated in vCenter and the content then imported back in directly from the File Copy job's destination folder.

Again, this is a quick and nasty solution, and it would be a nice feature addition to have this backed up natively… naming and structure in place. For the moment, though, it is a great way of utilizing a handy feature of Veeam Backup & Replication to achieve the goal.
