Category Archives: VMware

Quick Fix – ESXi loses all Network Configuration… but still runs?

I had a really strange situation pop up in one of my lab environments over the weekend. vSAN Health was reporting that one of the hosts had lost network connectivity to the rest of the cluster. This is something I've seen intermittently at times, so I waited for the condition to clear up. When it didn't, I went to put the host into maintenance mode, but found that I wasn't getting the expected vSAN options.

I have seen situations recently where the Enable vSAN option on the VMkernel interface had been cleared and vCenter thought there was a networking issue, so I wondered if it was this again. Not that that situation was in itself normal, but what I found when I went to view the state of the VMkernel adapters from the vSphere Web Client was even stranger.

No adapters listed!

The host wasn't reported as disconnected and there was still connectivity to it via the Web UI and SSH. To make sure this wasn't a visual error in the Web Client, I SSH'ed into the host and ran esxcli to get a list of the VMkernel interfaces.
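For reference, listing the VMkernel interfaces from the CLI goes something like this (standard esxcli commands, run from an SSH session on the host):

```shell
# List the VMkernel interfaces known to the host
esxcli network ip interface list

# Show the IPv4 configuration of each VMkernel interface
esxcli network ip interface ipv4 get
```

In a healthy host this returns vmk0, vmk1 and so on with their attached port groups; in my case it returned the error below instead.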

Unable to find vmknic for dvsID: xxxxx

So from the CLI I couldn't get a list of interfaces either. I tried restarting the core services without luck. I still had a host that was up, with VMs running on it without issue, yet it was reporting networking issues and had no network interfaces configured in its running state.
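For anyone hitting something similar, the service restarts I attempted were along these lines (standard ESXi commands; be careful running these on a production host):

```shell
# Restart all management agents on the host (hostd, vpxa and friends)
/sbin/services.sh restart

# Or restart the key agents individually
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```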

Going to the console… the situation was not much better.

Nothing… no network or host information at all 🙂

Not being able to reset the management network, my only option from here was to reboot the server. Upon reboot the host did come back online, however the networking was reported as 0.0.0.0/0 from the console and the host was now completely offline.

I decided to reboot using the last known good configuration as shown below:

Upon reboot using the last known good configuration, all previous network settings were restored and I had a list of VMkernel interfaces present again, both from the Web Client and from the CLI.

Because of the “dirty” vSAN reboot, as is usual with anything that disrupts vSAN, the cluster needed some time to get itself back into working order. While some VMs were in an orphaned or unavailable state after the reboot, once the vSAN resync had completed all VMs were back up and operational.

Cause and Official Resolution:

The workaround to bring back the host networking seemed to do the trick, however I still don't know the root cause of the host losing all of its network config. I have an active case open with VMware Support at the moment with the logs being analysed. I'll update this post with the results when they come through.

ESXi Version: 6.7.0.13006603
vSphere Version: 6.7.0.30000
NSX-v: 6.4.4.11197766

The Separation of Dev and Ops is Upon Us!

Apart from the K word, there was one other enduring message that I think a lot of people took home from VMworld 2019: that Dev and Ops should be considered separate entities again. For the best part of the last five or so years the concept of DevOps, SecOps and other X-Ops has been perpetuated, mainly due to the rise of consumable platforms outside the traditional control of IT operations people.

The pressure to DevOp has become very real in the IT communities that I am involved with. These circles are mainly made up of traditional infrastructure guys. I've written a few pieces on how the industry trend of trying to turn everyone into developers isn't one that needs to be followed. Automation doesn't equal development, and there are a number of Infrastructure as Code tools that look to bridge the gap between the developer and the infrastructure guy.

That isn't to say that traditional IT guys shouldn't be looking to push themselves to learn new things, improve and evolve. In fact, IT Ops needs to be able to code in slightly abstracted ways to work with APIs or leverage IaC tooling. However, my view is that IT Ops' number one role is to understand fundamentally what is happening within a platform, and to be able to support infrastructure that developers can consume.

I had a bit of an aha moment this week while working on some Kubernetes (that word again!) automation with Terraform, which I'll release later this week. The moment came when I was trying to get the Sock Shop demo working on my fresh Kubernetes cluster and finally understood why Kubernetes had been created. Everything about the application was defined in the manifest files and deployed as-is, holistically, through one command. It's actually rather elegant compared to how I worked with developers back in the early days of web hosting on Windows and Linux web servers with their database backends and whatnot.
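To give a sense of that one-command deployment, the Sock Shop demo is deployed roughly like this (the manifest URL comes from the microservices-demo GitHub project and may change over time):

```shell
# Create the namespace the demo expects
kubectl create namespace sock-shop

# Deploy the entire application (front end, carts, orders, databases...) in one hit
kubectl apply -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
```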

Regardless of the ease of deployment, I still had to understand the underlying networking and get the application listening on external IPs and different ports. At this point I was doing dev and IT Ops in one. However, this is all contained within my lab environment, which has no bearing on the availability of the application, security or otherwise. This is where separation is required.
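As a sketch of the Ops side of that exercise, in my lab this boiled down to checking the front-end service and switching it to a NodePort so it listened on the node's external IP (the service name is from the Sock Shop manifests):

```shell
# Check how the front-end service is currently exposed
kubectl -n sock-shop get svc front-end

# Switch it to a NodePort so it is reachable on the node's external IP
kubectl -n sock-shop patch svc front-end -p '{"spec": {"type": "NodePort"}}'
```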

For developers, they want to consume services and take advantage of the constructs of a containerised platform like Docker, paired with the orchestration and management of those resources that Kubernetes provides. They don't care what's under the hood and shouldn't be concerned with what their application runs on.

For IT Operations, they want to be able to manage the supporting platforms as they did previously. The compute, the networking, the storage… this is all still relevant in a hybrid world. They should absolutely still care about what's under the hood and the impact applications can have on infrastructure.

VMware has introduced that (re)split of Dev and Ops with the introduction of Project Pacific, and I applaud them for going against the grain and endorsing the separation of roles and responsibilities. Kubernetes and ESXi in one vSphere platform is where that vision lies. Outside of vSphere, it is still very true that devs can consume public clouds without a care for the underlying infrastructure… but for me… it all comes back down to this…

Let devs be devs… and let IT Ops be IT Ops! They need to work together in this hybrid, multi-cloud world!

VMworld 2019 Review – Project Pacific is a Stroke of Kubernetes Genius… but not without a catch!

Kubernetes Kubernetes, Kubernetes… say Kubernetes one more time… I dare you!

If it wasn't clear what the key takeaway from VMworld 2019 in San Francisco was last week, then I'll repeat it one more time… Kubernetes! It was something I predicted prior to the event in my session breakdown. And all jokes aside, with the number of times we heard Kubernetes mentioned last week, we know that VMware signalled their intent to jump on the Kubernetes freight train and ride it all the way.

When you think about it, the announcement of Project Pacific isn't a surprise. Apart from it being an obvious path to take to ensure VMware remains viable with both IT Operations (IT Ops) and Developers (Devs), the more I learned about what it actually does under the hood, the more I came to believe that it is a stroke of genius. If it delivers technically on its promise of full ESXi and Kubernetes integration into the one vSphere platform, then it will be a huge success.

The whole premise of Project Pacific is to use Kubernetes to manage workloads via declarative specifications, essentially allowing IT Ops and Devs to tell vSphere what they want and have it deploy and manage the infrastructure that ultimately serves as a platform for an application. This is all about the application! Abstracting all infrastructure, and most of the platform, to make the application work. We are now looking at a “platform platform” that controls all aspects of that lifecycle end to end.

By redesigning vSphere and implanting Kubernetes into its core, VMware is able to take advantage of the things that make Kubernetes popular in today's cloud native world. A Kubernetes Namespace is effectively a tenancy in Kubernetes that manages applications holistically, and it's at the Namespace level where policies are applied. QoS, security, availability, storage, networking and access controls can all be applied top-down from the Namespace. This gives IT Ops control, while still allowing Devs to be agile.

I see this construct as similar to what vCloud Director offers by way of a Virtual Datacenter, with vApps used as the container for VM workloads… in truth, the way in which vCD abstracted vSphere resources into tenancies with policies applied was maybe ahead of its time?

DevOps Separation:

DevOps has been a push in our industry for the last few years, and the pressure to be a DevOp is huge. The reality is that the two disciplines have fundamentally different approaches to each other's lines of work. This is why it was great to see VMware going out of their way to make the distinction between IT Ops and Devs.

Dev and IT Ops collaboration is paramount in today's IT world, and with Project Pacific, when a Dev looks at the vSphere platform they see Kubernetes, while an IT Ops guy looking at vSphere still sees vSphere and ESXi. This allows for integrated self-service and gives more speed, with control, to deploy and manage the infrastructure and platforms that run applications.

Consuming Virtual Machines as Containers and Extensibility:

Kubernetes was described as a Platform Platform… meaning that you can run almost anything in Kubernetes as long as it's declared. The above image shows a holistic application running in Project Pacific. The application is a mix of Kubernetes containers, VMs and other declared pieces… all of which can be controlled through vSphere and live under that single Namespace.

When you log into the vSphere console you can see a Kubernetes cluster in vSphere, see the PODs and action them as first class citizens. vSphere Native PODs are an optimized runtime… apparently more optimized than bare metal… 8% faster, as we saw in the keynote on Monday. This is achievable because CPU virtualization has almost zero cost today. VMware has taken advantage of the advanced ESXi scheduler, which handles operations across NUMA nodes, along with the ability to strip out what is not needed when running containers on VMs, so that there is an optimal runtime for workloads.

vSphere will have two APIs with Project Pacific: the traditional vSphere API that has been refined over the years, and the Kubernetes API. There is also the ability to create infrastructure with kubectl. Each ESXi cluster becomes a Kubernetes cluster. The work done with vSphere Integrated Containers has not gone to waste and has been used in this new integrated platform.

PODs and VMs live side by side, declared through Kubernetes and running in Kubernetes. All VMs can be stored in the container registry. Critical vulnerability scans, encryption and signing that exist in the container ecosystem can be leveraged at a container level and applied to VMs.

There is obviously a lot more to Project Pacific, and there are some great presentations up on YouTube from Tech Field Day Extra at VMworld 2019, which I have embedded below. In my opinion, they are a must for all working in and around the VMware ecosystem.

The Catch!

So what is the catch? With 70 million workloads across 500,000+ customers, VMware is betting that with this functionality in place, the current movement of refactoring workloads to take advantage of cloud native constructs like containers, serverless or Kubernetes doesn't need to happen… those, and existing workloads, instantly become first class citizens on Kubernetes. Interesting theory.

Having been digging into the complex and very broad container world for a while now, and only just realising how far it has come in terms of being high on most IT agendas, my current belief is that the world of Kubernetes and containers is better placed to be consumed on public clouds. The scale and immediacy of Kubernetes platforms on Google, Azure or AWS, without the need to procure hardware and install software, means that that model of consumption will still have an advantage over something like Project Pacific.

The one stroke of genius, as mentioned, is that by combining “traditional” workloads with Kubernetes as the control plane within vSphere, the single, declarative, self-service experience it potentially offers might stop IT Operations from moving to public clouds… but is that enough to stop the developers forcing their hands?

It is going to be very interesting to see this in action and how well it is ultimately received!

More on Project Pacific

The videos below give a good level of technical background into Project Pacific. Frank also has a good introductory post here, and Kit Colbert's VMworld session is linked in the references.

References:

https://videos.vmworld.com/global/2019/videoplayer/28407

VMworld 2019 – Top Session Picks

VMworld 2019 kicks off tomorrow (it is already Saturday here), and as I am about to embark on the 20+ hour journey from PER to SFO, I thought it was better late than never to share my top session picks. With sessions now available online, it doesn't really matter that the actual sessions are fully booked. The theme this year is “Make Your Mark”, which falls in line with themes of past VMworld events: it's all about VMware empowering its customers to do great things with its technology.

I’ve already given a session breakdown and analysis for this years event… and as a recap here are some of the keyword numbers relating to what tech is in what session.

Out of all that, and the 1,348 sessions currently in the catalog, I have chosen the list below as my top session picks.

  • vSphere HA and DRS Hybrid Cloud Deep Dive [HBI2186BU]
  • 60 Minutes of Non-Uniform Memory Architecture [HBI2278BU]
  • vCloud Director.Next : Deliver Cloud Services on Any VMware Endpoint [HBI2452BU]
  • Why Cloud Providers Choose VMware vCloud Director as Their Cloud Platform [HBI1453PU]
  • VMware Cloud on AWS: Advanced Automation Techniques [HBI1463BU]
  • The Future of vSphere: What you Need to Know [HBI4937BU]
  • NSX-T for Service Providers [MTE6105U]
  • Kubernetes Networking with NSX-T [MTE6104U]
  • vSAN Best Practices [HCI3450BU]
  • Deconstructing vSAN: A Deep Dive into the internals of vSAN [HCI1342BU]
  • VMware in Any Cloud: Introducing Microsoft Azure and Google Cloud VMware Solutions [HBI4446BU]

I also wanted to again call out the Veeam sessions.

  • Backups are just the start! Enhanced Data Mobility with Veeam [HBI3535BUS]
  • Enhancing Data Protection for vSphere with What’s Coming from Veeam [HBI3532BUS]

A lot of the sessions above relate to my ongoing interest in the service provider world and my continued passion for vCloud Director, NSX and vSAN as core VMware technologies. I also think the first two on the list are important because in this day of instant (gratification) services we still need to be mindful of what is happening underneath the surface… it's not just a case of some computer running some workload somewhere!

All in all it should be a great week in SFO and I'm looking forward to the event… now to finish packing and get to the airport!

Veeam @VMworld 2019 Edition…

VMworld 2019 is now only a couple of days away, and I can’t wait to return to San Francisco for what will be my seventh VMworld and third one with Veeam. It has again been an interesting year since the last VMworld and the industry has shifted a little when it comes to the backup and recovery market. Data management and data analytics have become the new hot topic item and lots of vendors have jumped onto the messaging of data growth at more than exponential rates.

VMware still have a lot to say about where and how that data is processed and stored!

VMworld is still a destination event and Veeam recognises VMware’s continued influence in the IT industry by continuing to support VMworld. The ecosystem that VMware has built over the past ten to fifteen years is significant and has only been strengthened by the extension of their technologies to the public cloud.

Veeam continues to build out our own strong ecosystem, backed by a software-first, hardware-agnostic platform, which results in the greatest flexibility in the backup and recovery market. We continue to support VMware as our number one technology partner, and this year we look to build on that with support for VMware Cloud on AWS and enhanced VMware feature sets built into our core Backup & Replication product as we look to release v10 later in the year.

Veeam Sessions @VMworld:

Officially we have two breakout sessions this year: Danny Allan and myself presenting What's Coming from Veeam, featuring our long-awaited CDP feature (#HBI3532BUS), and Michael Cade and David Hill presenting Backups are just the start…, with a look at how we offer Simplicity, Reliability and Portability as core differentiators (#HBI3535BUS).

There are also four vBrownBag Tech Talks in which Veeam features, including talks from Michael Cade, Tim Smith, Joe Hughes and myself. We are also doing a couple of partner lunch events focused on Cloud and Datacenter Transformation.

https://my.vmworld.com/widget/vmware/vmworld19us/us19catalog?search=Veeam

Veeam @VMworld Solutions Exchange:

This year, as per usual, we will have a significant presence on the floor as main sponsor of the Welcome Reception, with our booth (#627) area featuring demos, prizes and giveaways, as well as an Experts Bar. There will be a number of booth presentations throughout the event.

Veeam Community Support @VMworld:

Veeam still gets the community and has historically been a strong supporter of VMworld community-based events. This year, again, we have come to the party and gone all-in in terms of being front and center in supporting community events. Special mention goes to Rick Vanover, who leads the charge in making sure Veeam is doing what it can to help make these events possible:

  • Opening Acts
  • VMunderground
  • vBrownBag
  • Spousetivities
  • vRockstar Party
  • LevelUp Career Cafe

Party with Veeam @VMworld:

Finally, it wouldn't be VMworld without attending Veeam's legendary party. While we are not in Vegas this year and can't hold it at a super club, we have managed to book one of the best places in San Francisco… The Masonic. We have Andy Grammer performing, and while it won't be a Vegas-style Veeam event, it is already sold out, maxed at 2,000 people, so we know it's going to be a success and one of the best parties of 2019!

While it's officially sold out ticket-wise, if people do want to attend we suggest they come to the venue in any case, as there are sure to be no-shows.

Final Word:

Again, this year’s VMworld is going to be huge and Veeam will be right there front and center of the awesomeness. Please stop by our sessions, visit our stand and attend our community sponsored events and feel free to chase me down for a chat…I’m always keen to meet other members of this great community. Oh, and don’t forget to get to the party!

VMworld 2019 – Session Breakdown and Analysis

Everything to do with VMworld this year feels like it has arrived at lightning speed. I actually thought the event was two weeks away at the start of the week… but here we are, only five days away from kicking off in San Francisco. The content catalog for the US event has been live for a while now, and as has recently been the case, a lot of sessions were full just hours after it went live! At the moment there are a huge 1,348 sessions listed, which include the #vBrownBag Tech Talks hosted by the VMTN community.

As I do every year, I like to filter through the content catalog and work out what technologies are getting airplay at the event. It's interesting to go back to when I first started doing this and see the catalog evolve with the times… certain topics have faded away while others have grown, and some dominate. This ebbs and flows with VMware's strategies and makes for an interesting comparison.

What first struck me as being interesting was the track names compared to just two years ago at the 2017 event:

I see fewer buzzwords and more tracks that are tech-specific. Yes, within those sub-categories we have the usual elements of “digital transformation” and “disruption”, however VMware's focus looks to be more on the application of technology and not the high-level messaging that usually plagues tech conferences. VMworld has, for the most part, remained a technical conference for techs.

Digging into the sessions by searching on keywords alone, the list below shows where most of the sessions are being targeted this year. If, in 2015, you were to take a guess at which particular technology would get the most coverage at a VMworld in 2019, your list would look much different to what we see this year.

Looking back over previous years, there is a clear rise in the containers world, which is now dominated by Kubernetes. Thinking back to previous VMworlds, you would never see the big public cloud providers get airtime. Looking at how that has changed this year, we now have 231 sessions alone that mention AWS… not to mention the ones mentioning Azure or Google.

Strategy wise it’s clear that NSX, VMC and Kubernetes are front of mind for VMware and their ecosystem partners.

I take this as an indication of where the industry is… and is heading. VMware is still the main touch point for those that work in and around IT infrastructure support and services. They still own the ecosystem… and even with the rise of AWS, Azure, GCP and the like, they are still working out ways to hook those platforms into their own technology and are moving with industry trends as to where workloads are being provisioned. Kubernetes and VMware Cloud on AWS are a big part of that, but underpinning it is the network… and NSX is still heavily represented, with NSX-T becoming even more prominent.

One area that continues to warm my heart is the continued growth and support shown to the VMware Cloud Providers and vCloud Director. The numbers are well up from the dark days of vCD around the 2013 and 2014 VMworlds. For anyone working on cloud technologies, this year promises to be a bumper year for content, and I'm looking forward to catching as many vCD and VCPP related sessions as I can.

It promises to be an interesting VMworld, with VMware hinting at a massive shift in direction… I think we all know in a roundabout way where that is heading… let's see if we are right come next week.

https://my.vmworld.com/widget/vmware/vmworld19us/us19catalog

Quick Fix – Issues Upgrading VCSA due to Password Expiration

It seems like an interesting “condition” has worked itself into recent VCSA builds, where the upgrade process seems to reset the root account expiration flag. This blocked me from proceeding with an upgrade, and it only worked once I followed the steps listed below.

The error I got is shown below:

“Appliance (OS) root password is expired or is going to expire soon. Please change the root password before installing an update.”

When this happened on the first vCenter I went to upgrade, I thought there was a chance I had forgotten to set the password to never expire… but I usually check that setting and set it to never expire during initial configuration. Not the greatest security practice, but for my lab environments it's something I set almost automatically. After reaching out on Twitter, I got some immediate feedback saying to reset the root password by going into single user mode… which did work.

When this happened a second time on a second VCSA, on which I had without question set the never-expires flag to true, I took a slightly different approach to the problem and decided to try to reset the password from the VCSA console, however that process failed as well.

After going back through the Tweet responses, I did come across this VMware KB which lays out the issue and offers the reason behind the errors.

This issue occurs when VAMI is not able to change an expired root password.

Fair enough… but I don't have a reason for the password-never-expires option not being honoured. Some feedback and conversations suggest that maybe this is a bug that's worked its way into recent builds during upgrade procedures. In any case, the way to fix it is simple and doesn't need console access… you just need to SSH into the VCSA and reset the root password as shown below.
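The commands involved are the standard Linux ones, run as root from an SSH session on the VCSA:

```shell
# Set a new root password
passwd root

# Check the current expiry settings for root
chage -l root

# Set the root password to never expire (maximum age of -1)
chage -M -1 root
```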

Once done, the VCSA upgrade proceeds as expected, and we have also confirmed that Password Expires is set to never. If anyone can confirm the behaviour regarding that flag being reset, feel free to comment below.

Apart from that, there is the quick fix!

References:

https://kb.vmware.com/s/article/67414

Orchestration of NSX by Terraform for Cloud Connect Replication with vCloud Director

That is probably the longest title I've ever had on this blog, however I wanted to highlight everything that is contained in this solution. Everything above works together to get the job done. The job, in this case, is to configure an NSX Edge automatically using the vCloud Director Terraform provider, to allow network connectivity for VMs that have been replicated into a vCloud Director tenant organization with Cloud Connect Replication.

With the release of Update 4 for Veeam Backup & Replication, we enhanced Cloud Connect Replication to finally replicate into a service provider's vCloud Director platform. In doing this we enabled tenants to take advantage of the advanced networking features of the NSX Edge Services Gateway. The only caveat is that, unlike the existing Hardware Plan mechanism, where tenants were able to configure basic networking on the Network Extension Appliance (NEA), the configuration of the NSX Edge has to be done directly through the vCloud Director Tenant UI.

The Scenario:

When VMs are replicated into a vCD organisation with Cloud Connect Replication, the expectation in a full failover is that if a disaster happened on-premises, workloads would be powered on in the service provider cloud and work exactly as if they were still on-premises. Access to services needs to be configured through the edge gateway, and the edge gateway is connected to the replica VMs via the vOrg network in vCD.

In this example, we have a LAMP based web server that is publishing a WordPress site over HTTP and HTTPs.

The VM is being replicated to a Veeam Cloud Service Provider vCloud Director backed Cloud Connect Replication service.

During a disaster event at the on-premises end, we want to enact a failover of the replica living in the vCloud Director Virtual Datacenter.

The VM replica will be fired up, and the NSX Edge (the Network Extension Appliance pictured is used for partial failovers) associated with the vDC will allow HTTP and HTTPS to be accessed from the outside world. The internal IP and subnet of the VM are as they were on-premises; Cloud Connect Replication handles the mapping of the networks as part of the replication job.

Even during the early development days of this feature, I was thinking about how this process could be automated. With our previous Cloud Connect Replication networking, we would use the NEA as the edge device and allow basic configuration through the Failover Plan from the Backup & Replication console. That functionality still exists in Update 4, but only for non-vCD backed replication.

The obvious way would be to tap into the vCloud Director APIs and configure the Edge directly. Taking that further, we could wrap that up in PowerShell and invoke the APIs from there, which would allow a simpler way to pass through variables and deal with payloads. However, with the power that exists in the Terraform vCloud Director provider, it became a no-brainer to leverage it to get the job done.

Configuring NSX Edge with Terraform:

In my previous post around Infrastructure as Code vs APIs I went through a specific example where I configured an NSX Edge using Terraform. I’m not going to go over that again, but what I have done is published that Terraform plan with all the code to GitHub.

The GitHub Project can be found here.

The end result after running the Terraform Plan is:

  • Allowed HTTP, HTTPS, SSH and ICMP access to a VM in a vDC
    • Defined as a variable as the External IP
    • Defined as a variable as the Internal IP
    • Defined as a variable as the vOrg Subnet
  • Configure DNAT rules to allow HTTP, HTTPS and SSH
  • Configure SNAT rule to allow outbound from the vOrg subnet

The variables that align with the VM and vORG network are defined in the terraform.tfvars file and need to be modified to match the on-premises network configuration. The variables are defined in the variables.tf file.

To add additional VMs and/or vOrg networks, you will need to define additional variables in both files and add additional entries under firewall_rules.tf and nat_rules.tf. I will look at ways to make this more elegant using Terraform arrays/lists and programmatic constructs in the future.

Creating PowerShell for Execution:

The Terraform plan can obviously be run standalone and the NSX Edge configuration actioned at any time, but the idea here is to take advantage of the script functionality that exists in Veeam Backup & Replication jobs and have the Terraform plan run upon completion of the Cloud Connect Replication job every time it runs.

To achieve this we need to create a PowerShell script:

GitHub – configure_vCD_VCCR_NSX_Edge.ps1

The PowerShell script initializes Terraform and downloads the provider (with the upgrade flag set so future provider versions are picked up), and then executes the Terraform plan. Remember that the variables live within the Terraform plan itself, meaning these scripts remain unchanged.
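Stripped of the PowerShell wrapping, the script essentially boils down to the following Terraform CLI calls (a sketch; the actual script is in the GitHub project linked above):

```shell
# Initialise the working directory, downloading (and upgrading) the vCloud Director provider
terraform init -upgrade

# Apply the plan non-interactively so it can run unattended as a post-job script
terraform apply -auto-approve
```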

Adding Post Script to Cloud Connect Replication Job:

The final step is to configure the PowerShell script to execute once the Cloud Connect Replication job has run. This is done via the post-script settings found under Job Settings -> Advanced -> Scripts. Drop down to select ps1 files and choose the location of the script.

That’s all that is required to have the PowerShell script executed once the replication job completes.

End Result:

Once the replication component of the job is complete, the post job script will be executed by the job.

This triggers the PowerShell script, which runs the Terraform plan. It will check the existing state of the NSX Edge configuration and work out what configuration needs to be added. From the vCD Tenant UI, you should see the recent tasks list showing modifications to the NSX Edge Gateway made by the user configured to access the vCD APIs via the provider.

Taking a look at the NSX Edge firewall and NAT configuration, you should see that it has been configured as specified in the Terraform plan, and that it matches the current state of the Terraform plan.

Conclusion:

At the end of the day, what we have achieved is the orchestration of Veeam Cloud Connect Replication together with vCloud Director and NSX… facilitated by Terraform. This is something that service providers offering Cloud Connect Replication can provide to their clients as a way for them to define, control and manage the configuration of the NSX Edge networking for their replicated infrastructure, so that there is access to key services during a DR event.

While there might seem to be a lot happening here, this is a great example of leveraging Infrastructure as Code to automate an otherwise manual task. Once the Terraform is understood and the variables applied, the configuration of the NSX Edge will be consistent and in a desired state, with the config checked and applied on every run of the replication job. The configuration will not fall out of line with what is required during a full failover, ensuring that services are available if a disaster occurs.

References:

https://github.com/anthonyspiteri/automation/tree/master/vccr_vcd_configure_nsx_edge

Quick Post – vCloud Director 9.5.0.3 Released as Critical Update

Late last week, on the same day that vCloud Director 9.7 went GA, an update was also released for vCloud Director 9.5.x which has been marked as critical. Specifically, it relates to a vulnerability in previous vCloud Director 9.5.x releases with identifier CVE-2019-5523. Ironically, this threat targets the new Tenant and Provider Portals.

VMware vCloud Director for Service Providers update resolves a Remote Session Hijack vulnerability in the Tenant and Provider Portals. Successful exploitation of this issue may allow a malicious actor to access the Tenant or Provider Portals by impersonating a currently logged in session.

Obviously, given that vCloud Director 9.7 has just been released, it's unlikely that most service providers will upgrade right away, therefore the majority will be running vCloud Director 9.5.x for some time yet.

vCloud Director 9.0.x and 9.1.x are not affected.

References:

https://docs.vmware.com/en/vCloud-Director/9.5/rn/vCloud-Director-9503-for-Service-Providers-Release-Notes.html

https://www.vmware.com/security/advisories/VMSA-2019-0004.html

Quick Post – vCloud Director 9.5.0.2 Released

While we wait for the upcoming release of vCloud Director 9.7 next month (after the covers were torn off the next release in a blog post last week by the vCloud team), VMware has released a new build (9.5.0.2, Build 12810511) of vCloud Director 9.5 that contains a number of resolved issues and is a recommended patch update.

Looking through the resolved issues it seems like the majority of fixes are around networking and to do with NSX Edge Gateway deployments as well as a few fixes around OVF template importing and API interactions.

While looking through the new layout of the VMware Docs page for vCloud Director, I noticed that a few new builds for 9.1, 9.0 and 8.20 had shipped over the past few months. I updated the vCloud Director Release History to reflect the latest builds across all versions.

References:

https://docs.vmware.com/en/vCloud-Director/9.5/rn/vCloud-Director-9502-for-Service-Providers-Release-Notes.html

https://blogs.vmware.com/vcloud/2019/03/the-hybrid-cloud-gets-better-meet-vcloud-director-9-7.html

 
