Tag Archives: vSphere

AWS Outposts and VMware…Hybridity Defined!

Now that AWS re:Invent 2018 has well and truly passed…the biggest industry shift to come out of the event from my point of view was the fact that AWS are going in all guns blazing on the on-premises world. With the announcement of AWS Outposts, the long-held belief that the public cloud is the panacea for all things became blurred. No one company has pushed such a hard cloud-only message as AWS…and no one company has had the power to change the definition of what it is to run cloud services…AWS did that last week at re:Invent.

Yes, Microsoft have had the Azure Stack concept for a number of years now, however they have not yet executed on its promise. Azure Stack is seen by many as a white elephant even though it’s now in the wild and (depending on who you talk to) doing relatively well in certain verticals. The point though is that even Microsoft did not have the power to make people truly believe that a combination of a public cloud and on-premises platform was the path to hybridity.

AWS is a juggernaut, and it’s my belief that they have now reached an inflection point in mindshare and can dictate trends in our industry. They had enough power for VMware to partner with them so VMware could keep vSphere relevant in the cloud world, which resulted in VMware Cloud on AWS. It seems AWS have realised that with this partnership in place, they can muscle their way into the on-premises/enterprise world that VMware still dominates…at this stage.

Outposts as a Product Name is no Accident

Like many, I like the product name Outposts. It’s catchy and straight away you can make sense of what it is…however, I decided to look up the official meaning of the word…and it makes for some interesting reading:

  • An isolated or remote branch
  • A remote part of a country or empire
  • A small military camp or position at some distance from the main army, used especially as a guard against surprise attack

The first definition as per the Oxford Dictionary fits the overall idea of AWS Outposts: putting a compute platform in an isolated or remote branch office that is separate to AWS regions, while also offering the ability to consume that compute platform as if it was an AWS region. This represents a legitimate use case for Outposts and can be seen as AWS filling a gap in the market that shifting IT sentiment has been craving.

The second definition is an interesting one when taken in the context of AWS and Amazon as a whole. They are big enough to be their own country and have certainly built up an empire over the last decade. All empires eventually crumble, however AWS is not going anywhere fast. This move does however indicate a shift in tactics and means that AWS can penetrate the on-premises market quicker to extend their empire.

The third definition is also pertinent in the context of what AWS are looking to achieve with Outposts. They are setting up camp and positioning themselves a long way from their traditional stronghold. However my feeling is that they are not guarding against an attack…they are the attack!

Where does VMware fit in all this?

Given my thoughts above…where does VMware fit into all this? When the announcement was first made on stage I was confused. With Pat Gelsinger on stage next to Andy Jassy, my first impression was that VMware had given in. Here was AWS announcing a platform that competes directly with on-premises vSphere installations. Not only that, but VMware had announced Project Dimension at VMworld a few months earlier, which looked to be their own on-premises managed service offering…though the wording around that was for edge rather than on-premises.

With the initial dust settled and after reading this blog post from William Lam, I came to understand the VMware play here.

VMware and Amazon are expanding their partnership to deliver a new, as-a-service, on-premises offering that will include the full VMware SDDC stack (vSphere, NSX, vSAN) running on AWS Outposts, a fully managed and configurable server and network installation built with AWS-designed hardware. VMware Cloud on AWS Outposts is VMware’s new as-a-service offering in partnership with AWS to run on AWS Outposts – it will leverage the innovations we’ve developed with Project Dimension and apply them on top of AWS Outposts. VMware Cloud on AWS Outposts will be a subscription-based service and will support existing VMware payment options.

The reality is that on-premises environments are not going away any time soon, but customers like the operating model of the cloud. More and more, they don’t care about where infrastructure lives as long as a service outcome is achieved. Customers are after simplicity and cost efficiency. Outposts delivers all this by enabling convenience and choice…the choice to run VMware for traditional workloads using the familiar VMware SDDC stack, all while having access to native AWS services.

A Managed Service Offering Means a Mindset Shift

The big shift here from VMware, which began with VMware Cloud on AWS, is a shift towards managed services…a fundamental change in the mindset of the customer and in the way they consume their infrastructure. Without needing to worry about the underlying platform, IT can focus on the applications and the availability of those applications. For VMware this means from the VM up…for AWS, this means from the platform up.

VMware Cloud on AWS is a great example of this new managed services world, with VMware managing most of the traditional stack. VMware can now extend VMware Cloud on AWS to Outposts to boomerang the management of on-premises as well. Overall, Outposts is a win-win for both AWS and VMware…however the proof will be in the execution and uptake. We won’t know how it all pans out until the product becomes available…apparently in the latter half of 2019.

IT admins have some contemplating to do as well…what does a shift to managed platforms mean for them? This is going to be an interesting ride as it pans out over the next twelve months!

References:

VMware Cloud on AWS Outposts: Cloud Managed SDDC for your Data Center

vSphere 6.7 Update 1 – Top New Features and Platform Supportability

Last week VMware released vSphere 6.7 Update 1. While the buzz around this release was less than for the previous release, it still contains a ton of enhancements for vCenter, ESXi and vSAN. Like 6.7 before it, this is a lot more than a point release and represents a significant upgrade from vSphere 6.7.

Looking through the release notes, there appears to be less for service providers in this release, though I still feel it’s important to highlight the base hypervisor (ESXi) as well as the management platform (vCenter). vSAN has had another significant update and that will warrant a post of its own. I’ll also talk about current interoperability with vCloud Director and NSX, as well as touch on Veeam’s current supportability for vSphere 6.7 Update 1.

  • New (almost 100%) Fully functional HTML5 client
  • Upgrade path from vSphere 6.5 U2 to vSphere 6.7 Update 1
  • Enhanced support for NVIDIA Quadro vDWS VMs and support for Intel FPGA
  • New vCenter Convergence Tool
  • Updated vSAN
  • Enhanced vSphere Content Library

Fully Functional HTML5 Client

Most functions have now been ported across to the HTML5 vSphere Client, which means administrators no longer have to switch back and forth between the FLEX Web Client and the HTML5 client. Update 1 features:

  • vCenter High Availability (VCHA)
  • Auto Deploy
  • Host Profiles
  • vSphere Update Manager
  • Network Topology Diagrams
  • Performance Charts
  • Improved Searching
  • Dark Theme

Emad Younis has a detailed post here that goes through the new features.

Upgrade Path from vSphere 6.5 Update 2 to vSphere 6.7 Update 1

One of the issues with vSphere 6.7 was that the vSphere 6.5 Update 2 release could not be upgraded to vSphere 6.7. With the release of vSphere 6.7 Update 1, the upgrade from vSphere 6.5 Update 2 to vSphere 6.7 Update 1 is now fully supported.

Enhanced Content Library

New improvements to the Content Library in vSphere 6.7 Update 1 enable the importing of OVA templates from an HTTPS endpoint as well as from local storage. Importing now verifies the certificate of the OVA bundle, and the Content Library also now natively supports VM templates (VMTX) and associated operations, such as deploying a VM directly from the Content Library.
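
As a quick illustration of consuming the Content Library programmatically, below is a minimal PowerCLI sketch. The server, item, host and datastore names are hypothetical, and it assumes a PowerCLI version that includes Get-ContentLibraryItem and the -ContentLibraryItem parameter on New-VM:

    # Hedged PowerCLI sketch -- all names below are examples only
    Connect-VIServer -Server vcsa.lab.local

    # Grab the template item from the Content Library
    $item = Get-ContentLibraryItem -Name 'Win2016-Template'

    # Deploy a new VM from that item onto a target host and datastore
    New-VM -Name 'app01' -ContentLibraryItem $item `
        -VMHost (Get-VMHost 'esx01.lab.local') `
        -Datastore (Get-Datastore 'vsanDatastore')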

vCenter Specific Enhancements

With vCenter Server 6.7 Update 1, you can move a vCenter Server with an Embedded Platform Services Controller from one vSphere domain to another vSphere domain. Services such as tagging and licensing are retained and migrated to the new domain.
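For reference, the domain repoint is driven from the appliance shell. Below is a hedged sketch assuming the documented cmsso-util domain-repoint modes; the FQDNs and admin accounts are placeholders:

    # Dry-run the repoint first from the VCSA appliance shell
    cmsso-util domain-repoint -m pre-check --src-emb-admin Administrator \
        --replication-partner-fqdn vcsa02.lab.local \
        --replication-partner-admin Administrator \
        --dest-domain-name vsphere.local

    # ...then run it for real with the execute mode
    cmsso-util domain-repoint -m execute --src-emb-admin Administrator \
        --replication-partner-fqdn vcsa02.lab.local \
        --replication-partner-admin Administrator \
        --dest-domain-name vsphere.local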

There is a new Burst Filter to manage event bursts and prevent the database of vCenter Server from flooding with identical events over a short period of time.

vCenter Server 6.7 Update 1 supports VMware vSphere vMotion between on-premises vCenters and VMware Cloud on AWS. You can use the vSphere Client, the vSphere Web Client, or the API. Both sides need to be at 6.7 Update 1.
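
Beyond the clients, the same move can be scripted. Here is a hedged PowerCLI sketch of a cross-vCenter vMotion; all server, VM, datastore and portgroup names are hypothetical:

    # Hedged sketch -- connect to both vCenters, then relocate the VM
    $src = Connect-VIServer -Server vcsa-onprem.lab.local
    $dst = Connect-VIServer -Server vcenter.sddc.example.vmwarevmc.com

    $vm = Get-VM -Name 'web01' -Server $src

    Move-VM -VM $vm `
        -Destination (Get-VMHost -Server $dst | Select-Object -First 1) `
        -Datastore (Get-Datastore -Server $dst -Name 'WorkloadDatastore') `
        -NetworkAdapter (Get-NetworkAdapter -VM $vm) `
        -PortGroup (Get-VDPortgroup -Server $dst -Name 'workload-pg')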

You can import Open Virtual Appliance (OVA) files into a Content Library. The OVA files are unzipped during the import, providing manifest and certificate validations, and an OVF library item is created that enables deployment of virtual machines from the Content Library.

With vCenter Server 6.7 Update 1, you can use the Appliance Management User Interface to configure and edit the firewall settings of the vCenter Server Appliance.

ESXi Specific Enhancements

There are a few vendor/hardware related features and enhancements in Update 1 for ESXi 6.7. The release notes cover them in detail here. But as mentioned above, probably the biggest addition is the ability to upgrade from ESXi 6.5 Update 2, which I know a few service providers were stuck on. In terms of known issues, the release notes also contain a good list. There are some that impact Service Providers so it’s worth reading through them.
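
For hosts without Update Manager, the jump can also be done with esxcli against the VMware online depot. A hedged sketch follows; the image profile name is illustrative, so list the available profiles first:

    # List 6.7 image profiles available in the online depot
    esxcli software sources profile list \
        -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-6.7

    # Apply the chosen profile (hypothetical profile name shown)
    esxcli software profile update \
        -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
        -p ESXi-6.7.0-20181002001-standard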

vCD and NSX Supportability:

Shifting from new features and enhancements to an important subject to talk about when talking service provider platforms…VMware product compatibility. For those VCPP Service Providers running a Hybrid Cloud, you should be running a combination of vCloud Director SP and/or NSX-v, of which the 6.4.3 and 6.4.2 versions are supported at release. Most providers should be on these releases so that’s good news.

Looking at vCloud Director, it looks like 9.5 is the only supported version at the moment.

Veeam Backup & Replication Supportability: 

Veeam commits to supporting major version releases within 90 days or sooner of GA. There have been many discussions going around about whether an Update counts as a major release these days…and the general consensus now is that VMware is releasing these updates with enough changes to potentially impact backup supportability.

So with that, those Service Providers that are also VCSPs using Veeam to backup their infrastructure should not upgrade to vSphere 6.7 Update 1 until Backup & Replication Update 4 is released. For those that are bleeding edge and have already updated, your only option is to go with the workaround that is detailed here. It works…but again, it’s a workaround.

Wrapping Up:

Rounding off this post, in the Known Issues section there is a fair bit to be aware of for 6.7 Update 1. It’s worth reading through all the known issues just in case there are any specific issues that might impact you.

Happy upgrading!

References:

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-671-release-notes.html

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-671-release-notes.html

vSphere 6.7 – What’s in it for Service Providers Part 1

A few weeks ago, after much anticipation, VMware released vSphere 6.7. Like 6.5 before it, this is a lot more than a point release and represents a major upgrade from vSphere 6.5. There is so much packed into this new release that there is an official page with separate blog posts talking about the features and enhancements. As usual, I will go through some of the key features and enhancements that are included in the latest versions of vCenter and ESXi as they relate back to the Service Providers that use vSphere as the foundation of their Infrastructure as a Service offerings.

There is a lot to get through, and like the vSphere 6.5 release the what’s new will not fit into one post, so I’ll split the highlights between a couple of posts and cover ESXi specifically in a follow-up. I still feel it’s important to highlight the base hypervisor as well as the management platform. I’ll also talk about current interoperability with vCloud Director and NSX as well as Veeam supportability for vSphere 6.7.

The major features and enhancements as listed in the What’s New PDF are:

  • Scalability Enhancements
  • VMware vCenter Server Appliance Linked Mode
  • VMware vCenter Server Appliance Back Up Scheduler
  • Single Reboot
  • Quick Boot
  • Support for 4K Native Storage
  • Improved HTML 5 based vSphere Client
  • Security-at-Scale
  • Support for Trusted Platform Module (TPM) 2.0 and virtual TPM
  • Cross-vCenter Encrypted vMotion
  • Support for Microsoft’s Virtualization Based Security (VBS)
  • NVIDIA GRID vGPU Enhancements
  • vSphere Persistent Memory
  • Hybrid Linked Mode
  • Per-VM Enhanced vMotion Compatibility (EVC)
  • Cross-vCenter Mixed Version Provisioning – Simplify provisioning across hybrid cloud environments that have different vCenter versions

Below are the ones in red fleshed out in the context of Service Providers.

Enhanced vCenter Server Appliance:

The VCSA has been enhanced significantly in this release. Having used the VCSA exclusively for the past year in all my environments, I have a love/hate relationship with it. I still feel it’s nowhere near as stable as vCenter running on top of Windows and is prone to more issues than a Windows based vCenter…however this 6.7 release will be the last one supporting or offering a Windows based vCenter. With that, VMware have had to work hard on making the VCSA more resilient.

Compared to the 6.5 VCSA, 6.7 offers twice the performance in vCenter operations per second, with a three times reduction in memory usage and three times faster DRS operations, meaning that power-on and other VM operations are performed quicker. This is great on a service provider platform with potentially lots of those operations happening during the course of a day. Hopefully this improves the overall responsiveness of the VCSA, which I have felt at times to be poor under load or after an extended period of appliance uptime.

There have also been a number of updates to the APIs offered in vSphere, the VCSA and ESXi. William Lam has a great post on what’s new for APIs here, but all Service Providers should have teams looking at the API Explorer as it’s a great way to explore and learn what’s available.
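
As a taste of what the API Explorer exposes, here’s a hedged PowerShell sketch against the vAPI REST endpoints that ship with 6.5/6.7; the vCenter name and credentials are placeholders, and it assumes the vCenter certificate is trusted:

    # Hedged sketch -- authenticate, then list VMs via the REST API
    $vc   = 'vcsa.lab.local'
    $cred = Get-Credential

    # Basic-auth against the session endpoint to obtain a session token
    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(
        "$($cred.UserName):$($cred.GetNetworkCredential().Password)"))
    $session = Invoke-RestMethod -Method Post `
        -Uri "https://$vc/rest/com/vmware/cis/session" `
        -Headers @{ Authorization = "Basic $auth" }

    # Use the returned session id for subsequent calls
    Invoke-RestMethod -Method Get -Uri "https://$vc/rest/vcenter/vm" `
        -Headers @{ 'vmware-api-session-id' = $session.value }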

Single Reboot and Quick Boot:

For Service Providers who need to upgrade their platforms to maintain optimal compatibility, upgrading hosts can be time consuming at scale. vSphere 6.7 reduces ESXi host upgrade times by eliminating one of the two reboots normally required for major version upgrades. This is the single reboot feature. There is also vSphere Quick Boot, which restarts the ESXi hypervisor without rebooting the physical host, skipping time-consuming server hardware initialization and post-boot wait times. Both of these significantly reduce maintenance times.

This blog post covers both features in more detail.
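
To find out whether a given host actually supports Quick Boot, VMware ships a compatibility check script on the ESXi host itself (per VMware KB 52477). A minimal sketch:

    # Run on the ESXi host; reports whether Quick Boot is supported
    /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py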

Improved HTML 5 based vSphere Client:

While minor in terms of actual under-the-hood improvements, the efficiencies that are gained when it comes to a decent user interface are significant. When managing Service Provider platforms at scale, having a reliable client is important, and with the decommissioning of the VI client and the often frustrating performance of the Flex client, a near complete and workable HTML5 vSphere Client is a big plus for those who work day to day in vCenter.

The vSphere 6.7 vSphere Client has support for vSAN as well as having Update Manager fully built in. As per the NSX 6.4 update, there is also limited management of NSX. There is also a new vROps plugin…this plugin is available out of the box once vROps has been linked with vCenter, and offers dashboards directly in the vSphere Client covering overview, cluster views and alerts for both vCenter and vSAN. This is extremely handy for Service Providers who use vROps dashboards, as they no longer need to go to two different locations to get the info.

vCD and NSX Supportability:

Shifting from new features and enhancements to an important subject to talk about when talking service provider platforms…VMware product compatibility. For those VCPP Service Providers running a Hybrid Cloud, you should be running a combination of vCloud Director SP and/or NSX-v, of which, at the moment, there is no support for either in vSphere 6.7.

Looking at vCloud Director, it looks like 9.1 is supported, however given that you need to be running NSX-v with vCD these days and NSX is not yet supported, it doesn’t make too much sense to suggest that there is total compatibility.

I suspect we will see NSX-v come out with a supported build shortly…though I’m only expecting vCloud Director SP to support 6.7 from version 9.1, which will mean upgrades.

Veeam Backup & Replication Supportability: 

Veeam commits to supporting major version releases within 90 days or sooner of GA. So with that, those Service Providers that are also VCSPs using Veeam to backup their infrastructure should not upgrade to vSphere 6.7 until Backup & Replication Update 3a is released. For those that are bleeding edge and have already updated, your only option at that point is our Agents for Windows and Linux until Update 3a is released.

Wrapping up Part 1:

Rounding off this post, in the Known Issues section there is a fair bit to be aware of for 6.7. It’s worth reading through all the known issues just in case there are any specific issues that might impact you. In upcoming posts in this vSphere 6.7 for Service Providers series I will cover more vCenter features as well as ESXi enhancements and what’s new in Core Storage.

Happy upgrading!

References:

https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html

Introducing Faster Lifecycle Management Operations in VMware vSphere 6.7

vSphere 6.5 Update 1 – What’s in it for Service Providers

Late last week VMware released vSphere 6.5 Update 1, which included updated builds of both vCenter and ESXi, and as per usual I will go through some of the key features and fixes that are included in the latest versions of vCenter and ESXi. When looking through the release notes I generally keep an eye out for improvements that relate back to Service Providers who use vSphere as the foundation of their Managed or Infrastructure as a Service offerings. This update also contains an update to vSAN, which is now at 6.6.1, so I’ll spend some time looking at what’s been added there.


New Features and Enhancements:

Without question this is a significant patch release for vCenter and ESXi, and the length of the release notes is testament to that point. In terms of new features there isn’t anything groundbreaking, but there are a few nice additions, like being able to run the VCSA GUI and CLI installers on Windows 2012, 2012 R2 and 2016, while macOS Sierra and Ubuntu 17.04 are now supported for Guest OS Customization. vCenter now supports Microsoft SQL Server 2014 SP2, 2016 and 2016 SP1, as well as some increased configuration maximums, supporting Linked Mode with 15 vCenter instances, 5,000 ESXi hosts and 50,000 powered-on virtual machines.

Ability to Upgrade or Migrate from vCenter 6.0 Update 3:

This release addresses the previous limitation in the upgrade and migration path for those running vSphere 6.0 U3 and going to vSphere 6.5. I know this will make a lot of providers happy, as I know a lot that had to go to 6.0 Update 3 to address existing bugs in the platform but were not yet ready or able to go to 6.5 at the time.

HTML5 Client Update:

The HTML5 Web Client has gotten its own update that brings it up to speed with the 3.15 Fling version, however it’s still only partially functional, which remains somewhat frustrating…The online documentation for supported functionality has been updated to vSphere 6.5 U1 and is available here.

The list below is of the main updates in this release.

  • DRS/HA VM overrides
  • SDRS rules
  • Content Library – further actions
  • Roles and Global Permissions
  • Download multiple files as zip
  • Distributed Switch – further actions
  • Fault Tolerance
  • SPBM
  • VM Hardware – further items
  • Apply Customize Guest OS during Clone
  • VM Migration – further actions (compute+storage, Cross VC, batch)

vSAN Features:

For service providers, vSAN 6.6 was another major release that shored up vSAN’s status as a serious storage platform for service provider platforms.

vSAN 6.6.1 introduces three key new features:

  • VMware vSphere Update Manager (VUM) integration
  • Performance Diagnostics in vSAN Cloud Analytics
  • Storage Device Serviceability enhancement

The ability to upgrade with VUM is a nice touch and continues to improve the usability and manageability of vSAN. For a full look at what’s new in this release for vSAN 6.6.1, head to this blog post.

Resolved Issues:

There are a bunch of resolved issues in this release and I’ve gone through the rather extensive list to pull out the biggest fixes that relate to my experience in service provider operations, and have also extended this to include fixes that relate to backup operations. The majority of what I picked out relates to storage, networking, hosts and VM operations…the core of any platform, but even more important in the service provider world. The ones in red are specific fixes that relate to issues that I’ve come across…good to see them addressed!

vCenter:
  • First-boot failure occurs when upgrading from vSphere 5.5 or 6.0 to vSphere 6.5 on Windows. If an older version of the OpenSSL DLLs is installed, the upgrade to vSphere 6.5 fails because the older DLL versions are loaded.
  • Affinity rules configured on vCenter Server 5.5 can cause crashes after upgrading to vCenter Server 6.5. Migrating a VM with affinity rules configured while on vCenter Server 5.5 to a cluster that has affinity rules configured on vCenter Server 6.0 or 6.5 can cause vCenter Server to crash.
  • VM Snapshot Size (GB) alarm is not triggered after the VM is powered on. The VM Snapshot Size (GB) alarm is reset if the virtual machine is shut down, and fails to trigger after the VM is powered on. This issue occurs in alarms based on VM Snapshot (GB) and VM Total Size on Disk because their status is altered when the power state of the VM is changed, while disk usage of a VM is the same regardless of the VM power state.
  • When you add ports to a vSphere Distributed Switch you get an error. Because of a race condition, when you add ports to a vSphere Distributed Switch you get the error message: Cannot create a new port because number of ports exceeds 2147483647, maximum number of ports allowed on vDS.
  • A runtime exception “Unable to retrieve data about the distributed switch” might occur while upgrading a vSphere Distributed Switch (vDS) from version 5.0 to 6.5. When you try to upgrade an existing distributed switch after the vCenter upgrade is completed, the runtime exception Unable to retrieve data about the distributed switch might occur in the wizard and the distributed switch cannot be upgraded. The exception is the result of an unexpected NULL value for a LACP property of the distributed switch, instead of TRUE or FALSE, as LACP is not supported for the current version of the vSphere Distributed Switch.
  • Host configuration might not be available after vCenter Server restarts. After a vCenter Server restart, the host configuration might not be available if vCenter Server cannot communicate with the host. After connectivity is restored, the configuration becomes available.
  • OVF tool fails to upload OVF or OVA files larger than 10 GB. If you use OVF tool to upload OVF or OVA files larger than 10 GB, the upload might fail.

ESXi:

  • Virtual machine crashes on ESXi 6.5 when multiple users log on to a Windows Terminal Server VM. A Windows 2012 terminal server running VMware Tools 10.1.0 on ESXi 6.5 stops responding when many users are logged in. vmware.log will show messages similar to:
    2017-03-02T02:03:24.921Z| vmx| I125: GuestRpc: Too many RPCI vsocket channels opened.
    2017-03-02T02:03:24.921Z| vmx| E105: PANIC: ASSERT bora/lib/asyncsocket/asyncsocket.c:5217
    2017-03-02T02:03:28.920Z| vmx| W115: A core file is available in "/vmfs/volumes/515c94fa-d9ff4c34-ecd3-001b210c52a3/h8-
    ubuntu12.04x64/vmx-debug-zdump.001"
    2017-03-02T02:03:28.921Z| mks| W115: Panic in progress... ungrabbing
  • An ESXi host might fail with purple diagnostic screen when collecting performance snapshots
    An ESXi host might fail with a purple diagnostic screen when collecting performance snapshots with vm-support, due to calls for memory access after the data structure has already been freed. An error message similar to the following is displayed:
  • Full duplex configured on physical switch may cause duplex mismatch issue with igb native Linux driver supporting only auto-negotiate mode for nic speed/duplex setting
    If you are using the igb native driver on an ESXi host, it always works in auto-negotiate speed and duplex mode. No matter what configuration you set up on this end of the connection, it is not applied on the ESXi side. The auto-negotiate support causes a duplex mismatch issue if a physical switch is set manually to a full-duplex mode.
  • An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) – possible deadlock with PCPU error An ESXi host might fail with a purple screen and a Spin count exceeded (refCount) - possible deadlock with PCPU error, when you reboot the ESXi host under the following conditions:
    • You use the vSphere Network Appliance (DVFilter) in an NSX environment
    • You migrate a virtual machine with vMotion under DVFilter control
  • A Virtual Machine (VM) with e1000/e1000e vNIC might have network connectivity issues For a VM with e1000/e1000e vNIC, when the e1000/e1000e driver tells the e1000/e1000e vmkernel emulation to skip a descriptor (the transmit descriptor address and length are 0), a loss of network connectivity might occur.
  • An ESXi host might stop responding when you migrate a virtual machine with Storage vMotion between ESXi 6.0 and ESXi 6.5 hosts The vmxnet3 device tries to access the memory of the guest OS while the guest memory preallocation is in progress during the migration of virtual machine with Storage vMotion. This results in an invalid memory access and the ESXi 6.5 host failure.
  • Modification of IOPS limit of virtual disks with enabled Changed Block Tracking (CBT) fails with errors in the log files. To define the storage I/O scheduling policy for a virtual machine, you can configure the I/O throughput for each virtual machine disk by modifying the IOPS limit. When you edit the IOPS limit and CBT is enabled for the virtual machine, the operation fails with an error The scheduling parameter change failed. Due to this problem, the scheduling policies of the virtual machine cannot be altered. The error message appears in the vSphere Recent Tasks pane. You can see the following errors in the /var/log/vmkernel.log file:
    2016-11-30T21:01:56.788Z cpu0:136101)VSCSI: 273: handle 8194(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000
    2016-11-30T21:01:56.788Z cpu0:136101)ScsiSched: 2760: Invalid Bandwidth Cap Configuration
    2016-11-30T21:01:56.788Z cpu0:136101)WARNING: VSCSI: 337: handle 8194(vscsi0:0):Failed to invert policy
  • When you hot-add an existing or new virtual disk to a CBT (Changed Block Tracking) enabled virtual machine (VM) residing on a VVOL datastore, the guest operating system might stop responding. The guest operating system stops responding until the hot-add process completes. The VM unresponsiveness depends on the size of the virtual disk being added, and the VM automatically recovers once the hot-add completes.
  • When you use vSphere Storage vMotion, the UUID of a virtual disk might change When you use vSphere Storage vMotion on vSphere Virtual Volumes storage, the UUID of a virtual disk might change. The UUID identifies the virtual disk and a changed UUID makes the virtual disk appear as a new and different disk. The UUID is also visible to the guest OS and might cause drives to be misidentified.
  • An ESXi host might become unresponsive if the VMFS-6 volume has no space for the journal When opening a VMFS-6 volume, it allocates a journal block. Upon successful allocation, a background thread is started. If there is no space on the volume for the journal, it is opened in read-only mode and no background thread is initiated. Any intent to close the volume, results in attempts to wake up a nonexistent thread. This results in the ESXi host failure.
  • SSD congestion might cause multiple virtual machines to become unresponsive. Depending on the workload and the number of virtual machines, diskgroups on the host might go into permanent device loss (PDL) state. This causes the diskgroups to not admit further I/Os, rendering them unusable until manual intervention is performed.
  • Unable to collect a vm-support bundle from an ESXi 6.5 host. When generating logs in ESXi 6.5 by using the vSphere Web Client, the select specific logs to export text box is blank, and the options (network, storage, fault tolerance, hardware etc.) are blank as well. This issue occurs because the rhttpproxy port for /cgi-bin has a value different from 8303.
  • vSphere Storage vMotion might fail with an error message if it takes more than 5 minutes. The destination virtual machine of the vSphere Storage vMotion is incorrectly stopped by a periodic configuration validation for the virtual machine. A vSphere Storage vMotion that takes more than 5 minutes fails with the message The source detected that the destination failed to resume.
    The VMkernel log from the ESXi host contains the message D: Migration cleanup initiated, the VMX has exited unexpectedly. Check the VMX log for more details.

vSAN:

  • Hosts in a vSAN cluster have high congestion which leads to host disconnects When vSAN components with invalid metadata are encountered while an ESXi host is booting, a leak of reference counts to SSD blocks can occur. If these components are removed by policy change, disk decommission, or other method, the leaked reference counts cause the next I/O to the SSD block to get stuck. The log files can build up, which causes high congestion and host disconnects.
  • vSAN cluster becomes partitioned after the member hosts and vCenter Server reboot If the hosts in a unicast vSAN cluster and the vCenter Server are rebooted at the same time, the cluster might become partitioned. The vCenter Server does not properly handle unstable vpxd property updates during a simultaneous reboot of hosts and vCenter Server.
  • Large File System overhead reported by the vSAN capacity monitor When deduplication and compression are enabled on a vSAN cluster, the Used Capacity Breakdown (Monitor > vSAN > Capacity) incorrectly displays the percentage of storage capacity used for file system overhead. This number does not reflect the actual capacity being used for file system activities. The display needs to correctly reflect the File System overhead for a vSAN cluster with deduplication and compression enabled.

It’s also worth reading through the Known Issues section, as there is a fair bit to be aware of in Update 1, as well as issues that remain from the GA release.

Happy upgrading!

References:

https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-esxi-651-release-notes.html

https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-vcenter-server-651-release-notes.html

Second vSphere Client (HTML5) update in vSphere 6.5U1

Introducing vSAN 6.6.1 and New Operational Savings

ESXi 6.5 Storage Performance Issues Resolved in Update 1

I originally came across the issue of slow storage performance with the native vmw_ahci driver that comes bundled with ESXi 6.5 just as I was first playing with my SuperMicro SYS-5028D-TN4T in my homelab. After I published a couple of posts about the workaround, the issue became quite prevalent in the community, and the post continues to get decent traffic, meaning the issue impacted quite a few people out there.

The good news is that with the release of vSphere 6.5 Update 1 there is a fix for the problem in the form of updated drivers for the AHCI module. William Lam has been quick to blog about the fix and if you had previously disabled the driver you will need to re-enable it.
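
For reference, re-enabling the native driver is a one-liner on the host. A hedged sketch, assuming you used the commonly circulated workaround of disabling the module:

    # Re-enable the native AHCI driver, then reboot the host
    esxcli system module set --enabled=true --module=vmw_ahci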

This VMwareKB covers the specific patch as listed in the release notes:

There’s no confirmation as of yet that it actually does the trick, but the release notes look promising, and the assumption is that it will resolve the issues so that homelabbers and people using the driver in production systems can rest easy.

References:

https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-esxi-651-release-notes.html

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2149910

http://www.virtuallyghetto.com/2017/07/ahci-vmw_ahci-performance-issue-resolved-in-esxi-6-5-update-1.html

Quick Fix – Unable to Upgrade Distributed Switch After vCenter Upgrade

This week I upgraded (and migrated) my SliemaLabs NestedESXi vCenter from a Windows 6.0 server to a 6.5 VCSA…everything went well, but I ran into an issue when I went to upgrade my distributed switch to 6.5.0. Even though everything appeared to be working with regards to the host and VM networking associated with the switch, when I went to upgrade it I got the following error:

Doing a quick Google for Unable to retrieve data about the distributed switch came up with nothing, and clicking on next didn’t do anything actionable. A restart of the Web Client and a reboot of the VCSA didn’t resolve the issue either. The distributed switch in question was still on version 5.5, as I forgot to upgrade it to 6.0 during the upgrade to vCenter 6.0. Whether that condition somehow caused the error I am not sure…regardless, the quick fix, or better said workaround, is pretty simple: use PowerCLI.

Interestingly the Vendor is different…though I’m not sure this caused the issue. In any case the workaround is to upgrade the distributed switch using the Set-VDSwitch command.
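
A hedged sketch of the command; the vCenter and switch names below are examples from my lab, so adjust accordingly:

    # Upgrade the distributed switch to 6.5.0 via PowerCLI
    Connect-VIServer -Server vcsa.sliemalabs.local
    Get-VDSwitch -Name 'NestedESXi-VDS' | Set-VDSwitch -Version '6.5.0'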

And success!

I’m not sure what caused the error to appear in the Web Client, but the workaround meant that it became a moot point. Suffice to say, if you come across this error in your Web Client when trying to upgrade a distributed switch…head over to PowerCLI.


migrate2vcsa – Migrating vCenter 6.0 to 6.5 VCSA

Over the past few years I’ve written a couple of articles on upgrading vCenter from 5.5 to 6.0: firstly an in-place upgrade of the 5.5 VCSA to 6.0, and more recently an in-place upgrade of a Windows 5.5 vCenter to 6.0. This week I upgraded and migrated my NestedESXi SliemaLab vCenter using the migrate2vcsa tool that’s now bundled into the vCenter 6.5 ISO. The process worked the first time, and even though I held some doubts about the migration working without issue, my Windows vCenter is now in retirement.

The migration tool that’s part of vSphere 6.5 was actually first released as a VMware fling after it was put forward as an idea in 2013. It then officially went GA with the release of vSphere 6.0 Update 2m…where the m stood for migration. Over its development it has been championed by William Lam, who has written a number of articles on his blog, and more recently Emad Younis has been the technical marketing lead on the product as it was enhanced for vSphere 6.5.

Upgrade Options:

You basically have two options to upgrade a Windows based 6.0 vCenter:

  • Upgrade in place to a Windows based vCenter 6.5
  • Migrate to the 6.5 VCSA using the bundled migrate2vcsa tool

My approach for this particular environment was to ensure a smooth upgrade to vSphere 6.0 Update 2 and then look to upgrade again to 6.5 once it matures in the market. The cautious approach will still be undertaken by many, and a stepped upgrade to 6.5 and migration to the VCSA will still be commonplace. For those that wish to move away from their Windows vCenter, there is now a very reliable #migrate2vcsa path…as a side note, it is possible to migrate directly from 5.5 to 6.5.

Existing Component Versions:

  • vCenter 6.0 (4541947)
    • NSX Registered
    • vCloud Director Registered
    • vCO Registered
  • ESXi 6.0 (3620759)
  • Windows 2008 (RTM)
  • SQL Server 2008 R2 (10.50.6000.34)

All vCenter components were installed on the Windows vCenter instance, including Update Manager. There were also a number of external services registered against the vCenter, of which the NSX Manager needed to be re-registered for SSO to allow/trust the new SSL certificate thumbprint. This is common, and one to look out for after migration.

Migration Process:

I’m not going to go through the whole process as it’s been blogged about a number of times, but in a nutshell you need to:

  • Take a backup of your existing Windows vCenter
  • I took a snapshot as well before I began the process
  • Download the vCenter Server Appliance 6.5 ISO and mount the ISO
  • Copy the migration-assistant folder to the Windows vCenter
  • Start the migration-assistant tool and work through the pre-checks

If all checks complete successfully the migration assistant will finish at Waiting for migration to start. From here you start the VCSA 6.5 installer and click on the Migrate menu option.
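
As a side note, the same migration can be driven unattended from the CLI installer on the ISO using a filled-in JSON template. A hedged sketch, with the paths and file name being illustrative only:

    :: Hedged sketch -- run the CLI deployer from the mounted ISO on Windows
    vcsa-cli-installer\win32\vcsa-deploy.exe migrate --accept-eula ^
        --no-esx-ssl-verify path\to\my_migration.json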

Work through the wizard which asks you for detail on the source and target servers, lets you select the compute, storage and appliance size as well as the networking settings. Once everything is entered we are ready to start Stage 1 of the process.

When Stage 1 finishes you are taken to Stage 2, where it asks you to select the migration data as shown below. This will give you some idea as to how much storage you will need and what the initial footprint will be over and above the actual VCSA VM storage.

There are a couple more steps the migration assistant goes through to complete the process…which for me took about 45 minutes, but this will vary depending on the amount of data you want to transfer across.

If there are any issues, or if the migration fails at any of the steps, you do have the option to power down/remove the new VCSA and power the old Windows vCenter back on as-is. The old Windows vCenter is shut down by the migration process just as the copying of the key data finishes, and the VCSA is rebooted with the network settings and machine name copied across. There is a proper rollback series of steps listed in this VMwareKB.

The only external service that I needed to re-register against vCenter was NSX. vCloud Director carried on without issue, but it’s worth checking out all registered services just in case.

Conclusion and Thoughts:

As mentioned at the start, I was a bit skeptical that this process would work as flawlessly as it did…and on its first attempt! It’s almost a little disappointing to have this as automated and hands-off as it is, but it’s a testament to the engineering effort the team at VMware has put into this tool to make it a very viable and reliable way to remove dependencies on Windows and MSSQL. It also allows those with older versions of Windows that are well past their use-by date the ability to migrate to the VCSA with absolute confidence.

References:

http://www.virtuallyghetto.com/page/2?s=migrate2vcsa

https://github.com/younise/migrate2vcsa-resources

VMware vSphere 6.5 Host Resources Deep Dive – A Must Have!

Just after I joined Zettagrid in June of 2013, I decided to load up the vSphere 5.1 Clustering Deepdive by Duncan Epping and Frank Denneman on my iPad to read on my train journey to and from work. Reading that book allowed me to gain a deeper understanding of vSphere through the in-depth content that Duncan and Frank had produced. Any VMware administrator worth their salt would be familiar with the book (or the ones that preceded it) and it’s still a brilliant read.

Fast forward a few versions of vSphere and we finally have a follow-up:

VMware vSphere 6.5 Host Resources Deep Dive

This time around Frank has been joined by Niels Hagoort, and together they have produced another must-have virtualization book…though it goes far beyond VMware virtualization. I was lucky enough to review a couple of chapters of the book and I can say without question that this book will make your brain hurt…but in a good way. It’s the deepest of deep dives and it goes beyond the previous book’s best practices, diving into a lot of the low-level compute, storage and networking fundamentals that a lot of us have either forgotten about, never learnt, or never bothered to learn about.

This book explains the concepts and mechanisms behind the physical resource components and the VMkernel resource schedulers, which enables you to:

  • Optimize your workload for current and future Non-Uniform Memory Access (NUMA) systems.
  • Discover how vSphere Balanced Power Management takes advantage of the CPU Turbo Boost functionality, and why High Performance does not.
  • How the 3-DIMMs per Channel configuration results in a 10-20% performance drop.
  • How TLB works and why it is bad to disable large pages in virtualized environments.
  • Why 3D XPoint is perfect for the vSAN caching tier.
  • What queues are and where they live inside the end-to-end storage data paths.
  • Tune VMkernel components to optimize performance for VXLAN network traffic and NFV environments.
  • Why Intel’s Data Plane Development Kit significantly boosts packet processing performance.

If any of you have read Frank’s NUMA Deep Dive blog series you will start to get an appreciation of the level of technical detail this book covers, however it is written in a way that allows you to absorb the information in a digestible manner, though some parts may need to be read twice over. Well done to Frank and Niels on getting this book out and again, if you are working in and around anything to do with computers this is a must-read, so do yourself a favour and grab a copy.

The Amazon locales where the book is currently available for purchase are below:

Amazon US: http://www.amazon.com/dp/1540873064
Amazon France: https://www.amazon.fr/dp/1540873064
Amazon Germany: https://www.amazon.de/dp/1540873064
Amazon India: http://www.amazon.in/dp/1540873064
Amazon Japan: https://www.amazon.co.jp/dp/1540873064
Amazon Mexico: https://www.amazon.com.mx/dp/1540873064
Amazon Spain: https://www.amazon.es/dp/1540873064
Amazon UK: https://www.amazon.co.uk/dp/1540873064

Released: vCenter and ESXi 6.0 Update 3 – What’s in It for Service Providers

Last month I wrote a blog post on upgrading vCenter 5.5 to 6.0 Update 2, and during the course of writing that post I conducted a survey on which version of vSphere most people were seeing out in the wild…overwhelmingly vSphere 6.0 was the most popular version, with 5.5 second and 6.5 lagging in adoption for the moment. It’s safe to assume that vCenter 6.0 and ESXi 6.0 will be common deployments for some time in brownfield sites, and with the release of Update 3 for vCenter and ESXi I thought it would be good to again highlight some of the best features and enhancements as I see them from a Service Provider point of view.

vCenter 6.0 Update 3 (Build 5112506)

This is actually the eighth build release of vCenter 6.0 and includes updated TLS support for v1.0, v1.1 and v1.2, which is worth a look in terms of what it means for other VMware products as it could impact connectivity…I know that vCloud Director SP now expects TLSv1.1 by default as an example. Other things listed in the What’s New include support for MSSQL 2012 SP3, updated M2VCSA support, timezone updates and some changes to the resource allocation for the Platform Services Controller.

Looking through the Resolved Issues there are a number of networking related fixes in the release, plus a few annoying problems relating to vMotion. The ones below are the main ones that could impact Service Provider operations.

  • Upgrading vCenter Server from version 6.0.0b to 6.0.x might fail.
    Attempts to upgrade vCenter Server from version 6.0.0b to 6.0.x might fail while services are starting. An error message similar to the following is displayed in the run-updateboot-scripts.log file:
    “Installation of component VCSServiceManager failed with error code ‘1603’”
  • Managing legacy ESXi from the vCenter Server with TLSv1.0 disabled is impacted.
    vCenter Server with TLSv1.0 disabled supports management of legacy ESXi versions in 5.5.x and 6.0.x. ESXi 5.5 P08 and ESXi 6.0 P02 onwards is supported for 5.5.x and 6.0.x respectively.
  • x-VC operations involving legacy ESXi 5.5 host succeeds.
    x-VC operations involving legacy ESXi 5.5 host succeeds. Cold relocate and clone have been implicitly allowed for ESXi 5.5 host.
  • Unable to use End VMware Tools install option using vSphere Client.
    Unable to use End VMware Tools install option while installing VMware Tools using vSphere Client. This issue occurs after upgrading to vCenter Server 6.0 Update 1.
  • Enhanced vMotion fails to move the vApp.VmConfigInfo property to destination vCenter Server.
    Enhanced vMotion fails to move the vApp.VmConfigInfo property to destination vCenter Server although virtual machine migration is successful.
  • Storage vMotion fails if the VM is connected with a CD ISO file.
    If the VM is connected with a CD ISO file, Storage vMotion fails with an error similar to the following:
  • Unregistering an extension does not delete agencies created by a solution plug-in.
    The agencies or agents created by a solution such as NSX, or any other solution which uses EAM is not deleted from the database when the solution is unregistered as an extension in vCenter Server.

ESXi 6.0 Update 3 (Build 5050593)

The what’s new in ESXi is a lot more exciting than the what’s new with vCenter, highlighted by a new Host Client and fairly significant improvements in vSAN performance, along with similar TLS changes to those included in vCenter Update 3. With regards to the Host Client, the version is now 1.14.0 and includes bug fixes that bring it closer to the functionality provided by the vSphere Client. It’s also worth mentioning that new versions of the Host Client continue to be released through the VMware Labs Flings site, but those versions are not officially supported and not recommended for production environments.

For vSAN, multiple fixes have been introduced to optimize the I/O path for improved vSAN performance in All Flash and Hybrid configurations, and there is a separate VMwareKB that addresses the fixes here.

  • More Logs, Much Less Space – vSAN now has efficient log management strategies that allow more logging to be packed per byte of storage. This prevents the log from reaching its assigned limit too fast and too frequently. It also provides enough time for vSAN to process the log entries before it reaches its assigned limit, thereby avoiding unnecessary I/O operations
  • Pre-emptive De-staging – vSAN has built-in algorithms that de-stage data on a periodic basis. The de-staging operations, coupled with efficient log management, significantly improve performance for large file deletes, including performance for write intensive workloads
  • Checksum Improvements – vSAN has several enhancements that make the checksum code path more efficient. These changes are expected to be extremely beneficial and make a significant impact on all flash configurations, as there is no additional read cache look up. These enhancements are expected to provide significant performance benefits for both sequential and random workloads.

As with vCenter, I’ve gone through and picked out the most significant bug fixes as they relate to Service Providers. The first one listed below is important to think about, as it should significantly reduce the number of failures that people have been seeing with ESXi installed on SD/flash cards, and not just for VDI environments as the release notes suggest.

  • High read load of VMware Tools ISO images might cause corruption of flash media. In a VDI environment, the high read load of the VMware Tools images can result in corruption of the flash media.
    You can copy all the VMware Tools data into its own ramdisk. As a result, the data can be read from the flash media only once per boot. All other reads will go to the ramdisk. vCenter Server Agent (vpxa) accesses this data through the /vmimages directory, which has symlinks that point to productLocker.
  • ESXi 6.x hosts stop responding after running for 85 days
    When this problem occurs, the /var/log/vmkernel log file displays entries similar to the following:
  • ARP request packets might drop
    ARP request packets between two VMs might be dropped if one VM is configured with guest VLAN tagging and the other VM is configured with virtual switch VLAN tagging, and VLAN offload is turned off on the VMs.
  • Physical switch flooded with RARP packets when using Citrix VDI PXE boot
    When you boot a virtual machine for Citrix VDI, the physical switch is flooded with RARP packets (over 1000) which might cause network connections to drop and a momentary outage. This release provides an advanced option, /Net/NetSendRARPOnPortEnablement. You need to set the value of /Net/NetSendRARPOnPortEnablement to 0 to resolve this issue (see the esxcli sketch after this list).
  • Snapshot creation task cancellation for Virtual Volumes might result in data loss
    Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.
  • VMDK does not roll back properly when snapshot creation fails for Virtual Volumes VMs
    When snapshot creation attempts for a Virtual Volumes VM fail, the VMDK is tied to an incorrect data Virtual Volume. The issue occurs only when the VMDK for the Virtual Volumes VM comes from multiple Virtual Volumes datastores.
  • ESXi host fails with a purple diagnostic screen due to path claiming conflicts
    An ESXi host displays a purple diagnostic screen when it encounters a device that is registered, but whose paths are claimed by a two multipath plugins, for example EMC PowerPath and the Native Multipathing Plugin (NMP). This type of conflict occurs when a plugin claim rule fails to claim the path and NMP claims the path by default. NMP tries to register the device but because the device is already registered by the other plugin, a race condition occurs and triggers an ESXi host failure.
  • ESXi host fails to rejoin VMware Virtual SAN cluster after a reboot
    Attempts to rejoin the VMware Virtual SAN cluster manually after a reboot might fail with the following error:
    Failed to join the host in VSAN cluster (Failed to start vsantraced (return code 2)
  • Virtual SAN Disk Rebalance task halts at 5% for more than 24 hours
    The Virtual SAN Health Service reports Virtual SAN Disk Balance warnings in the vSphere Web Client. When you click Rebalance disks, the task appears to halt at 5% for more than 24 hours.
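
For the RARP fix called out above, the advanced option can be set from the host. A hedged one-liner sketch:

    # Suppress RARP packets on port enablement (per the resolved issue above)
    esxcli system settings advanced set -o /Net/NetSendRARPOnPortEnablement -i 0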

It’s also worth reading through the Known Issues section, as there is a fair bit to be aware of, especially if running NFS 4.1, and it’s worth looking through the general storage issues.

Happy upgrading!

References:

http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-vcenter-server-60u3-release-notes.html

http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-esxi-60u3-release-notes.html

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2149127

Upgrading Windows vCenter 5.5 to 6.0 In-Place: Issues and Fixes

Yes, that’s not a typo…this post focuses on upgrading Windows vCenter 5.5 to 6.0 via an in-place upgrade. There is the option to use the vSphere 6.0 Update 2m build with the included Migrate to VCSA tool to achieve this and move away from Windows, but I thought it was worth documenting my experiences with a mature vCenter that’s at version 5.5 Update 2, upgrading that to 6.0 Update 2. Eventually this vCenter will need to move off the current Windows 2008 RTM server, which will bring into play the VCSA migration, however for the moment it’s going to be upgraded to 6.0 on the same server.

With VMware releasing vSphere 6.5 in November, there should be an increased desire for IT shops to start seriously thinking about moving on from their existing vSphere versions and upgrading to the latest 6.5 release, however many people I know were still running vSphere 5.5, so the jump to 6.5 directly might not be possible due to internal policies or other business reasons. Interestingly, in the rough numbers, I’ve got an active Twitter poll out at the moment which, after 100 votes, shows that vSphere 5.5 makes up 53% of the most common vCenter versions, followed by 6.0 with 44% and 6.5 with only 3%.

Upgrade Options:

You basically have two options to upgrade a Windows based 5.5 vCenter:

  • Upgrade in place to a Windows based vCenter 6.0
  • Migrate to the VCSA using the Migrate to VCSA tool included in the 6.0 Update 2m build

My approach for this particular environment (which is a NestedESXi lab environment) was to ensure a smooth upgrade to vSphere 6.0 Update 2 and then look to upgrade again to 6.5 once it matures in the market. That said, I haven’t read about too many issues with vSphere 6.5, and VMware have been excellent in ensuring that the 6.5 release was the most stable for years. The cautious approach will still be undertaken by many, and a stepped upgrade to 6.5 and migration to the VCSA will be commonplace. For those that wish to move away from their Windows vCenter, there is nothing stopping you from going down the Migrate2VCSA path, and it is possible to migrate directly from 5.5 to 6.5.

Existing Component Versions:

  • vCenter 5.5 (2001466)
  • ESXi 5.5 (3116895)

SQL Version Requirements:

vCenter 6.0 Update 2 requires at least SQL Server 2008 R2 SP1 or higher, so if you are running anything lower than that you will need to apply a later service pack or upgrade to a later version of SQL Server. For a list of all compatible databases click here.

vCenter Upgrade Pre-Upgrade Checks:

The first step is to make sure you have a backup of the vCenter environment, meaning VM state (snapshot) and a vCenter database backup. Once that’s done there are a few pre-requisites that need to be met, and these will be checked by the upgrade process before the actual upgrade occurs. The first thing the installer will do after asking for the SSO and VC service account passwords is run the Pre-Upgrade Checker.

vCenter SSL and SSO SSL System Name Mismatch Error:

A common issue that may pop up from the pre-upgrade checker is the warning below about an issue with the system name of the vCenter Server certificate and the SSO certificate. As shown below, it’s a hard stop and tells you to replace one or the other certificate so that the same system name is used.

If you have a publicly signed SSL certificate you will need to generate a new cert request and submit that through the public authority of your choice. The quickest way for me to achieve this was to generate a new self-signed certificate by following the VMwareKB article here. Once that’s been generated you can replace the existing certificate by following a previous post I did using the VMware SSL Certificate Updater Tool.

After all that, in any case, I got the warning below saying that the 5.5 SSL certificates do not meet security requirements, and so new SSL certificates will need to be generated for vCenter Server 6.0.0.

With that, my suggestion would be to generate a temporary self-signed certificate for the upgrade and then apply a public certificate after it’s completed.

Ephemeral TCP Port Error:

Once the SSL mismatch error has been sorted you can run the pre-upgrade checker again. Once that completes successfully, you move onto the Configure Ports window. I ran into the error shown below, which states that the range of ports is too large and the system must be reconfigured to use a smaller ephemeral port range before the install can continue.

The fix is presented in the error message, so after running netsh.exe int ipv4 set dynamicportrange tcp 49152 16384 you should be ok to hit Next again and continue the upgrade.
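
For clarity, here’s a hedged sketch of that command; note that the commonly documented form (per Microsoft KB 929851) names the parameters explicitly:

    :: Shrink the ephemeral port range as the installer suggests
    netsh int ipv4 set dynamicport tcp start=49152 num=16384

    :: Confirm the active range afterwards
    netsh int ipv4 show dynamicport tcp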

Export of 5.x Data:

During the upgrade the 5.5 data is stored in a directory and then migrated to 6.0. You need to ensure that you have enough room on the drive location to cater for your vCenter instance. While I haven’t seen any official rules around the storage required, I would suggest having free storage at least the size of your vCenter SQL database data file.

vCenter Upgrade:

Once you have worked through all the upgrade screens you are ready for the upgrade. Confirm the settings, take note of the fact that once updated the vCenter will be in evaluation mode (meaning you need to apply a new vCenter 6.x license once completed), check the checkbox that states you have a backup of the vCenter machine and database, and you should be good to go.

Depending on the size of your vCenter instance and the speed of your disks, the upgrade can take anywhere from 30 to 60 minutes or longer. If at any time the upgrade process fails during the initial export of the 5.5 data, a rollback via the installer is possible…however if there is an issue while 6.0 is being installed, the likelihood is that you will need to recover from backups.

Post Upgrade Checks:

Apart from making sure that the upgrade has gone through smoothly by ensuring all core vCenter services are up and running, it’s important to check any VMware or third party services that were registered against the vCenter, especially given that the SSL certificate has been replaced a couple of times. Server applications like NSX-v, vCloud Director and vCO explicitly trust SSL certificates, so the registration needs to be actioned again. Also, if you are running Veeam Backup & Replication you will need to go through the setup process again to accept the new SSL certificate, otherwise your backup jobs will fail.
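
As a quick sanity check on the core services, vCenter ships a service-control utility. A minimal sketch, run from the vCenter Server’s bin directory on Windows:

    :: List the state of all vCenter services after the upgrade
    :: (run from C:\Program Files\VMware\vCenter Server\bin)
    service-control --status --all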

If everything has gone as expected you will have a functional vCenter 6.0 Update 2 instance and planning can now take place for the 6.5 upgrade and in my case…the migration from Windows to the VCSA.

References:

http://www.vmware.com/resources/compatibility/sim/interop_matrix.php#db&2=998

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1029944

