Last month I wrote a blog post on upgrading vCenter 5.5 to 6.0 Update 2, and during the course of writing it I ran a survey asking which version of vSphere people were seeing out in the wild…overwhelmingly vSphere 6.0 was the most popular version, with 5.5 second and 6.5 lagging in adoption for the moment. It’s safe to assume that vCenter 6.0 and ESXi 6.0 will be common deployments for some time in brownfield sites, so with the release of Update 3 for vCenter and ESXi I thought it would be good to again highlight some of the best features and enhancements as I see them from a Service Provider point of view.

vCenter 6.0 Update 3 (Build 5112506)

This is actually the eighth build release of vCenter 6.0 and includes updated TLS support for v1.0, 1.1 and 1.2, which is worth a look in terms of what it means for other VMware products as it could impact connectivity…I know that vCloud Director SP now expects TLS v1.1 by default, as an example. Other items listed in the What’s New include support for MSSQL 2012 SP3, updated M2VCSA support, timezone updates and some changes to the resource allocation for the Platform Services Controller.
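
If you want to confirm which TLS versions a vCenter or other endpoint will actually negotiate after patching, a quick probe with openssl does the job. A minimal sketch, assuming a hypothetical endpoint vcsa.example.com:

    # Probe each TLS version against the HTTPS endpoint. A successful
    # handshake prints the negotiated cipher; a disabled version fails
    # with a handshake error.
    openssl s_client -connect vcsa.example.com:443 -tls1    < /dev/null   # TLS 1.0
    openssl s_client -connect vcsa.example.com:443 -tls1_1  < /dev/null   # TLS 1.1
    openssl s_client -connect vcsa.example.com:443 -tls1_2  < /dev/null   # TLS 1.2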

Looking through the Resolved Issues, there are a number of networking related fixes in the release plus a few annoying problems relating to vMotion. The ones below are the main ones that could impact Service Provider operations.

  • Upgrading vCenter Server from version 6.0.0b to 6.0.x might fail. 
    Attempts to upgrade vCenter Server from version 6.0.0b to 6.0.x might fail while starting a service. An error message similar to the following is displayed in the run-updateboot-scripts.log file:
    “Installation of component VCSServiceManager failed with error code ‘1603’”
  • Managing legacy ESXi from the vCenter Server with TLSv1.0 disabled is impacted.
    vCenter Server with TLSv1.0 disabled supports management of legacy ESXi versions 5.5.x and 6.0.x. ESXi 5.5 P08 onwards and ESXi 6.0 P02 onwards are supported for 5.5.x and 6.0.x respectively. A quick way to confirm the build a host is actually running is shown after this list.
  • x-VC operations involving a legacy ESXi 5.5 host now succeed.
    Cross-vCenter operations involving a legacy ESXi 5.5 host now succeed. Cold relocate and clone have been implicitly allowed for ESXi 5.5 hosts.
  • Unable to use the End VMware Tools install option using the vSphere Client.
    The End VMware Tools install option is unavailable while installing VMware Tools using the vSphere Client. This issue occurs after upgrading to vCenter Server 6.0 Update 1.
  • Enhanced vMotion fails to move the vApp.VmConfigInfo property to destination vCenter Server.
    Enhanced vMotion fails to move the vApp.VmConfigInfo property to the destination vCenter Server, although the virtual machine migration itself is successful.
  • Storage vMotion fails if the VM is connected to a CD ISO file.
    If the VM has a CD ISO file connected, Storage vMotion fails with an error.
  • Unregistering an extension does not delete agencies created by a solution plug-in.
    The agencies or agents created by a solution such as NSX, or any other solution that uses the ESX Agent Manager (EAM), are not deleted from the database when the solution is unregistered as an extension in vCenter Server.
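
Given the minimum patch levels above, it’s worth confirming what build each legacy host is actually running before disabling TLSv1.0 anywhere. A minimal sketch, run from the ESXi shell (map the reported build number to a patch level via VMware’s build mapping KB):

    # Show the ESXi version and build number on the host itself.
    vmware -vl
    # The same information via esxcli:
    esxcli system version get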

ESXi 6.0 Update 3 (Build 5050593)

The What’s New in ESXi is a lot more exciting than the What’s New with vCenter, highlighted by a new Host Client and fairly significant improvements in vSAN performance, along with the same TLS changes that are included in vCenter Update 3. With regards to the Host Client, the version is now 1.14.0 and includes bug fixes that bring it closer to the functionality provided by the vSphere Client. It’s also worth mentioning that new versions of the Host Client continue to be released through the VMware Labs Flings site, but those versions are not officially supported and are not recommended for production environments.
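
If you want to check which Host Client version a host ended up with after patching, the client is delivered as the esx-ui VIB, so it can be queried from the ESXi shell. A minimal sketch:

    # The Host Client ships as the esx-ui VIB; query its details.
    esxcli software vib get -n esx-ui
    # Or just the one-line summary:
    esxcli software vib list | grep esx-ui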

For vSAN, multiple fixes have been introduced to optimize the I/O path for improved vSAN performance in All Flash and Hybrid configurations, and there is a separate VMware KB article that addresses the fixes (linked in the references below).

  • More Logs, Much Less Space: vSAN now has efficient log management strategies that allow more logging to be packed per byte of storage. This prevents the log from reaching its assigned limit too fast and too frequently. It also provides enough time for vSAN to process the log entries before the log reaches its assigned limit, thereby avoiding unnecessary I/O operations.
  • Pre-emptive De-staging: vSAN has built-in algorithms that de-stage data on a periodic basis. The de-staging operations, coupled with the efficient log management, significantly improve performance for large file deletes, including performance for write intensive workloads.
  • Checksum Improvements: vSAN has several enhancements that make the checksum code path more efficient. These changes are expected to be extremely beneficial on all flash configurations, as there is no additional read cache lookup, and should provide significant performance benefits for both sequential and random workloads.

As with vCenter, I’ve gone through and picked out the most significant bug fixes as they relate to Service Providers. The first one listed below is important to think about, as it should significantly reduce the number of failures people have been seeing with ESXi installed on SD/flash cards, and not just in VDI environments as the release notes suggest.

  • High read load of VMware Tools ISO images might cause corruption of flash media.
    In VDI environments, the high read load of the VMware Tools images can result in corruption of the flash media. With this fix you can copy all of the VMware Tools data into its own ramdisk, so the data is read from the flash media only once per boot; all other reads go to the ramdisk. vCenter Server Agent (vpxa) accesses this data through the /vmimages directory, which has symlinks that point to productLocker. See the first sketch after this list for enabling this.
  • ESXi 6.x hosts stop responding after running for 85 days.
    When this problem occurs, the /var/log/vmkernel log file displays repeated warning entries (see the release notes for the exact messages).
  • ARP request packets between two VMs might be dropped.
    ARP request packets between two VMs might be dropped if one VM is configured with guest VLAN tagging and the other VM is configured with virtual switch VLAN tagging, and VLAN offload is turned off on the VMs.
  • Physical switch flooded with RARP packets when using Citrix VDI PXE boot.
    When you boot a virtual machine for Citrix VDI, the physical switch is flooded with RARP packets (over 1000), which might cause network connections to drop and a momentary outage. This release provides an advanced option, /Net/NetSendRARPOnPortEnablement. You need to set the value of /Net/NetSendRARPOnPortEnablement to 0 to resolve this issue, as shown in the second sketch after this list.
  • Snapshot creation task cancellation for Virtual Volumes might result in data loss
    Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.
  • VMDK does not roll back properly when snapshot creation fails for Virtual Volumes VMs
    When snapshot creation attempts for a Virtual Volumes VM fail, the VMDK is tied to an incorrect data Virtual Volume. The issue occurs only when the VMDK for the Virtual Volumes VM comes from multiple Virtual Volumes datastores.
  • ESXi host fails with a purple diagnostic screen due to path claiming conflicts.
    An ESXi host displays a purple diagnostic screen when it encounters a device that is registered, but whose paths are claimed by two multipathing plugins, for example EMC PowerPath and the Native Multipathing Plugin (NMP). This type of conflict occurs when a plugin claim rule fails to claim the path and NMP claims the path by default. NMP tries to register the device, but because the device is already registered by the other plugin, a race condition occurs and triggers an ESXi host failure.
  • ESXi host fails to rejoin VMware Virtual SAN cluster after a reboot
    Attempts to rejoin the VMware Virtual SAN cluster manually after a reboot might fail with the following error:
    Failed to join the host in VSAN cluster (Failed to start vsantraced (return code 2)
  • Virtual SAN Disk Rebalance task halts at 5% for more than 24 hours
    The Virtual SAN Health Service reports Virtual SAN Disk Balance warnings in the vSphere Web Client. When you click Rebalance disks, the task appears to halt at 5% for more than 24 hours.
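
On the SD/flash corruption fix at the top of that list, the mitigation is exposed as an advanced host setting. A minimal sketch from the ESXi shell, assuming the /UserVars/ToolsRamdisk option name described in the associated VMware KB (verify the option exists on your build before relying on it):

    # Copy the VMware Tools ISO data into a ramdisk so the flash media
    # is only read once per boot (assumed option name for this fix).
    esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1
    # Verify the current value:
    esxcli system settings advanced list -o /UserVars/ToolsRamdisk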
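
And for the Citrix VDI RARP flooding fix, the new /Net/NetSendRARPOnPortEnablement advanced option called out in the release notes can be set from the ESXi shell. A minimal sketch:

    # Stop the host from sending RARP bursts on port enablement.
    esxcli system settings advanced set -o /Net/NetSendRARPOnPortEnablement -i 0
    # Confirm the change took effect:
    esxcli system settings advanced list -o /Net/NetSendRARPOnPortEnablement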

It’s also worth reading through the Known Issues section, as there is a fair bit to be aware of, especially if you are running NFS 4.1, and it’s worth looking through the general storage issues as well.

Happy upgrading!

References:

http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-vcenter-server-60u3-release-notes.html

http://pubs.vmware.com/Release_Notes/en/vsphere/60/vsphere-esxi-60u3-release-notes.html

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2149127