Category Archives: VMware

First Look: ManageIQ vCloud Director Orchestration

Welcome to 2017! To kick off the year I thought I’d do a quick post on a little-known product (at least in my circles) from Red Hat called ManageIQ. I stumbled across ManageIQ by chance, having caught wind that vCloud Director support was soon to be added to the product. Reading through some of the history behind ManageIQ, I found out that Red Hat acquired ManageIQ in December 2012 and integrated it into its CloudForms cloud management program…they then made it open source in 2014.

ManageIQ is the open source project behind Red Hat CloudForms. The latest product features are implemented in the upstream community first, before eventually making it downstream into Red Hat CloudForms. This process is similar for all Red Hat products. For example, Fedora is the upstream project for Red Hat Enterprise Linux and follows the same upstream-first development model.

CloudForms is a cloud management platform that also manages traditional server virtualization products such as vSphere and oVirt. This broad capability makes it ideal as a hybrid cloud manager, as it’s able to manage both public clouds and on-premises private clouds and virtual infrastructures. It acts as a single management interface into hybrid environments and enables cross-platform orchestration to be achieved with relative ease, backed by a community that contributes workflows and code to the project.

The supported platforms are shown below.

The October release was the first iteration of the vCloud provider, which supports authentication, inventory (including vApps), provisioning, power operations and events, all done via the API provided by vCloud Director. First and foremost I see this as a client-facing tool rather than an internal orchestration tool for vCAN SPs; however, given it can go cross-platform, there is a use case for VM or container orchestration that SPs could tap into.

While it’s still relatively immature compared to the other platforms it supports, I see great potential in this, and I think all vCAN Service Providers running vCloud Director should look at it as a way for their customers to better consume and operate vCD through a more modern approach, rather than depending on the UI.

Adding vCloud Director as a Cloud Provider:

Once the Appliance is deployed, head to Compute and Add a New Cloud Provider. From the Type dropdown, select VMware vCloud.

Depending on which version of vCD SP your Service Provider is running, select the appropriate API Version. For vCD SP 8.x it should be vCloud API 9.0.

Next, add in the URL of the vCloud Director endpoint with its port…which is generally 443. For the username, use the convention of [email protected], which logs you in to your specific vCD Organization. If you want to log in as an administrator, enter [email protected] to get top-level access.

Once connected you can add as many vCD endpoints as you have. As you can see below, I am connected to four separate instances of vCloud.
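
If you’d rather script the registration than click through the UI, ManageIQ also exposes a REST API on the appliance. The snippet below is only a sketch: the appliance hostname and vCD details are placeholders, and the provider type string and payload field names are my assumptions based on the ManageIQ API documentation rather than anything verified against this build.

  # Hypothetical ManageIQ appliance details; adjust to suit your environment
  $miq     = "https://manageiq.lab.local"
  $miqCred = Get-Credential -Message "ManageIQ admin credentials"

  # Provider type string and payload field names are assumptions taken from the
  # ManageIQ API docs; check GET $miq/api/providers on your appliance for the schema
  $body = @{
      type        = "ManageIQ::Providers::Vmware::CloudManager"
      name        = "vCloud-Zone-01"
      hostname    = "vcloud.provider.example"
      port        = 443
      api_version = "9.0"
      credentials = @(@{ userid = "user@MyOrg"; password = "MySecret" })
  } | ConvertTo-Json -Depth 4

  Invoke-RestMethod -Method Post -Uri "$miq/api/providers" -Credential $miqCred `
      -ContentType "application/json" -Body $body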

Clicking through you get a Summary of the vCloud Zone with its relationships.

Clicking on Instances, you get a list of your VMs, but there are also views for Virtual Datacenters, vApps and other vCD objects. As you can see below, there are detailed views of each VM and it does have basic power functions in this build.
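
Those same inventory and power functions are also exposed through the ManageIQ REST API. The snippet below is a hedged sketch rather than verified syntax: the appliance URL and VM name are placeholders, and the query parameters and action payload follow the generic ManageIQ REST pattern, so check them against the /api documentation on your appliance.

  # Hypothetical appliance details, as in the earlier sketch
  $miq     = "https://manageiq.lab.local"
  $miqCred = Get-Credential -Message "ManageIQ admin credentials"

  # Pull the VM inventory that ManageIQ has collected from vCloud Director
  $vms = Invoke-RestMethod -Method Get -Credential $miqCred `
      -Uri "$miq/api/vms?expand=resources&attributes=name,power_state"

  # Issue a basic power action against one of them (start/stop/suspend are the
  # standard ManageIQ VM actions; assumed here to apply to vCloud instances too)
  $target = $vms.resources | Where-Object { $_.name -eq "web01" }
  Invoke-RestMethod -Method Post -Uri "$miq/api/vms/$($target.id)" -Credential $miqCred `
      -ContentType "application/json" -Body (@{ action = "stop" } | ConvertTo-Json)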

I’ve just started to look into the power of CloudForms and have been reading through the ManageIQ automation guide. It’s one of those things that needs a little research plus some trial and error to master, but I see this form of cloud consumption, where the end user doesn’t have to directly manipulate the various API endpoints, as the future. I’m looking forward to seeing how the vCloud Director provider matures and I’ll be keeping an eye on the forums and the ManageIQ GitHub page for more examples.

Resources:

http://manageiq.org/docs/get-started/
http://manageiq.org/docs/reference/
https://pemcg.gitbooks.io/mastering-automation-in-cloudforms-and-manageiq/content/chapter1.html

Top Posts 2016

2016 is pretty much done and dusted and it’s been a good year for Virtualization is Life! There was a more modest 70% increase in site visits this year compared to 2015 and a 2600% increase in visits since I began blogging in 2012. In 2016 I managed to produce 124 posts (including this one), which was slightly up on the 110 I produced in 2015, and in doing so I passed 300 total posts since I started here. I was fairly consistent in getting out at least eight blogs per month, with June being my most prolific month with sixteen blog posts published.

Looking back through the statistics generated via JetPack, I’ve listed the Top 10 Blog Posts from the last 12 months. This year the opinion pieces seemed to be of interest to my readers, and there is still vCloud Director and NSX representation in the top ten, with my Veeam articles doing well. Again it was interesting to see that two of the most generic (and older) and certainly basic posts took out two of the top three spots. It shows that bloggers should not be afraid of blogging around simple topics, as there is an audience that will appreciate the content and get value out of the post.

  1. NSX Edge vs vShield Edge: Part 1 – Feature and Performance Matrix
  2. Quick Post: E1000 vs VMXNET3
  3. vSphere 6.0 vCenter Server Appliance: Upgrading from 5.x
  4. ESXi Bugs – VMware Can’t Keep Letting This Happen!
  5. Nutanix Buying PernixData: My Critical Analysis
  6. New NSX License Tier Thoughts and Transformers
  7. CBT Bugs – VMware Can’t Keep Letting This Happen!
  8. Veeam 9 Released: Top New Features
  9. Veeam’s Next Big Thing – Veeam has Arrived!
  10. vCloud Director 8: New Features And A New UI Addition…

I was honoured to have this blog voted #44 in the TopvBlog2016, and even with all the controversy around the voting I still hold that as a significant outcome of which I am very proud, and I’d like to thank the readers and supporters of this blog for voting for me! Thanks must also go to my site sponsors who are all listed on the right hand side of this page.

With me moving across to vendor land it’s going to be interesting to see if I can keep up the variety of posts as I “narrow” down my core focus…however I fully intend to keep on pushing this blog by keeping it true to its roots of vCloud Director and core VMware technologies like NSX and vSAN. I have the Home lab and the drive to continue to produce content around the things I am passionate about…and that includes all things hosting and cloud now, with a touch of availability 🙂

Stay tuned for an even bigger 2017!

#LongLivevCD

OVFTool: vCloud Director OVA Upload PowerShell Script

Earlier this year I put together a quick and nasty PowerShell script that exports a vApp from vCloud Director using the OVFTool…for those that don’t know, the OVFTool is a command line tool with a powerful set of functions to import/export VMs and vApps from vCenter, ESXi and vCloud Director, whether it be vCloud Air or a vCloud Air Network provider.

You can download and install the tool from here:

This week I needed to upload a Virtual Machine that was in OVA format, and those that have worked with vCloud Director would know that the OVA format is not supported by the upload functionality in the current web interface. With that, I thought it was a good time to round out the export-using-OVFTool post with an import-using-OVFTool post. Doing some research I found a bunch of posts relating to importing OVAs into vCloud Director, and after working through the Admin Guide and some examples I was ready to build out a basic import command and start work on the PowerShell script. On Windows you can run the tool from CMD, but I would suggest using PowerShell, as in the example below where I go through building a variable.

What Info is Required:

  • vCloud URL
  • vCloud Username and Password
  • Org Name
  • vDC Name
  • vApp Name
  • Catalog Name
  • Path to OVA

Command Line Example:

Below is a basic example of how to construct the vCloud String and use it as a variable to execute the tool.
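
In rough terms it looks like the snippet below. The endpoint, Org, vDC, vApp and Catalog names are placeholders, and the vcloud:// locator format is as per the OVF Tool user guide, so double-check the query parts against your version of the tool.

  # Build the vCloud target locator as a variable, then pass it to ovftool.exe
  $vcloudTarget = "vcloud://user@vcloud.provider.example:443?org=MyOrg&vdc=MyVDC&vapp=MyNewvApp&catalog=MyCatalog"

  # Source OVA first, target locator second; ovftool prompts for the password
  & "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" "C:\OVAs\MyAppliance.ova" $vcloudTarget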

PowerShell Script:

Again, I’ve taken it a step further to make it easier for people to import OVAs into vCloud Director and put together another, slightly improved PowerShell script that I have coded to work with my old company’s vCloud Zones…though this can be easily modified to use any vCloud Air Network vCD endpoint.

The output of the script can be seen below:

It’s a very basic script that gathers all the required components that make up the vCloud source connection string and then uploads the OVA into the vCD vApp. I’ve even made a few more PowerShell improvements around password security and added a little colour.
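
A cut-down sketch of that flow, with the prompts, paths and locator construction being illustrative only, looks something like this:

  # Gather the inputs that make up the vCloud connection string
  $vcdHost  = Read-Host "vCloud Director endpoint (e.g. vcloud.provider.example)"
  $org      = Read-Host "Org Name"
  $vdc      = Read-Host "vDC Name"
  $vapp     = Read-Host "vApp Name"
  $catalog  = Read-Host "Catalog Name"
  $ovaPath  = Read-Host "Full path to the OVA"
  $user     = Read-Host "vCloud username (the Org is passed separately in the locator)"
  $securePw = Read-Host "vCloud password" -AsSecureString

  # Convert the SecureString back to plain text just for the ovftool call
  $plainPw = [Runtime.InteropServices.Marshal]::PtrToStringAuto(
             [Runtime.InteropServices.Marshal]::SecureStringToBSTR($securePw))

  # Assemble the target locator and run the tool from the OVFTool folder
  $target = "vcloud://${user}:${plainPw}@${vcdHost}:443?org=$org&vdc=$vdc&vapp=$vapp&catalog=$catalog"

  Write-Host "Uploading $ovaPath to vApp $vapp in $org..." -ForegroundColor Green
  & .\ovftool.exe $ovaPath $target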

Save the code snippet as a .ps1 into the OVFTool Windows folder and execute the script from the same location. If there are any errors with the inputs provided the OVFTool will fail with an error, but apart from that it’s a very simple, straightforward way to import OVAs into any vCloud Director enabled endpoint.

Additional Reading:

http://www.virtuallyghetto.com/tag/ovftool

http://www.vmwarebits.com/content/import-and-export-virtual-machines-command-line-vmwares-ovf-tool 

vCloud Director SP 8.10.1 UI Additions – Boot Options

Last week VMware released vCloud Director SP 8.10.1 Build 4655197, and while it is mainly a patch release there was one new feature added: a couple of additional UI settings under the General tab of a Virtual Machine.

  • New boot customization options added to delay the boot time and to enter into the BIOS setup screen. You can use the vCloud Director Web console or the vCloud API to set Boot Delay and EnterBIOS mode options.

This might seem like a small and meaningless setting, but you would be surprised how many times I saw customers frustrated that they could not easily get into the BIOS via the VM console or set a long enough boot delay to trigger a boot from alternative media.

The previous General Tab looked like this:

The 8.10.1 General Tab looks like this:

You can see that you now have a check box to Enter BIOS Setup and a field to set the Boot Delay. These settings follow the rules of vSphere, meaning the Boot Delay is in milliseconds and can only be modified if the Virtual Machine is powered off. I had this image open with the System Administrator account, which explains why you see a few more VM-related bits of information telling you what Host and Datastore the VM is residing on and what the name of the VM is in vSphere.
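
For the vCloud API route mentioned in the release note, the flow is essentially: authenticate, pull the VM’s BootOptions section, change the values and PUT it back. The sketch below is a hedged example only: the endpoint, VM id and password are placeholders, and the section path, media type and API version header are taken from my reading of the vCloud API reference, so verify them against the docs for your vCD build.

  # Hypothetical endpoint and credentials; the version header must match your vCD API version
  $vcd  = "https://vcloud.provider.example"
  $auth = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("administrator@system:MyPassword"))
  $hdrs = @{ Accept = "application/*+xml;version=20.0"; Authorization = "Basic $auth" }

  # Log in; the session token comes back in the x-vcloud-authorization response header
  $login = Invoke-WebRequest -Method Post -Uri "$vcd/api/sessions" -Headers $hdrs
  $hdrs.Remove("Authorization")
  $hdrs["x-vcloud-authorization"] = $login.Headers["x-vcloud-authorization"]

  # Fetch the VM's BootOptions section (path and media type assumed from the API reference)
  $vmId = "vm-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  [xml]$boot = (Invoke-WebRequest -Uri "$vcd/api/vApp/$vmId/bootOptions" -Headers $hdrs).Content

  # Set a 10 second boot delay (milliseconds, as per vSphere) and enter the BIOS on next power-on
  $boot.BootOptions.BootDelay      = "10000"
  $boot.BootOptions.EnterBIOSSetup = "true"

  Invoke-WebRequest -Method Put -Uri "$vcd/api/vApp/$vmId/bootOptions" -Headers $hdrs `
      -ContentType "application/vnd.vmware.vcloud.bootOptionsSection+xml" -Body $boot.OuterXml | Out-Null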

Again, this is a simple but extremely useful addition that continues to show VMware’s commitment to improving the vCD platform even before the big UI enhancements start to filter through next year.

#LongLivevCD

Released: vCloud Director SP 8.10.1 Important Upgrade for Zerto Clients

This week VMware released vCloud Director SP 8.10.1 Build 4655197. This is the sister build to vCD SP 8.0.2 and, like that release, while there are a number of minor bug fixes there is one important fix that will make service providers who offer replication services built upon Zerto happy, as it resolves a bug that had stopped many service providers upgrading from vCD SP 5.6.x…however, unlike the 8.0.2 release notes, the specific fix isn’t mentioned in the notes. By all accounts the hot-fix that was released prior to this official build is included in this build…if you still have issues after upgrading, please let VMware know through GSS.

Apart from the bug fixes, there is one new feature in this build that will be welcomed by a lot of vCD users: Enhanced Boot Options.

  • New boot customization options added to delay the boot time and to enter into the BIOS setup screen. You can use the vCloud Director Web console or the vCloud API to set Boot Delay and EnterBIOS mode options.

There is also official support for NSX-v 6.2.4 and that’s now covered by all the latest vCD SP versions as you can see below.

As usual I’ve gone through the Resolved Issues list and highlighted the ones I feel are most relevant…the ones in red are issues we had seen in my old employer’s vCloud Zones and Zettagrid Labs.

  • Deployment of vApp template in My Cloud with Hardware Modification fails with null UI Error
    Attempts to deploy a vApp in My Cloud from a vApp template with hardware modifications fail with a null UI error.
  • After vCloud Director upgrade, the vCloud Director version does not change in vCenter Solutions Manager
    After successful upgrade of the vCloud Director from version 8.0.1 to 8.10.0, the vCloud Director version in vCenter Solutions Manager does not update and remains 8.0.1.
  • Uploading ISO media file does not consume quota that is set after the storage policy is configured to organization vDC
    When you configure the storage policy to organization virtual datacenter (vDC) and set a quota limit, the quota is not consumed while uploading the ISO media file.
  • vCloud Director database upgrade takes long time to complete when the audit_event table contains millions of records
    Database upgrade of vCloud Director from versions 5.5.x, 5.6.x to versions 8.0, 8.0.x, 8.10 might take up to 8 hours to complete if the audit_event table contains millions of records. This issue is resolved in vCloud Director 8.10.1. The database upgrade might now take up to 20 minutes.
  • VMware vCloud Director (vmware-vcd) services do not start automatically upon a reboot
    The VMware vCloud Director (vmware-vcd) services do not start automatically after a reboot because of an issue in the systemd-219-19.el7 module of Red Hat Enterprise Linux 7.2 that includes the upgrade to Red Hat Enterprise Linux 7.3.

This will more than likely be the last build of the current 8.0 and 8.10 releases, with a closed beta of the next vCD SP currently underway. That next major release of vCD SP promises to deliver new UI enhancements (HTML5) and deeper NSX-v integration.

References:

http://pubs.vmware.com/Release_Notes/en/vcd/8-10/rel_notes_vcloud_director_8-10-1.html

HomeLab – SuperMicro 5028D-TNT4 Unboxing and First Thoughts

While I was at Zettagrid I was lucky enough to have access to a couple of lab environments that were sourced from retired production components, and I was able to build up a lab that could satisfy the requirements of R&D, Operations and the Development team. By the time I left Zettagrid we had a lab that most people envied; I took advantage of it by running a number of NestedESXi instances as my own lab, but it also ensured new products could be developed without impacting production, with multiple layers of NestedESXi instances to test new builds and betas.

With me leaving Zettagrid for Veeam, I lost access to that lab, and even though I would have access to a nice shiny new lab within Veeam, I thought it was time to bite the bullet and go about sourcing a homelab of my own. The main reason for this was to have something local that I could tinker with, which would allow me to keep playing with the VMware vCloud suite as well as keep looking at new products so I can stay engaged and continue to create content.

What I Wanted:

For me, the requirements were simple: I needed a server powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approximately twenty to thirty VMs. At the same time I needed not to blow the budget and spend thousands upon thousands, and lastly I needed to make sure that the power bill was not going to spiral out of control…as a supplementary requirement, I didn’t want a noisy beast in my home office. I also wasn’t concerned with any external networking gear, as everything would be self-contained in the NestedESXi virtual switching layer.

What I Got:

To be honest, the search didn’t take that long, mainly thanks to a couple of homelab channels that I am a member of in the vExpert and Homelabs-AU Slack groups. Given my requirements it quickly came down to the SYS-5028D-TN4T Xeon D-1541 Mini-tower or the SYS-5028D-TN4T-12C Xeon D-1567 Mini-tower. Paul Braren at TinkerTry goes through in depth why the Xeon D processors in these SuperMicro Super Servers are so well suited to homelabs, so I won’t repeat what’s been written already; for me, the combination of a low power CPU (45W) with either 8 or 12 cores, packaged up in such a small form factor, meant that my only issue was finding a supplier that would ship the unit to Australia for a reasonable price.

Digicor came to the party and I was able to source a great deal with Krishnan from their Perth office. There are not too many SuperMicro dealers in Australia, and there was a lot of risk in getting the gear shipped from the USA or Europe; the cost of shipping plus import duties meant that going local was the only real option. For those in Australia looking for SuperMicro homelab gear, please email/DM me and I can put you in touch with the guys at Digicor.

What’s Inside:

I decided to go for the 8-core CPU, mainly because I knew that my physical-to-virtual CPU ratio wasn’t going to exceed the processing power it had to offer, and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances onto the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1540, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB ECC RDIMM DDR4 2400MHz Samsung UDIMM in 4 sockets
  • 4x 3.5 Hot-swap drive bays; 2x 2.5 fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the Super Server bundle I purchased 2x Samsung EVO 850 512GB SSDs for initial primary storage and also got the SanDisk Ultra Fit CZ43 16GB USB 3.0 Flash Drive to install ESXi onto as well as a 128GB Flash Drive for extra storage.

Unboxing Pics:

Small package, that hardly weighs anything…not surprising given the size of the case.

Nicely packaged on the inside.

Came with a US and AU kettle cord which was great.

The RAM came separately boxed and well wrapped in anti-static bags.

You can see a size comparison with my 13″ MBP in the background.

The back is all fan, but that doesn’t mean this is a loud system. In fact I can barely hear it purring in the background as I sit and type less than a meter away from it.

One great feature is the IPMI remote management, which is a brilliant and convenient addition for a homelab server…its network port is seen top left. On the right are the 2x 10Gig and 2x 1Gig network ports.

The X10SDV-TLN4F motherboard is well suited to this case and you can see how low profile the CPU fan is.

Installing the RAM wasn’t too difficult even though there isn’t a lot of room to work with inside the case.

Finally, taking a look at the hot-swap drive bays…I had to buy a 3.5-to-2.5-inch adapter to fit the SSDs, however I did find that the lock-in ports could hold the weight of the EVOs with ease.

BIOS and initialization boot screens.

Overall First Thoughts:

This is a brilliant bit of kit and it’s perfect for anyone wanting to do NestedESXi at home without worrying about the RAM limits of NUCs or the noise and power draw of more traditional servers like the R710s that seem to make their way out of datacenters and into homelabs. The 128GB of RAM means that unless you really want to go fully physical you should be able to nest most products and keep everything nicely contained within the ESXi host’s compute, storage and networking.

Thanks again to Krishnan at Digicor for supplying the equipment and to Paul Braren for all the hard work he does up at TinkerTry. Special mention also to my work colleague Michael White, who was able to give me first-hand experience of the Super Servers and helped make it a no-brainer to get the 5028D-TNT4.

I’ll follow this post up with a more detailed look at how I went about installing ESXi, what the NestedESXi labs look like and what sort of performance I’m getting out of the system.

More 5028D Goodness:

 

Quick Look – vSphere 6.5 Storage Space Reclamation

One of the cool newly enabled features of vSphere 6.5 is the comeback of VMFS storage space reclamation. On VMFS5 datastores this feature was manual and could be triggered when you freed storage space inside a datastore by deleting or migrating a VM…or consolidating a snapshot. At the Guest OS level, storage space is freed when you delete files on a thinly provisioned VMDK and then exists as dead or stranded space. ESXi 6.5 supports automatic space reclamation (SCSI UNMAP) that originates from a VMFS datastore or a Guest OS…the mechanism reclaims unused space from VM disks that are thin provisioned.

When storage space is deleted without this automated feature the delete operation leaves blocks of unused space on the datastore. VMFS uses the SCSI unmap command to indicate to the array that the storage blocks contain deleted data, so that the array can unallocate these blocks.

On VMFS6 datastores, ESXi supports automatic asynchronous reclamation of free space. VMFS6 generally supports automatic space reclamation requests that generate from the guest operating systems, and passes these requests to the array. Many guest operating systems can send the unmap command and do not require any additional configuration. The guest operating systems that do not support automatic unmaps might require user intervention.

I was interested in seeing if this worked as advertised, so I went about formatting a new VMFS6 datastore with the default options via the Web Client as shown below:

Heading over to the host’s command line, I checked the reclamation config using the new esxcli namespace:
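
The get operation in that namespace takes roughly the following form (the datastore label is a placeholder):

  esxcli storage vmfs reclaim config get --volume-label=MyDatastore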

Through the Web Client you can only set the Reclamation Priority to None or Low; however, through esxcli you can also set that value to Medium or High, though as I’ve literally just found out, these esxcli-only settings don’t actually do anything in this release.
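
Changing it from the command line uses the same namespace; something along these lines (the label is again a placeholder, and as noted above, values higher than low don’t appear to take effect in this release):

  esxcli storage vmfs reclaim config set --volume-label=MyDatastore --reclaim-priority=medium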

With the low reclaim priority setting, the expectation is that any blocks that are no longer used will be reclaimed within 12 hours of the process kicking off on the datastore. I was keeping track of a couple of VMs and the datastore sizes in general and saw that after a day or so there was a difference in the available storage.

You can see that I clawed back about 22GB and 14GB on the two datastores in the first 24 hours. So my initial testing with this new feature shows that it’s a valued and welcome addition to the new vSphere 6.5 release. Service Providers that thin provision but charge based on allocated storage will benefit greatly from this feature, as it automates a mechanism that was complex at best in previous releases.

There is also a great section around UNMAP in the vSphere 6.5 Core Storage White Paper that’s literally just been released as well and can be found here:

References:

http://pubs.vmware.com/vsphere-65/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-65-storage-guide.pdf

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513

vSphere 6.5 Core Storage White Paper Now Available

vSphere 6.5 – Whats in it for Service Providers Part 1

Last week, after an extended period of development and beta testing, VMware released vSphere 6.5. This is a lot more than a point release and is a major upgrade from vSphere 6.0. In fact, there is so much packed into this new release that there is an official whitepaper listing all the features and enhancements, which is linked from the release notes. I thought I would go through some of the key features and enhancements that are included in the latest versions of vCenter and ESXi, and as per usual I’ll focus on those improvements that relate back to the Service Providers that use vSphere as the foundation of their Managed or Infrastructure as a Service offerings.

Generally the “what’s new” would fit into one post; however, having gotten through just the vCenter features it became apparent that this would have to be a multi-post series…this is great news for vCloud Air Network Service Providers out there, as it means there is a lot packed in for IaaS providers and MSPs to take advantage of.

With that, this post will cover the following:

  • vCenter 6.5 New Features
  • vCD and NSX Compatibility
  • Current Known Issues

vCenter 6.5 New Features:

Without question the enhancements to the VCSA stand out as one of the biggest features of 6.5. As mentioned in the whitepaper, the installer process has been overhauled and is a much smoother, more streamlined experience than in previous versions. It’s also supported across more operating systems, and the 6.5 VCSA now surpasses the Windows version, offering the migration tool, native high availability and built-in backup and restore. One interesting side note to the new VCSA is that the HTML5 vSphere Client has shipped, though it’s still very much a work in progress, with a lot of unsupported functionality mentioned in the release notes…there is lots of work to do to bring it up to parity with the Flex Web Client.

In terms of the inbuilt PostgreSQL database, I think it’s time that Service Providers feel confident in making the switch away from MSSQL (which was the norm with Windows-based vCenters), as the enhanced VCSA Management Interface (found on port 5480) has a new monitoring screen showing information relating to disk space usage and also provides a way to gracefully start and stop the database engine.

Another vCenter enhancement that Service Providers will make use of is the native High Availability feature, which is something a lot of people have been asking about for a long time. I always dealt with the lack of HA by accepting that vCenter might become unavailable for 5-10 minutes during maintenance or, at worst, an extended outage while recovering from a VM or OS level failure. Knowing that hosts and VMs keep working and responding with vCenter down, leaving only core management functionality unavailable, it was a risk myself and others were willing to take. However, in this day of the always-on datacenter it’s expected that management functionality be as available as the IaaS services themselves…so with that, this HA feature is very welcome for Service Providers.

This native HA solution is available exclusively for the VCSA and the solution consists of active, passive, and witness nodes that are cloned from the existing vCenter Server instance. The HA cluster can be enabled, disabled, or destroyed at any time. There is also a maintenance mode that prevents planned maintenance from causing an unwanted failover.

The VCSA Migration Tool that was previously released in 6.0 Update 2m ships in the VCSA ISO and can be used to migrate from Windows-based 5.5 vCenters to the 6.5 VCSA. Again, this is something more and more service providers will take advantage of, as reliance on Windows-based vCenters and MSSQL becomes increasingly unwanted from a manageability and cost point of view. Throw in the enhanced features that have only been released for the VCSA and this is a migration that all service providers should be planning.

To complete the move away from Windows-based dependencies, vSphere Update Manager has also been fully integrated into the VCSA. VUM is now fully integrated into the Web Client UI and is enabled by default. For larger environments with a large number of hosts, Auto Deploy is now fully manageable from the VCSA UI and doesn’t require PowerCLI to manage or configure its options. There is a new image builder included in the UI that can hit local or public repositories to pull images or drivers, and there are performance enhancements during deployments of ESXi images to hosts.

vCD and NSX Compatibility:

Shifting from new features and enhancements to an important subject when talking about service provider platforms…VMware product compatibility. vCAN Service Providers running a Hybrid Cloud will be running vCloud Director SP and/or NSX-v, and at the moment there is no support for either with vSphere 6.5. No compatible versions of NSX are available for vSphere 6.5; if you attempt to prepare your vSphere 6.5 hosts with NSX 6.2.x, you receive an error message and cannot proceed.

I haven’t tested whether vCloud Director SP will connect and interact with vCenter 6.5 or ESXi 6.5; however, as it’s not supported I wouldn’t suggest upgrading production IaaS platforms until the interoperability matrices are updated.

At this stage there is no word on when either product will support vSphere 6.5, but I suspect we will see NSX-v come out with a supported build shortly…though I’m expecting vCloud Director SP to not support 6.5 until the next major version release, which is looking like the new year.

Installation and Upgrade Known Issues:

Having read through the release notes, I can see there are also a number of known issues you should be aware of. I’ve gone through those and pulled out the ones I consider most likely to impact IaaS platforms.

  • After upgrading to vCenter Server 6.5, the ESXi hosts in High Availability clusters appear as Not Ready in the VMware NSX UI
    If your vSphere environment includes NSX and clusters configured with vSphere High Availability, after you upgrade to vCenter Server 6.5, both NSX and vSphere High Availability start installing VIBs on all hosts in the clusters. This might cause installation of NSX VIBs on some hosts to fail, and you see the hosts as Not Ready in the NSX UI.
    Workaround: Use the NSX UI to reinstall the VIBs.
  • Error 400 during attempt to log in to vCenter Server from the vSphere Web Client
    You log in to vCenter Server from the vSphere Web Client and log out. If, after 8 hours or more, you attempt to log in from the same browser tab, the following error results.
    400 An Error occurred from SSO. urn:oasis:names:tc:SAML:2.0:status:Requester, sub status:null
    Workaround: Close the browser or the browser tab and log in again.
  • Using storage rescan in environments with the large number of LUNs might cause unpredictable problems
    Storage rescan is an I/O intensive operation. If you run it while performing other datastore management operations, such as creating or extending a datastore, you might experience delays and other problems. Problems are likely to occur in environments with a large number of LUNs, up to the 1024 that are supported in the vSphere 6.5 release.
    Workaround: Typically, the storage rescans that your hosts periodically perform are sufficient. You are not required to rescan storage when you perform general datastore management tasks. Run storage rescans only when absolutely necessary, especially when your deployments include a large set of LUNs.
  • In vSphere 6.5, the name assigned to the iSCSI software adapter is different from the earlier releases
    After you upgrade to the vSphere 6.5 release, the name of the existing software iSCSI adapter, vmhbaXX, changes. This change affects any scripts that use hard-coded values for the name of the adapter. Because VMware does not guarantee that the adapter name remains the same across releases, you should not hard code the name in the scripts. The name change does not affect the behavior of the iSCSI software adapter.
    Workaround: None.
  • The bnx2x inbox driver that supports the QLogic NetXtreme II Network/iSCSI/FCoE adapter might cause problems in your ESXi environment
    Problems and errors occur when you disable or enable VMkernel ports and change the failover order of NICs for your iSCSI network setup.
    Workaround: Replace the bnx2x driver with an asynchronous driver. For information, see the VMware Web site.
  • When you use the Dell lsi_mr3 driver version 6.903.85.00-1OEM.600.0.0.2768847, you might encounter errors
    If you use the Dell lsi_mr3 asynchronous driver version 6.903.85.00-1OEM.600.0.0.2768847, the VMkernel logs might display the following message: ScsiCore: 1806: Invalid sense buffer.
    Workaround: Replace the driver with the vSphere 6.5 inbox driver or an asynchronous driver from Broadcom.
  • Storage I/O Control settings are not honored per VMDK
    Storage I/O Control settings are not honored on a per VMDK basis. The VMDK settings are honored at the virtual machine level.
    Workaround: None.
  • Cannot create or clone a virtual machine on a SDRS-disabled datastore cluster
    This issue occurs when you select a datastore that is part of an SDRS-disabled datastore cluster in any of the New Virtual Machine, Clone Virtual Machine (to virtual machine or to template), or Deploy From Template wizards. When you arrive at the Ready to Complete page and click Finish, the wizard remains open and nothing appears to occur. The Datastore value status for the virtual machine might display “Getting data…” and does not change.
    Workaround: Use the vSphere Web Client for placing virtual machines on SDRS-disabled datastore clusters.

These are just a few that I have singled out…it’s worth reading through all the known issues just in case there are any that might impact you.

In the next post in this vSphere 6.5 for Service Providers series I will cover more vCenter features as well as ESXi enhancements and what’s new in Core Storage.

References:

http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-esxi-vcenter-server-65-release-notes.html

http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vsphere/vmw-white-paper-vsphr-whats-new-6-5.pdf

http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html

VCSPs Please Note: Veeam 9.5 RTM to GA Upgrade

A couple of weeks ago Veeam released the RTM build of Backup & Replication 9.5 to its Cloud and Service Provider partners. This was to ensure that any keen early adopters who have Cloud Connect services with VCSP providers would be able to back up and replicate without issue when the GA dropped. The GA (build 9.5.0.711) was officially released yesterday, and I know of a few keen early clients and partners that have already updated production, which is a great testament to the trust people have in the product.

<UPDATE> 

The team have released the Zero Day Update:

This update is provided to enable upgrading existing installations of Veeam Backup & Replication 9.5 RTM partner preview (build 9.5.0.580) to generally available version of Veeam Backup & Replication 9.5 GA (build 9.5.0.711). This update addresses a number of issues reported by our partners on the preview build.

All new installations should be performed using Veeam Backup & Replication 9.5.0.711 ISO that is available for download starting November 16th, 2016. Please, delete the partner preview build ISO to ensure you don’t accidentally use it in the future.

https://www.veeam.com/kb2189

</UPDATE>

There is a small catch for those VCSPs that have upgraded to the RTM build in that you won’t be able to upgrade directly to the GA build that’s currently on the download site.

There is an important note about this release that specifically affects VCSP partners with Cloud Connect deployments. The GA build (9.5.0.711) is different than the RTM build (9.5.0.580). An update should be available in the next few days for service providers to upgrade their Cloud Connect environments from the RTM to the GA build. In the meantime, these service providers can continue to run the RTM build in their production Cloud Connect environments.

If you try to upgrade you will get the following splash screen. As you can see, there is no upgrade option.

So as mentioned above, there will be a patch update shortly which will be communicated through the VCSP channels and I will look to update this post when it becomes available.

 

HomeLab – SuperMicro 5028D-TNT4 Storage Driver Performance Issues and Fix

Ok, I’ll admit it…I’ve had serious lab withdrawals since having to give up the awesome Zettagrid labs. Having a lab to tinker with goes hand in hand with being able to generate tech-related content…case in point, my new homelab got delivered on Monday and I have been working to get things set up so that I can deploy my new NestedESXi lab environment.

By way of a quick intro (a longer first impressions post to follow), I purchased a SuperMicro SYS-5028D-TN4T that I based off this TinkerTry bundle, which has become a very popular system for vExpert homelabbers. It’s got an Intel Xeon D-1541 CPU and I loaded it up with 128GB of RAM. The system comes with an embedded Lynx Point AHCI controller that allows up to six SATA devices and is listed on the VMware Compatibility Guide for ESXi 6.5.

The issue that I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start building my management VMs. I first noticed some issues when uploading the Windows 2016 ISO to the datastore, with the ISO taking about 30 minutes to upload. From there I created a new VM and installed Windows…this took about two hours to complete, which I knew was not right…especially with the datastore being a decent-class SSD.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on, and as you can see from the screenshots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively…in throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.

ESXTOP was showing the VM with even worse write latency.

I wondered if I had bought a lemon of a storage controller and checked the Queue Depth of the card. It’s listed with a QD of 31, which isn’t horrible for a homelab, so my attention turned to the driver. Again referencing the VMware Compatibility Guide, the device driver for the controller is listed as ahci version 3.0.22vmw.

I searched the installed device driver modules and found that the one listed above was present; however, there was also a native VMware device driver.

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMware KB (thanks to @fbuechsel, who pointed me in the right direction in the vExpert Slack homelab channel), as shown below.
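
For reference, the steps boil down to something like the commands below. The vmw_ahci module name is the native AHCI driver that ships with 6.5, and the disable syntax is the generic form from that KB, so treat it as a guide and check it against your build before running it:

  esxcli system module list | grep ahci
  esxcli system module set --enabled=false --module=vmw_ahci
  reboot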

After the host rebooted I checked to see if the storage controller was using the device driver listed in the compatibility guide. As you can see below, not only was it using that driver, but it was now showing all six HBA ports as opposed to just the one seen in the first snippet above.

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Upon running CrystalDiskMark I was now getting the expected speeds from the SSDs and things are moving along quite nicely.

Hopefully this post saves anyone else who might buy this, or other SuperMicro Super Servers, some time and stops them getting caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.


References:

http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044993
