Monthly Archives: November 2016

HomeLab – SuperMicro 5028D-TN4T Unboxing and First Thoughts

While I was at Zettagrid I was lucky enough to have access to a couple of lab environments that were sourced from retired production components, and I was able to build up a lab that satisfied the requirements of R&D, Operations and the Development team. By the time I left Zettagrid we had a lab that most people envied: I took advantage of it by running a number of NestedESXi instances as my own lab environments, and we had an environment that ensured new products could be developed without impacting production, with multiple layers of NestedESXi instances for testing new builds and betas.

With me leaving Zettagrid for Veeam I lost access to that lab, and even though I would have access to a nice shiny new lab within Veeam, I thought it was time to bite the bullet and go about sourcing a homelab of my own. The main reason for this was to have something local that I could tinker with, which would allow me to continue playing with the VMware vCloud suite as well as keep exploring new products, letting me stay engaged and continue to create content.

What I Wanted:

For me, the requirements were simple: I needed a server powerful enough to run at least two NestedESXi lab stacks, which meant 128GB of RAM and enough CPU cores to handle approximately twenty to thirty VMs. At the same time I needed to not blow the budget by spending thousands upon thousands, and I needed to make sure the power bill was not going to spiral out of control…as a supplementary requirement, I didn’t want a noisy beast in my home office. I also wasn’t concerned with any external networking gear, as everything would be self-contained in the NestedESXi virtual switching layer.

What I Got:

To be honest, the search didn’t take that long, thanks mainly to a couple of homelab channels that I am a member of in the vExpert and Homelabs-AU Slack groups. Given my requirements it quickly came down to the SYS-5028D-TN4T Xeon D-1541 Mini-tower or the SYS-5028D-TN4T-12C Xeon D-1567 Mini-tower. Paul Braren at TinkerTry goes through in depth why the Xeon D processors in these SuperMicro Super Servers are so well suited to homelabs, so I won’t repeat what’s been written already, but for me the combination of a low power (45W) CPU that still offers either 8 or 12 cores, packaged up in such a small form factor, meant that my only issue was finding a supplier that would ship the unit to Australia for a reasonable price.

Digicor came to the party and I was able to source a great deal through Krishnan from their Perth office. There are not too many SuperMicro dealers in Australia, and between the risk of getting the gear shipped from the USA or Europe and the cost of shipping plus import duties, going local was the only option. For those in Australia looking for SuperMicro homelab gear, please email/DM me and I can put you in touch with the guys at Digicor.

What’s Inside:

I decided to go for the 8-core CPU, mainly because I knew that my physical-to-virtual CPU ratio wasn’t going to exceed the processing power it had to offer, and as mentioned I went straight to 128GB of RAM to ensure I could squeeze a couple of NestedESXi instances onto the host.

https://www.supermicro.com/products/system/midtower/5028/sys-5028d-tn4t.cfm

  • Intel® Xeon® processor D-1541, Single socket FCBGA 1667; 8-Core, 45W
  • 128GB ECC RDIMM DDR4 2400MHz Samsung DIMMs in 4 sockets
  • 4x 3.5″ hot-swap drive bays; 2x 2.5″ fixed drive bays
  • Dual 10GbE LAN and Intel® i350-AM2 dual port GbE LAN
  • 1x PCI-E 3.0 x16 (LP), 1x M.2 PCI-E 3.0 x4, M Key 2242/2280
  • 250W Flex ATX Multi-output Bronze Power Supply

In addition to what comes with the Super Server bundle, I purchased 2x Samsung 850 EVO 512GB SSDs for initial primary storage, a SanDisk Ultra Fit CZ43 16GB USB 3.0 flash drive to install ESXi onto, and a 128GB flash drive for extra storage.

Unboxing Pics:

Small package that hardly weighs anything…not surprising given the size of the case.

Nicely packaged on the inside.

Came with a US and AU kettle cord which was great.

The RAM came separately boxed and well wrapped in anti-static bags.

You can see a size comparison with my 13″ MBP in the background.

The back is all fan, but that doesn’t mean this is a loud system. In fact I can barely hear it purring in the background as I sit and type less than a meter away from it.

One great feature is the IPMI Remote Management, a brilliant and convenient addition for a HomeLab server…the network port can be seen top left. On the right are the 2x 10GbE and 2x 1GbE network ports.

The X10SDV-TLN4F motherboard is well suited to this case and you can see how low profile the CPU fan is.

Installing the RAM wasn’t too difficult, even though there isn’t a lot of room to work with inside the case.

Finally, taking a look at the hot-swap drive bays…I had to buy a 3.5 to 2.5 inch adapter to fit the SSDs in, however I did find that the lock-in ports could hold the weight of the EVOs with ease.

BIOS and initialization boot screens

Overall First Thoughts:

This is a brilliant bit of kit and it’s perfect for anyone wanting to do NestedESXi at home without worrying about the RAM limits of NUCs or the noise and power draw of more traditional servers like the R710s that seem to make their way out of datacenters and into homelabs. The 128GB of RAM means that unless you really want to go fully physical, you should be able to nest most products and keep everything nicely contained within the ESXi host’s compute, storage and networking.

Thanks again to Krishnan at Digicor for supplying the equipment, and to Paul Braren for all the hard work he does over at TinkerTry. Special mention also to my work colleague Michael White, who was able to give me first-hand experience of the Super Servers and helped make it a no-brainer to get the 5028D-TN4T.

I’ll follow this post up with a more detailed look at how I went about installing ESXi, what the NestedESXi labs look like, and what sort of performance I’m getting out of the system.


Quick Look – vSphere 6.5 Storage Space Reclamation

One of the cool newly enabled features of vSphere 6.5 is the return of VMFS storage space reclamation. On VMFS5 datastores this was a manual mechanism that could be triggered after you freed storage space inside a datastore by deleting or migrating a VM, or consolidating a snapshot. At the Guest OS level, storage space is freed when you delete files on a thinly provisioned VMDK, and that space then sits as dead or stranded space. ESXi 6.5 supports automatic space reclamation (SCSI UNMAP) that originates from a VMFS datastore or a Guest OS…the mechanism reclaims unused space from VM disks that are thin provisioned.

When storage space is deleted without this automated feature, the delete operation leaves blocks of unused space on the datastore. VMFS uses the SCSI UNMAP command to indicate to the array that the storage blocks contain deleted data, so that the array can deallocate those blocks.

On VMFS6 datastores, ESXi supports automatic asynchronous reclamation of free space. VMFS6 generally supports automatic space reclamation requests that generate from the guest operating systems, and passes these requests to the array. Many guest operating systems can send the unmap command and do not require any additional configuration. The guest operating systems that do not support automatic unmaps might require user intervention.
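
If you want to confirm that the device backing a datastore actually supports UNMAP, the VAAI status will tell you; this is a minimal sketch, with the naa identifier below being a placeholder for your own device ID:

    # "Delete Status: supported" in the output means the device accepts UNMAP
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx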

I was interested in seeing if this worked as advertised, so I went about formatting a new VMFS6 datastore with the default options via the Web Client as shown below:

Heading over to the host’s command line, I checked the reclamation config using the new esxcli namespace:
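
If you want to run the same checks, this is a minimal sketch of the commands involved; the datastore label LAB-SSD-01 is a placeholder for illustration:

    # Show the current space reclamation settings for a VMFS6 datastore
    esxcli storage vmfs reclaim config get --volume-label=LAB-SSD-01

    # Change the reclamation priority (the Web Client only exposes None/Low)
    esxcli storage vmfs reclaim config set --volume-label=LAB-SSD-01 --reclaim-priority=medium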

Through the Web Client you can only set the Reclamation Priority to None or Low; through the esxcli command you can also set the value to Medium or High. However, as I’ve literally just found out, these esxcli-only settings don’t actually do anything in this release.

For the Low reclaim priority setting, the expectation is that any blocks that are no longer used will be reclaimed within 12 hours of being freed on the datastore. I kept track of a couple of VMs and the datastore sizes in general, and saw that after a day or so there was a difference in the available storage.
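
To track the difference I simply compared the reported free space over time; a quick way to see capacity and free space per datastore from the host is:

    # Lists each mounted filesystem with its total size and free space
    esxcli storage filesystem list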

You can see that I clawed back about 22GB and 14GB on the two datastores in the first 24 hours. So my initial testing shows that this new feature is a valued and welcome addition to the vSphere 6.5 release. I know that Service Providers who thin provision but charge based on allocated storage will benefit greatly from this feature, as it automates a mechanism that was complex at best in previous releases.

There is also a great section on UNMAP in the vSphere 6.5 Core Storage White Paper, which has literally just been released as well and can be found here:

References:

http://pubs.vmware.com/vsphere-65/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-65-storage-guide.pdf

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513

vSphere 6.5 Core Storage White Paper Now Available

Veeam 9.5 Released: Top New Features

Last week Veeam released to GA version 9.5 of our Backup & Replication product. Even though this is a point release, it’s a significant one for Veeam, building on the scalability and reliability delivered in v9. I’ve spent some time going through the What’s New document as well as the Release Notes, and I’ve pulled out my top new features across all areas of the platform…without question, I believe the features and enhancements listed should make existing Veeam customers upgrade at their first opportunity.

Scalability Enhancements:

In general there has been a doubling of I/O performance that can shorten backup windows by up to five times while reducing the load on core virtualisation platform components such as vCenter and storage arrays.

  • Advanced Data Fetcher: This improves backup performance for individual virtual disks while reducing the load on primary storage, thanks to the reduced number of I/O operations required to complete a backup. It applies to VMware platforms and is used by the Backup from Storage Snapshots, Hot Add and Direct NFS transport modes.
  • VMware vSphere Infrastructure Cache: This maintains an in-RAM mirror of the vSphere infrastructure hierarchy to accelerate the Building VM list operation when creating or modifying a job, and it also removes load from vCenter. The cache is kept up to date in real time via a subscription to vCenter Server infrastructure change events.

Restore Acceleration Technologies:

Being able to recover from disaster quickly and efficiently is a capability that shouldn’t be underestimated or understated, and v9.5 has further improved it.

  • Instant VM Recovery: Performance has been improved by up to three times, specifically when recovering multiple VMs at once from per-VM backup file chains.
  • Full VM Parallel Processing: Full VM restores now process multiple disks in parallel, similar to the way a backup is performed. This is used automatically for all disk-based backup repositories except Data Domain deduplicating storage.

Engine Enhancements:

Version 9.5 includes a wide range of additional enhancements targeted at large environments, maintaining efficiency when processing jobs containing thousands of VMs. Database optimisations allow queries to complete faster, reducing back-end SQL Server load, which in turn improves user interface responsiveness and job performance.

Advanced Resilient File System:

ReFS is now the preferred disk data format for Windows Server 2016. This updated version provides many new capabilities, including improvements in data integrity, resiliency and availability, as well as speed and efficiency.

Advanced ReFS integration coming in Veeam Availability Suite 9.5

  • Fast Clone Technology: This allows the creation and transformation of synthetic full backup files to happen up to 20x faster, for shorter backup windows and a significantly reduced backup storage load. Backup and restore performance can be further improved with automatic storage tiering provided by a Storage Spaces Direct-based backup repository with an SSD tier.
  • Reduced Backup Storage Consumption: Spaceless full backup technology prevents duplicate data blocks from being written, resulting in raw disk space consumption by a GFS backup archive that rivals deduplicating appliances. By integrating software dedupe and encryption with ReFS capabilities, these storage savings remain in play for encrypted backup files, which is significant.
  • Backup Archive Integrity: This addresses silent data corruption by monitoring and proactively reporting corruption with ReFS data integrity streams, including automated and seamless healing of corrupted backup file data blocks inline during restore, or during periodic scans by the ReFS data scrubber, by leveraging Storage Spaces mirror and parity sets.

Instant Recovery From Any Backup: 

This is actually quite huge and leverages Veeam Agent technologies to extend Instant VM Recovery to physical computers. Version 9.5 enables users to perform instant recovery of endpoints and physical servers into a Hyper-V VM, meaning you can:

  • Immediately spin up a failed physical server from a backup
  • Run lost devices directly from their last backup

Veeam Cloud Connect-enabled VCSPs can manage the disaster recovery of remote offices and tenant locations, beyond key servers and services, by spinning up backups copied to Veeam Cloud Connect repositories as Hyper-V VMs.

Veeam Backup Enterprise Manager

  • Scalability Enhancements: The Enterprise Manager engine was heavily optimised for large environments and tested against databases containing one million restore points. Reporting performance, Web UI responsiveness and new backup server registration times were significantly improved for large environments.
  • Improved Self-Service Capabilities: With the addition of a self-service backup and restore portal for vCloud Director, the Enterprise Manager web UI has been enhanced with new capabilities: performing Quick Backup operations on the VMs tab, deleting backup jobs and backup files, and erasing individual VMs’ content from multi-VM backup files.

Additional Enhancements In Brief

  • Proxy Affinity: This new backup repository setting allows users to specify which backup proxies are allowed to perform backups to, and restores from, the chosen repository. (Enterprise Editions)
  • GFS Retention Enhancement: To reduce the requirements for archive repository disk space, the oldest GFS full backup will now be removed before a new GFS full backup file is sealed and a new synthetic full is created.
  • SOBR Temporary Expansion: You can add a fourth extent even though no more than three extents can be online at the same time, with the fourth remaining in maintenance mode. This will help with upgrading SOBR capacity by attaching a larger storage unit, followed by evacuating backups from the smallest one. (Enterprise Editions)
  • PowerShell: Ongoing enhancements, with new commands added to cover all new 9.5 functionality as well as multiple enhancements to existing commands based on user feedback. (http://helpcenter.veeam.com/docs/backup/powershell)

That’s a pretty significant list of the top enhancements as I see it! I haven’t gone into detail around the enhancements for Veeam Cloud Service Providers in this post but I will get a separate post out over the next few days going through the key enhancements for VCSPs.

If you have Veeam 9 running do yourself a favour and go through the required change controls to upgrade to v9.5…your backups will thank you! 🙂

References:

https://www.veeam.com/veeam_backup_9_5_whats_new_en_wn.pdf

vSphere 6.5 – What’s in it for Service Providers Part 1

Last week, after an extended period of development and beta testing, VMware released vSphere 6.5. This is a lot more than a point release; it’s a major upgrade from vSphere 6.0. In fact, there is so much packed into this release that there is an official white paper listing all the features and enhancements, linked from the release notes. I thought I would go through some of the key features and enhancements in the latest versions of vCenter and ESXi, and as usual I’ll focus on the improvements that relate back to the Service Providers who use vSphere as the foundation of their Managed or Infrastructure as a Service offerings.

Generally the “what’s new” would fit into one post; however, having gotten through just the vCenter features it became apparent that this would have to be a multi-post series…which is great news for the vCloud Air Network Service Providers out there, as it means there is a lot packed in for IaaS providers and MSPs to take advantage of.

With that, this post will cover the following:

  • vCenter 6.5 New Features
  • vCD and NSX Compatibility
  • Current Known Issues

vCenter 6.5 New Features:

Without question the enhancements to the VCSA stand out as some of the biggest features of 6.5, and as mentioned in the white paper, the installer process has been overhauled and is a much smoother, more streamlined experience than in previous versions. It’s also supported on more operating systems, and the 6.5 VCSA now surpasses the Windows version by offering the migration tool, native high availability and built-in backup and restore. One interesting side note to the new VCSA is that the HTML5 vSphere Client has shipped, though it’s still very much a work in progress, with a lot of unsupported functionality mentioned in the release notes…there is plenty of work to do to bring it up to parity with the Flex Web Client.

In terms of the inbuilt PostgreSQL database, I think it’s time that Service Providers feel confident in making the switch away from MSSQL (which was the norm with Windows-based vCenters), as the enhanced VCSA Management Interface (found on port 5480) has a new monitoring screen showing disk space usage, and also provides a way to gracefully start and stop the database engine.

Another vCenter enhancement that Service Providers will make use of is the native high availability feature, which is something a lot of people have been asking for for a long time. I always dealt with the no-HA constraint by accepting that vCenter might become unavailable for 5-10 minutes during maintenance, or at worst suffer an extended outage while recovering from a VM or OS level failure. Knowing that hosts and VMs keep working and responding with vCenter down, leaving only core management functionality unavailable, it was a risk that I and others were willing to take. However, in this day of the always-on datacenter it’s expected that management functionality be as available as the IaaS services themselves…so with that, this HA feature is very welcome for Service Providers.

This native HA solution is available exclusively for the VCSA and consists of active, passive and witness nodes that are cloned from the existing vCenter Server instance. The HA cluster can be enabled, disabled or destroyed at any time, and there is also a maintenance mode that prevents planned maintenance from causing an unwanted failover.

The VCSA Migration Tool, previously released with 6.0 Update 2m, ships in the VCSA ISO and can be used to migrate from Windows-based 5.5 vCenters to the 6.5 VCSA. Again, this is something more and more service providers will take advantage of, as the reliance on Windows-based vCenters and MSSQL becomes increasingly unwanted from a manageability and cost point of view. Throw in the enhanced features that are only available in the VCSA, and this is a migration all service providers should be planning.

To complete the move away from Windows-based dependencies, vSphere Update Manager has also been fully integrated into the VCSA. VUM is now fully integrated into the Web Client UI and is enabled by default. For environments with a large number of hosts, Auto Deploy is now fully manageable from the VCSA UI and doesn’t require PowerCLI to manage or configure its options. There is a new Image Builder included in the UI that can hit local or public repositories to pull images or drivers, and there are performance enhancements during deployments of ESXi images to hosts.

vCD and NSX Compatibility:

Shifting from new features and enhancements to an important subject when talking about service provider platforms…VMware product compatibility. vCAN Service Providers running a Hybrid Cloud will be running vCloud Director SP and/or NSX-v, and at the moment there is no support for either on vSphere 6.5. No compatible version of NSX is available for vSphere 6.5; if you attempt to prepare vSphere 6.5 hosts with NSX 6.2.x, you receive an error message and cannot proceed.

I haven’t tested whether vCloud Director SP will connect to and interact with vCenter 6.5 or ESXi 6.5, but as it’s not supported I wouldn’t suggest upgrading production IaaS platforms until the interoperability matrices are updated.

At this stage there is no word on when either product will support vSphere 6.5, but I suspect we will see NSX-v come out with a supported build shortly…though I’m expecting vCloud Director SP not to support 6.5 until the next major version release, which looks like arriving in the new year.

Installation and Upgrade Known Issues:

Having read through the release notes, there are also a number of known issues you should be aware of. I’ve gone through those and pulled out the ones I consider most likely to impact IaaS platforms.

  • After upgrading to vCenter Server 6.5, the ESXi hosts in High Availability clusters appear as Not Ready in the VMware NSX UI
    If your vSphere environment includes NSX and clusters configured with vSphere High Availability, after you upgrade to vCenter Server 6.5, both NSX and vSphere High Availability start installing VIBs on all hosts in the clusters. This might cause installation of NSX VIBs on some hosts to fail, and you see the hosts as Not Ready in the NSX UI.
    Workaround: Use the NSX UI to reinstall the VIBs.
  • Error 400 during attempt to log in to vCenter Server from the vSphere Web Client
    You log in to vCenter Server from the vSphere Web Client and log out. If, after 8 hours or more, you attempt to log in from the same browser tab, the following error results:
    400 An Error occurred from SSO. urn:oasis:names:tc:SAML:2.0:status:Requester, sub status:null
    Workaround: Close the browser or the browser tab and log in again.
  • Using storage rescan in environments with a large number of LUNs might cause unpredictable problems
    Storage rescan is an I/O intensive operation. If you run it while performing other datastore management operations, such as creating or extending a datastore, you might experience delays and other problems. Problems are likely to occur in environments with a large number of LUNs, up to the 1024 supported in the vSphere 6.5 release.
    Workaround: Typically, the storage rescans that your hosts periodically perform are sufficient. You are not required to rescan storage when you perform general datastore management tasks. Run storage rescans only when absolutely necessary, especially when your deployments include a large set of LUNs.
  • In vSphere 6.5, the name assigned to the iSCSI software adapter is different from earlier releases
    After you upgrade to the vSphere 6.5 release, the name of the existing software iSCSI adapter, vmhbaXX, changes. This change affects any scripts that use hard-coded values for the name of the adapter. Because VMware does not guarantee that the adapter name remains the same across releases, you should not hard-code the name in your scripts; look it up at runtime instead, as sketched after this list. The name change does not affect the behavior of the iSCSI software adapter.
    Workaround: None.
  • The bnx2x inbox driver that supports the QLogic NetXtreme II Network/iSCSI/FCoE adapter might cause problems in your ESXi environment
    Problems and errors occur when you disable or enable VMkernel ports and change the failover order of NICs for your iSCSI network setup.
    Workaround: Replace the bnx2x driver with an asynchronous driver. For information, see the VMware Web site.
  • When you use the Dell lsi_mr3 driver version 6.903.85.00-1OEM.600.0.0.2768847, you might encounter errors
    If you use the Dell lsi_mr3 asynchronous driver version 6.903.85.00-1OEM.600.0.0.2768847, the VMkernel logs might display the following message: ScsiCore: 1806: Invalid sense buffer.
    Workaround: Replace the driver with the vSphere 6.5 inbox driver or an asynchronous driver from Broadcom.
  • Storage I/O Control settings are not honored per VMDK
    Storage I/O Control settings are not honored on a per-VMDK basis. The VMDK settings are honored at the virtual machine level.
    Workaround: None.
  • Cannot create or clone a virtual machine on an SDRS-disabled datastore cluster
    This issue occurs when you select a datastore that is part of an SDRS-disabled datastore cluster in any of the New Virtual Machine, Clone Virtual Machine (to virtual machine or to template), or Deploy From Template wizards. When you arrive at the Ready to Complete page and click Finish, the wizard remains open and nothing appears to occur. The Datastore value status for the virtual machine might display “Getting data…” and does not change.
    Workaround: Use the vSphere Web Client for placing virtual machines on SDRS-disabled datastore clusters.
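
Coming back to the iSCSI software adapter rename above, here is a minimal sketch of how a script could discover the adapter name at runtime instead of hard-coding vmhbaXX (the grep/awk filter is illustrative):

    # List all iSCSI adapters and their current vmhba names
    esxcli iscsi adapter list

    # Or pull just the software iSCSI adapter name for use in a script
    esxcli storage core adapter list | grep iscsi_vmk | awk '{print $1}'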

These are just a few that I have singled out…it’s worth reading through all the known issues just in case there are any specific ones that might impact you.

In the next post in this vSphere 6.5 for Service Providers series I will cover more vCenter features, as well as ESXi enhancements and what’s new in core storage.

References:

http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-esxi-vcenter-server-65-release-notes.html

http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vsphere/vmw-white-paper-vsphr-whats-new-6-5.pdf

http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html

VCSPs Please Note: Veeam 9.5 RTM to GA Upgrade

A couple of weeks ago Veeam released the RTM build of Backup & Replication 9.5 to its Cloud and Service Provider partners. This was to ensure that any keen early adopters with Cloud Connect services through VCSP providers would be able to backup and replicate without issue when the GA dropped. The GA (build 9.5.0.711) was officially released yesterday, and I know of a few keen early clients and partners that have already updated production, which is a great testament to the trust people have in the product.

<UPDATE> 

The team have released the Zero Day Update:

This update is provided to enable upgrading existing installations of Veeam Backup & Replication 9.5 RTM partner preview (build 9.5.0.580) to generally available version of Veeam Backup & Replication 9.5 GA (build 9.5.0.711). This update addresses a number of issues reported by our partners on the preview build.

All new installations should be performed using the Veeam Backup & Replication 9.5.0.711 ISO that is available for download starting November 16th, 2016. Please delete the partner preview build ISO to ensure you don’t accidentally use it in the future.

https://www.veeam.com/kb2189

</UPDATE>

There is a small catch for those VCSPs that have upgraded to the RTM build: you won’t be able to upgrade directly to the GA build that’s currently on the download site.

There is an important note about this release that specifically affects VCSP partners with Cloud Connect deployments. The GA build (9.5.0.711) is different from the RTM build (9.5.0.580). An update should be available in the next few days for service providers to upgrade their Cloud Connect environments from the RTM to the GA build. In the meantime, these service providers can continue to run the RTM build in their production Cloud Connect environments.

If you try to upgrade you will get the following splash screen. As you can see, there is no upgrade option.

So, as mentioned above, there will be a patch update shortly, which will be communicated through the VCSP channels, and I will update this post when it becomes available.

 

HomeLab – SuperMicro 5028D-TN4T Storage Driver Performance Issues and Fix

Ok, I’ll admit it…I’ve had serious lab withdrawals since having to give up the awesome Zettagrid labs. Having a lab to tinker with goes hand in hand with being able to generate tech-related content…case in point, my new homelab got delivered on Monday and I have been working to get things set up so that I can deploy my new NestedESXi lab environment.


By way of a quick intro (a longer first impressions post will follow), I purchased a SuperMicro SYS-5028D-TN4T based on this TinkerTry bundle, which has become a very popular system for vExpert homelabbers. It’s got an Intel Xeon D-1541 CPU and I loaded it up with 128GB of RAM. The system comes with an embedded Lynx Point AHCI controller that supports up to six SATA devices and is listed on the VMware Compatibility Guide for ESXi 6.5.

The issue that I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start building my management VMs. I first noticed something wrong when uploading the Windows 2016 ISO to the datastore, with the upload taking about 30 minutes. From there I created a new VM and installed Windows…this took about two hours to complete, which was not what I expected…especially with the datastore being a decent class of SSD.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on, and as you can see from the screenshots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively…in throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.

ESXTOP was showing the VM with even worse write latency.
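
For anyone wanting to run the same check, the esxtop views I used are the standard disk ones; as a rough guide:

    esxtop          # launch from the ESXi shell (or use resxtop remotely)
    # press 'd' for the disk adapter view or 'u' for the disk device view
    # DAVG/cmd is latency at the device; KAVG/cmd is latency added by the VMkernel
    # sustained KAVG in the hundreds of milliseconds points at the kernel/driver
    # layer rather than the physical disk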

I wondered if I had bought a lemon of a storage controller, and checked the queue depth of the card. It’s listed with a QD of 31, which isn’t horrible for a homelab, so my attention turned to the driver. Referencing the VMware Compatibility Guide again, the listed device driver for the controller is ahci version 3.0.22vmw.

I searched through the installed device driver modules and found that the one listed above was present; however, there was also a native VMware device driver as well.
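
For anyone following along, a quick sketch of how the two drivers show up from the host’s shell (the grep filter is just for readability):

    # Look for the legacy vmklinux driver (ahci) and the native driver (vmw_ahci)
    esxcli system module list | grep -i ahci

    # Check which driver each storage adapter is actually bound to
    esxcli storage core adapter list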

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMware KB (thanks to @fbuechsel, who pointed me in the right direction in the vExpert Slack homelab channel), as shown below.
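
The KB covers enabling and disabling native drivers at the module level; in this case that means disabling vmw_ahci (the native AHCI driver in 6.5) so the host falls back to the legacy ahci driver after a reboot:

    # Disable the native AHCI driver, per VMware KB 2044993
    esxcli system module set --enabled=false --module=vmw_ahci

    # Reboot the host for the change to take effect
    reboot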

After the host rebooted I checked to see if the storage controller was using the device driver listed in the compatibility guide. As you can see below, not only was it using that driver, it was now showing all six HBA ports, as opposed to just the one seen in the first snippet above.

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Upon running CrystalDiskMark I was getting the expected speeds from the SSDs, and things are now moving along quite nicely.

Hopefully this post saves anyone else who might buy this or other SuperMicro Super Servers some time, so they don’t get caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.


References:

http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044993

vForumAU 2016 Recap: Best Event In Years!

Last week I was in Sydney for the 2016 edition of vForumAU…I’ve been coming to vForumAU since 2011, and this year’s event was probably up there with the best I have attended in that time. For the past couple of years the event has had to shift venues due to the Sydney Exhibition Centre being knocked down and rebuilt, and in that time it’s been at Luna Park and Star City Casino…both of which presented their own challenges for VMware, sponsors and attendees. This year’s event was held at The Royal Hall of Industries in Moore Park, which offered a perfect venue and helped deliver what was a great vForumAU.

Adding to the venue was the calibre of speakers VMware ANZ was able to bring out for this year’s event…in fact it was the best lineup I’ve seen or heard of outside of VMworld. We had Pat Gelsinger, Kit Colbert, Paul Strong and Bruce Davie to add to the local VMware talent, and given that this event fell after both VMworld US and Europe, I felt the content was more complete in terms of announcements, products and overall strategy and vision.

I heard Pat deliver the keynote at VMworld US a few months back, and while the deck was largely the same, I felt he delivered the message better and talked to the key points around VMware’s hybrid cloud strategy a lot more concisely, with a lot more tact in terms of ensuring that vCloud Air Network providers were still very much in the reckoning for VMware’s future strategy around hybrid cloud. There is no doubt that the partnerships with AWS and IBM have caused some unease in the vCAN, but every key slide had vCAN representation, which was pleasing to see.

The Cross-Cloud Foundation is also something that still sits uneasily with a lot of vCAN providers, but I have to admit that the tech preview of the Cross-Cloud Platform was very, very slick, and shows how much VMware has changed tack when it comes to playing with other public clouds. There is no doubt that cloud is the new hardware, and VMware wants to be there to manage it and offer its customers tools that do the same. Hybrid cloud is here to stay, and the hyper-scalers certainly have a share…however, on-premises and partner-hosted IaaS will remain significant and relevant for the next 10-15 years.

Moving on from Pat’s keynote, there was a super-session technical keynote held after lunch that featured 20-30 minutes on every new product enhancement or release that has been announced of late. From vSphere 6.5 to VSAN 6.5, a look at NSX futures, and VMware’s container platforms, this was a brilliant couple of hours of presentations. Highlights for me were Paul Strong talking VSAN, Kit Colbert going over the various Photon platforms, and Bruce Davie talking about NSX extensibility into AWS. Of note, Bruce Davie (who also presented at the main keynote) is someone I have come to seriously admire as a speaker over the past couple of years.

The sponsors hall had a very VMworld feel to it this year, with elements of VMworld brought to the event such as VMvillage, special lounges for All Access Pass visitors, and probably the best food I’ve experienced at a vForumAU by way of specialised food trucks bringing a wide array of foods to enjoy. Though the first day wasn’t as well received by exhibitors (AAP attendees pay for sessions, not so much to visit sponsors), in talking with some people on the booths the second day was very busy, and the venue and location had everything to do with that. Again, well done to the VMware events team for bringing the event to The Royal Hall of Industries.

Finishing off this recap, once again there was great spirit and community among both sponsors and attendees, and the venue offered a great chance to catch up socially with people from the VMware community…that benefit shouldn’t be lost when considering attending such an event. And while I didn’t attend the official party, I heard that it went really well and was highly entertaining, with a lot of food!

Well done to VMware ANZ for putting on a great event!


As a side note, I also attended my final VMware vChampion event on the Wednesday morning, where Kit Colbert facilitated an open discussion on containerised platforms and the new continuous integration and continuous deployment methodologies that are creeping their way into mainstream IT. Again, thanks to the vChampion team!

VMware vChampion Farewell!

About four years ago I was invited to join a program called VMware vChampions…this program is run and operated by the VMware ANZ channel and marketing teams and is an invite-only advocacy group whose members are drawn exclusively from VMware’s top partners and service providers in the ANZ region. The numbers have varied over the past couple of years, but at any one time there are about 30-40 vChampions in the group.

With my new role at Veeam I have had to leave the program, and this week at #vForumAU will be my last as a member of the group. Before I sign off I wanted to openly thank the people who have made the program so instrumental, not only from a personal work point of view but also in enhancing my engagement with the wider VMware community. Probably most importantly, superseding both the work and community benefits, the program has allowed me to develop friendships with those I have come to meet through it…some of those people I now consider among my closest friends.

The program helped take me to my first VMworld in 2012, which is still one of the highlights of my career and an experience that included a VMware executive briefing at the VMware campus and an introduction to the global VMware community. At vForumAU that same year, the vChampions were briefed by then-CTO Steve Herrod. The following year at PEX ANZ I was able to work towards landing a dream role at Zettagrid and also establish friendships that are still going strong today. Later that year at vForumAU the vChampions had a whole-day event that included a discussion with Martin Casado shortly after Nicira had been acquired by VMware…the inspiring talk by Martin was, again, a career highlight, and it lit a flame under me that got me into network virtualisation and deeper into automation.

Over the last couple of years the vChampion program scaled back its activities and the bi-annual meetings became once-a-year get-togethers; however, the team was still able to secure guest speakers such as Sanjay Poonen and Kit Colbert. In amongst the speakers, the group was given insider NDA access and product roadmaps…and therein lies the true value of the group for VMware: by equipping the vChampions with knowledge and updates, they could go back to their companies and advocate VMware technologies to the rest of their peers, and hopefully also speak out in the community about VMware technologies.

All in all, the value that the program has added to my career cannot be overstated, and I would like to thank Katrina Jones, Anthony Segren, John Donovan, Rhody Burton and Eugene Geaher for allowing me to be part of such a brilliant program. A special mention also to Grant Orchard and Greg Mulholland for being the vChampion champions within VMware and for always being there to help organise and support the vChampions.

Thanks guys and I hope the program can continue to deliver!

Veeam 9.5 – RTM Build Available for VCSPs

Last week we at Veeam dropped the RTM build of Veeam 9.5 to our Cloud Service Provider partners. As a VCSP partner you need to be ready for the v9.5 GA date (at this stage set for mid November) to ensure that any keen early adopters who have Cloud Connect services with providers are able to backup and replicate without issue. VCSPs should upgrade their Veeam B&R 9 platforms to v9.5 as soon as possible.

To start with, it’s best to have Veeam B&R updated to the latest patch release, which is Update 2 (build 1715), and before upgrading to v9.5 it’s best to restart all Veeam services, or better yet reboot the Veeam management servers…as an extra measure, before beginning the upgrade you should disable all jobs and call an outage window so that your customers can pause their jobs.

If you have Enterprise Manager installed you will need to upgrade that first…notice below that the only available option is to upgrade Enterprise Manager.

This is a very easy upgrade and there aren’t really any gotchas in getting from v9 to v9.5; the following components will be upgraded.

As this is a point release you will not be asked for a new license and the installer will detect the current license as shown below.

Next, dependencies will be checked and you will be able to install any that are missing.

Once completed, you have the option to upgrade Backup & Replication. Running through the next-next install, you get prompted again for the license and the service account password, and get warned that the database will be updated.

Once done, you are set to update the Cloud Connect components via the updated console interface. On first login it will prompt you to upgrade all components to their respective 9.5 versions. Any Cloud Gateways, Veeam Proxies, Repository Servers and Network Extension Appliances will be upgraded.

To validate the install and make sure everything is working as expected when it comes to Cloud Connect, you should run a test job and make sure it goes through without issue. With full v9 backward compatibility, Cloud Connect is ready for the GA release. Stay tuned over the next couple of weeks as I run through the best new features of Veeam 9.5 for VCSPs…there are significant new features and enhancements that extend the platform for VCSPs to take advantage of and offer more services to customers.

Veeam Forums:

I would encourage all VCSPs to keep track of what’s happening in the Veeam Cloud & Service Provider forums, as there are already a number of useful posts in there relating to the RTM.

https://forums.veeam.com/veeam-cloud-service-providers-forum-f34/

IMPORTANT NOTE:

The product team has built in an important warning that VCSP customers will see if their Service Provider’s endpoints are not on version 9.5. They will see the warning below, advising that the Cloud Connect platform version is not up to date and that, if they perform the update anyway, backup and replication jobs will stop working. Again, this is why it’s critical that all VCSPs upgrade ASAP.

vForumAU 2016: vBrownBag TechTalks

With vForumAU 2016 less than a week away, it’s time to talk about what the vBrownBag crew will be up to next week in Sydney. If you don’t know what the vBrownBag TechTalks are, head here for an overview…but in a nutshell, the crew offers the technical community a platform to present on topics that are community-driven rather than sales and marketing, and gives those that participate a public platform from which to interact with the community.

The Sydney vForumAU edition still has a few slots available, so if you are heading to vForumAU next week and want to get something off your chest that the VMware community might find informative…head to the site below and register.

Below is a snapshot of the talks that will feature next week:

  • Matt Allford – Using Vester Project to Enforce vSphere Configuration
  • Frank Yoo – What is RESTful API and How to use it
  • David Lloyd – Building an Elastic Bare Metal Service
  • Luis Concistre – Microsegmentation VMware Horizon and NSX
  • Brett Johnson – Disaster Planning and What’s new in vSphere 6.5

TechTalks at vForum Sydney