HomeLab – SuperMicro 5028D-TNT4 Storage Driver Performance Issues and Fix

Ok, I’ll admit it…I’ve had serious lab withdrawals since having to give up the awesome Zettagrid Labs. Having a lab to tinker with goes hand in hand with being able to generate tech-related content…case in point, my new homelab got delivered on Monday and I have been working to get things set up so that I can deploy my new Nested ESXi lab environment.

By way of a quick intro (a longer first-impressions post will follow), I purchased a SuperMicro SYS-5028D-TN4T based on this TinkerTry bundle, which has become a very popular system for vExpert homelabbers. It’s got an Intel Xeon D-1541 CPU and I loaded it up with 128GB of RAM. The system comes with an embedded Lynx Point AHCI controller that supports up to six SATA devices and is listed on the VMware Compatibility Guide for ESXi 6.5.

The issue I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start building my management VMs. I first noticed something was wrong when uploading the Windows 2016 ISO to the datastore: the upload took about 30 minutes. From there I created a new VM and installed Windows…this took about two hours to complete, which was nowhere near what I expected…especially with the datastore sitting on a decent-class SSD.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on. As you can see from the screenshots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively…in throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.
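For anyone wanting to reproduce this check, the numbers above come straight out of esxtop’s disk views. A quick sketch of where to look, using the standard esxtop view keys and latency columns:

```shell
# Start esxtop from an SSH session or the ESXi Shell, then switch views:
#   d = disk adapter view (the AQLEN column shows the adapter queue depth)
#   u = disk device view
#   v = virtual machine disk view
esxtop

# Columns worth watching in the device view:
#   DAVG/cmd - latency at the device (physical disk) per command, in ms
#   KAVG/cmd - time a command spends queued in the VMkernel, in ms
#   GAVG/cmd - total latency as seen by the guest (DAVG + KAVG)
# A healthy SSD-backed datastore should sit in the low single digits for
# all three; sustained values in the hundreds of ms point at a driver or
# queueing problem rather than the disk itself.
```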

ESXTOP was showing the VM with even worse write latency.

I wondered if I had bought a lemon of a storage controller and checked the queue depth of the card. It’s listed with a QD of 31, which isn’t horrible for a homelab, so my attention turned to the driver. Again referencing the VMware Compatibility Guide, the device driver for the controller is listed as ahci version 3.0.22vmw.

I searched the installed device driver modules and found that the one listed above was present, however there was also a native VMware device driver as well.
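A quick way to see both drivers side by side is to list the host’s kernel modules with esxcli. A sketch of the check, assuming the stock ESXi 6.5 module names (vmw_ahci for the native driver, ahci for the legacy vmklinux one):

```shell
# List the kernel modules on the host and filter for the AHCI drivers.
# ESXi 6.5 ships both the legacy vmklinux driver (ahci) and the newer
# native driver (vmw_ahci); by default the native one claims the controller.
esxcli system module list | grep -i ahci
```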

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMware KB (thanks to @fbuechsel, who pointed me in the right direction in the vExpert Slack homelab channel) as shown below.
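The KB’s disable step boils down to a single esxcli command plus a reboot. A sketch of the procedure as I ran it:

```shell
# Disable the native AHCI module so the controller falls back to the
# legacy vmklinux ahci driver at next boot (per VMware KB 2044993).
esxcli system module set --enabled=false --module=vmw_ahci

# The change only takes effect after a reboot.
reboot
```

Should you need to revert, the same command with `--enabled=true` re-enables the native driver.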

After the host rebooted I checked to see if the storage controller was using the device driver listed in the Compatibility Guide. As you can see below, not only was it using that driver, but it was now showing six HBA ports as opposed to just the one seen in the first snippet above.
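To see which driver owns the adapters after the reboot, esxcli can list the storage adapters along with the driver bound to each one. A sketch of the verification step:

```shell
# Show each vmhba adapter and the driver it is bound to.
# Before the fix: a single vmhba backed by vmw_ahci.
# After the fix: six vmhba entries, one per SATA port, backed by the
# legacy ahci driver listed in the Compatibility Guide.
esxcli storage core adapter list
```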

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Running CrystalDiskMark, I was now getting the expected speeds from the SSDs and things are moving along quite nicely.

Hopefully this post saves anyone else who buys this or other SuperMicro SuperServers some time, and keeps them from getting caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.


References:

http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2044993

10 comments

  • Hi, thanks for the info!! I had the same problem with an old NUC; after disabling vmw_ahci I see the six HBA ports again and the speed is back to normal. The model is a “Panther Point AHCI Controller”.

  • Thank you! I had the same issue on a Supermicro 5018A-FTN4 (Atom C2758 CPU). I had been tearing my hair out all day wondering why the disk performance had deteriorated. I was also seeing lost-datastore errors on my two local disks. I checked the Compatibility Guide and it is the same driver for this system as you have above. I disabled the VMware driver and I seem to be back in business.

  • Anthony, thank you so much for your careful documentation on this issue, which I’m also tracking carefully. As you know, it turns out it doesn’t seem to affect everybody with Xeon D.

    While the exact systems affected haven’t yet been identified, nor the exact upgrade and install scenarios that can cause this, your fix seems to resolve the issue for everybody who has the need, which is great!

    Seems most of the right people are already looped in as well, see:
    https://twitter.com/ErikBussink/status/799662926449770496

  • Thanks for sharing. I had the same problem on my Intel NUC 5th gen.

  • Thanks for sharing indeed. The problem CAN go all the way back to Sandy Bridge and the Cougar Point SATA-AHCI controller, as I can attest. Couldn’t get more than 1MB/s out of an Intel 730 SSD 480GB, which had been speedy under 6.0U2.

  • Very good detective work!! Worked like a charm for me. can’t thank you enough.

  • I believe this is my exact issue on my newly upgraded C602-based Patsburg SATA ports. I didn’t notice it on my main box because I only use LSI controllers. But one of my other boxes is just using the on-board SATA…so slow. Trying your fix now. Hoping this does the trick!

  • And that did the trick! You rock! Thank you so much!

  • And I spoke too soon. This works fine on one of my systems. But on my main box, when I disable that driver, all hell breaks loose. I can’t even reboot without a hard reset. What am I missing?
