ESXi 6.5 Storage Performance Issues and Fix

[NOTE]: I decided to republish this post with a new heading and skip right to the meat of the issue, as I’ve had a lot of people reach out saying that the post helped them with their performance issues on ESXi 6.5. Hopefully people can find the content more easily and have a fix in place sooner.

The issue that I came across was to do with storage performance and the native driver that comes bundled with ESXi 6.5. With the release of vSphere 6.5 yesterday, the timing was perfect to install ESXi 6.5 and start to build my management VMs. I first noticed some issues when uploading the Windows 2016 ISO to the datastore, with the ISO taking about 30 minutes to upload. From there I created a new VM and installed Windows…this took about two hours to complete, which I knew was not right…especially with the datastore being a decent-class SSD.

I created a new VM and kicked off a new install, but this time I opened ESXTOP to see what was going on, and as you can see from the screenshots below, the kernel and disk write latencies were off the charts, topping 2000ms and 700-1000ms respectively. In throughput terms I was getting about 10-20MB/s when I should have been getting 400-500MB/s.

ESXTOP was showing the VM with even worse write latency.
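For anyone who wants to reproduce the check, this is roughly how I watch those counters in ESXTOP from an SSH session on the host. The keystrokes and column names below are standard esxtop views, not anything specific to this host:

```shell
# Launch esxtop on the ESXi host, then switch views with single keystrokes:
esxtop
#   d  - disk adapter view (per-HBA)
#   u  - disk device view (per-device/datastore)
#   v  - virtual machine disk view
# Columns to watch (all in milliseconds):
#   DAVG/cmd - device latency (the array/SSD itself)
#   KAVG/cmd - kernel latency (time spent in the VMkernel, e.g. queuing)
#   GAVG/cmd - guest-observed latency (DAVG + KAVG)
```

High KAVG with modest DAVG, as seen here, points at the host/driver side rather than the disk itself.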

I wondered if I had bought a lemon of a storage controller and checked the queue depth of the card. It’s listed with a QD of 31, which isn’t horrible for a homelab, so my attention turned to the driver. Referencing the VMware Compatibility Guide again, the device driver for the controller is listed as ahci version 3.0-22vmw.

I searched through the installed device driver modules and found that the one listed above was present; however, there was also a native VMware device driver.
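The installed driver packages and loaded modules can be listed like this (the grep pattern is just a convenience; on a stock ESXi 6.5 install you should see both the legacy sata-ahci VIB and the native vmw-ahci VIB):

```shell
# List the installed AHCI-related driver packages (VIBs)
esxcli software vib list | grep ahci

# Show the corresponding kernel modules and whether they are enabled/loaded
esxcli system module list | grep ahci
```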

I confirmed that the storage controller was using the native VMware driver and went about disabling it as per this VMware KB (thanks to @fbuechsel, who pointed me in the right direction in the vExpert Slack homelab channel), as shown below.
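The disable step itself boils down to turning off the native module so the controller falls back to the legacy driver on the next boot. A minimal sketch of the KB procedure (note the module name uses an underscore, vmw_ahci, while the VIB name uses a hyphen):

```shell
# Disable the native AHCI driver module; the controller will be
# claimed by the legacy sata-ahci driver after a reboot
esxcli system module set --enabled=false --module=vmw_ahci

# A reboot is required for the change to take effect
reboot
```

Re-running the same command with `--enabled=true` reverts the change if needed.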

After the host rebooted I checked to see if the storage controller was using the device driver listed in the compatibility guide. As you can see below, not only was it using that driver, but it was now showing all six HBA ports as opposed to just the one seen in the first snippet above.
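The post-reboot check can be done from the command line as well; the adapter list shows which driver has claimed each vmhba:

```shell
# Confirm the adapters are now claimed by the legacy ahci driver;
# each of the controller's HBA ports appears as its own vmhba entry
esxcli storage core adapter list
```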

I once again created a new VM and installed Windows, and this time the install completed in a little under five minutes! Quite a difference! Upon running CrystalDiskMark I was now getting the expected speeds from the SSDs, and things are moving along quite nicely.

Hopefully this post saves anyone else who might buy this, or other SuperMicro SuperServers, some time and keeps them from getting caught out by poor storage performance caused by the native VMware driver packaged with ESXi 6.5.



  • It’s a little misleading that this is called a fix when it’s really a workaround, but thank you. I wonder when VMware will add this as a known issue in the ESXi 6.5 release notes. It’s been 4 months now…

  • Looks like the latest ESXi patch includes an update, did you get a chance to test it?

    [root@esxi:~] esxcli software vib list | grep vmw-ahci
    vmw-ahci 1.0.0-34vmw.650.0.14.5146846 VMW VMwareCertified 2017-03-15

  • I had similar issues. Fresh installed a 6.5 ESXi cluster and could not reliably deploy any of our appliances on it. Would get I/O errors during the boot of the VM, or sometimes even an I/O error during the OVA import process. I did upgrade them all to the latest VMware patch, and still had the issues (to answer am3rigo’s question above). Only when I disabled the VMware driver did this work. (Hosts are on semi-older Supermicro h/w, SATA disks). Thanks for the workaround. VMware needs to address this.

  • Thank you!!!
    This fix took my latency numbers from a crazy-high 50,000 down to a high-but-acceptable 60ms average under heavy sustained write conditions. Maximum read & write rate is ca. 20MB/s.

    Part of my problem is a result of using the Crucial CT525MX300; a Samsung 850 EVO is reporting a -.24 average latency (not sure how that’s possible) with a maximum disk read & write rate of approximately 25MB/s!

    Thanks again.

  • Thank you!
    That helped me. I had the same issue with the following driver versions:
    [root@esxi:~] esxcli software vib list | grep ahci
    sata-ahci 3.0-22vmw.650.0.0.4564106 VMW VMwareCertified 2017-05-26
    vmw-ahci 1.0.0-32vmw.650.0.0.4564106 VMW VMwareCertified 2017-05-26
    After disabling vmw-ahci, latency on my SSD went back to normal (a few ms) from several hundred ms.

  • This solved my issue. I was transferring at 2MB/s to an SSD over a 1Gbps network connection – tried a number of things and this was the first solution that worked.

    Thank you so much for your post! 10/10, would recommend!

  • Good stuff, I have a pair of SuperMicro 1541/Xeon-Ds, each with 2x10T ST/He and a 512G Toshiba M.2, which were crawling @ ~40Mb/s on the vSAN when copying my good old media library (6TB) from one volume to another. And the vSAN volume rebuild was estimated to take days… Now all is back to acceptable values, at least getting the full write throughput of the disks (~100MB/s).

  • Thanks! Did the trick on 6.5.0 (Build 4887370)