There are countless posts out there comparing the E1000 and VMXNET3 adapters and explaining why VMXNET3 should (where possible) always be used for Windows VMs.
http://rickardnobel.se/vmxnet3-vs-e1000e-and-e1000-part-2/
http://longwhiteclouds.com/2014/08/01/vmware-vsphere-5-5-virtual-network-adapter-performance/
Last week I was provisioning a new Windows Server 2012 R2 VM to act as a Veeam Repository. For mass storage we have MD3200i’s presenting block storage over iSCSI. Going through the motions of a build that I’ve done countless times…I deployed the OS template and then added two additional NICs to complete the VM build.
After mounting the iSCSI volume I went to do some basic benchmarking and throughput testing… CrystalDiskMark is great for basic performance testing. The initial results were underwhelming to say the least.
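If you don’t have CrystalDiskMark handy, a crude sequential throughput check can be scripted. Below is a minimal Python sketch as a rough stand-in for CrystalDiskMark’s sequential test; the path is a placeholder for the iSCSI-backed volume, and the read pass may be inflated by the OS cache.

```python
# Rough sequential throughput check (not CrystalDiskMark): write and re-read a
# 1 GiB test file on the mounted volume and report MB/s.
import os, time

path = r"E:\throughput_test.bin"   # hypothetical mount point of the iSCSI volume
size = 1024 * 1024 * 1024          # 1 GiB total
block = 1024 * 1024                # 1 MiB per write/read
buf = os.urandom(block)

start = time.time()
with open(path, "wb") as f:
    for _ in range(size // block):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())           # force data out to the volume before timing stops
print("write: %.1f MB/s" % (size / (1024 * 1024) / (time.time() - start)))

start = time.time()
with open(path, "rb") as f:
    while f.read(block):           # note: may be served from OS cache, rough figure only
        pass
print("read:  %.1f MB/s" % (size / (1024 * 1024) / (time.time() - start)))

os.remove(path)
```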
Fairly poor read results…with OK sequential writes. The MD3200i Disk Groups are capable of doing 100-130 MB/s, so I knew something wasn’t right. I initially suspected the physical network…but everything checked out. I then looked at the iSCSI MPIO setup and again…everything checked out. Looking back through the VM hardware I found that I had mistakenly added E1000 NICs for the two additional iSCSI networks. After removing those and reconfiguring them as VMXNET3 I reran the tests and got the results I expected (though writes were low, possibly due to concurrent operations) and all was well.
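For reference, the swap itself can also be scripted. The following is only a minimal sketch using the pyVmomi SDK, not the exact steps I took in the vSphere client; the vCenter address, credentials and VM name are placeholders. It removes a VM’s E1000 adapters and adds VMXNET3 adapters on the same network backing. The guest will see these as new NICs (so IP settings need to be reapplied), and it’s best done with the VM powered off.

```python
# Sketch: replace E1000 adapters with VMXNET3 on a single VM via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "veeam-repo-01")  # hypothetical VM name

changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualE1000):
        # queue removal of the E1000 adapter...
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.remove, device=dev))
        # ...and addition of a VMXNET3 adapter reusing the same portgroup backing
        nic = vim.vm.device.VirtualVmxnet3(backing=dev.backing)
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic))

if changes:
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))

view.Destroy()
Disconnect(si)
```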
This was a quick public service announcement post to ensure VMXNET3 is used where possible. If you want to search through your environment for Windows VMs with E1000s…have a look at this post using a CloudPhysics Card (or the scripted sketch after the tweet below)…and remember!
Please no more E1000s… #southpark #vmware pic.twitter.com/Kxgp0z0hXH
— Luke Brown (@Luke_br) March 27, 2015
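For anyone who prefers a script to the CloudPhysics Card, here is a hedged pyVmomi sketch (vCenter address and credentials are placeholders) that lists Windows VMs still carrying E1000/E1000e adapters:

```python
# Sketch: report Windows VMs that still have E1000-family adapters.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    cfg = vm.config
    if cfg is None or "windows" not in (cfg.guestFullName or "").lower():
        continue  # skip non-Windows guests and templates without config
    e1000s = [d for d in cfg.hardware.device
              if isinstance(d, (vim.vm.device.VirtualE1000, vim.vm.device.VirtualE1000e))]
    if e1000s:
        print("%s: %d E1000-family adapter(s)" % (vm.name, len(e1000s)))

view.Destroy()
Disconnect(si)
```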