Optimizing performance using VAAI and the ESX MaxHWTransferSize setting

xcopy transfer size

If you’re running an EMC VNX on a block OE version older than patch 209, you might want to upgrade to the latest and greatest version (patch 209 or newer). The 209 release offers EMC’s latest fixes and enhancements for VAAI performance, and many of the known performance issues have been fixed in the 209 code. However, in some environments sub-optimal performance has still been observed with xcopy operations, or in some cases with non-xcopy I/O issued to the same pool while xcopy operations are running.

EMC’s lab testing has revealed an ESX host setting that may offer a performance increase if you are suffering from sub-optimal xcopy and non-xcopy performance during VAAI-enabled xcopy operations. Changing this host setting requires that all attached storage supports a MaxHWTransferSize of 16MB. Adjusting the xcopy transfer size on ESX hosts from the default 4MB to 16MB can significantly improve performance in some cases, especially when fewer than six concurrent xcopy operations are being performed. A larger xcopy transfer size means fewer concurrent xcopy I/Os consume the host queue depth, so more of the queue depth remains available for non-xcopy host I/O, resulting in a better balance between xcopy and non-xcopy I/O on the ESX host side. To adjust the xcopy transfer size, issue the following command on all attached ESX hosts (the value is specified in KB, so 16384 equals 16MB):

# esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize

And to verify that changing the setting was successful (or to check what the current value is):

# esxcfg-advcfg -g /DataMover/MaxHWTransferSize
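Since the command has to be run on every attached host, a small wrapper can help. The sketch below is an assumption-laden example, not EMC or VMware tooling: the hostnames are hypothetical, remote execution via `ssh` as root is assumed to be enabled on your hosts, and by default it only prints the commands it would run (set `DRY_RUN=0` to actually execute them).

```shell
#!/bin/sh
# Hedged sketch: apply the 16MB xcopy transfer size on several ESX hosts.
# Hostnames and the use of ssh are assumptions -- adapt to your environment.
# With DRY_RUN=1 (the default) the script only prints the commands.

CMD="esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize"

apply_xfer_size() {
  # $@: ESX hostnames (hypothetical examples below)
  for h in "$@"; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "ssh root@$h $CMD"
    else
      ssh "root@$h" "$CMD"
    fi
  done
}

OUT=$(apply_xfer_size esx01 esx02 esx03)
printf '%s\n' "$OUT"
```

Running it in dry-run mode first lets you review the exact commands before touching any host.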

Before implementing this change, confirm that every storage system attached to the ESX hosts supports the 16MB transfer size.
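To check the result programmatically rather than by eye, the output of the `-g` command can be parsed. This is a sketch under one assumption: that the output line ends with the numeric value (e.g. "Value of MaxHWTransferSize is 16384"); the exact wording may differ between ESX builds, so verify it on your own hosts first.

```shell
# Hedged sketch: verify the reported xcopy transfer size.
# Assumes `esxcfg-advcfg -g` output ends with the value in KB.

check_value() {
  # $1: output line from `esxcfg-advcfg -g`, $2: expected value in KB
  [ "$(printf '%s\n' "$1" | awk '{print $NF}')" = "$2" ]
}

# Example with a sample output line (16384 KB = 16MB):
if check_value "Value of MaxHWTransferSize is 16384" 16384; then
  echo "transfer size OK"
else
  echo "transfer size NOT 16MB"
fi
```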

