vStorage APIs for Array Integration

The Linux SCSI Target Wiki

LIO Unified Target
Original author(s): Nicholas Bellinger
Developer(s): Datera, Inc.
Development status: Production
Written in: C
Operating system: Linux
Type: Target engine
License: Proprietary commercial software
Website: datera.io
See Target for a complete overview of all fabric modules.

The VMware vStorage APIs for Array Integration (VAAI) enable seamless offload of locking and block operations onto the storage array. VAAI is supported in the LIO Enterprise Edition as used in RTS OS.


Overview

VMware introduced the vStorage APIs for Array Integration (VAAI) in vSphere 4.1 with a plugin, and provided native VAAI support with vSphere 5. VAAI significantly enhances the seamless integration of storage and servers.


VAAI comprises the following primitives:

Name                      Primitive                   Description                                                                             Block  NFS  RTS OS
Atomic Test & Set (ATS)   Hardware Assisted Locking   Enables granular locking of block storage devices, accelerating performance.           Yes    N/A  Yes
Zero                      Block Zeroing               Communication mechanism for thin provisioning arrays. Used when creating VMDKs.        Yes    N/A  Yes
Clone                     Full Copy, XCopy            Commands the array to duplicate data in a LUN. Used for Clone and VMotion operations.  Yes    N/A  Yes
Delete                    Space Reclamation           Allows thin provisioned arrays to clear unused VMFS space.                             Yes    Yes  Yes

The presence of VAAI and its features can be verified from the ESX 5 CLI as follows:

~ # esxcli storage core device vaai status get
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported


Atomic Test & Set (ATS)

ATS is arguably one of the most valuable storage technologies to come out of VMware. It allows comparing and writing SCSI blocks in one atomic operation using the T10 COMPARE_AND_WRITE command. This enables locking of block storage devices at much finer granularity than with the preceding T10 Persistent Reservations, which operate only on full LUNs. Thus ATS allows a significant performance gain for shared LUNs. For instance, HP reported that it supports six times more VMs per LUN with VAAI than without it.
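
For illustration, a COMPARE_AND_WRITE can also be issued by hand from a Linux initiator using sg_compare_and_write from sg3_utils. This is only a sketch: the device path, LBA, and data file are placeholders, and the input file must hold the verify data followed by the new data (two blocks total for --num=1):

~ # sg_compare_and_write --in=cw.bin --lba=100 --num=1 /dev/sdd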

NFS doesn’t need ATS, as locking is a non-issue and VM files aren’t shared the same way LUNs are.

Feature presence can be verified from the ESX 5 CLI:

~ # esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
Value of HardwareAcceleratedLocking is 1
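
Like the Zero and Clone primitives below, ATS can also be disabled from the ESX 5 CLI:

~ # esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking
Value of HardwareAcceleratedLocking is 0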

Block Zeroing (Zero)

Thin provisioning is difficult to get right because storage arrays don't know what’s going on in the hosts. VAAI includes a generic interface for communicating free space, thus allowing large ranges of blocks to be zeroed out at once.

Zero uses the T10 WRITE_SAME command (or a proprietary alternative enabled with a vendor-specific VMware plug-in) and defaults to a 1 MB block size. VMware can use WRITE_SAME in conjunction with the T10 UNMAP command. Zeroing only works for capacity inside a VMDK.
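
For illustration, a WRITE_SAME zeroing pass can be issued by hand from a Linux initiator with sg_write_same from sg3_utils; when no input file is given, it sends a zeroed block. The device path and LBA range here are placeholders:

~ # sg_write_same --16 --lba=2048 --num=2048 /dev/sdd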

Feature presence can be verified from the ESX 5 CLI:

~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 1

To disable Zero from the ESX 5 CLI:

~ # esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 0

Full Copy (Clone)

This is the signature VAAI command. Instead of reading each block of data from the array and then writing it back, the hypervisor can command the array to duplicate a range of data on its behalf. If supported and enabled, VMware operations like Clone and VMotion can become extremely fast: speedups of 10x or more are achievable, in particular on flash backstores behind slow links such as 1 GbE.

Clone uses the T10 EXTENDED_COPY command (or a proprietary alternative enabled with a vendor-specific VMware plug-in) and defaults to a 4 MB block size.
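
For illustration, sg3_utils ships sg_xcopy, which issues an EXTENDED_COPY with a dd-like syntax. A sketch with placeholder devices and block count:

~ # sg_xcopy if=/dev/sdc of=/dev/sdd bs=512 count=8192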

Feature presence can be verified from the ESX 5 CLI:

~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1

To disable Clone from the ESX 5 CLI:

~ # esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 0

This change takes immediate effect, without requiring a 'Rescan All' from VMware.

Space Reclamation (Delete)

VMFS operations like cloning and VMotion traditionally included no hints to the array to clear out unused VMFS space. Hence, some of the biggest storage operations couldn't be accelerated or "thinned out".

Delete uses the T10 UNMAP command (or a proprietary alternative enabled with a vendor-specific VMware plug-in) to allow thin-capable arrays to offload clearing unused VMFS space.
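
For illustration, the same command can be sent by hand from a Linux initiator with sg_unmap from sg3_utils (placeholder device and LBA range):

~ # sg_unmap --lba=2048 --num=2048 /dev/sdd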

However, vCenter 5 doesn't correctly handle waiting for the storage array to return the UNMAP command status, so Delete is disabled by default in vSphere 5.

Feature presence can be verified from the ESX 5 CLI:

~ # esxcfg-advcfg -g /VMFS3/EnableBlockDelete
Value of EnableBlockDelete is 1

To disable Delete from the ESX 5 CLI:

~ # esxcfg-advcfg -s 0 /VMFS3/EnableBlockDelete
Value of EnableBlockDelete is 0

Many SATA SSDs also have issues handling UNMAP properly, so it is disabled by default in RTS OS. To enable UNMAP from targetcli, enter the context of the respective backstore device and set the following attribute:

/backstores/iblock/fioa> set attribute emulate_tpu=1
Parameter emulate_tpu is now '1'.
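
The setting can be confirmed from the same context before rescanning (a sketch, assuming this targetcli version supports get attribute):

/backstores/iblock/fioa> get attribute emulate_tpu
emulate_tpu=1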

Then, in the 'Storage' view of VMware vSphere Client or vCenter, perform a 'Rescan All' of the datastores.


Performance

Cloning VMware VMs in 25 s over 1 GbE on an RTS OS SAN with VAAI and Fusion-io ioDrive PCIe flash memory.

Performance improvements offered by VAAI can be grouped into categories, discussed in the following sections. The actual improvement seen in any given environment depends on a number of factors; in some environments, the improvement may be small.

Cloning, migrating and zeroing VMs

The biggest factor for Cloning and Block Zeroing operations is whether the bottleneck sits on the front end or the back end of the storage controller. If the throughput of the storage network is lower than what the backstore can handle, offloading the bulk work of reading and writing virtual disks for cloning and migration, and of writing zeroes for virtual disk initialization, can help immensely.

One example where substantial improvement is likely is when the ESX servers use 1 GbE iSCSI to connect to an RTS OS storage system with flash memory. The front end at 1 Gbps doesn't support enough throughput to saturate the back end. When cloning or zeroing is offloaded, however, only small commands with small payload go across the front, while the actual I/O is completed by the storage controller itself directly to disk.
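
As a rough illustration with assumed round numbers rather than measured values: cloning a 20 GB virtual disk through the hypervisor moves about 40 GB across the wire (20 GB read plus 20 GB written), which takes on the order of six minutes at the ~110 MB/s a 1 GbE link sustains, whereas a flash back end copying internally at ~1 GB/s can complete the same clone in well under a minute.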

VMFS datastore scalability

Documentation from various sources, including VMware professional services best practices, has traditionally recommended 20 to 30 VMs per VMFS datastore, and sometimes even fewer. Documents for VMware Lab Manager suggest limiting the number of ESX servers in a cluster to eight. These recommended limits are due in part to the effect of SCSI reservations on performance and reliability. Extensive use of some features, such as VMware snapshots and linked clones, can trigger large numbers of VMFS metadata updates, which require locking. Before vSphere 4.1, reliable locks on smaller objects were obtained by briefly locking the entire LUN with a SCSI Persistent Reservation. Any other server trying to access the LUN during the reservation would fail, then wait and retry, up to 80 times by default. This waiting and retrying added to perceived latency and reduced throughput in VMs. In extreme cases, if the other server exceeded the number of retries, errors were logged in the VMkernel logs and I/Os could be returned to the VM as failures.

When all ESX servers sharing a datastore support VAAI, ATS can eliminate SCSI Persistent Reservations, at least those taken to obtain smaller locks. As a result, datastores can be scaled to more VMs and more attached servers than previously.

RTS has tested up to 128 VMs in a single VMFS datastore on RTS OS. Testing was limited to 128 VMs because the maximum addressable LUN size in ESX is 2 TB, which means each VM can occupy at most 16 GB, including its virtual disk, virtual swap, and any other files. Virtual disks much smaller than that generally do not leave enough space to be practical for an OS and applications.

Load was generated and measured on the VMs with Iometer. For some tests, all VMs carried load; in others, such as when sets of VMs were started, stopped, or suspended, load was placed only on the VMs that stayed running. Additional tests were run with all VMs running Iometer while VMware snapshots were created and deleted as quickly as possible on all, or a large subset, of the VMs.

The results of these tests demonstrated that performance impact measured before or without VAAI was either eliminated or substantially reduced when using VAAI, to the point that datastores could reliably be scaled to 128 VMs in a single LUN.

Caveats

For the ATS (Atomic Test & Set) primitive, the use of VAAI depends on the type of filesystem:

On VAAI hardware                            New VMFS-5                                 Upgraded VMFS-5                           VMFS-3
Single-extent datastore reservations        ATS only [1]                               ATS, falling back to SCSI-2 reservations  ATS, falling back to SCSI-2 reservations
Multi-extent datastore, locks on non-head   Spanning allowed on ATS hardware only [2]  ATS, except when locks on non-head        ATS, except when locks on non-head

  1. If a new VMFS-5 is created on a non-ATS storage device, SCSI-2 reservations will be used.
  2. When creating a multi-extent datastore where ATS is used, the vCenter Server will filter out non-ATS devices, so that only devices that support the ATS primitive can be used.
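
Whether a given datastore has been formatted ATS-only can be checked from the ESX 5 CLI with vmkfstools (a sketch; the datastore name is a placeholder):

~ # vmkfstools -Ph -v1 /vmfs/volumes/datastore1 | grep -i mode
Mode: public ATS-only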

Monitoring VAAI with esxtop

The esxtop command in ESX 5 has two new sets of counters for VAAI operations, available under the disk device view. Both sets of counters include the three key VAAI primitives. To view VAAI statistics using esxtop, follow these steps from the ESX 5 CLI:

~ # esxtop
  1. Press 'u' to change to the disk device stats view.
  2. Press 'f' to select fields, or 'o' to change their order. Note: This selects sets of counters, not individual counters.
  3. Press 'o' to select VAAI Stats and/or 'p' to select VAAI Latency Stats.
  4. Optionally, deselect Queue Stats, I/O Stats, and Overall Latency Stats by pressing 'f', 'g', and 'i' respectively in order to simplify the display.
  5. To see the whole LUN field, widen it by pressing 'L' (capital) then entering a number ('36' is wide enough to see a full NAA ID of a LUN).

The output of esxtop looks similar to the following:

 4:46:50am up 44 min, 281 worlds, 0 VMs, 0 vCPUs; CPU load average: 0.00, 0.00, 0.00

DEVICE                                CLONE_RD CLONE_WR  CLONE_F MBC_RD/s MBC_WR/s      ATS ATSF     ZERO   ZERO_F MBZERO/s   DELETE DELETE_F  MBDEL/s
naa.60014050e4485b9bdc841d09478888e6        0        0        0     0.00     0.00       23    0        0        0     0.00        0        0     0.00
naa.600140515743d5195b0498b8aad6fdd2     1583      792        0     0.00     0.00     1322    0       23        0     0.00        0        0     0.00
naa.60014053937c69d44ff4e0b9e5a95398        0        0        0     0.00     0.00        0    0        0        0     0.00        0        0     0.00
naa.60014055fcf891d0c5b4a60a66942400     4746     3955        0     0.00     0.00     4402    0       45        0     0.00        0        0     0.00
naa.600140573d94f8e531d4d1ab5c8a72ef        0        0        0     0.00     0.00       23    0        0        0     0.00        0        0     0.00
naa.6001405a2e547c17329487b865d1a66e     3164     4746        0     0.00     0.00     5692    0       54        0     0.00        0        0     0.00
naa.6001405a3a17fe4483c46f994f74b4e6        0        0        0     0.00     0.00        0    0        0        0     0.00        0        0     0.00
t10.ATA_____ST3400832AS_____________        0        0        0     0.00     0.00        0    0        0        0     0.00        0        0     0.00

The VAAI counters in esxtop are:

DEVICE: Devices that support VAAI (LUNs on a supported storage system) are listed by their NAA ID. You can get the NAA ID for a datastore from the datastore properties in vCenter, from the Storage Details - SAN view in Virtual Storage Console, or with the vmkfstools -P /vmfs/volumes/<datastore> command. LIO/RTS OS LUNs start with naa.6001405.

Note: Devices or datastores other than LUNs on an external storage system, such as CD-ROMs, internal disks (which may be physical disks or LUNs on internal RAID controllers), and NFS datastores, are listed but show all zeroes for the VAAI counters.

CLONE_RD: Number of Full Copy reads from this LUN.
CLONE_WR: Number of Full Copy writes to this LUN.
CLONE_F: Number of failed Full Copy commands on this LUN.
MBC_RD/s: Effective throughput of Full Copy command reads from this LUN, in megabytes per second.
MBC_WR/s: Effective throughput of Full Copy command writes to this LUN, in megabytes per second.
ATS: Number of successful lock commands on this LUN.
ATSF: Number of failed lock commands on this LUN.
ZERO: Number of successful Block Zeroing commands on this LUN.
ZERO_F: Number of failed Block Zeroing commands on this LUN.
MBZERO/s: Effective throughput of Block Zeroing commands on this LUN, in megabytes per second.

Counters that count operations do not return to zero unless the server is rebooted. Throughput counters are zero when no commands of the corresponding primitive are in progress.

Clones between VMFS datastores and Storage VMotion operations that use VAAI increment the clone read counter on one LUN and the clone write counter on another. In either case, the totals of the clone read and clone write columns should be equal.
