Good post by Cormac Hogan (thank you) over in vSphere Blog
I have done a number of blog posts in the recent past related to our newest VAAI primitive, UNMAP. For those who do not know, VAAI UNMAP was introduced in vSphere 5.0 to allow the ESXi host to inform the storage array that files or VMs had been moved or deleted from a Thin Provisioned VMFS datastore, allowing the array to reclaim the freed blocks. We had no way of doing this previously, so many customers ended up with a considerable amount of stranded space on their Thin Provisioned VMFS datastores.
Now there were some issues with using this primitive which meant we had to disable it for a while. Fortunately, 5.0 U1 brought forward some enhancements which allow us to use this feature once again.
Over the past couple of days, my good friend Paudie O’Riordan from GSS has been doing some testing with the VAAI UNMAP primitive against our NetApp array. He kindly shared the results with me, so that I can share them with you. The posting is rather long, but the information contained will be quite useful if you are considering implementing dead space reclamation.
Some details about the environment which we used for this post:
- NetApp FAS 3170A
- ONTAP version 8.0.2 (I believe earlier versions do not support UNMAP)
- ESXi version 5.0 U1, build 623860
Step 1 - Verify that your storage array is capable of processing SCSI UNMAP commands. The first place to look is the vSphere Client UI. Select the datastore and examine the ‘Hardware Acceleration’ details (Hardware Acceleration is how we refer to VAAI in the vSphere UI).
Step 2 - The Hardware Acceleration status states Supported, so it looks like this array is VAAI capable. The issue now is that we don’t know exactly which primitives are supported, so we need to run an esxcli command to determine this. First, you need to get the NAA id of the device backing your datastore. One way of doing this is to run ‘esxcli storage vmfs extent list’ on the ESXi host. In our setup, this command returned the NAA id naa.60a98000572d54724a346a6170627a52 for the LUN backing our VMFS-5 datastore.
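If you want to script this step, the NAA id can be picked out of the extent-list output. A minimal sketch: ‘extract_naa’ is a helper name of our own invention, the datastore name ‘netapp-ds’ and the VMFS UUID in the sample are made up for illustration, and on a live ESXi host you would pipe ‘esxcli storage vmfs extent list’ into the function instead of the here-doc (the NAA id is the one from this post):

```shell
# extract_naa: print the naa.* device id for a named datastore, reading
# 'esxcli storage vmfs extent list' style output on stdin.
extract_naa() {
  awk -v ds="$1" '$1 == ds { for (i = 1; i <= NF; i++) if ($i ~ /^naa\./) print $i }'
}

# Illustrative output (datastore name and VMFS UUID are invented;
# on a real host, pipe the live esxcli command output in instead):
extract_naa "netapp-ds" <<'EOF'
Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
netapp-ds    4f9eca2e-31cf4a6e-8130-001b21857010  0              naa.60a98000572d54724a346a6170627a52  1
EOF
```

This prints just the NAA id, which can then be fed to the device-level commands in the next step.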
Once the NAA id has been identified, we can now go ahead and display device-specific details around Thin Provisioning and VAAI. To do that, we use another esxcli command, ‘esxcli storage core device list -d <naa>’. This command can show us information such as firmware revision, thin provisioning status, the VAAI filter and the VAAI status:
# esxcli storage core device list -d naa.60a98000572d54724a346a6170627a52
Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d54724a346a6170627a52)
Has Settable Display Name: true
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.60a98000572d54724a346a6170627a52
SCSI Level: 4
Is Pseudo: false
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
Other UIDs: vml.020033000060a98000572d54724a346a6170627a524c554e202020
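If you are checking a number of devices, the long listing can be cut down to just the fields that matter for reclamation. A small sketch; the here-doc below stands in for the captured output above, so on a real host you would pipe ‘esxcli storage core device list -d <naa>’ into the grep instead:

```shell
# Keep only the thin-provisioning and VAAI fields from a device listing.
grep -E 'Thin Provisioning Status|Attached Filters|VAAI Status' <<'EOF'
Display Name: NETAPP Fibre Channel Disk (naa.60a98000572d54724a346a6170627a52)
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
EOF
```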
Here we see that the device is indeed Thin Provisioned and supports VAAI. Now we can run a command to display the VAAI primitives supported by the array for that device. In particular, we are interested in knowing whether the array supports the UNMAP primitive for dead space reclamation (what we refer to as the Delete Status). Another esxcli command is used for this step, ‘esxcli storage core device vaai status get -d <naa>’:
# esxcli storage core device vaai status get -d naa.60a98000572d54724a346a6170627a52
VAAI Plugin Name: VMW_VAAIP_NETAPP
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported
The device displays Delete Status as supported, meaning that the ESXi host can send SCSI UNMAP commands to the array for this device when a space reclaim operation is requested.
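To turn this check into something scriptable, a small helper can decide UNMAP support from that command's output. A sketch under our own naming (‘unmap_supported’ is not an esxcli option); on an ESXi host you would feed it ‘esxcli storage core device vaai status get -d <naa>’ rather than the captured output used here:

```shell
# unmap_supported: succeed (exit 0) only if the VAAI status output
# read on stdin reports Delete Status as supported.
unmap_supported() {
  grep -q 'Delete Status: supported'
}

# Using the output captured in this post:
if unmap_supported <<'EOF'
VAAI Plugin Name: VMW_VAAIP_NETAPP
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported
EOF
then
  echo "UNMAP (dead space reclamation) is available on this device"
fi
```

Note that the pattern deliberately will not match a 'Delete Status: unsupported' line, so the check fails cleanly on arrays without UNMAP support.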
Great – so we have now confirmed that we have a storage array that is capable of dead space reclamation.
Read on here