Thin Provisioning – What’s the scoop?

Post from Cormac Hogan (thank you)

This is a discussion that comes up regularly, going all the way back to vSphere 4.0, when we first introduced some serious features around thin provisioning storage for Virtual Machines. The objective of this post is to look at the options available to you for over-committing/over-allocating storage, as well as for avoiding stranded-storage scenarios. The whole point of thin provisioning, whether done on the array or at the hypervisor layer, is to allow a VM to run with just the storage it needs, and to avoid giving a VM storage that it might only use at some point in the future. After all, you’re paying for this storage, so the last thing you want is to be paying for something you might never use.

Let’s begin by looking at the various types of Virtual Machine disks (VMDKs) that are available to you.

VMDK Overview

Thin - These virtual disks do not reserve space on the VMFS filesystem, nor do they reserve space on the back-end storage. They only consume blocks when data is written to disk from within the VM/Guest OS. The amount of actual space consumed by the VMDK starts out small, but grows in size as the Guest OS commits more I/O to disk, up to a maximum size set at VMDK creation time. The Guest OS believes that it has the maximum disk size available to it as storage space from the start.

Thick (aka LazyZeroedThick) – These disks reserve space on the VMFS filesystem, but there is an interesting caveat. Although they are called thick disks, they behave similarly to thinly provisioned disks: disk blocks are only used on the back-end (array) when they get written to inside the VM/Guest OS. Again, the Guest OS inside this VM thinks it has the maximum size from the start.

EagerZeroedThick – These virtual disks reserve space on the VMFS filesystem and zero out the disk blocks at creation time. This disk type may take a little longer to create as it zeroes out the blocks, but its performance should be optimal from deployment time (no overhead in zeroing out disk blocks on demand, meaning no latency incurred from the zeroing operation). However, if the array supports the VAAI Zero primitive, which offloads the zero operation to the array, the additional time to create the zeroed-out VMDK should be minimal.
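
Under the covers, these three types map onto just two flags on the VMDK’s flat-file backing object. Here is a minimal sketch using pyVmomi (VMware’s Python SDK); the helper name and the ‘thin’/‘lazy’/‘eager’ labels are our own shorthand, not official terms:

    from pyVmomi import vim

    def disk_backing(disk_type):
        """Build a flat VMDK backing for 'thin', 'lazy' or 'eager' (labels ours)."""
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = 'persistent'
        if disk_type == 'thin':
            backing.thinProvisioned = True    # Thin: blocks allocated on first write
        elif disk_type == 'eager':
            backing.thinProvisioned = False
            backing.eagerlyScrub = True       # EagerZeroedThick: zeroed at creation
        else:
            backing.thinProvisioned = False   # LazyZeroedThick: space reserved, zeroed on demand
        return backing

In other words, thinProvisioned controls whether space is reserved up front, and eagerlyScrub controls whether the zeroing happens at creation time or on first write.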

Option 1 – Thin Provision at the Array Side

If your storage array supports it, devices/LUNs can be thinly provisioned at the back-end/array. The advantage is physical disk-space savings: there is no need to provision storage based on the total size of all VMDKs. Storage pools of ‘thin’ devices (which can grow over time) can now be used to present datastores to ESXi hosts. VMs using Thin or LazyZeroedThick VMDKs will consume what they need rather than what they are allocated, which results in a capex saving (no need to purchase additional disk space up front). Most arrays which allow thin provisioning will generate events/alarms when the thin-provisioned devices/pools start to get full. In most cases, it’s simply a matter of dropping more storage into the pool to address this, but of course the assumption here is that you have a SAN admin who is monitoring for these events (a monitoring sketch follows the list below).

Advantages of Thin Provisioning at the back-end:

  1. Address situations where a Guest OS or applications require lots of disk space before they can be installed, but might end up using only a portion of that disk space.
  2. Address situations where your customers state they need a lot of disk space for their VM, but might end up using only a portion of that disk space.
  3. In larger environments which employ SAN admins, the monitoring of over-committed storage falls on the SAN admin, not the vSphere admin (in situations where the SAN admin is also the vSphere admin, this isn’t such an advantage).
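
Whichever side does the thin provisioning, someone has to watch the over-commitment ratio. As a rough sketch of the vSphere-side view, again in pyVmomi (the vCenter host name and credentials are placeholders, and it assumes datacenters sit directly under the root folder):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Hypothetical vCenter endpoint and credentials.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                      pwd='secret', sslContext=ctx)
    try:
        for dc in si.RetrieveContent().rootFolder.childEntity:
            for ds in getattr(dc, 'datastore', []):
                s = ds.summary
                if not s.accessible:
                    continue
                used = s.capacity - s.freeSpace
                # uncommitted = space promised to thin/lazy disks but not yet written
                provisioned = used + (s.uncommitted or 0)
                print('%s: %.0f%% provisioned (%.1f GB of %.1f GB capacity)' % (
                    s.name, 100.0 * provisioned / s.capacity,
                    provisioned / 2**30, s.capacity / 2**30))
    finally:
        Disconnect(si)

A figure above 100% means you have promised more space than physically exists, and you are relying on alarms (array-side, or vCenter’s own datastore usage alarms) to warn you before the pool fills.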

Option 2 – Thin Provision at the Hypervisor Side

There are a number of distinct advantages to using Thin Provisioned VMDKs. In no specific order:

  1. As above, address situations where a Guest OS or applications require lots of disk space before they can be installed, but might end up using only a portion of that disk space.
  2. Again as above, address situations where your customers state they need a lot of disk space for their VM, but might end up using only a portion of that disk space.
  3. Over-commit in a situation where you need to deploy more VMDKs than the currently available disk space at the back-end, perhaps because additional storage is on order, but not yet in place.
  4. Over-commit, but on storage that does not support Thin Provisioning on the back-end (e.g. local storage).
  5. No space reclamation/dead space accumulation issues. More on this shortly.
  6. Storage DRS space-usage balancing features can be used when one datastore in a datastore cluster starts to run out of space, possibly as a result of thinly provisioned VMDKs growing in size (see the sketch after this list).
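
To make option 2 concrete, here is a hedged sketch of adding a thin-provisioned VMDK to an existing VM with pyVmomi; the function name is ours, and it assumes vm is a vim.VirtualMachine you have already looked up, with a free unit on its first SCSI controller:

    from pyVmomi import vim

    def add_thin_disk(vm, size_gb):
        """Attach a new thin-provisioned VMDK to the VM's first SCSI controller."""
        controller = next(d for d in vm.config.hardware.device
                          if isinstance(d, vim.vm.device.VirtualSCSIController))
        used_units = {d.unitNumber for d in vm.config.hardware.device
                      if getattr(d, 'controllerKey', None) == controller.key}
        # Unit 7 is reserved for the SCSI controller itself.
        unit = next(u for u in range(16) if u != 7 and u not in used_units)

        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = 'persistent'
        backing.thinProvisioned = True          # the hypervisor-side thin option

        disk = vim.vm.device.VirtualDisk()
        disk.key = -1                           # negative key marks a new device
        disk.capacityInKB = size_gb * 1024 * 1024
        disk.controllerKey = controller.key
        disk.unitNumber = unit
        disk.backing = backing

        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        change.device = disk
        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

The Guest OS sees the full size_gb immediately, but the datastore is only charged as blocks are actually written, which is exactly the over-commitment behaviour described above.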

Read on here
