Good post from Hu Yoshida (thank you) on Thin Provisioning
As you may have read already, I led off my 2012 trends blog series with a post on a “Focus on increasing storage utilization.”
I have talked with many customers who have seen utilization of storage assets increase from 20%–30% to 50%–60% using efficiency tools such as thin provisioning, dynamic tiering, deduplication, and active archive. A comment from John Nicholson indicates that the problem of efficiency may be even greater than the problem of utilization, as he ponders “how 100 TB of raw disk capacity turns into 15 TB of actual data with layers of thick provisioning, virtualization, and wasteful snapshots.”
Layers of thick provisioning can be eliminated through a combination of thin provisioning in the storage system and APIs from file system vendors. A storage system that supports thin provisioning satisfies a user’s request for storage with virtual capacity, and provides physical capacity only as data is actually written. The storage system cannot reclaim space that has been deleted, however, unless the file system informs it through an API or a SCSI command such as UNMAP. Storage systems that support thin provisioning can also “thin” existing thick volumes by moving them into a thin-provisioned pool: as the pages or chunks are moved, the system can detect which ones contain only zeros and drop them from the volume. If the storage system is also capable of storage virtualization, it can thin provision external storage that would otherwise need to be replaced to gain this capability.
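To make the mechanics concrete, here is a minimal sketch in Python of the three behaviors described above: allocate-on-write, reclaim-on-unmap, and zero-page detection when thinning a thick volume. This is an illustration, not any vendor’s implementation; the class names, the page size, and the `thin_existing_thick_volume` helper are all assumptions for the example.

```python
PAGE_SIZE = 4096  # assumed page/chunk size for this sketch


class ThinPool:
    """Toy model of a thin-provisioned pool: physical pages exist
    only for data that has actually been written."""

    def __init__(self):
        self.pages = {}  # (volume_name, page_index) -> page data

    def write(self, volume, page_index, data):
        # Physical capacity is consumed only at write time.
        self.pages[(volume, page_index)] = data

    def unmap(self, volume, page_index):
        # Models the file system telling the array (e.g. via SCSI UNMAP)
        # that a page was deleted, so its space can be reclaimed.
        self.pages.pop((volume, page_index), None)

    def physical_pages_used(self):
        return len(self.pages)


def thin_existing_thick_volume(pool, name, thick_pages):
    """Move a thick volume into the pool, dropping all-zero pages."""
    for i, page in enumerate(thick_pages):
        if any(page):  # keep only pages that hold real data
            pool.write(name, i, page)


# A 10-page thick volume with real data in only 2 pages:
thick = [bytes(PAGE_SIZE) for _ in range(10)]
thick[0] = b"boot" + bytes(PAGE_SIZE - 4)
thick[7] = b"data" + bytes(PAGE_SIZE - 4)

pool = ThinPool()
thin_existing_thick_volume(pool, "vol1", thick)
print(pool.physical_pages_used())  # 2 pages allocated instead of 10
```

The key point the sketch captures is the last one in the paragraph above: without the `unmap` notification from the file system, the pool has no way to know a page is free, which is exactly why the API/SCSI integration matters.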
Once a volume is thin provisioned, all snapshots and moves of that volume become more efficient, since the allocated-but-unused capacity has been eliminated. If the volume is composed of static data (data that may still be read but is no longer being updated), it could be moved into an active archive where only one copy is needed for redundancy, and all those additional snapshots and backups can be eliminated.
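A hypothetical sketch of why those snapshots get cheaper: a snapshot of a thin volume only has to reference the pages that are physically allocated, and a shared page is duplicated only when the live volume overwrites it afterward. The `ThinVolume` class here is invented for illustration and is not tied to any particular product.

```python
class ThinVolume:
    """Toy thin volume: tracks only allocated pages, and snapshots
    share page data instead of copying the full provisioned size."""

    def __init__(self):
        self.pages = {}  # page_index -> data (allocated pages only)

    def write(self, index, data):
        self.pages[index] = data

    def take_snapshot(self):
        # The snapshot references the current allocated pages; nothing
        # is copied for the unallocated remainder of the volume.
        return dict(self.pages)


vol = ThinVolume()
vol.write(0, "original")
snap = vol.take_snapshot()
vol.write(0, "updated")  # the live volume diverges; the snapshot keeps the old page
print(snap[0], vol.pages[0])  # original updated
```

Because only allocated pages participate, a mostly empty 100 TB thick volume no longer drags its unwritten space into every snapshot or replica once it has been thinned.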
Read on here