Snapshot 101: Copy-on-write vs Redirect-on-write

Good post by W. Curtis Preston (thank you)

There are two very different ways to create snapshots: copy-on-write and redirect-on-write. If an IT organization is considering using the snapshot functionality of its storage system, it is essential to understand which type of snapshot it creates and the pros and cons of each method.

Rather than the more common term volume, this column will use the term protected entity to refer to the entity being protected by a given snapshot. While it is true that the protected entity is typically a RAID volume, it is also true that some object storage systems do not use RAID. Their snapshots may be designed to protect other entities, including containers, a NAS share, etc. In this case, the protected entity may reside on a number of disk drives, but it does not reside on a volume in the RAID or LUN sense.
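The contrast the post draws can be sketched in a toy model. This is a minimal illustration of the two snapshot styles, not any vendor's implementation; class and method names are invented. Copy-on-write preserves the old block by copying it aside before overwriting in place (one read plus two writes per first change), while redirect-on-write sends the new data to a fresh block and lets the snapshot keep the old pointers (one write per change, at the cost of fragmentation over time).

```python
class CopyOnWrite:
    """Toy COW: on the first write after a snapshot, copy the old block
    aside, then overwrite in place (one read + two writes)."""
    def __init__(self, blocks):
        self.blocks = list(blocks)      # the live protected entity
        self.snapshot = {}              # block index -> preserved old data
    def take_snapshot(self):
        self.snapshot = {}
    def write(self, i, data):
        if i not in self.snapshot:
            self.snapshot[i] = self.blocks[i]   # copy old data aside first
        self.blocks[i] = data                   # then overwrite in place
    def read_snapshot(self, i):
        return self.snapshot.get(i, self.blocks[i])


class RedirectOnWrite:
    """Toy ROW: new writes go to fresh blocks; the snapshot simply keeps
    the old pointer map (one write, but reads may fragment over time)."""
    def __init__(self, blocks):
        self.store = list(blocks)               # append-only block store
        self.live = list(range(len(blocks)))    # live pointer map
        self.snap = None
    def take_snapshot(self):
        self.snap = list(self.live)             # just copy the pointer map
    def write(self, i, data):
        self.store.append(data)                 # redirect write to a new block
        self.live[i] = len(self.store) - 1
    def read_snapshot(self, i):
        return self.store[self.snap[i]]
```

In both models a post-snapshot write leaves the snapshot reading the old data, but the COW path pays the extra copy up front while the ROW path defers the cost to later reads and garbage collection.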

Read on here

Data Retention for Dummies

Good post by Chris Mellor (thank you) over at El Reg

All is confusion. The old certainties are gone. New certainties just don’t exist. The shifting shapes, players, products and technologies in the storage landscape are seen through fog. How the heck does everything fit together?

After four days in Silicon Valley meeting startups the bewilderment ratio is even higher. It’s like Dragons’ Den, where each new player is shinier and brighter than the previous one, becomes your favourite but then, as sure as eggs are eggs, will be eclipsed by the next one.

Read on here

Hadoop Storage: DAS vs. Shared

Post by George Crump (thank you)

Hadoop is a software solution that was developed to solve the challenge of rapidly analyzing vast, often disparate data sets. Also known as big data, the results of these analytics, especially when produced quickly, can significantly improve an organization’s ability to solve problems, create new products and cure diseases. One of the key tenets of Hadoop is to bring the compute to the storage instead of the storage to the compute. The fundamental belief is that the network between compute and storage is too slow, impacting time to results.
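The "bring compute to the storage" principle the excerpt describes can be sketched as a toy placement policy: prefer running a task on a node that already holds a replica of its input block, and only fall back to a network read when no such node is free. This is an invented illustration, not Hadoop's actual scheduler.

```python
def schedule(block_replicas, free_nodes):
    """Assign each input block to a node, preferring data-local placement.

    block_replicas: list of sets, one per block, naming the nodes that
                    hold a replica of that block.
    free_nodes:     list of nodes with spare task slots.
    Returns (assignments, remote_reads): the node chosen per block, and
    how many blocks had to be pulled over the network.
    """
    assignments, remote_reads = [], 0
    for replicas in block_replicas:
        local = [n for n in free_nodes if n in replicas]
        if local:
            assignments.append(local[0])        # data-local: no network copy
        else:
            assignments.append(free_nodes[0])   # remote: data crosses the network
            remote_reads += 1
    return assignments, remote_reads
```

The DAS-vs-shared debate in the post is essentially about how large `remote_reads` is in practice and whether modern networks have made that cost negligible.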

Read on here

Object storage: why, when, where… and but.

Good post by Enrico Signoretti (thank you)

In one of my latest posts I wrote about private object storage not being for everyone… especially if you don’t have the size that makes it viable. But, on the other hand, we are all piling up boatloads of data, and users need to access it from many different locations, applications and devices at any time.

Read on here

Flash + Object – The Emergence of a Two Tier Enterprise

Post by George Crump (thank you)

For as long as there has been data, there has been a quest to consolidate it onto a single storage system, but that quest never seems to be satisfied. The problem is that there are essentially two types of data: active and archive. Active data typically needs fast I/O response times at a reasonable cost. Archive data needs to be very cost-effective with reasonable response times. Storage systems that try to meet both of these needs in a single system often end up doing neither particularly well. This has led to the purchase of data- and/or environment-specific storage systems, and to storage system sprawl.
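The active/archive split above amounts to a simple tiering policy: data touched recently stays on the fast flash tier, cold data moves to the cheap object tier. A minimal sketch, with the 30-day threshold and tier names invented purely for illustration:

```python
def place(files, now, cold_after_days=30):
    """Assign each file to a tier by last-access age.

    files: mapping of file name -> last-access time (in days, epoch-style).
    now:   current time in the same units.
    Returns a mapping of file name -> "flash" or "object".
    """
    placement = {}
    for name, last_access in files.items():
        if now - last_access <= cold_after_days:
            placement[name] = "flash"   # active: fast I/O at higher cost
        else:
            placement[name] = "object"  # archive: cheap, slower access
    return placement
```

A real system would also have to handle promotion back to flash when archived data turns hot again, which is where the "single system that does both" designs the post criticizes tend to struggle.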

Read on here

Next-generation storage for the software-defined datacenter

Post by Siddhartha Roy and Paul Luber (thank you)

Storage is a foundational component of the datacenter fabric and is an intrinsic part of Microsoft’s software-defined datacenter solution. Our storage investments are centered on bringing value to customers in terms of increasing cloud scale, availability, performance, and reliability, while lowering acquisition and operational costs – with Windows Server, and now also with Microsoft Azure Stack.

Read on here

HGST Unveils Active Archive Object Storage System

Post by Adam Armstrong (thank you)

Today HGST (a Western Digital company) announced its new object storage system, Active Archive. The Active Archive System is designed to address the need for rapid access to massive data stores. The type of data the system predominantly stores is data that is past the create-and-modify phase of its life and moving into its long-term retention phase, but that still requires fast access.

Read on here

The reverse wars – DAS vs NAS vs SAN

Post by Chin-Fah Heoh (thank you)

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which was only able to handle 16 devices and throughput up to 80 (later 160 and 320) MB/sec.

Read on here

Qumulo emerges with data-aware scale-out NAS

Good post by Dave Raffo (thank you)

Isilon founding engineers launch Qumulo Core, software designed to manage scale-out NAS with real-time analytics.

Qumulo came out of stealth today with what it describes as data-aware scale-out NAS with real-time analytics built in.

Qumulo Core was developed by many of the same developers who created Isilon scale-out NAS. Qumulo founders Peter Godman, Aaron Passey and Neal Fachan were responsible for dozens of Isilon patents. EMC acquired Isilon for $2.25 billion in 2010.

Read on here

vSphere Virtual Volumes Interoperability: VAAI APIs vs VVOLs

Good post by Rawlinson Rivera (thank you)

In 2011 VMware introduced block-based VAAI APIs as part of the vSphere 4.1 release. These APIs helped improve the performance of VMFS by offloading some of the heavy operations to the storage array. In subsequent releases, VMware added VAAI APIs for NAS, thin provisioning, and T10 command support for the block VAAI APIs.

Now with Virtual Volumes (VVOLs) VMware is introducing a new virtual machine management and integration framework that exposes virtual disks as the primary unit of data management for storage arrays. This new framework enables array-based operations at the virtual disk level that can be precisely aligned to application boundaries with the capability of providing a policy-based management approach per virtual machine.

Read on here