Thinking different about storage

Good post by Enrico Signoretti (thank you)

In the last few months I have had several interesting briefings with storage vendors. Now I need to stop, connect the dots, and think about what could come next.
It’s incredible to see how rapidly the storage landscape is evolving and how much smarter it is becoming. This will change the way we store, use and manage data and, of course, the design of future infrastructures.

Read on here

How to safely use 8TB Drives in the Enterprise

Good post by George Crump (thank you)

After a hiatus of a few years, higher-capacity hard drives are coming to market. We expect 8TB drives to be readily available before the end of the year, with 10TB drives soon to follow. And at the rate capacity demands are increasing, those drives can’t get here soon enough. But these new, extremely high-capacity disk drives are being met with some trepidation: there are concerns about performance, reliability and serviceability. Can modern storage systems build enough safeguards around these products for the enterprise data center to count on them?
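
One way to make the serviceability concern concrete is rebuild time. A rough back-of-the-envelope sketch, assuming a sustained rebuild rate of 150 MB/s (an assumption for illustration, not a figure from the post):

```python
# Back-of-the-envelope rebuild-time estimate for a high-capacity drive.
# The sustained throughput figure is an assumption, not a vendor spec.

CAPACITY_TB = 8        # drive capacity
REBUILD_MBPS = 150     # assumed sustained rebuild throughput

capacity_mb = CAPACITY_TB * 1_000_000
hours = capacity_mb / REBUILD_MBPS / 3600
print(f"Best-case full-drive rebuild: ~{hours:.0f} hours")  # ~15 hours
```

And that is the best case; on a busy array the rebuild competes with production I/O, which is why wide erasure coding and declustered RAID get so much attention at these capacities.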

Read on here

Deduplicated Backup Storage – 3 Modes of Operation

Good post by Brian Seltzer (thank you)

The move away from tape backups toward disk-based backups has been going on for a while now. Storing backups on disk generally means faster backups and faster restores. However, disk isn’t as cheap as tape, and storing many terabytes or even petabytes of data on disk can lead to sprawling storage systems. To reduce the cost and physical footprint of backup disk, backup storage is commonly deduplicated. This can have a dramatic impact on the size of backed-up data, especially if your retention period includes multiple full backups.
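
A minimal sketch of that arithmetic, assuming illustrative figures (a 10TB full, 5% block change between fulls, eight fulls retained); none of these numbers come from the post:

```python
# Why retention with multiple full backups deduplicates so well:
# successive fulls share most of their blocks. All figures are
# illustrative assumptions.

full_size_tb = 10      # size of one full backup
num_fulls = 8          # full backups kept in retention
change_rate = 0.05     # fraction of blocks changed between fulls

logical = full_size_tb * num_fulls                                    # what the backup app writes
stored = full_size_tb + full_size_tb * change_rate * (num_fulls - 1)  # unique blocks actually kept
print(f"Logical: {logical} TB, stored: {stored:.1f} TB, ratio {logical / stored:.1f}:1")
# Logical: 80 TB, stored: 13.5 TB, ratio 5.9:1
```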

Read on here

HGST Unveils Active Archive Object Storage System

Post by Adam Armstrong (thank you)

Today HGST (a Western Digital company) announced its new object storage system, Active Archive. The Active Archive System is designed to address the need for rapid access to massive data stores. It predominantly stores data that is past the create-and-modify phase of its lifecycle and is moving into long-term retention, but that still requires fast access.

Read on here

Dispelling Myths about IOPs

Good post by Hu Yoshida (thank you)

Since the SPC numbers for the G1000 came in at 2 million IOPS and blew away every other all-flash array vendor, there have been a number of posts in the blogosphere discussing the relevance of IOPS as a measure of performance. Let’s look at some of the myths about IOPS.
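
One of those myths is worth making concrete: a big IOPS number says nothing about latency on its own. By Little's Law, throughput equals outstanding I/O divided by service time, so deep queues alone can produce headline IOPS figures. A minimal sketch with assumed numbers:

```python
# Little's Law for storage: IOPS = outstanding I/Os / average latency.
# Both inputs below are assumptions for illustration.

latency_s = 0.0005     # 0.5 ms average I/O service time
queue_depth = 64       # outstanding I/Os kept in flight

iops = queue_depth / latency_s
print(f"{iops:,.0f} IOPS")  # 128,000 IOPS from modest latency and a deep queue
```

The same array benchmarked at queue depth 1 would report a tiny fraction of that figure, which is why latency at a realistic queue depth tells you more than a peak IOPS headline.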

Read on here

Quantum Doubles Down on Data Archiving

Post by Pedro Hernandez (thank you)

Quantum is tackling the growth of unstructured data, and its growing impact on IT budgets, with three new offerings unveiled today.

The San Jose, Calif.-based data backup specialist has taken the wraps off its new Artico NAS appliance, which provides fast file services courtesy of its internal disks while supporting data archival operations that target the company’s Lattus Object Storage hardware, Scalar tape libraries (i80, i500, i6000) or Q-Cloud Archive.

Read on here

How does Data Loss Happen and How can it be Stopped?

Good post by George Crump (thank you)

A recent blog by B&L Associates brought to light the problem of data loss. The entry cites surveys from Kroll Ontrack and EMC, which indicated that organizations see data loss as their single most significant risk and that the cost of losing data is almost two trillion dollars worldwide. Assuming that most of these organizations have some form of a data protection process in place and that those processes include some form of disaster recovery, why does data loss occur and how can it be prevented?

Read on here

How much Storage do I need?

Good post in the Solarwinds forum (thank you)

When it comes to the enterprise technology stack, nothing has captured my heart & imagination quite like enterprise storage systems.

Stephen Foskett once observed that all else is simply plumbing, and he’s right. Everything else in the stack exists merely to transport, secure, process, manipulate, organize, index or in some way serve & protect the bytes in your storage array.

But it’s complex to manage, especially in small/medium enterprises where the storage spend is rare and there are no do-overs. If you’re buying an array, you’ve got to get it right the first time, and that means you’ve got to figure out a way to forecast how much storage you actually need over time.
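
A minimal sketch of that forecasting exercise, assuming simple compound data growth; the starting capacity and growth rate below are made-up inputs, not figures from the post:

```python
# Compound-growth capacity forecast over a planned array lifetime.
# Starting capacity and growth rate are illustrative assumptions.

current_tb = 40          # usable capacity consumed today
annual_growth = 0.30     # observed year-over-year data growth
years = 5                # planned array lifetime

for year in range(1, years + 1):
    needed = current_tb * (1 + annual_growth) ** year
    print(f"Year {year}: ~{needed:.0f} TB")
# At 30% growth, ~149 TB by year 5: nearly 4x today's footprint.
```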

Read on here

Why Virtual Volumes?

Post by Andrew Sullivan (thank you)

How many times in the last 3-4 years have you heard “Virtual Volumes”, “VVols”, “Storage Policy Based Management”, or any of the other terms associated with VMware’s newest software-defined storage technology? I first heard about VVols in 2011, when I was still a customer. The concept of no longer managing my virtual machine datastores, but rather simply consuming storage as needed, with features applied as requested, was fascinating and exciting to me.

Read on here

More Money for the Hyper-Converged Market

Good post by Chris M Evans (thank you)

This month, SimpliVity Corporation (www.simplivity.com) announced a Series D round of financing, raising an additional $175m for a total valuation of $1bn. To date, the company has raised $276 million in four rounds (press release). The money is to be used to fund company expansion, including a global roadshow event at which I will be presenting on behalf of Langton Blue (disclosure: SimpliVity will be a client of Langton Blue in May). It is clear that hyper-convergence has struck a chord with customers, no doubt due to the sheer simplicity of the solutions.

Read on here
