The paradigm shift in enterprise computing 10 years from now.

Good post by Erwin van Londen (thank you)

The way businesses arrange their IT infrastructure is based upon three things: Compute, Networks and Storage. Two of these have had a remarkable shift in the way they operate over the last decade. The keyword here was virtualization. Both Compute and Networking have been torn apart and put back together in a totally different way from what we were used to from the '70s to the early 2000s.

Read on here

Is Software-Defined Storage Enough?

Good post by George Crump (thank you)

Initiatives like server virtualization, cloud infrastructure-as-a-service, and real-time analytics are allowing IT to meet today’s ever-increasing business demands. These initiatives are designed to bring agility to the data center, yet they trip and fall when they have to interact with the siloed storage infrastructure. Software Defined Storage (SDS) was supposed to be the answer. But for the most part, SDS is really storage software without the reliance on dedicated hardware, and is limited to specific storage containers. Most SDS solutions cannot extend a container across vendors, formats (file, block, object) or protocols. Even hyper-converged systems don’t help. They attempt to solve this problem by moving all enterprise data into a bigger and proprietary container at the server layer.

Read on here

How to safely use 8TB Drives in the Enterprise

Good post by George Crump (thank you)

After a hiatus of a few years, higher-capacity hard drives are coming to market. We expect 8TB drives to be readily available before the end of the year, with 10TB drives soon to follow. And at the rate that capacity demands are increasing, those drives can’t get here soon enough. But these new extremely high-capacity disk drives are being met with some trepidation. There are concerns about performance, reliability and serviceability. Can modern storage systems build enough safeguards around these products for the enterprise data center to count on them?
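The serviceability worry is easy to make concrete with some back-of-envelope arithmetic: the time to read or write a drive end-to-end grows linearly with capacity, so rebuild windows stretch accordingly. A minimal sketch, assuming a hypothetical sustained rate of 150 MB/s (not a spec from the post):

```python
# Back-of-envelope rebuild-time estimate for high-capacity drives.
# The 150 MB/s sustained rate is an assumption for illustration only;
# real-world rebuilds compete with production I/O and run slower.

def rebuild_hours(capacity_tb, mb_per_sec=150):
    """Hours to stream a drive end-to-end at a given sustained rate."""
    seconds = capacity_tb * 1e12 / (mb_per_sec * 1e6)  # TB -> bytes -> s
    return seconds / 3600

print(round(rebuild_hours(8), 1))   # 8 TB drive, best case
print(round(rebuild_hours(10), 1))  # 10 TB drive, best case
```

Even in this best case an 8TB drive takes well over half a day to rebuild, which is why the article asks whether storage systems can build enough safeguards around these drives.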

Read on here

Quantum Doubles Down on Data Archiving

Post by Pedro Hernandez (thank you)

Quantum is tackling the growth of unstructured data, and its growing impact on IT budgets, with three new offerings unveiled today.

The San Jose, Calif.-based data backup specialist has taken the wraps off its new Artico NAS appliance, which provides fast file services courtesy of its internal disks while supporting data archival operations that target the company’s Lattus Object Storage hardware, Scalar tape libraries (i80, i500, i6000) or Q-Cloud Archive.

Read on here

Unstructured Data is distracting Backup Administrators

Post by George Crump (thank you)

File based data accounts for more than 80 percent of capacity demand and backup administrators spend most of their time protecting this unstructured data. But the remaining set, structured data, will cause the organization the most harm if it is not recoverable. This data (databases, VM Images) requires special backups and fast recoveries. The key to protecting the organization from disaster is to eliminate the unstructured data protection problem. If backup administrators could focus 100% of their time on 20% of their problem, then organizations would be in a much better position to protect themselves from a disaster.

Read on here

Recalculating Odds of RAID5 URE Failure

Good Post by Matt Simmons (thank you)

Alright, my normal RAID-5 caveats stand here. Pretty much every RAID level other than 0 is better than a single parity RAID, until RAID goes away. If you care about your data and speed, go with RAID-10. If you’re cheap, go with RAID-6. If you’re cheap and you’re on antique hardware, or if you just like arguing about bits, keep reading about RAID-5.
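The calculation the post revisits is the classic one: given a published unrecoverable read error (URE) rate, what are the odds of hitting at least one URE while reading every surviving drive during a RAID-5 rebuild? A minimal sketch of that math, assuming the commonly quoted consumer-class rate of one URE per 10^14 bits read (whether that figure reflects real drives is exactly what the post argues about):

```python
# Illustrative RAID-5 URE odds, assuming a URE rate of 1 in 10^14 bits.
# This naive model treats every bit read as an independent trial, which
# is the assumption the linked post questions.

def p_ure_during_rebuild(drive_tb, surviving_drives, ure_rate_bits=1e14):
    """Probability of at least one URE while reading all surviving
    drives end-to-end during a single-parity rebuild."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    p_clean = (1 - 1 / ure_rate_bits) ** bits_read      # no URE at all
    return 1 - p_clean

# Four 8 TB drives in RAID-5: a rebuild reads the three survivors in full.
print(round(p_ure_during_rebuild(8, 3), 2))
```

Under these naive assumptions the rebuild of a four-drive 8TB array fails with roughly 85% probability, which is why the headline numbers look so alarming and why the post argues they deserve recalculating.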

Read on here

What is Copy Data?

Good Post by George Crump (thank you)

Copy Data is the term used to describe the copies of primary data made for data protection, testing, archives, eDiscovery and analytics. The typical focus of copy data is data protection, to recover data when something goes wrong. The problem is that each type of recovery requires a different copy of data. Recovery from corruption requires snapshots. Recovery from server failure requires disk backup. Protection of the disk backup requires tape. Finally, recovery from a site disaster requires that all these copies be off-site. Add to the data protection copy problem all the copies being made for test/development, archives, eDiscovery and now analytics. The end result: copy data is about much more than data protection, and providing the capacity to manage all these copies has become a significant challenge to the data center.

Read on here

The Physics of Spinning Disk – How We Got To 10 TB

Post by Scott D. Lowe (thank you)

The year is 1956.  Computers are starting to take hold in businesses and IBM recognizes the need for a new kind of data storage.  The company succeeds in creating what is considered to be the world’s first hard drive.  Bigger than a refrigerator and weighing more than a ton, the RAMAC stored a whopping 5 megabytes of data.

Read on here

Will the foundation of your Disaster Recovery plan collapse?

Good post by George Crump (thank you)

The ability to replicate data between data centers as it changes is an essential ingredient of any enterprise class storage system. Data centers count on this capability as the foundational component in their disaster recovery (DR) plans. But this foundation is undergoing several seismic shifts that are making it unstable and, combined, may cause the entire DR strategy to collapse. A DR plan failure can mean loss of revenue, regulatory fines and eventually may cause the failure of the business.

Read on here

Top 4 Reasons Why The World Needs Tape More Than Ever

Post by Christian Toon (thank you)

Magnetic tape was first used to store computer data in 1951. By the mid 1970s pretty much everyone relied on tape cassettes and cartridges. Then, with the arrival of data storage discs and the cloud, tape’s popularity began to plummet. The shiny new alternatives positioned tape as a dated, costly, inflexible, unreliable technology that few forward-thinking organisations would dream of using as part of their data storage and protection infrastructure. Consequently, its image bruised and tarnished, tape was expected to quietly disappear into IT oblivion. Only it didn’t.

Read on here