Making All-Flash 3D TLC SSD Arrays Enterprise Ready


Good post by George Crump (thank you)

All-Flash Array vendors are now releasing systems with 3D TLC SSDs. They claim to have reached price parity, even without data efficiency, with mainstream data center hard disk arrays. 3D TLC NAND does bring the price per GB of flash storage down considerably, but it does carry a higher risk of device failure and data loss. Understanding how a vendor mitigates that risk is critical to vendor selection.

Read on here

Network, your next big storage problem!


Good post by Enrico Signoretti (thank you)

A few days ago I had an interesting chat with Andy Warfield at Coho Data, and the topic of the network/storage relationship came up several times. (Quick disclaimer: I’m currently doing some work for Coho.)

In a couple of my latest articles (here and here) I talked about why many large IT organizations prefer PODs to other topologies for their datacenters, but I totally forgot to talk about networking (I also have to admit that networking is not my field at all). So this article could be the right follow-up to those posts.

Read on here

SMR Drives: Are they too late to the game?


Good post by Petros Koutoupis (thank you)

The sudden popularity of NAND Flash has spelled doom for traditional magnetic hard disk drives (HDDs). For years we have been hearing that HDDs are reaching the end of their life; we heard the same about tape drives long before that. It would seem, though, that the prediction about HDDs may become reality sooner than expected.

Read on here

Pure gives its flash boxes some 3D TLC

Post by Chris Mellor (thank you) over at El Reg

Pure Storage wants to be its flash array customers’ best friend forever with announcements lowering flash storage cost and improving its availability.

The Silicon Valley biz is now supporting 3D TLC flash, the three-bits-per-cell stuff that has an endurance long enough for enterprise use. Other flash array suppliers using this technology include HP Enterprise, Kaminario, and Dell.

Read on here


LTO Program Announces Generation 7 Specifications

Post by Adam Armstrong (thank you)

Today the LTO Program Technology Provider Companies (HP, IBM, and Quantum) announced new specifications for the LTO Ultrium format generation 7, or LTO-7. The new specifications more than double the capacity per tape cartridge, bringing it to 15TB (compressed), up from 6.25TB in the previous generation. They also specify faster transfer speeds of up to 750MB/s, or 2.7TB/hour/drive, up from 400MB/s (1.4TB/hour/drive) in the previous generation.
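
As a quick sanity check on those figures, here is a back-of-the-envelope conversion (a minimal Python sketch; the decimal units are the ones tape vendors quote):

```python
# Back-of-the-envelope check of the LTO-7 numbers quoted above.
MB_PER_TB = 1_000_000       # decimal units, as tape vendors quote them
SECONDS_PER_HOUR = 3600

def tb_per_hour(mb_per_s: float) -> float:
    """Convert a transfer rate in MB/s to TB/hour."""
    return mb_per_s * SECONDS_PER_HOUR / MB_PER_TB

print(tb_per_hour(750))  # LTO-7: 2.7 TB/hour/drive
print(tb_per_hour(400))  # LTO-6: 1.44 TB/hour/drive (quoted as ~1.4)
print(15 / 6.25)         # capacity growth: 2.4x, i.e. "more than double"
```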

Read on here

Samsung announces 16TB SSD

Post by Robin Harris (thank you)

Not just the world’s highest capacity SSD, but the world’s highest capacity drive of any type. The PM1633 uses Samsung’s new 48 layer V-NAND, itself a technical tour-de-force, and represents a new thrust in flash storage beyond performance: capacity.

Read on here

Hadoop Storage: DAS vs. Shared

Post by George Crump (thank you)

Hadoop is a software solution that was developed to solve the challenge of performing very rapid analysis of vast, often disparate data sets, also known as big data. The results of these analytics, especially when produced quickly, can significantly improve an organization’s ability to solve problems, create new products, and even cure diseases. One of the key tenets of Hadoop is to bring the compute to the storage instead of the storage to the compute. The fundamental belief is that the network between compute and storage is too slow, impacting time to results.
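
The "bring the compute to the storage" idea is easy to sketch. The snippet below is a minimal illustration of locality-aware task placement, not Hadoop's actual scheduler; the block and node names are hypothetical:

```python
# Minimal sketch of data-local scheduling: prefer a node that already
# holds a replica of the block, and only fall back to a remote read
# over the network when no data-local node is free.
from typing import Dict, List, Optional

def place_task(block_id: str,
               replicas: Dict[str, List[str]],   # block -> nodes holding it
               free_nodes: List[str]) -> Optional[str]:
    """Pick a node for the task; data-local if possible."""
    local = [n for n in replicas.get(block_id, []) if n in free_nodes]
    if local:
        return local[0]                           # local read: no network hop
    return free_nodes[0] if free_nodes else None  # remote read over the network

# Example: block b1 lives on node2 and node4; node2 is free, so it wins.
replicas = {"b1": ["node2", "node4"]}
print(place_task("b1", replicas, ["node1", "node2", "node3"]))  # -> node2
print(place_task("b2", replicas, ["node1"]))                    # -> node1 (remote)
```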

Read on here

Docker: What do Storage Pros need to know?

Good post by George Crump (thank you)

Docker was created to solve the problems that organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer approach to the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.
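
To make the isolation point concrete, here is a minimal sketch that launches applications in containers with hard resource caps via the standard docker CLI; the image names and limit values are illustrative assumptions, not from the post:

```python
# Minimal sketch: each app runs in its own container with bounded
# resources, so a misbehaving app cannot starve the others.
import subprocess

def run_isolated(image: str, name: str,
                 memory: str = "512m", cpus: str = "1.0") -> None:
    """Launch a container with hard resource caps via the docker CLI."""
    subprocess.run([
        "docker", "run", "-d",
        "--name", name,
        "--memory", memory,   # cap RAM: the app hits its limit, not the host
        "--cpus", cpus,       # cap CPU share
        image,
    ], check=True)

# Two apps on the same host, each fenced off from the other
# (hypothetical image names):
# run_isolated("myorg/web:latest", "web")
# run_isolated("myorg/worker:latest", "worker")
```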

Read on here


Flash + Object – The Emergence of a Two Tier Enterprise

Post by George Crump (thank you)

For as long as there has been data, there has been a quest to consolidate that data onto a single storage system, but that quest never seems to be satisfied. The problem is that there are essentially two types of data: active and archive. Active data typically needs fast I/O response times at a reasonable cost. Archive data needs to be very cost-effective with reasonable response times. Storage systems that try to meet both of these needs in a single system often end up doing neither particularly well. This has led to the purchase of data- and/or environment-specific storage systems, and to storage system sprawl.
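
A two-tier placement policy can be as simple as an age-based rule. The sketch below is purely illustrative, not any vendor's implementation; the 30-day cutoff is an assumed threshold:

```python
# Minimal sketch of the two-tier idea: keep recently accessed data on
# flash, demote cold data to an object store. The threshold is an
# illustrative assumption.
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=30)   # hypothetical demotion policy

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return 'flash' for active data, 'object' for archive data."""
    return "object" if now - last_access > ARCHIVE_AFTER else "flash"

now = datetime(2015, 9, 1)
print(choose_tier(datetime(2015, 8, 28), now))  # recently touched -> flash
print(choose_tier(datetime(2015, 6, 1), now))   # cold -> object
```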

Read on here

Pure Storage FlashArray//m

Post by Justin Warren (thank you)

What I like most about the FlashArray//m is the combination of very dull things that, taken together, make this an interesting piece of infrastructure.

I tweeted out a cheeky picture of a 6509 with a Pure Storage logo on it, because that’s what the FlashArray//m reminds me of: a modular chassis with a backplane that was the workhorse core switch at multiple clients for over a decade. I quite liked them. We’ve had modular chassis like this for years in networking and server gear, so it’s somewhat astounding that storage doesn’t do this, at least not in the same ubiquitous way (software-based things on x86 servers notwithstanding).

Read on here