Making All-Flash 3D TLC SSD Arrays Enterprise Ready


Good post by George Crump (thank you)

All-flash array vendors are now releasing systems with 3D TLC SSDs. They claim to have reached price parity with mainstream data center hard disk arrays, even without counting data-efficiency features. 3D TLC NAND does bring the price per GB of flash storage down considerably, but it also carries a higher risk of device failure and data loss. Understanding how a vendor mitigates that risk is critical to vendor selection.
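The endurance concern behind TLC is easy to put in rough numbers. Here is a back-of-envelope sketch of how long a drive's rated program/erase budget lasts under a given write load; all figures (capacity, P/E cycles, daily writes, write amplification) are illustrative assumptions, not any vendor's specs:

```python
# Back-of-envelope endurance estimate for a 3D TLC SSD.
# Every number below is an illustrative assumption.

def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amplification):
    """Years until the rated program/erase budget is consumed."""
    total_writable_gb = capacity_gb * pe_cycles            # raw endurance budget
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# Assumed: 3.84 TB drive, 3,000 P/E cycles (a common TLC ballpark),
# 2 TB of host writes per day, write amplification factor of 3.
years = drive_lifetime_years(3840, 3000, 2000, 3)
print(f"~{years:.1f} years")  # ~5.3 years
```

Halve the P/E rating or double the write amplification and the lifetime halves too, which is why the article's question of how a vendor manages wear matters.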

Read on here

Docker: What do Storage Pros need to know?

Good post by George Crump (thank you)

Docker was created to solve the problems that organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer applied to the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.

Read on here


The Benefits of a Cloud Integrated Hyper-converged Architecture

Post by George Crump (thank you)

Hyper-converged architectures (HCA) consolidate compute, storage and networking onto a single server and then, through software, aggregate these servers into a shared pool of resources. These resources can then be allocated to virtual machines (VMs) based on their performance and capacity demands. The goal is to simplify the purchasing, implementation and operation of the data center by consolidating, or converging, it. The logical next step for HCA is to extend its capabilities to the cloud, allowing data centers of all sizes to achieve greater flexibility and better resilience against disaster.
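The pooling idea above can be sketched in a few lines: each node contributes CPU and storage, the software layer sums them into one pool, and VMs draw from that pool rather than from any one box. The node sizes and VM demands here are hypothetical, purely to illustrate the model:

```python
# Minimal sketch of hyper-converged resource pooling.
# Node capacities and VM requests are hypothetical examples.

nodes = [
    {"cpu_cores": 32, "storage_tb": 10},
    {"cpu_cores": 32, "storage_tb": 10},
    {"cpu_cores": 64, "storage_tb": 20},
]

# The software layer aggregates per-node resources into one shared pool.
pool = {
    "cpu_cores": sum(n["cpu_cores"] for n in nodes),
    "storage_tb": sum(n["storage_tb"] for n in nodes),
}

def allocate(pool, cpu_cores, storage_tb):
    """Reserve resources for a VM if the pool can satisfy the request."""
    if pool["cpu_cores"] >= cpu_cores and pool["storage_tb"] >= storage_tb:
        pool["cpu_cores"] -= cpu_cores
        pool["storage_tb"] -= storage_tb
        return True
    return False

allocate(pool, cpu_cores=8, storage_tb=2)  # a VM draws from the pool
print(pool)                                # remaining shared capacity
```

Adding a node simply grows the pool, which is the "scale-out" property the architecture is sold on.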

Read on here

Flurry of new data storage technology can bring confusion

Post by Rich Castagna (thank you)

Confusion reigns in the storage world, as new data storage technology tries to find its place in the data center.

In a blizzard, it’s hard to see a single snowflake. And with the avalanche of new data storage technology that has swirled around data centers the past couple of years, it can be pretty tough to pick out that exemplary piece of engineering innovation and dexterity. They say no two snowflakes are alike (how “they” came to that conclusion, I’ll never know) and, similarly, this data storage maelstrom is marked by a staggering number of new products and product categories. In other words, it’s tough to figure out.

Read on here

Which All-flash Architecture do you prefer?

Good post by George Crump (thank you)

The title of this entry, "Which All-flash Architecture Do You Prefer?", was actually a question asked on the LinkedIn group "Storage: SAN, NAS" a couple of days ago. It was in response to a recent post by Calvin Zito @HPStorageGuy that was also discussing the importance of good architecture design. It was a great question that I responded to right away, and below is a more organized version of my response.

Read on here

Free Disk? No Thanks

Good post by George Crump (thank you)

This morning I am flying to New York City to attend Fujifilm’s 6th Annual Global IT Executive Summit. The theme of this year’s event is “Into Tomorrow with Tape Technology, Preserving and Protecting Critical Data”. This is a data protection event grounded in reality. Tape is not dead; it is alive and has a bright future. Data centers that ignore tape technology do so at their own peril.

Read on here

The promise of next-generation WAN optimization

Good post by Enrico Signoretti (thank you)

Bandwidth, throughput, and latency aren’t issues when you are within the boundaries of a data center, but things drastically change when you have to move data over a distance. Applications are designed to process data and provide results as fast as possible, because users and business processes now require instant access to resources of all kinds. This is not easy to accomplish when data is physically far from where it is needed.
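The distance problem is concrete when you model it: over a WAN, effective throughput is often capped not by the link rate but by the transfer window divided by the round-trip time, which is exactly what WAN optimizers attack. The numbers below (payload size, link speed, RTTs, a 64 KB window) are illustrative assumptions:

```python
# Why distance hurts: a naive transfer-time model where throughput is
# the lower of the raw link rate and the window/RTT limit.
# All figures are illustrative assumptions.

def transfer_time_s(size_gb, bandwidth_mbps, rtt_ms, window_kb=64):
    """Seconds to move size_gb over the link, under a fixed-window model."""
    link_rate = bandwidth_mbps * 1e6 / 8                 # bytes/second from raw bandwidth
    window_rate = (window_kb * 1024) / (rtt_ms / 1000)   # bytes/second from window / RTT
    effective = min(link_rate, window_rate)
    return size_gb * 1e9 / effective

# Same 10 GB payload, same 1 Gb/s link:
lan = transfer_time_s(10, 1000, rtt_ms=0.5)   # inside the data center
wan = transfer_time_s(10, 1000, rtt_ms=80)    # across a continent
print(f"LAN: {lan:.0f} s, WAN: {wan:.0f} s")  # LAN: 80 s, WAN: 12207 s
```

Same link, same data, yet the cross-country transfer takes hours instead of minutes, which is the gap that deduplication, compression, and protocol acceleration in WAN optimizers try to close.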

Read on here (needs registration)

HP StoreOnce VSA: Software-defined data protection for your virtualized environments


Post by Ashwin Shetty (thank you)

I’m excited about VMworld next week, especially given the way software-defined storage is changing the way organizations implement storage technology. Since we are so close to VMworld, let me focus specifically on data protection, and give you a bit more information on some of the things you can expect to see and hear from us at the big VMware event next week in San Francisco.

The exponential growth of data has forced your organization to look for cost-effective data protection solutions for your virtualized environments, whether in the data center or in small and remote offices. Most vendors provide a data deduplication solution that requires dedicated storage hardware. This isn’t always cost-effective for remote or virtualized environments. To address this issue while providing greater flexibility, HP has extended the StoreOnce family to provide a software-defined backup solution: HP StoreOnce VSA.

Read on here

VMware Announces Software Defined Infrastructure with EVO:RAIL

Good post by Chris Wahl (thank you)

Wow, the world of converged infrastructure sure is on fire. It seems that everyone is looking to stuff all of the data center food groups into an appliance-like node for simpler data center architecture models. Further validating this idea, VMware has entered the ring with their hyper-converged infrastructure offering called EVO (check out the landing page here). For a moment, however, let’s take a step back and look at the various tiers of convergence that are available to data center customers today:

Read on here

Rethinking Enterprise Storage

Post by Colm Keegan (thank you)

With soaring data growth occurring across all industries, IT planners need to rethink how they design and implement enterprise storage technology. Traditional NAS and SAN platforms have served as the bulwark of data center storage capacity for decades; however, storing information on these premium storage assets is becoming increasingly cost-prohibitive for many organizations. As a result, a new paradigm is needed to augment existing data center storage assets and more efficiently store and protect the vast amounts of information piling up across enterprise environments.

Read on here