Docker: What do Storage Pros need to know?

Good post by George Crump (thank you)

Docker was created to solve the problems organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer for the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest of the applications are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.

Read on here


Getting to know the Network Block Device Transport in VMware vStorage APIs for Data Protection

Post by Abdul Rasheed (thank you)

When you back up a VMware vSphere virtual machine using vStorage APIs for Data Protection (VADP), one of the common ways to transmit data from the VMware data store to the backup server is the Network Block Device (NBD) transport. NBD is a Linux-like module that attaches to the VMkernel and makes the snapshot of the virtual machine visible to the backup server as if the snapshot were a block device on the network. While NBD is quite popular and easy to implement, it is also one of the least understood transport mechanisms in VADP-based backups.

Read on here

The Benefits of a Cloud Integrated Hyper-converged Architecture

Post by George Crump (thank you)

Hyper-converged architectures (HCA) consolidate compute, storage and networking onto a single server and then, through software, aggregate these servers, creating a shared pool of resources. These resources can then be allocated to virtual machines (VMs) based on their performance and capacity demands. The goal is to simplify the purchasing, implementation and operation of the data center by consolidating or converging it. The logical next step for HCA is to extend its capabilities to the cloud, allowing data centers of all sizes to achieve greater flexibility and resilience against disaster.

Read on here

When Solid State Drives are not that solid

Post by Adam Surak (thank you)

It looked like just another page in the middle of the night. One of the servers of our search API stopped processing the indexing jobs for an unknown reason. Since we build systems at Algolia for high availability and resiliency, nothing bad was happening. The new API calls were correctly redirected to the rest of the healthy machines in the cluster, and the only impact on the service was one woken-up engineer. It was time to find out what was going on.

Read on here, and also have a look at the detailed Comments

Faster Ethernet Gets Weird

Good post by Stephen Foskett (thank you)

Once upon a time there was Ethernet. Every half decade or so, the industry got together and worked out a faster version. Sometimes they didn’t totally agree, but a standard emerged at 10x the speed of the previous version. Throw all that out the window: Faster Ethernet is coming, and it’s going to be weird!

Read on here and check out the Comments section too

Flash, Trash and data-driven infrastructures!

Post by Enrico Signoretti (thank you)

I’ve been talking about two-tier storage infrastructures for a while now. End users are targeting this kind of approach to cope with capacity growth and performance needs. The basic idea is to leverage Flash memory characteristics (All-flash, Hybrid, hyperconvergence) on one side and implement huge storage repositories, where they can safely store all the rest (including pure Trash) at the lowest possible cost, on the other. The latter is lately also referred to as a data lake.

Read on here

Is a Copy a Backup?

Good post by W.Curtis Preston (thank you)

Are we breaking backup in a new way by fixing it? That’s the thought I had while interviewing Bryce Hein from Quantum. It made me think about a blog post I wrote four years ago asking whether or not snapshots and replication could be considered a backup. The interview is an interesting one and the blog post has a lot of good points, along with quite a bit of banter in the comments section.

Read on here

Flash + Object – The Emergence of a Two Tier Enterprise

Post by George Crump (thank you)

For as long as there has been data, there has been a quest to consolidate that data onto a single storage system, but that quest never seems to be satisfied. The problem is that there are essentially two types of data: active and archive. Active data typically needs fast I/O response time at a reasonable cost. Archive needs to be very cost effective with reasonable response times. Storage systems that try to meet both of these needs in a single system often end up doing neither particularly well. This has led to the purchase of data- and/or environment-specific storage systems, and to storage system sprawl.

Read on here

VVOLs are more than just “per-VM” storage volumes

Good post by Luca Dell’Oca (thank you)

As I’m following closely the growth and evolution of this new technology for vSphere environments, and I’m still in search of a solution to play with VVOLs in my lab, I’ve found an article on the blogosphere and some additional comments on Twitter that made me re-think a bit about the real value of VVOLs.

It’s just a “per-VM” storage?

The original article comes from one of my favorite startups, Coho Data. In this article Suzy Visvanathan explains why Coho, being an NFS-based storage system, doesn’t really need to support VVOLs.

Read on here

Facebook’s SSD findings: Failure, fatigue and the data center

Post by Robin Harris (thank you)

SSDs revolutionized data storage, even though we know little about how well they work. Now researchers at Facebook and Carnegie Mellon share millions of hours of SSD experience.

Millions of SSDs are bought every year. It’s easy to be impressed by fast boots and app starts. But what about 24/7 data center operations? What are the common problems that admins should be concerned about?

Read on here
