Is Deduplication Useless on Archive Data?

Good post by George Crump (thank you)

One of the techniques that storage vendors use to reduce the cost of hard disk-based storage is deduplication. Deduplication is the elimination of redundant data across files. The technology is ideal for backup, since so much of a current copy of data is similar to the prior copy. The few extra seconds required to identify redundant data are worth the savings in disk capacity. Deduplication for primary storage is popular for all-flash arrays. While the level of redundancy is not as great, the premium price of flash makes any capacity savings important. In addition, given the excess performance of AFAs, the deduplication feature can often be added without a noticeable performance impact. There is one process, though, where deduplication provides little value: archive. IT professionals need to measure costs differently when considering a storage destination for archive.
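
To make the mechanics concrete, here is a minimal sketch (my own Python illustration, not from the post) of fixed-size block deduplication: every block is fingerprinted with a hash, and only blocks whose fingerprint has not been seen before consume new capacity. The 4 KB chunk size and file names are illustrative assumptions; real systems often use variable-size chunking.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed-size chunk

def dedupe_ratio(paths):
    """Estimate the capacity savings deduplication would yield across a set of files."""
    seen = set()              # fingerprints of blocks already "stored"
    logical = physical = 0    # bytes before and after deduplication
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                logical += len(block)
                fingerprint = hashlib.sha256(block).digest()
                if fingerprint not in seen:   # only unique blocks consume disk capacity
                    seen.add(fingerprint)
                    physical += len(block)
    return logical / physical if physical else 1.0

# Two nightly backup copies usually share most of their blocks, so the ratio is high;
# archive data, written once and rarely repeated, tends to yield a ratio near 1.
# print(dedupe_ratio(["backup_mon.img", "backup_tue.img"]))
```

The commented-out comparison hints at the article's point: the ratio is large when successive backups repeat blocks, and close to 1.0 for mostly unique archive data.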

Read on here

The paradigm shift in enterprise computing 10 years from now.

Good post by Erwin van Londen (thank you)

The way businesses arrange their IT infrastructure is based upon three things: Compute, Networks and Storage. Two of these have had a remarkable shift in the way they operate over the last decade. The keyword here was virtualization. Both Compute and Networking have been torn apart and put back together in a totally different way from what we were used to from the '70s to the early 2000s.

Read on here

Docker and Storage – Understanding Docker I/O

Good Post by George Crump (thank you)

Designing a Docker Storage Infrastructure

A recent Storage Switzerland report covered the basics of Docker and Storage: what Docker is and how it impacts storage. Container technology, and Docker specifically, places unique demands on the storage infrastructure that most legacy storage architectures are ill-prepared to handle. The initial concern is developing a storage infrastructure that will support a container-based DevOps environment. The second concern is that these infrastructures will have to support containers in production as the value of something more granular than a virtual machine (VM) is understood.
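
As a hedged illustration of the most basic storage interaction (not from the report), the snippet below uses the Docker SDK for Python to run a container with a host directory bind-mounted as a volume, which is how a container gets data that outlives it. The image, command, and host path are placeholder assumptions.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Run a throwaway container that writes into a bind-mounted host directory.
# Anything written under /data survives the container's removal because it
# actually lives on the host (or, in a real deployment, on shared storage).
output = client.containers.run(
    "alpine:latest",
    command="sh -c 'echo hello > /data/hello.txt && cat /data/hello.txt'",
    volumes={"/srv/container-data": {"bind": "/data", "mode": "rw"}},  # host path is a placeholder
    remove=True,
)
print(output.decode())
```

The container itself is disposable; the storage infrastructure behind that mount is what actually has to meet the persistence and performance demands.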

Read on here

Microsegmentation: How VMware Addresses the Container Security Issue

Post by Scott M. Fulton III (thank you)

Easily the most astonishing result from ClusterHQ’s most recent State of Container Usage survey [PDF] was that nearly three-fourths of 229 IT professional respondents said their data centers are running containers in a hypervisor-virtualized environment — that is to say, a container environment inside the safety of a separate virtualization layer. That figure was bolstered by about 61 percent of respondents saying that security remained a barrier to their data centers’ adoption of containers in production.

Read on here

Today’s Storage: Same As It Ever Was

Good post by Stephen Foskett (thank you)

Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.

Read on here

The Benefits of a Cloud Integrated Hyper-converged Architecture

Post by George Crump (thank you)

Hyper-converged architectures (HCA) consolidate compute, storage and networking onto a single server and then, through software, aggregate these servers, creating a shared pool of resources. These resources can then be allocated to virtual machines (VMs) based on their performance and capacity demands. The goal is to simplify the purchasing, implementation and operation of the data center by consolidating, or converging, it. The logical next step for HCA is to extend its capabilities to the cloud, allowing data centers of all sizes to achieve greater flexibility and resilience against disaster.
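
Purely as an assumption of how the pooling idea might be reasoned about (not any vendor's actual scheduler), this small Python sketch aggregates the nodes' resources into one pool and places a VM on the first node that can satisfy its capacity and performance demands.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu_free: int       # vCPUs still available on this node
    storage_free: int   # GB of local capacity contributed to the shared pool
    iops_free: int      # rough performance headroom

@dataclass
class VM:
    name: str
    cpu: int
    storage: int
    iops: int

def place(vm: VM, nodes: List[Node]) -> Optional[str]:
    """First-fit placement: the pool is the sum of all nodes' resources,
    but an individual VM's demand still has to fit on a single node."""
    for node in nodes:
        if (node.cpu_free >= vm.cpu
                and node.storage_free >= vm.storage
                and node.iops_free >= vm.iops):
            node.cpu_free -= vm.cpu
            node.storage_free -= vm.storage
            node.iops_free -= vm.iops
            return node.name
    return None  # pool is out of headroom; the fix is to add another node

nodes = [Node("hca-01", 32, 4000, 50000), Node("hca-02", 32, 4000, 50000)]
print(place(VM("sql-vm", cpu=8, storage=500, iops=20000), nodes))  # -> hca-01
```

Extending that pool to the cloud, as the post describes, effectively means treating cloud capacity as one more place the scheduler can put, or protect, a workload.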

Read on here

A Flash Storage Technical and Economic Primer

Good post by Scott D. Lowe (thank you)

Flash memory is a type of non-volatile memory storage, which can be electrically erased and programmed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory which could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process; it needed to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the E2PROM was created. The E2PROM type of memory cell was electrically erasable, but it was eight times the cost of the EPROM. The high cost of the E2PROM led to rejection from consumers who wanted the low cost of the EPROM coupled with the electrical erasability of the E2PROM.

Read on here

A quick introduction to Rubrik

I first encountered Rubrik at this year’s Partner Exchange (PEX) 2015 in San Francisco. They had some promotional flyers made up labeled “Backup Still Sucks”. I guess a lot of people can relate to that. I had a chat with Julia Lee, who used to be a storage product marketing manager here at VMware, but recently moved to Rubrik.

Read on here

A closer look at SpringPath

Good post by Cormac Hogan (thank you)

Another hyper-converged storage company has just emerged out of stealth. Last week I had the opportunity to catch up with the team from SpringPath (formerly StorVisor), based in Silicon Valley. The company has a bunch of ex-VMware folks on-board, such as Mallik Mahalingam and Krishna Yadappanavar. Mallik and Krishna were both involved in a number of I/O related initiatives during their time at VMware. Let’s take a closer look at their new hyper-converged storage product.

Read on here

Data Protection: All Starts with an Architecture

Post by Edward Haletky (thank you)

At The Virtualization Practice, we have systems running in the cloud as well as on-premises. We run a 100% virtualized environment, with plenty of data protection, backup, and recovery options. These are all stitched together using one architecture: an architecture developed through painful personal experiences. We just had an interesting failure—nothing catastrophic, but it could have been, without the proper mindset and architecture around data protection. Data protection these days does not just mean backup and recovery, but also prevention and redundancy.

Read on here