The Storage Requirements for 100% Virtualization

Post by George Crump (thank you)

After a rapid move from test to production, virtualization of existing servers in many companies seems to slow down. While it is true that most data centers have adopted a virtualize-first philosophy, getting those older, mission-critical workloads virtualized seems to be a thorny issue. These applications are often at the heart of an organization’s revenue or customer interaction and tend to be unpredictable in the resources they require. This is especially true when it comes to storage and networking.

Read on here

More Money for the Hyper-Converged Market

Good post by Chris M Evans (thank you)

This month, SimpliVity Corporation (www.simplivity.com) announced a Series D round of financing, raising an additional $175 million at a total valuation of $1 billion. To date, the company has raised $276 million in four rounds (press release). The money is to be used to fund company expansion, including a global roadshow event at which I will be presenting on behalf of Langton Blue (disclosure: SimpliVity will be a client of Langton Blue in May). It is clear that hyper-convergence has struck a chord with customers, no doubt due to the sheer simplicity of the solutions.

Read on here

Storage QoS – A New Requirement for Shared Storage

Good post by Eric Slack (thank you)

In the virtualized data center, shared storage is truly shared. Each attached server host may have dozens of virtual machines all accessing the same shared storage system at the same time. This creates a new requirement for shared storage systems: how to provide a guaranteed level of storage performance to mission-critical applications as they are virtualized.

Read on here
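The excerpt above describes the goal of storage QoS but not a mechanism. One common way to enforce per-workload performance limits on a shared system is a token bucket, which caps each tenant's I/O rate while allowing short bursts. The sketch below is purely illustrative and not drawn from Eric Slack's post; the class and parameter names are my own.

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter: caps a tenant at `iops`
    operations per second, with bursts up to `burst` stored tokens.
    (Hypothetical example; not from the linked post.)"""

    def __init__(self, iops, burst):
        self.rate = float(iops)       # tokens replenished per second
        self.capacity = float(burst)  # maximum tokens the bucket can hold
        self.tokens = float(burst)    # start full so a burst is allowed
        self.last = time.monotonic()

    def allow(self):
        """Return True if one I/O may proceed now, False if it must wait."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per VM or workload; mission-critical tenants get higher limits.
critical = TokenBucket(iops=5000, burst=500)
best_effort = TokenBucket(iops=500, burst=50)
```

In a real array the same idea runs in the I/O scheduler, often paired with minimum-IOPS guarantees rather than just ceilings, so a noisy neighbor cannot starve a critical workload.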

Overcoming The Risk of Mixing Storage Workloads


Post by George Crump (thank you)

Silos of storage within the storage environment are increasing at an alarming rate, with each being dedicated to a specific task or workload. Why? The answer is simple: risk mitigation. The data center can’t afford to have applications experience unpredictable drops in performance or inconsistent performance. The simplest way to assure this predictability is to dedicate a storage system to each type of workload. The problem with this, of course, is that dedicating storage systems to workloads is expensive, both in financial and human resources.

Read on here