The Benefits of a Cloud Integrated Hyper-converged Architecture

Post by George Crump (thank you)

Hyper-converged architectures (HCA) consolidate compute, storage and networking onto a single server and then, through software, aggregate these servers into a shared pool of resources. These resources can then be allocated to virtual machines (VMs) based on their performance and capacity demands. The goal is to simplify the purchasing, implementation and operation of the data center by consolidating, or converging, it. The logical next step for HCA is to extend its capabilities to the cloud, allowing data centers of all sizes to achieve greater flexibility and resilience in the event of a disaster.
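As a minimal sketch, purely for illustration and not any vendor's actual scheduler, this is roughly what "pool the resources, then place VMs against their demands" looks like; all class and field names here are hypothetical:

```python
# Hypothetical sketch, not any vendor's API: pool per-node resources across an
# HCA cluster and place a VM according to its capacity and performance demands.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    free_vcpus: int
    free_gb: int       # usable storage capacity (GB)
    free_iops: int     # performance headroom

@dataclass
class VmRequest:
    vcpus: int
    gb: int
    iops: int

def place_vm(cluster: List[Node], vm: VmRequest) -> Optional[Node]:
    """Pick the node with the most performance headroom that still fits the VM."""
    fits = [n for n in cluster
            if n.free_vcpus >= vm.vcpus and n.free_gb >= vm.gb and n.free_iops >= vm.iops]
    if not fits:
        return None            # pool exhausted: time to add another node
    best = max(fits, key=lambda n: n.free_iops)
    best.free_vcpus -= vm.vcpus
    best.free_gb -= vm.gb
    best.free_iops -= vm.iops
    return best

pool = [Node("node1", 16, 4000, 50000), Node("node2", 8, 2000, 80000)]
print(place_vm(pool, VmRequest(vcpus=4, gb=500, iops=10000)).name)   # -> node2
```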

Read on here

It’s time to “VMware” Storage

Good post by George Crump (thank you)

Before hypervisors like VMware, Hyper-V and KVM came to market, data centers had few options when it came to managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which, while it simplified operations and support, meant that each application was at the mercy of the others in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors came to market, the latter choice was considered best practice.

Read on here

How to safely use 8TB Drives in the Enterprise

Good post by George Crump (thank you)

After a hiatus of a few years, higher-capacity hard drives are coming to market. We expect 8TB drives to be readily available before the end of the year, with 10TB drives soon to follow. And at the rate that capacity demands are increasing, those drives can't get here soon enough. But these new extremely high-capacity disk drives are being met with some trepidation. There are concerns about performance, reliability and serviceability. Can modern storage systems build enough safeguards around these products for the enterprise data center to count on them?
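A quick back-of-the-envelope calculation, using an assumed rebuild rate rather than any vendor's specification, shows why serviceability is one of those concerns:

```python
# Rough illustration with assumed numbers (not vendor specs): even at full
# streaming speed, rebuilding a failed very high-capacity drive takes a long time.
capacity_tb = 8
rebuild_mb_per_s = 150          # assumed sustained rebuild rate for one drive
seconds = capacity_tb * 1_000_000 / rebuild_mb_per_s
print(f"Minimum rebuild time: {seconds / 3600:.1f} hours")   # ~14.8 hours, best case
```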

Read on here

Designing Primary Storage to Ease the Backup Burden

Good post by George Crump (thank you)

When IT planners map out their primary storage architectures they typically focus on how well the system will perform, how far it will scale and how reliable it will be. Data protection, the process that guards against corruption or system failure in primary storage, or even a site disaster, is too often a secondary consideration, and one often made by someone else. But what if the primary storage system could be designed to protect itself from these occurrences? Would that make it possible to simplify or even eliminate the data protection process altogether?

Read on here

Orchestrating Copy Data

Post by George Crump (thank you)

2015 will be THE year of copy data management. Multiple vendors will bring solutions to market. Many of these solutions will leverage snapshot technology in one form or another in an effort to reduce the capacity requirements of the secondary data copies needed to drive data protection, business analytics, and test/dev operations. But there is another key resource that needs to be saved: time. Copy data solutions need to provide a high level of orchestration and analysis so that system administrators can be more efficient and the chance of error is reduced.
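As a rough sketch of the idea, with hypothetical names rather than any product's API, an orchestrated approach keeps one protection copy and hands out space-efficient virtual copies instead of maintaining a separate full copy per use case:

```python
# Illustrative sketch only (hypothetical names, not a specific product's API):
# one protection snapshot per volume, with space-efficient virtual copies
# handed to each consumer instead of separate full physical copies.
class CopyDataManager:
    def __init__(self):
        self.snapshots = []        # ordered point-in-time copies
        self.virtual_copies = {}   # consumer -> snapshot it was cloned from

    def protect(self, volume):
        snap = {"volume": volume, "id": len(self.snapshots) + 1}
        self.snapshots.append(snap)
        return snap

    def provision(self, consumer):
        """Give test/dev or analytics a writable clone of the latest snapshot."""
        clone = self.snapshots[-1]
        self.virtual_copies[consumer] = clone
        return clone

mgr = CopyDataManager()
mgr.protect("prod-db")
mgr.provision("test-dev")
mgr.provision("analytics")
print(len(mgr.snapshots), "physical copy,", len(mgr.virtual_copies), "virtual copies")
```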

Read on here

The State of Deduplication in 2015

Good post by George Crump (thank you)

At its core, deduplication is an enabling technology. First, it enabled disk based backup devices to become the primary backup target in the data center. Now it promises to enable the all-flash data center by driving down the cost of flash storage. Just as deduplication became table stakes for backup appliances, it is now a required capability for flash and hybrid primary storage.
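For readers unfamiliar with the mechanism, here is a minimal, purely illustrative sketch of block-level deduplication; real arrays differ in block sizing, hashing and metadata handling, but the principle is that identical blocks are stored once and referenced by fingerprint:

```python
# Minimal illustration of block-level deduplication (not any array's actual
# implementation): identical blocks consume physical capacity only once.
import hashlib

store = {}       # fingerprint -> unique block data (physical capacity)
refs = []        # logical view: ordered list of fingerprints

def write_block(block: bytes) -> None:
    fp = hashlib.sha256(block).hexdigest()
    if fp not in store:          # only unique blocks consume capacity
        store[fp] = block
    refs.append(fp)

for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    write_block(block)

print(f"Logical blocks: {len(refs)}, physical blocks: {len(store)}")   # 4 vs 2
```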

Read on here

Not all Snapshots are the same

Good post by George Crump (thank you)

In an upcoming webinar, Storage Switzerland will make the case for using snapshots as a primary component of data protection. For this strategy to work, several things are needed from the storage infrastructure. First, it must be able to keep an almost unlimited number of snapshots; second, it needs to have a replication process that can transfer those snapshot deltas (the changed blocks of data) to a safe place; and third, the entire storage infrastructure has to be very cost effective. In this column we will look at that first requirement, the ability to create and store a large number of snapshots without impacting performance.
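As a simplified illustration of the second requirement, using assumed data structures rather than a real snapshot implementation, only the blocks written since the previous snapshot make up the delta that has to be replicated:

```python
# Simplified illustration (assumed data structures) of why snapshot deltas are
# small: only blocks changed since the last snapshot need to go to the replica.
volume = {}            # block number -> data
last_snapshot = {}     # block contents captured at the previous snapshot

def take_snapshot():
    """Return the delta (changed blocks) and roll the baseline forward."""
    delta = {blk: data for blk, data in volume.items()
             if last_snapshot.get(blk) != data}
    last_snapshot.update(volume)
    return delta

volume.update({0: b"boot", 1: b"data-v1", 2: b"logs"})
print(len(take_snapshot()))   # 3 changed blocks: the initial writes

volume[1] = b"data-v2"        # only one block changes before the next snapshot
print(len(take_snapshot()))   # 1 changed block to transfer
```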

Read on here

Should you be able to turn All-Flash Deduplication off?

Good post by George Crump (thank you)

Deduplication, along with compression, provides the ability to use premium-priced flash capacity more efficiently. But capacity efficiency comes with at least some performance impact. This is especially true on all-flash arrays, where data efficiency features can't hide behind hard disk drive latency. This has led some all-flash vendors, like Violin Memory, to claim that an on/off switch for deduplication should be a requirement on all-flash arrays. Is that the case?
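A rough calculation with assumed latencies, not measured figures, illustrates why overhead that disappears behind disk latency becomes visible on flash:

```python
# Rough illustration with assumed latencies (not benchmark results) of why
# inline deduplication overhead is hidden on disk but visible on flash.
hdd_read_us = 8000      # ~8 ms seek + rotation on a hard disk
flash_read_us = 150     # ~150 us on a flash medium
dedup_overhead_us = 100 # assumed hashing/lookup cost per I/O

for name, base in [("HDD", hdd_read_us), ("Flash", flash_read_us)]:
    pct = dedup_overhead_us / base * 100
    print(f"{name}: {pct:.0f}% added latency")   # HDD ~1%, flash ~67%
```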

Read on here

Fibre Channel or Ethernet?

There’s been a lot of discussion about the emergence of Ethernet as a storage protocol. Analyst George Crump of Storage Switzerland interviews Scott Shimomura of Brocade about the differences between FC and FCoE with regard to performance, price, and complexity. With 60% to 80% of the market still using FC, and with the FCIA announcement of Gen 6, the majority of FC customers have compelling reasons to continue using Fibre Channel.