Designing Backup to replace Primary Storage


Good post by George Crump (thank you)

Users and application owners expect that the systems they use will never go down and, if they do, that they will be returned to operation quickly with little data loss. In our article “Designing Primary Storage to Ease the Backup Burden” we discussed how to architect a primary storage infrastructure that can help meet these challenges. We call this design Protected Primary Storage. But this design can be expensive, especially if it is applied to every application in the data center.

Read on here

VMware Metro Storage Cluster Overview

Good post by Derek Hennessy (thank you)

VMware Metro Storage Cluster

VMware Metro Storage Cluster (vMSC) allows vCenter to stretch across two data centers in geographically dispersed locations. Normally, in vSphere 5.5 and below at least, vCenter would be deployed in Linked Mode so that two vCenters can be managed as one. With vMSC, however, it’s possible to have one vCenter manage all resources across two sites and leverage the underlying stretched storage and networking infrastructures. I’ve written previous posts on NetApp MetroCluster describing how a stretched storage cluster is spread across two disparate data centers. I’d also recommend reading a previous post on vMSC by Paul Meehan over on www.virtualizationsoftware.com. The idea behind this post is to provide the VMware view for the MetroCluster posts and to give a better idea of how MetroCluster storage links into virtualization environments.
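To make the stretched-cluster idea concrete, here is a minimal Python sketch (hypothetical objects, not the pyVmomi or vSphere API) of the DRS-style site-affinity placement a vMSC design typically uses: each VM is kept on hosts at the site that owns its storage, and only restarts cross-site when that entire site is down.

```python
# Minimal sketch of vMSC-style site affinity; objects are hypothetical.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    site: str          # "SiteA" or "SiteB"
    healthy: bool = True

@dataclass
class VM:
    name: str
    storage_site: str  # site where the VM's datastore is primary

def place_vm(vm: VM, hosts: list[Host]) -> Host:
    """Prefer a healthy host at the VM's storage site; fail over cross-site."""
    local = [h for h in hosts if h.site == vm.storage_site and h.healthy]
    remote = [h for h in hosts if h.site != vm.storage_site and h.healthy]
    if local:
        return local[0]   # normal case: keep I/O local to the owning site
    if remote:
        return remote[0]  # site failure: HA restarts the VM at the other site
    raise RuntimeError("no healthy hosts in the stretched cluster")

hosts = [Host("esx-a1", "SiteA"), Host("esx-b1", "SiteB")]
print(place_vm(VM("app01", "SiteA"), hosts).name)  # esx-a1
```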

Read on here

The Storage Requirements for 100% Virtualization

Post by George Crump (thank you)

After a rapid move from test to production, the virtualization of existing servers in many companies seems to slow down. While it is true that most data centers have adopted a virtualize-first philosophy, getting those older, mission-critical workloads virtualized seems to be a thorny issue. These applications are often at the heart of an organization’s revenue or customer interaction and tend to be unpredictable in the resources they require. This is especially true when it comes to storage and networking.

Read on here

VVOLs and VMware

Post by Christine Taylor (thank you)

The definition of VVOLs is simple but the effect is ground-breaking. Here is the simple definition part: Virtual Volumes (VVOL) is an out-of-band communication protocol between array-based storage services and vSphere 6.

And here is the ground-breaking part: VVOLs enables a VM to communicate its data management requirements directly to the storage array. The idea is to automate and optimize storage resources at the VM level instead of placing data services at the LUN (block storage) or the file share (NAS) level.
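As a rough illustration of that VM-level idea (all names here are hypothetical; this is not the VASA protocol itself), the sketch below contrasts LUN-level data services, where every VM on the LUN inherits one policy, with per-VM requirements that the array can satisfy per virtual volume.

```python
# Conceptual sketch only: LUN-level vs VM-level storage policies.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    replication: bool
    snapshots_per_day: int
    tier: str

# LUN model: one policy shared by every VM placed on the LUN.
lun_policy = StoragePolicy(replication=True, snapshots_per_day=4, tier="gold")

# VVOLs model: each VM states its own requirements, and the array
# provisions matching virtual volumes out of band.
vm_policies = {
    "sql01": StoragePolicy(replication=True, snapshots_per_day=24, tier="gold"),
    "test07": StoragePolicy(replication=False, snapshots_per_day=1, tier="bronze"),
}

for vm, policy in vm_policies.items():
    print(f"{vm}: array provisions virtual volumes for {policy}")
```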

Read on here

Virtual Volumes (VVols) and Replication/DR

Good post by Cormac Hogan (thank you)

There have been a number of queries around Virtual Volumes (VVols) and replication, especially since the release of KB article 2112039 which details all the interoperability aspects of VVols.

In Q1 of the KB, the question is asked “Which VMware Products are interoperable with Virtual Volumes (VVols)?” The response includes “VMware vSphere Replication 6.0.x”.

Read on here

Combining snapshots and backups for best practice data protection

Post by Simon Watkins (thank you)

When it comes to best practice data protection for your business-critical applications, no single snapshot or backup technology can provide the complete solution. Snapshots and backups have different yet complementary roles to play for availability, backup and disaster recovery.

When you’re looking for a comprehensive, tiered and converged data protection architecture that balances availability and protection, it should not be a question of either-or but more a case of where and how to best use snapshots and backup.  
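As a back-of-the-envelope illustration of that tiering (the figures are assumed, not taken from the post), the sketch below pairs frequent array snapshots for fast, recent restores with less frequent backups that provide the independent, long-retention copy.

```python
# Illustrative sketch of a tiered protection schedule; intervals are assumed.
from dataclasses import dataclass

@dataclass
class ProtectionTier:
    method: str
    interval_hours: float   # how often a recovery point is created
    retention_days: int     # how long recovery points are kept

tiers = [
    ProtectionTier("array snapshot", interval_hours=1, retention_days=2),
    ProtectionTier("backup to disk", interval_hours=24, retention_days=30),
    ProtectionTier("backup to tape/cloud", interval_hours=168, retention_days=365),
]

for t in tiers:
    # Worst-case data loss for a tier taken alone equals its interval.
    print(f"{t.method}: RPO <= {t.interval_hours}h, kept {t.retention_days} days")
```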

Read on here

Will the foundation of your Disaster Recovery plan collapse?

Good post by George Crump (thank you)

The ability to replicate data between data centers as it changes is an essential ingredient of any enterprise-class storage system. Data centers count on this capability as the foundational component of their disaster recovery (DR) plans. But this foundation is undergoing several seismic shifts, each of which makes DR less stable and which, combined, may cause the entire DR strategy to collapse. A DR plan failure can mean loss of revenue, regulatory fines and, eventually, the failure of the business.

Read on here

Overcoming challenges when integrating Disk Backup Appliances with Veeam

Post by George Crump (thank you)

There is a new breed of backup applications gaining acceptance in data centers of all sizes, including the enterprise: VM-specific backup applications. These solutions were designed from the ground up with virtualization in mind, and they do their best to fully exploit the virtual environment. According to Storage Switzerland’s research, this new category of products, led by Veeam, represents the fastest-growing segment of the data protection market. The challenge facing IT designers is making sure the rest of the environment can take full advantage of the capabilities of these new software solutions.

Read on here

vSphere Metro Cluster (vMSC) and HDS G1000 GAD : Rise of HDS Virtual Storage Machine

Good Post by Paul Meehan (thank you)

Sometime around August, HDS will launch what is called Global Active Device, also known as GAD. This is part of the new Virtual Storage Platform G1000.

What is a G1000?

The VSP G1000 is the latest variant of HDS’s high-end Tier-1 array, which scales to thousands of drives and millions of IOPS (if you’re into just counting millions of IOPS, like some vendors).

Read on here

Backup Basics: What do SLO, RPO, RTO, VRO and GRO Mean?

Good post by George Crump (thank you)

Being in charge of the data protection process is a thankless job. The process you create can run perfectly 99% of the time, but everyone will remember the one time it fell short, and they will blame you. The task of protecting an organization’s data is getting harder, too: data is growing, and users expect no downtime. The key, we believe, is to manage the data protection process from a service-level perspective instead of a job perspective, by setting a service level objective (SLO) for each application.
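As a small illustration of that service-level view (the applications and values below are made up), the sketch models an SLO per application as an RPO/RTO pair and checks an observed protection run against it.

```python
# Minimal sketch: per-application service level objectives; values hypothetical.
# RPO = maximum tolerable data loss, RTO = maximum tolerable downtime.
from dataclasses import dataclass

@dataclass
class ServiceLevelObjective:
    application: str
    rpo_minutes: int   # age of the newest usable copy must not exceed this
    rto_minutes: int   # time to restore service must not exceed this

slos = [
    ServiceLevelObjective("order-entry", rpo_minutes=5, rto_minutes=15),
    ServiceLevelObjective("file-shares", rpo_minutes=240, rto_minutes=480),
]

def meets_slo(slo: ServiceLevelObjective, last_copy_min: int, restore_min: int) -> bool:
    """Check an observed protection run against the application's SLO."""
    return last_copy_min <= slo.rpo_minutes and restore_min <= slo.rto_minutes

print(meets_slo(slos[0], last_copy_min=4, restore_min=12))  # True
```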

Read on here