Designing Backup to Replace Primary Storage

Good post by George Crump (thank you)

Users and application owners expect that the systems they use will never go down, and that if they do, they will be returned to operation quickly with little data loss. In our article “Designing Primary Storage to Ease the Backup Burden” we discussed how to architect a primary storage infrastructure that is able to help meet these challenges. We call this design Protected Primary Storage. But this design can be expensive, especially if it is applied to every application in the data center.

Read on here

Is Deduplication Useless on Archive Data?

Good post by George Crump (thank you)

One of the techniques that storage vendors use to reduce the cost of hard disk-based storage is deduplication. Deduplication is the elimination of redundant data across files. The technology is ideal for backup, since so much of a current copy of data is similar to the prior copy. The few extra seconds required to identify redundant data are worth the savings in disk capacity. Deduplication for primary storage is popular on all-flash arrays. While the level of redundancy is not as great, the premium price of flash makes any capacity savings important. In addition, given the excess performance of AFAs, the deduplication feature can often be added without a noticeable performance impact. There is one process, though, where deduplication provides little value: archive. IT professionals need to measure costs differently when considering a storage destination for archive.
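
To make the redundancy point concrete, here is a minimal sketch of fixed-size block deduplication in Python; the 4 KiB chunk size and the in-memory store are illustrative assumptions, not any vendor's implementation:

```python
import hashlib

CHUNK = 4096                    # illustrative chunk size
store = {}                      # fingerprint -> unique chunk

def dedup_write(data):
    """Return the recipe (fingerprint list) needed to rebuild `data`."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # a duplicate chunk costs no new space
        recipe.append(fp)
    return recipe

dedup_write(b"A" * 8192 + b"B" * 4096)   # first "backup": 3 chunks
dedup_write(b"A" * 8192 + b"C" * 4096)   # second, mostly identical copy
print(len(store))                        # 3 unique chunks stored, not 6
```

Backup copies overlap heavily from run to run, so the store stays small relative to the data written; archive data is typically unique and written once, which is why the same technique saves far less there.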

Read on here

Protecting NFS storage on vSphere using Veeam virtual proxies

Good post by Luca Dell’Oca (thank you)

Since Patch 3 of Veeam Backup & Replication v7 (build 7.0.0.839), there has been a new mode to manage hotadd backups over NFS, available via a registry key. Per the original release notes:

“Intelligent load balancing can now be configured to give preference to backup proxy located on the same host using the EnableSameHostHotaddMode (DWORD) registry value.”

I’ve kept this post on hold for a while, since with the upcoming v9, DirectNFS will be a much better option than virtual proxies to back up virtual machines running on NFS shares. But there are situations where this key may still be needed, such as when people still want to use virtual proxies against NFS. So, what is this key, and what can you do with it?
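
For reference, here is a minimal sketch of setting that value with Python's standard winreg module (run elevated on the Veeam server). The key path is my assumption based on where Veeam Backup & Replication normally keeps its settings; verify it against Luca's post or Veeam support before relying on it:

```python
import winreg

# Assumed location of Veeam B&R settings -- verify before use.
KEY_PATH = r"SOFTWARE\Veeam\Veeam Backup and Replication"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = give preference to a backup proxy on the same host as the VM
    winreg.SetValueEx(key, "EnableSameHostHotaddMode", 0,
                      winreg.REG_DWORD, 1)
```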

Read on here

Getting to know the Network Block Device Transport in VMware vStorage APIs for Data Protection

Post by Abdul Rasheed (thank you)

When you back up a VMware vSphere virtual machine using vStorage APIs for Data Protection (VADP), one of the common ways to transmit data from the VMware datastore to the backup server is through Network Block Device (NBD) transport. NBD is a Linux-like module that attaches to the VMkernel and makes the snapshot of the virtual machine visible to the backup server as if the snapshot were a block device on the network. While NBD is quite popular and easy to implement, it is also one of the least understood transport mechanisms in VADP-based backups.
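
As background for why NBD is the mode everything falls back to, here is a sketch of the priority-ordered transport selection a VADP backup application typically performs. The helper names below are hypothetical stubs, not real VDDK calls; with VDDK itself you would pass a priority string such as "san:hotadd:nbdssl:nbd" and let the library walk down the list:

```python
# Illustrative stubs -- in a real deployment these answers come from the
# environment, not from hard-coded returns.

def proxy_has_san_access():
    # True only when the backup server sees the datastore LUNs
    # directly over FC/iSCSI.
    return False

def proxy_is_vm_with_hotadd():
    # True only when the proxy is itself a VM that can hot-add the
    # snapshot disks.
    return False

def pick_transport(require_encryption=False):
    if proxy_has_san_access():
        return "san"       # LAN-free reads straight off the fabric
    if proxy_is_vm_with_hotadd():
        return "hotadd"    # snapshot disks mounted to the proxy VM
    # NBD always works: the ESXi host streams snapshot blocks over the
    # management network.
    return "nbdssl" if require_encryption else "nbd"

print(pick_transport())    # -> "nbd" with the stub answers above
```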

Read on here

HP Announces A Series Of Innovations To Speed Flash Adoption In The Datacenter

Post by Adam Armstrong (thank you)

Today HP made a series of announcements around its 3PAR StoreServ Storage family. These announcements include lowering the price of flash capacity, new highly scalable all-flash arrays, and flash-optimized data services. These new innovations are aimed at accelerating the adoption of all-flash in datacenters.

As we’ve seen over the past few years, flash technology is continuing to be adopted in many areas. With their increased density, performance, and predictability, all-flash arrays will continue to grow; in fact, IDC forecasts that flash arrays will see a 46% compound annual growth rate over the next five years. Customers are looking to extend the benefits of flash into the datacenter.

Read on here

Deep Dive: Memory consumption of a Veeam repository

Good post by Luca Dell’Oca (thank you)

Veeam repositories, both Windows and Linux based, run a software component responsible for receiving and storing data as it is processed by proxies. One of the most important parameters when sizing a repository is its expected memory consumption. Here is some information for its proper configuration.
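
As a sizing aid, the calculation has a simple linear shape: a base footprint plus a per-task cost for every concurrent task the repository accepts. The numbers below are placeholder assumptions, not the figures from Luca's post; substitute his measured values:

```python
# Placeholder figures, NOT Veeam's -- the point is the shape of the
# formula, not the constants.
BASE_OS_MB = 2048      # assumed headroom for the OS and services
PER_TASK_MB = 500      # assumed memory for one concurrent repository task

def repo_memory_mb(concurrent_tasks):
    return BASE_OS_MB + PER_TASK_MB * concurrent_tasks

for tasks in (4, 8, 16):
    print(f"{tasks:>2} concurrent tasks -> {repo_memory_mb(tasks)} MB")
```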

Read on here

5 Takeaways on XtremIO 4.0 from EMC World 2015

Post by Vaughn Stewart (thank you)

Like most in the storage industry, I’ve been paying close attention to EMC World 2015. In the absence of a genuine industry technical conference, EMC World provides the market with insight and perspective from the industry’s king-pin; an unveiling of the efforts of EMC engineering and a glimpse of where their sales and marketing efforts will be focused. While the EMC product portfolio spans a broad gamut, from cloud software to development platforms, enterprise storage remains EMC’s core competency.

Read on here

Virtual Volumes (VVols) and Replication/DR

Good post by Cormac Hogan (thank you)

There have been a number of queries around Virtual Volumes (VVols) and replication, especially since the release of KB article 2112039 which details all the interoperability aspects of VVols.

In Q1 of the KB, the question is asked “Which VMware Products are interoperable with Virtual Volumes (VVols)?” The response includes “VMware vSphere Replication 6.0.x”.

Read on here

Interesting Question?

Good post by Martin Glassborow (thank you)

Are AFAs ready for legacy Enterprise Workloads? The latest little spat between EMC and HP bloggers asked that question.

But it’s not really an interesting question; a more interesting question is why would I put traditional Enterprise workloads on an AFA? Why even bother?

Read on here

Not all Snapshots are the same

Good post by George Crump (thank you)

In an upcoming webinar, Storage Switzerland will make the case for using snapshots as a primary component of data protection. For this strategy to work, several things are needed from the storage infrastructure. First, it must be able to keep an almost unlimited number of snapshots; second, it needs to have a replication process that can transfer those snapshot deltas (the changed blocks of data) to a safe place; and third, the entire storage infrastructure has to be very cost effective. In this column we will look at that first requirement: the ability to create and store a large number of snapshots without impacting performance.
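
To illustrate what transferring "snapshot deltas" means, here is a minimal sketch that compares two point-in-time block images and keeps only the blocks that changed; a real array tracks changed blocks in snapshot metadata rather than rescanning, but the replicated set is the same:

```python
BLOCK = 4096

def changed_blocks(old, new):
    """Map block number -> new contents for every block that differs."""
    delta = {}
    blocks = max(len(old), len(new)) // BLOCK + 1
    for n in range(blocks):
        before = old[n * BLOCK:(n + 1) * BLOCK]
        after = new[n * BLOCK:(n + 1) * BLOCK]
        if before != after:
            delta[n] = after
    return delta

snap1 = b"x" * BLOCK * 4
snap2 = b"x" * BLOCK * 2 + b"y" * BLOCK + b"x" * BLOCK
print(sorted(changed_blocks(snap1, snap2)))   # -> [2]: one block to ship
```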

Read on here