Designing Backup to Replace Primary Storage

Good post by George Crump (thank you)

Users and application owners expect that the systems they use will never go down, and that if they do, they will be returned to operation quickly with little data loss. In our article “Designing Primary Storage to Ease the Backup Burden” we discussed how to architect a primary storage infrastructure that can help meet these challenges. We call this design Protected Primary Storage. But this design can be expensive, especially if it is applied to every application in the data center.

Read on here

Is Deduplication Useless on Archive Data?

Good post by George Crump (thank you)

One of the techniques that storage vendors use to reduce the cost of hard disk-based storage is deduplication: the elimination of redundant data across files. The technology is ideal for backup, since so much of a current copy of data is similar to the prior copy. The few extra seconds required to identify redundant data are worth the savings in disk capacity. Deduplication for primary storage is popular in all-flash arrays (AFAs). While the level of redundancy is not as great, the premium price of flash makes any capacity savings important. In addition, given the excess performance of AFAs, the deduplication feature can often be added without a noticeable performance impact. There is one process, though, where deduplication provides little value: archive. IT professionals need to measure costs differently when considering a storage destination for archive.
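To make the capacity math concrete, here is a minimal sketch of content-hash deduplication, the general technique the post describes (the function and chunk names are illustrative, not from the article). Two successive backups of the same data share most chunks, so only the changed ones consume new capacity; archive data, written once and rarely repeated, sees far less benefit.

```python
import hashlib

def deduplicate(chunks, store=None):
    """Keep one copy of each unique chunk; return content-hash references."""
    store = {} if store is None else store   # hash -> chunk, shared across backups
    refs = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()  # fingerprint of the content
        if digest not in store:
            store[digest] = chunk             # first time seen: store the data
        refs.append(digest)                   # otherwise record only a reference
    return refs, store

# Two nightly backups that differ by one block: 6 chunks written, 4 stored.
night1 = [b"block-A", b"block-B", b"block-C"]
night2 = [b"block-A", b"block-B", b"block-D"]
_, store = deduplicate(night1)
_, store = deduplicate(night2, store)
print(len(store))  # -> 4
```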

Read on here

Backup is not Archive

Good post by Joseph Ortiz (thank you)

In order to protect their data while dealing with explosive data growth, many organizations have started backing up their data to the cloud, aiming to reduce their storage and data center costs and to obtain data redundancy without maintaining a separate physical DR site. Many also mistakenly believe that these additional backup copies qualify as archive copies. Unfortunately, they do not.

Read on here

Is a Copy a Backup?

Good post by W.Curtis Preston (thank you)

Are we breaking backup in a new way by fixing it? That’s the thought I had while interviewing Bryce Hein from Quantum. It made me think about a blog post I wrote four years ago asking whether snapshots and replication could be considered a backup. The interview is an interesting one, and the blog post has a lot of good points, along with quite a bit of banter in the comments section.

Read on here

Microsoft’s Scale-Out File Server Overcomes SAN Cloud Barriers

Good post by Paul Schnackenburg (thank you)

There are no SANs in the cloud because the venerable storage technology just doesn’t scale to that level. But there are ways around it, and Microsoft shops should start with the company’s Scale-Out File Server.

There’s a revolution going on in storage. Once the domain of boring but dependable storage-area network (SAN) arrays, the market now offers a plethora of choices, including all-flash storage (with a variety of underlying technologies) and Server Message Block (SMB)-based storage.

Read on here

What is Copy Data?

Good post by George Crump (thank you)

Copy Data is the term used to describe the copies of primary data made for data protection, testing, archives, eDiscovery and analytics. The typical focus of copy data is data protection: recovering data when something goes wrong. The problem is that each type of recovery requires a different copy of the data. Recovery from corruption requires snapshots. Recovery from server failure requires disk backup. Protecting the disk backup itself requires tape. Finally, recovery from a site disaster requires that all these copies also exist off-site. Add to these data protection copies all the copies being made for test/development, archives, eDiscovery and now analytics. The end result: copy data is about much more than data protection, and providing the capacity to manage all these copies has become a significant challenge for the data center.
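As a back-of-the-envelope illustration of why these copies multiply capacity requirements (the scenario names and the 100 TB figure below are hypothetical, not from the post):

```python
# Hypothetical sketch: each recovery scenario and use case demands its own copy,
# so capacity to manage grows as a multiple of the primary data set.
protection_copies = {
    "corruption": "snapshot",
    "server failure": "disk backup",
    "backup loss": "tape",
    "site disaster": "off-site copy",
}
other_copies = ["test/dev", "archive", "eDiscovery", "analytics"]

primary_tb = 100  # hypothetical primary data set size
total_copies = len(protection_copies) + len(other_copies)
print(f"{total_copies} copies x {primary_tb} TB = {total_copies * primary_tb} TB to manage")
# -> 8 copies x 100 TB = 800 TB to manage
```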

Read on here