Data Protection: It All Starts with an Architecture

Post by Edward Haletky (thank you)

At The Virtualization Practice, we have systems running in the cloud as well as on-premises. We run a 100% virtualized environment with plenty of data protection, backup, and recovery options, all stitched together under one architecture: an architecture developed through painful personal experience. We just had an interesting failure—nothing catastrophic, but it could have been without the proper mindset and architecture around data protection. Data protection these days means not just backup and recovery, but also prevention and redundancy.

Read on here

What is Copy Data?

Good Post by George Crump (thank you)

Copy Data is the term used to describe the copies of primary data made for data protection, testing, archives, eDiscovery, and analytics. The typical focus of copy data is data protection: recovering data when something goes wrong. The problem is that each type of recovery requires a different copy of the data. Recovery from corruption requires snapshots. Recovery from server failure requires disk backup. Protecting the disk backup itself requires tape. Finally, recovery from a site disaster requires that all of these copies be kept off-site. On top of the data protection copies, add all the copies being made for test/development, archives, eDiscovery, and now analytics. The end result: copy data is about much more than data protection, and providing the capacity to manage all of these copies has become a significant challenge for the data center.

Read on here

Using Server-Side Caching As A Single Point of Performance Management

Post by Colm Keegan (thank you)

Deploying server-side cache is an effective way to accelerate application workloads, whether they are hosted on bare-metal servers or virtualized infrastructure. But with so many flash and SSD options to choose from – server-side cache, all-flash arrays, hybrid storage systems, etc. – IT decision makers may be unsure which solution is best for their business. Furthermore, many organizations already have various forms of flash distributed throughout the data center. How can all of these resources be managed efficiently and effectively without resorting to multiple “panes of glass”?

Read on here