Tape vs Cloud for Archive and Cold Data

Post by Joseph Ortiz (thank you)

As my colleague George Crump discussed in a previous article, “What is Better than Cloud Storage for Cold Data”, cloud storage is great for processing active data but becomes increasingly expensive for storing cold data that is seldom accessed. We have previously examined a few weaknesses of cloud storage, such as latency and bandwidth issues, but until now we have not looked in any detail at what it actually costs to store large quantities of cold and archive data in the cloud long term, or to retrieve any of that archived data. There is a reason that many organizations are now starting to question their decision to keep large quantities of cold and archive data in the cloud long term.

Read on here
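The article’s cost argument is easy to sanity-check yourself. The sketch below compares cumulative cloud-archive spend (per-GB-per-month storage plus per-GB retrieval fees) against a one-time tape media purchase plus flat yearly overhead. All the prices and the retrieval fraction are made-up placeholders, not quotes from any vendor; plug in your own numbers.

```python
# Back-of-the-envelope comparison of long-term cold-data costs.
# Every price here is a hypothetical placeholder, not a vendor quote.

def cloud_archive_cost(tb, years, price_per_gb_month=0.004,
                       retrieval_per_gb=0.02, retrieved_fraction=0.1):
    """Cost of keeping `tb` terabytes in a cloud archive tier for
    `years` years, plus retrieving a fraction of it once."""
    gb = tb * 1000
    storage = gb * price_per_gb_month * 12 * years
    retrieval = gb * retrieved_fraction * retrieval_per_gb
    return storage + retrieval

def tape_archive_cost(tb, years, media_per_tb=10.0,
                      annual_overhead_per_tb=2.0):
    """Cost of the same data on tape: one-time media purchase plus a
    flat yearly handling/vaulting overhead."""
    return tb * media_per_tb + tb * annual_overhead_per_tb * years

if __name__ == "__main__":
    for years in (1, 5, 10):
        cloud = cloud_archive_cost(500, years)
        tape = tape_archive_cost(500, years)
        print(f"{years:>2} yr  cloud ${cloud:,.0f}  tape ${tape:,.0f}")
```

With these placeholder rates the gap widens every year, because the cloud bill recurs monthly while the tape media cost is paid once; that recurring component is exactly what the article says organizations underestimate.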

Top 10 cloud trends for 2016


Good post by Lazlo Creates (thank you)

Just like any other area of technological innovation, cloud is a massive industry which has developed in ways few would have predicted a couple of years ago. As more individuals and enterprises embrace cloud technologies, the security and usability questions become a central focus of many providers. But consumer expectations aren’t the only factor shaping the state of cloud. Here are 10 key cloud trends to watch in 2016.

Read on here

Amazon, Azure and Google in race to the bottom … of cloud storage pricing


Good post by Chris Mellor (thank you) over at El Reg

Storage 2016: A period of quiet, rest and reflection is what the storage industry needs after a frankly hectic and very eventful 2015.

It won’t get it. The opposing forces of simplicity and complexity, access speed versus capacity, server versus array, on premises versus cloud, and tuned hardware and software versus software-defined are still in deep conflict. And don’t forget the containerisation issues in the background.

There is also a growing generalised attack on storage data access latency, just to add something else into the mix.

Read on here

Data Retention for Dummies

Good post by Chris Mellor (thank you) over at El Reg

All is confusion. The old certainties are gone. New certainties just don’t exist. The shifting shapes, players, products and technologies in the storage landscape are seen through fog. How the heck does everything fit together?

After four days in Silicon Valley meeting startups the bewilderment ratio is even higher. It’s like Dragons’ Den, where each new player is shinier and brighter than the previous one, becomes your favourite but then, as sure as eggs are eggs, will be eclipsed by the next one.

Read on here


Backup is not Archive

Good post by Joseph Ortiz (thank you)

In order to protect their data while dealing with explosive data growth, many organizations have started backing up their data to the cloud in an effort to reduce their storage and data center costs, and to obtain data redundancy without the need to maintain a separate physical DR site. Many also mistakenly believe that these additional backup copies qualify as archive copies. Unfortunately, they do not.

Read on here

Flash, Trash and data-driven infrastructures!

Post by Enrico Signoretti (thank you)

I’ve been talking about two-tier storage infrastructures for a while now. End users are targeting this kind of approach to cope with capacity growth and performance needs. The basic idea is to leverage Flash memory characteristics (All-flash, Hybrid, hyperconvergence) on one side, and on the other to implement huge storage repositories where they can safely store all the rest (including pure Trash) at the lowest possible cost. The latter has lately also come to be called a data lake.

Read on here
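The core of any two-tier design is a placement policy deciding what stays on flash and what lands in the capacity tier. A minimal sketch of such a policy, assuming a made-up 30-day "hot" window as the only criterion (real systems also weigh access frequency, file size and QoS):

```python
# Minimal two-tier placement policy sketch: data touched within the
# hot window stays on the fast (flash) tier; everything else is a
# candidate for the cheap capacity tier (the "data lake").
# The 30-day threshold is an illustrative assumption, not a standard.
from datetime import datetime, timedelta

FLASH_TIER, CAPACITY_TIER = "flash", "capacity"
HOT_WINDOW = timedelta(days=30)

def place(last_access: datetime, now: datetime) -> str:
    """Return the tier a file belongs on, given its last access time."""
    return FLASH_TIER if now - last_access <= HOT_WINDOW else CAPACITY_TIER

now = datetime(2016, 1, 1)
print(place(datetime(2015, 12, 20), now))  # recently read -> flash
print(place(datetime(2015, 6, 1), now))    # cold for months -> capacity
```

Even this crude rule captures the economics of the article: the bulk of the data ages out of the hot window quickly, so the expensive tier stays small while the capacity tier absorbs growth.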

Flash + Object – The Emergence of a Two Tier Enterprise

Post by George Crump (thank you)

For as long as there has been data there has been a quest to consolidate it onto a single storage system, but that quest never seems to be satisfied. The problem is that there are essentially two types of data: active and archive. Active data typically needs fast I/O response time at a reasonable cost. Archive needs to be very cost effective with reasonable response times. Storage systems that try to meet both of these needs in a single system often end up doing neither particularly well. This has led to the purchase of data- and/or environment-specific storage systems and storage system sprawl.

Read on here

Unstructured Data is distracting Backup Administrators

Post by George Crump (thank you)

File based data accounts for more than 80 percent of capacity demand and backup administrators spend most of their time protecting this unstructured data. But the remaining set, structured data, will cause the organization the most harm if it is not recoverable. This data (databases, VM Images) requires special backups and fast recoveries. The key to protecting the organization from disaster is to eliminate the unstructured data protection problem. If backup administrators could focus 100% of their time on 20% of their problem, then organizations would be in a much better position to protect themselves from a disaster.

Read on here

What you won’t see in 2015

Good post by Enrico Signoretti (thank you)

After all the predictions I read about 2015 (…and many of them are pretty ridiculous indeed) I can’t help but make my own, which I would like to approach in a totally different manner: using common sense first and making an anti-prediction for enterprise IT in 2015. So here is my opinion on what you won’t be seeing in your data center this year.

Read on here

Microsoft Sheds Light on Azure’s Storage Growing Pains

Post by David Davis (thank you)

Microsoft, like other cloud providers before it, has faced some growing pains on its Azure platform. On November 19, Azure customers began experiencing performance and availability issues on the platform. Virtual machines, websites, and Visual Studio Online were among the impacted services.

Read on here