SMR Drives: Are they too late to the game?


Good post by Petros Koutoupis (thank you)

The sudden popularity of NAND Flash has spelled doom for traditional magnetic Hard Disk Drives (HDD). For years we have been hearing how HDDs are reaching the end of their life. We heard the same about tape drives long before that. It would seem, though, that the prediction about HDDs may become reality a bit sooner than expected.

Read on here

Backup is not Archive

Good post by Joseph Ortiz (thank you)

In order to protect their data while dealing with explosive data growth, many organizations have started backing up their data to the cloud in an effort to reduce their storage and data center costs as well as obtain data redundancy without the need to maintain a separate physical DR site. Many also mistakenly believe that these additional backup copies qualify as archive copies. Unfortunately, they do not.

Read on here

Newbies Guide to the 3PAR SSMC

Good post by 3PARDude (thank you)

So far in this series of posts on SSMC I have covered what’s new in the latest version, how to install SSMC and how to add systems to SSMC. Today I want to run through the basics of the new interface so you can get up and running fast.

SSMC looks significantly different to the 3PAR Management Console, so the purpose of today’s post is to provide some familiarisation and to learn how to move around the new console.

Read on here

 

SSD Endurance. What does it mean to you?

Good post by Andrey Kudryavtsev (thank you)

I continuously think about the endurance aspect of our products, how SSD users understand it and use it to their benefit. Sadly, endurance is often underestimated and sometimes overestimated. I see customers buying high-endurance products for the sake of protection, without understanding the real requirements of the application. Now those late-night thoughts are going into my blog.
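As a rough aside (not from Kudryavtsev’s post, and with hypothetical numbers), endurance ratings usually come down to simple arithmetic between DWPD (drive writes per day), capacity and warranty period. The Python sketch below shows how a drive’s rated endurance compares with what an application actually writes.

```python
# Hypothetical numbers: convert a DWPD rating to total terabytes written
# (TBW) over the warranty, and work out the DWPD an application really needs.

def rated_tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total terabytes the drive is rated to absorb over its warranty."""
    return capacity_tb * dwpd * 365 * warranty_years

def required_dwpd(app_writes_tb_per_day: float, capacity_tb: float) -> float:
    """Endurance the application actually needs."""
    return app_writes_tb_per_day / capacity_tb

# A 1.6 TB drive rated at 3 DWPD is good for ~8.76 PB written over 5 years...
print(rated_tbw(1.6, 3))            # 8760.0 TB
# ...but an application writing 0.4 TB/day only needs 0.25 DWPD.
print(required_dwpd(0.4, 1.6))      # 0.25
```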

Read on here

A Flash Storage Technical and Economic Primer

Good post by Scott D. Lowe (thank you)

Flash memory is a type of non-volatile memory storage, which can be electrically erased and programmed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory which could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process; it needed to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the E2PROM was created. The E2PROM type of memory cell was block erasable, but it was eight times the cost of the EPROM. The high cost of the E2PROM led to rejection from consumers, who wanted the low cost of the EPROM coupled with the block-erasable qualities of the E2PROM.

Read on here

Using RAID-5 Means the Sky is Falling!

Good post by Olin Coles (thank you)

Why disk URE rate does not guarantee rebuild failure.

Today’s appointment brought me out to a small but reliable business, where I’m finishing the hard drive upgrades for their cold storage backup system. It was an early morning drive into the city, with enough ice on the roads to contribute towards the more than 30,000 fatality accidents that occur each year. The backup appliance I’m servicing has received 6TB desktop hard disks to replace an old set with a fraction of the capacity, so rebuilding the array has taken considerable time.
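The “sky is falling” argument usually rests on naive arithmetic: take the quoted URE spec (commonly 1 error per 10^14 bits read for desktop drives), treat it as an independent per-bit probability, and multiply it across every bit read while rebuilding from the surviving drives. The Python sketch below reproduces that naive math with hypothetical numbers; Coles’s post is about why that calculation does not translate into a guaranteed rebuild failure.

```python
# Naive URE math, for illustration only: hypothetical 6 TB drives and a
# 1-in-1e14-bits spec treated as an independent per-bit error probability.

def naive_rebuild_success(drive_tb: float, surviving_drives: int,
                          bits_per_ure: float = 1e14) -> float:
    """Probability of reading every surviving bit without a URE,
    assuming independent errors at exactly the quoted spec rate."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return (1 - 1 / bits_per_ure) ** bits_read

# Five surviving 6 TB drives in a RAID-5 rebuild: the naive model says the
# rebuild succeeds only ~9% of the time -- the claim the article questions.
print(f"{naive_rebuild_success(6, 5):.1%}")
```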

Read on here

What are IOPS and should you care?

Post by George Crump (thank you)

When evaluating a new storage system, especially an all-flash array, the number of IOPS (Input/Output Operations Per Second) that the storage system can sustain is often used to differentiate one storage system from another. But is this really a standard that has any merit given the demands of today’s data center and the capabilities of today’s storage systems?

There are three factors that, when combined, tell the full story of storage performance: bandwidth, latency and IOPS. Most storage vendors tend to focus on IOPS to brag about how fast their storage system is. But measuring storage system performance by IOPS only has value if the workloads using that storage system are IOPS-demanding.
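As a back-of-the-envelope illustration (not from Crump’s article, and with hypothetical numbers), the sketch below shows why a raw IOPS figure tells only part of the story: the bandwidth it implies depends entirely on the I/O size, and per-I/O latency at a given queue depth caps what can actually be sustained.

```python
# Illustrative only: how IOPS, block size, latency and queue depth relate.

def implied_throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Bandwidth implied by an IOPS figure at a given I/O size."""
    return iops * block_size_kb / 1024

def iops_ceiling(latency_ms: float, queue_depth: int) -> float:
    """Rough upper bound on IOPS at a given per-I/O latency and number of
    outstanding requests (Little's Law)."""
    return queue_depth / (latency_ms / 1000)

# A "1 million IOPS" claim at 4 KB I/Os is roughly 3.9 GB/s of bandwidth...
print(implied_throughput_mb_s(1_000_000, 4))   # ~3906 MB/s
# ...while a workload held to 0.5 ms latency at queue depth 32 tops out near 64K IOPS.
print(iops_ceiling(0.5, 32))                   # 64000.0
```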

Read on here

Understanding Software Defined Storage

Very good post by Pushpesh Sharma (thank you)

Storage systems are essential to the data center. Given the exponential rate of data growth, it is becoming more and more challenging to scale the enterprise storage infrastructure in a cost-effective way.

Storage technology has seen incremental advancements over the years. The early days of enterprise storage were mainly direct-attached storage (DAS) with host bus adapters (HBAs) and redundant arrays of independent disks (RAID). DAS then advanced with faster and more reliable protocols such as Advanced Technology Attachment (ATA), Serial ATA (SATA), external SATA (eSATA), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), and Fibre Channel.

Read on here

The More You Know Series: Forced Flushing

Good post by Reliant Technologies (thank you)

Over the past few weeks, we have talked a lot about different key performance indicators and how these may indicate an underlying issue associated with your performance problems. One of the recurring issues brought up was forced flushing. So, today we are going to dive a little deeper into what forced flushing is.

Read on here

Not all Snapshots are the same

Good post by George Crump (thank you)

In an upcoming webinar, Storage Switzerland will make the case for using snapshots as a primary component of data protection. For this strategy to work, several things are needed from the storage infrastructure. First, it must be able to keep an almost unlimited number of snapshots; second, it needs to have a replication process that can transfer those snapshot deltas (the changed blocks of data) to a safe place; and third, the entire storage infrastructure has to be very cost effective. In this column we will look at that first requirement, the ability to create and store a large number of snapshots without impacting performance.
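Purely as an illustrative sketch (not how any particular array implements it), a snapshot delta is conceptually just the set of blocks whose contents changed between two points in time; real systems track this in snapshot metadata rather than comparing block contents after the fact.

```python
# Toy model of a snapshot delta: each snapshot is a map of block number ->
# content fingerprint, and replication only needs the blocks that differ.

def snapshot_delta(prev: dict[int, str], curr: dict[int, str]) -> list[int]:
    """Blocks that are new or changed since the previous snapshot."""
    return [blk for blk, digest in curr.items() if prev.get(blk) != digest]

prev_snap = {0: "aa", 1: "bb", 2: "cc"}
curr_snap = {0: "aa", 1: "be", 2: "cc", 3: "dd"}
print(snapshot_delta(prev_snap, curr_snap))    # [1, 3] -> only these cross the wire
```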

Read on here