Snapshots 101: Copy-on-write vs Redirect-on-write

Good post by W. Curtis Preston (thank you)

There are two very different ways to create snapshots: copy-on-write and redirect-on-write. If an IT organization is considering using the snapshot functionality of its storage system, it is essential to understand which type of snapshot that system creates and the pros and cons of each method.

Rather than the more common term volume, this column will use the term protected entity to refer to the entity being protected by a given snapshot. While it is true that the protected entity is typically a RAID volume, it is also true that some object storage systems do not use RAID. Their snapshots may be designed to protect other entities, including containers, a NAS share, etc. In this case, the protected entity may reside on a number of disk drives, but it does not reside on a volume in the RAID or LUN sense.
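The behavioral difference between the two techniques can be sketched in a few lines of Python. This is an illustration only, not any vendor's implementation: real arrays operate on block maps and disk extents, and the class and method names here are invented for the example.

```python
class CowVolume:
    """Copy-on-write: the first overwrite of a block after a snapshot
    copies the old data into a snapshot area, then writes in place.
    Every first overwrite costs an extra read and write."""

    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data, always in place
        self.snapshot = {}           # block index -> preserved old data

    def take_snapshot(self):
        self.snapshot = {}           # nothing is copied yet

    def write(self, i, data):
        if i not in self.snapshot:
            self.snapshot[i] = self.blocks[i]   # the COW penalty
        self.blocks[i] = data

    def read_snapshot(self, i):
        # Unchanged blocks are still read from the live copy.
        return self.snapshot.get(i, self.blocks[i])


class RowVolume:
    """Redirect-on-write: new writes go to fresh locations and the
    snapshot simply freezes the old block map, so there is no extra
    copy -- but the live data gradually fragments."""

    def __init__(self, blocks):
        self.store = list(blocks)              # grow-only block store
        self.live = list(range(len(blocks)))   # live block map
        self.snap_map = None

    def take_snapshot(self):
        self.snap_map = list(self.live)        # just freeze the map

    def write(self, i, data):
        self.store.append(data)                # single, redirected write
        self.live[i] = len(self.store) - 1

    def read(self, i):
        return self.store[self.live[i]]

    def read_snapshot(self, i):
        return self.store[self.snap_map[i]]


# Both preserve the pre-snapshot data; they differ in where the I/O goes.
cow = CowVolume(["a", "b"])
cow.take_snapshot()
cow.write(0, "A")
print(cow.blocks[0], cow.read_snapshot(0))   # A a

row = RowVolume(["a", "b"])
row.take_snapshot()
row.write(0, "A")
print(row.read(0), row.read_snapshot(0))     # A a
```

The sketch makes the trade-off in the article concrete: COW pays at write time (the extra copy), while ROW pays at read time later (the live map scatters across the store).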

Read on here

vSphere tackles the Hyperconverged Infrastructure World: VMware VSAN 6.2

Good post by W. Curtis Preston (thank you)

VMware is releasing VSAN 6.2, the third major release of VSAN since its introduction in August of 2014. (Like other VMware companion products, the release number is tied to the vSphere release number it is associated with.) This release gives vSphere most if not all of the major features found in other hyperconverged infrastructure products.

Read on here

 

Peak Fibre Channel

Good post by Tony Bourke (thank you)

There have been several articles talking about the death of Fibre Channel. This isn’t one of them. However, it is an article about “peak Fibre Channel”. I think, as a technology, Fibre Channel is in the process of (if it hasn’t already) peaking.

There’s a lot of technology in IT that doesn’t simply die. Instead, it grows, peaks, then slowly (or perhaps very slowly) fades. Consider Unix/RISC.

Read on here

 

Today’s Storage: Same As It Ever Was

Good post by Stephen Foskett (thank you)

Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.

Read on here

IDC Predicts Increasing Cost of Enterprise Hard Disk Drives

Good post by Hu Yoshida (thank you)

IDC’s Worldwide Hard Disk Drive Forecast for 2015 to 2019, published in May 2015, predicts that “Slow HDD areal density (capacity per disk) growth means that a steadily increasing number of components per drive will be needed on average to reach higher capacity points, particularly for the enterprise segment. This dynamic will push the overall blended average HDD ASP higher each year over the forecast period….”

Read on here

How to safely use 8TB Drives in the Enterprise

Good post by George Crump (thank you)

After a hiatus of a few years, higher-capacity hard drives are coming to market. We expect 8TB drives to be readily available before the end of the year, with 10TB drives soon to follow. And at the rate that capacity demands are increasing, those drives can’t get here soon enough. But these new extremely high-capacity disk drives are being met with some trepidation. There are concerns about performance, reliability and serviceability. Can modern storage systems build enough safeguards around these products for the enterprise data center to count on them?

Read on here

New X-IO ISE 800 All-Flash Array

Good post by Chris Mellor (thank you) over at El Reg

X-IO’s five-year warranty, maintenance-free sealed disk and disk+flash arrays have a new brother: the all-flash ISE. It’s sprinted right to the top of the SPC-1 price/performance benchmark charts.

The ISE 800 comes in a standard 3U X-IO enclosure and uses X-IO’s Gen 3 architecture, which first appeared in the ISE 780 in January. There are three models:

  • 820 – up to 6.4TB pre-RAID capacity (2.7TB RAID 10, 4.3TB RAID 5)
  • 850 – up to 25.6TB pre-RAID capacity (11.4TB RAID 10, 18.3TB RAID 5)
  • 860 – up to 51.2TB pre-RAID capacity (22.9TB RAID 10, 36.6TB RAID 5)

All provide a max of 400,000 IOPS, or 260,000 IOPS with what is called an OLTP workload. There is up to 5GB/sec of bandwidth available.
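A quick back-of-the-envelope check puts those pre-RAID vs usable figures in context. RAID 10 mirrors every block, so at most half the raw capacity is usable; RAID 5 spends one drive's worth of capacity on parity. The drive count below is an assumption for illustration, and vendor figures (like X-IO's above) typically come in under these theoretical ceilings because of spare capacity and metadata overhead.

```python
def raid10_usable(raw_tb):
    # RAID 10 mirrors every block: at most half the raw capacity is usable.
    return raw_tb / 2

def raid5_usable(raw_tb, drives):
    # RAID 5 dedicates one drive's worth of capacity to parity.
    return raw_tb * (drives - 1) / drives

# Theoretical ceilings for the 820's 6.4TB raw capacity (drive count assumed):
print(raid10_usable(6.4))     # 3.2 -- vs the 2.7TB quoted
print(raid5_usable(6.4, 8))   # ~5.6 (assuming 8 drives) -- vs the 4.3TB quoted
```

The gap between the ceilings and the quoted numbers is the overhead the array reserves for itself.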

Read on here

Understanding Software Defined Storage

Very good post by Pushpesh Sharma (thank you)

Storage systems are essential to the data center. Given the exponential rate of data growth, it is increasingly challenging to scale the enterprise storage infrastructure in a cost-effective way.

Storage technology has seen incremental advancements over the years. The early days of enterprise storage were mainly direct-attached storage (DAS) with host bus adapters (HBAs) and redundant arrays of independent disks (RAID). DAS advanced through faster and more reliable protocols such as advanced technology attachment (ATA), serial ATA (SATA), external SATA (eSATA), the small computer system interface (SCSI), serial attached SCSI (SAS), and Fibre Channel.

Read on here

Not all Snapshots are the same

Good post by George Crump (thank you)

In an upcoming webinar, Storage Switzerland will make the case for using snapshots as a primary component of data protection. For this strategy to work several things are needed from the storage infrastructure. First, it must be able to keep an almost unlimited number of snapshots; second, it needs to have a replication process that can transfer those snapshot deltas (the changed blocks of data) to a safe place; and third, the entire storage infrastructure has to be very cost effective. In this column we will look at that first requirement, the ability to create and store a large amount of snapshots without impacting performance.
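The second requirement, transferring snapshot deltas, boils down to shipping only the blocks that changed between two snapshots. A minimal sketch of that idea, with invented function names and lists standing in for block devices:

```python
def snapshot_delta(prev, curr):
    """Return {block_index: new_data} for every block that changed
    between the previous snapshot and the current one."""
    return {i: b for i, (a, b) in enumerate(zip(prev, curr)) if a != b}

def apply_delta(replica, delta):
    """Apply the changed blocks to a replica, bringing it up to date."""
    for i, data in delta.items():
        replica[i] = data
    return replica

# Only block 1 changed, so only block 1 crosses the wire.
prev = ["a", "b", "c"]
curr = ["a", "X", "c"]
delta = snapshot_delta(prev, curr)
print(delta)                               # {1: 'X'}
print(apply_delta(list(prev), delta))      # ['a', 'X', 'c']
```

Real replication engines track changed blocks in metadata rather than comparing data, but the payload is the same: the delta, not the full volume.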

Read on here

StrongBox Archive NAS Solves the Problem of Long-Term Unstructured Data Storage

Good post by Eric Slack (thank you)

Unstructured data is burying companies’ storage infrastructures. According to Gartner, files comprise 80 percent of all data, and their growth rate in enterprises will exceed 800% in five years. Compounding this problem is the need to store these data for longer periods. “Long term” used to mean 15 years; now 20 years is not uncommon, and for many companies, “forever” is becoming the norm. What’s needed is cost-effective storage, a solution that’s simple to use but able to handle the challenge of long-term data retention for unstructured data.
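It is worth translating the Gartner figure into an annual rate. Assuming “800% growth in five years” means the data ends up at nine times its starting size (a reading of the statistic, not something the article states; eight times is also plausible), the implied compound annual growth rate works out to roughly 55% per year:

```python
# Implied compound annual growth rate for 800% growth over five years,
# assuming "800% growth" means ending at 9x the starting size.
annual_factor = 9 ** (1 / 5)
print(f"{(annual_factor - 1) * 100:.0f}% per year")   # roughly 55%
```

Either way the figure is read, file data is more than doubling every two years, which is what makes the retention problem the article describes so acute.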

Read on here