Object storage: why, when, where… and but.

Good post by Enrico Signoretti (thank you)

In one of my latest posts I wrote about private object storage not being for everyone… especially if you don’t have the size that makes it viable… But on the other hand, we are all piling up boatloads of data, and users need to access it from many different locations, applications and devices at any time.
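The access model behind that claim is worth spelling out: objects sit behind a flat, HTTP-addressable namespace, so every application and device talks to the same store with the same verbs. Here is a minimal sketch in Python, assuming an S3-compatible endpoint, the boto3 library, and configured credentials; the endpoint, bucket and key names are hypothetical:

    # Minimal sketch of object storage access over HTTP (S3-compatible API).
    # The endpoint URL, bucket and key are illustrative, not from the post.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

    # Any client, anywhere, uses the same verbs against the same namespace.
    with open("q1-report.pdf", "rb") as f:
        s3.put_object(Bucket="shared-data", Key="reports/q1-report.pdf", Body=f)

    obj = s3.get_object(Bucket="shared-data", Key="reports/q1-report.pdf")
    data = obj["Body"].read()

Because access is plain HTTP rather than a block or file protocol, the same bucket can serve a laptop, a mobile app and a batch job without any of them mounting anything.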

Read on here

It’s Time to “VMware” Storage

Good post by George Crump (thank you)

Before hypervisors like VMware, Hyper-V and KVM came to market, data centers had few options for managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which, while it simplified operations and support, left each application at the mercy of the others in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors came to market, the latter choice was considered best practice.

Read on here

The Problems With Server-Side Storage, Like VSAN

Good post by Colm Keegan (thank you)

The expression “everything old is new again” certainly applies to the renewed interest in server-side storage. With the widespread adoption of server virtualization technology, internal server storage is once again being hailed as a simple way to bring performance closer to where virtualized applications reside – on the hypervisor.

VMware’s recent launch of its VSAN offering, which pools server-side storage capacity across a network of clustered hypervisor nodes, seems to lend further validation to this approach. While there are instances where server-side storage adds value, there are also drawbacks that need to be considered.
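The core mechanism is simple to sketch. Because each hypervisor host contributes its internal disks, an object must be copied to enough distinct hosts to survive a node loss; VSAN expresses this as a “failures to tolerate” (FTT) policy, where tolerating n failures requires n + 1 replicas. The sketch below is my own illustration of that placement idea, not VMware’s actual algorithm, and the host names are made up:

    # Illustration of FTT-style replica placement across hypervisor hosts.
    # Not VMware's actual VSAN code; hosts and logic are hypothetical.
    import hashlib

    HOSTS = ["esx-01", "esx-02", "esx-03", "esx-04"]

    def place_replicas(object_id: str, ftt: int = 1) -> list[str]:
        copies = ftt + 1  # tolerating `ftt` host failures needs ftt + 1 copies
        if copies > len(HOSTS):
            raise ValueError("not enough hosts for the requested FTT")
        # Pick a deterministic starting host, then take the next `copies`
        # hosts, so every replica lands on a distinct node.
        start = int(hashlib.md5(object_id.encode()).hexdigest(), 16) % len(HOSTS)
        return [HOSTS[(start + i) % len(HOSTS)] for i in range(copies)]

    print(place_replicas("vm-disk-0042"))  # e.g. ['esx-02', 'esx-03']

The drawbacks follow directly from this: usable capacity is raw capacity divided by ftt + 1, and every write must cross the cluster network to reach each replica host.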

Read on here

Scaling Storage In Conventional Arrays

Good post by Stephen Foskett (thank you)

It is amazing that something as simple-sounding as making an array bigger can be so complex, yet scaling storage is notoriously difficult. Our storage protocols just weren’t designed with scaling in mind, and they lack the flexibility needed to dynamically address multiple nodes. Data protection is extremely difficult, and data movement is always time-consuming.
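The data-movement cost is easy to quantify with a toy example (mine, not Foskett’s): under naive hash-modulo placement, the scheme implicit in many fixed-node designs, growing a cluster from four nodes to five relocates about 80% of all objects:

    # Toy demonstration of why expansion is so expensive: with hash-modulo
    # placement, adding one node changes the home of most keys.
    def node_for(key: str, node_count: int) -> int:
        return hash(key) % node_count

    keys = [f"object-{i}" for i in range(100_000)]
    moved = sum(1 for k in keys if node_for(k, 4) != node_for(k, 5))
    print(f"{moved / len(keys):.0%} of objects move when growing 4 -> 5 nodes")
    # Prints ~80%. Consistent hashing cuts the churn to roughly 1/n, but the
    # protocols the post describes predate such placement schemes.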

Read on here

IBM Storwize Family – Scaling Capabilities and Value

Good paper by Randy Kerns (thank you)

The IBM Storwize family applies a single architecture across several market segments as independent storage systems. Delivering storage systems optimized for the price, performance, and capacity demands of different markets brings benefits across many areas of Information Technology. Commonality of administration, reduced risk when deploying a Storwize system, integration and support from software vendors, and use of unique Storwize features are just a few of the advantages. The most evident benefit may be systems delivered at competitive prices, thanks to the leverage of a single architecture and common underlying hardware.

Get the paper here

Scaling Storage Is Hard To Do

Good post by Stephen Foskett (thank you)

Data storage isn’t as easy as it sounds, especially at enterprise or cloud scale. It’s simple enough to read and write a bit of data, but much harder to build a system that scales to store petabytes.
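A quick back-of-the-envelope shows how fast “petabytes” turns into an operational problem (my numbers, purely illustrative):

    # Back-of-the-envelope for petabyte scale; all figures are illustrative.
    PB = 10 ** 15                  # one petabyte, in bytes
    drive = 8 * 10 ** 12           # one 8 TB drive
    replicas = 3                   # common protection factor

    drives = 10 * PB * replicas / drive
    print(f"10 PB with 3x replication: ~{drives:,.0f} drives")
    # ~3,750 drives. At a typical 1-2% annual failure rate, expect a failed
    # drive every week or so, so rebuilds and rebalancing must be routine
    # background behavior rather than exceptional events.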

Read on here