Quantum Announces DXi6900 Data Protection and Deduplication Appliance

Post by Josh Linden (thank you)

Quantum today announced the third solution to join its DXi deduplication family: the DXi6900 appliance, powered by StorNext 5 software and aimed at enterprise deployments and service providers. The DXi6900 scales from 17TB to 510TB, and according to Quantum, its 16TB-per-hour ingest rate makes it the fastest single-stream backup and restore solution in its class, thanks to StorNext 5's proprietary variable-length deduplication technology.
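Variable-length deduplication splits a data stream at boundaries determined by the content itself, so an insertion near the start of a backup does not shift every downstream block and break matching. This is not Quantum's implementation; it is a minimal sketch of the general content-defined chunking idea, using a toy rolling sum and a SHA-256 chunk store (all names here are illustrative):

```python
import hashlib

def chunk_boundaries(data, mask=0x3FF, window=16):
    """Split data at content-defined boundaries using a simple rolling sum.

    A boundary is declared when the low bits of the running hash hit zero,
    so boundary positions depend on content, not fixed offsets."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        if i - start >= window and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedupe(streams):
    """Store each unique chunk once, keyed by its SHA-256 digest.

    Returns the chunk store and, per stream, a 'recipe' of digests
    from which the original stream can be reassembled."""
    store, recipes = {}, []
    for data in streams:
        recipe = []
        for chunk in chunk_boundaries(data):
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        recipes.append(recipe)
    return store, recipes
```

A real appliance would use a proper rolling hash (e.g. Rabin fingerprints) and minimum/maximum chunk sizes, but the storage-saving mechanism is the same: identical chunks across backup streams are stored once and referenced many times.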

Read on here

Stop wasting your Storage Controller CPU cycles

Post by Frank Denneman (thank you)

Typically, when dealing with storage performance problems, the first questions asked are: What type of disks? What speed? What protocol? However, your problem might lie in the first port of call of your storage array: the storage controller!

Read on here

New ShortTakes on StoreOnce and HP 3PAR AFA

Good Post by Calvin Zito (thank you)

I am back from what has been the longest vacation I've ever taken – almost four weeks. While I did attend three work events in Europe, my time there was pretty much all vacation. I did post a blog from Germany about halfway into the trip, so check that out.

I took over 1,300 pictures. And no, I wouldn't make you look at all of them (though many are amazing), but Google Plus created a very cool picture diary from the pictures I took with my smartphone. I thought this was cool because all I had to do was add a few captions and remove a few pictures that weren't the best, so if you want to see what I did, click on the link. But now, it's time for me to get to work.

Read on here

Who needs Cloud Storage – Violin delivers pay as you grow Flash

Post by George Crump (thank you)

One of the challenges every IT planner is trying to figure out is what size storage system, in terms of both capacity and performance, they should invest in. They need to make sure that the storage system will meet not only their upfront capacity needs but their long-term needs as well. An incorrect calculation can lead to the purchase of an additional storage system and all the management overhead and costs that come along with it.

Read on here

EMC Focuses on Services with VMAX3

Good Post by Chris M Evans (thank you)

Last week EMC announced an upgrade to its flagship Symmetrix high-end enterprise storage platform with the release of VMAX3. With a maximum of 384 cores and 5,760 drives, the new platform scales to capacities never before seen in a single storage array. What's possibly more interesting, though, is EMC's intentions for what's being called HyperMax technology.

Read on here

Introducing NetApp Private Storage for Microsoft Azure

Post by Brian Mitchell (thank you)

NetApp, Microsoft, and Equinix today introduced "NetApp Private Storage for Microsoft Azure": a hybrid cloud infrastructure that links NetApp Storage with Azure Compute via Azure ExpressRoute.

Read on here

Storage Requirements Put Burden on Businesses of All Sizes

Post by Nathan Eddy (thank you)

Management of inactive data (defined as data unused for six months or more) appeared to be the area where the greatest improvements can be made.

The majority of organizations continue to use expensive primary storage systems to store infrequently accessed data, with respondents engaging in this practice spending significantly more of their annual IT budget on storage than their peers, according to a report from TwinStrata.

Read on here

New disk IO scheduler used in vSphere 5.5

Good post by Duncan Epping (thank you)

When 5.1 was released, I noticed the mention of "mClock" in the advanced settings of a vSphere host. I tried enabling it but failed miserably. A couple of weeks back I noticed the same advanced setting again, but this time it was enabled. So what is this mClock thingie? Well, mClock is the new disk IO scheduler used in vSphere 5.5. There isn't much detail on mClock beyond an academic paper by Ajay Gulati.
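The core idea in the mClock paper is to stamp each VM's IO requests with virtual-time tags and always dispatch the request with the smallest tag. The sketch below is not VMware's implementation: it is a minimal, illustrative reduction to the shares-based tagging alone (the real mClock also enforces per-VM reservations and limits, which are omitted here, and all names are made up for this example):

```python
import heapq

class MClockLite:
    """Minimal sketch of mClock-style proportional-share IO tagging.

    Each VM is assigned a 'shares' weight. Successive requests from a VM
    receive tags spaced 1/shares apart, and the scheduler dispatches the
    pending request with the smallest tag, so a VM with twice the shares
    gets roughly twice the dispatches under contention."""

    def __init__(self):
        self.last_tag = {}   # vm -> tag of its most recent request
        self.queue = []      # min-heap of (tag, seq, vm)
        self.seq = 0         # monotonic counter to break tag ties

    def submit(self, vm, shares):
        # Higher shares -> tags advance more slowly -> dispatched more often.
        tag = self.last_tag.get(vm, 0.0) + 1.0 / shares
        self.last_tag[vm] = tag
        heapq.heappush(self.queue, (tag, self.seq, vm))
        self.seq += 1

    def dispatch(self):
        # Serve the request with the smallest virtual-time tag.
        return heapq.heappop(self.queue)[2] if self.queue else None
```

With VM "a" at 2 shares and VM "b" at 1 share, a backlog of requests drains in a repeating a, a, b pattern: a 2:1 throughput split, which is the proportional-share behavior the paper generalizes.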

Read on here

Flash Arrays need high performance compression

What's the Difference Between Compression, Deduplication, and Single-Instance Storage?

Post by George Crump (thank you)

Startups like Nimble, Pure Storage, SolidFire and Tegile are starting to take business away from the traditional tier 1 storage vendors. Their key differentiator, and often the winning point, has been their ability to use flash storage efficiently. Making flash compelling to IT professionals requires a high-performance architecture with the ability to use flash efficiently: at the right price point (effective cost) and effective capacity. Many tier 1 vendors have the high performance but lack the effective cost and effective capacity, a direct result of missing compression, deduplication and thin provisioning capabilities. This is enabling the independent all-flash array vendors mentioned above to encroach on their accounts with better cost and capacity.
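The three efficiency techniques the headline contrasts operate at different granularities: compression removes redundancy within one object, block deduplication removes duplicate blocks across objects, and single-instance storage removes duplicate whole files. A minimal sketch of the distinction, using only the Python standard library (the function names and 4KB block size are illustrative choices, not any vendor's design):

```python
import hashlib
import zlib

def compress(blob):
    """Compression: shrink a single object by encoding redundancy within it."""
    return zlib.compress(blob)

def dedupe_blocks(blobs, block_size=4096):
    """Block-level deduplication: store each unique fixed-size block once,
    across all objects, keyed by a strong hash of its contents."""
    store = {}
    for blob in blobs:
        for i in range(0, len(blob), block_size):
            block = blob[i:i + block_size]
            store.setdefault(hashlib.sha256(block).hexdigest(), block)
    return store

def single_instance(files):
    """Single-instance storage: keep one copy per identical whole file --
    coarser than block dedupe, which also catches partial overlap."""
    return {hashlib.sha256(data).hexdigest(): data for data in files.values()}
```

The granularity difference is why two files that share only half their blocks benefit from block deduplication but not from single-instance storage, and why the techniques compound: arrays typically deduplicate first, then compress the unique blocks that remain.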

Read on here

Peer Persistence and Adaptive Optimization interoperation on 3PAR

Good Post by Philip Sellers (thank you)

This is part one of a two part series on enhancements to the HP 3PAR StoreServ platform announced at HP Discover in June. 

One of the biggest benefits of being an HP invited blogger for HP Discover is the opportunity to sit and talk with architects and executives during coffee talks and impromptu meetings in the bloggers' lounge. This gives me and other bloggers the opportunity to ask our specific questions of really technical or strategic HP staff. During the past two HP Discover events, I have had great opportunities to talk to HP 3PAR storage team members and get some great information about specific issues and some ideas on where things may be heading.

Read on here
