Does it matter how we define ‘hyperconverged’?

Post by Edward Grigson (thank you)

A recent Twitter conversation made it clear there’s no common definition of ‘hyperconverged infrastructure’, which leads to confusion for customers. Technical marketing and analysts can assist, but understanding your own requirements, risks and costs is always essential.

Hyperconverged infrastructure has been around for a few years (I first came across it at Gestalt IT’s SFD#2 with Nutanix back in 2012) and long enough for Gartner (here) and IDC (here) to create ‘magic quadrants’.

Read on here

A Flash Storage Technical and Economic Primer

Good post by Scott D. Lowe (thank you)

Flash memory is a type of non-volatile memory storage, which can be electrically erased and programmed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory that could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process: the chip had to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the E2PROM (EEPROM) was created. The E2PROM type of memory cell was electrically block erasable, but it was eight times the cost of the EPROM. That high cost led to rejection from consumers, who wanted the low cost of the EPROM coupled with the block-erasable qualities of the E2PROM.
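
To make the block-erasable distinction concrete, here is a minimal, hypothetical Python sketch (mine, not from Scott’s post): it models how flash is programmed a page at a time, but erased only a whole block at a time.

```python
# Hypothetical model (not from the post): flash pages are programmed
# individually, but erasure only happens a whole block at a time.
PAGES_PER_BLOCK = 64
ERASED = 0xFF  # erased flash cells read back as all 1s

class FlashBlock:
    def __init__(self):
        self.pages = [ERASED] * PAGES_PER_BLOCK

    def program(self, page, value):
        # A page can only be written while in the erased state.
        if self.pages[page] != ERASED:
            raise IOError("page already programmed; erase the whole block first")
        self.pages[page] = value

    def erase(self):
        # All-or-nothing: every page in the block is reset together.
        self.pages = [ERASED] * PAGES_PER_BLOCK

block = FlashBlock()
block.program(0, 0x42)   # fine: the page was erased
block.erase()            # wipes all 64 pages, not just page 0
block.program(0, 0x43)   # allowed again after the block erase
```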

Read on here

Rumors, strategies and facts about Hyper-converged

Good post by Enrico Signoretti (thank you)

A few weeks ago I attended SFD7. Most of the conversations we had during the event were about hyper-convergence, and we had at least three meetings where hyper-convergence took center stage: Maxta, Springpath and VMware. The market is very active, to say the least, and still in a rapidly expanding phase.

Read on here

SDS – The Missing Link – Storage Automation for Application Service Catalogs

Post by Rawlinson Rivera (thank you)

Automation technologies are a fundamental dependency for all aspects of the software-defined data center. The use of automation technologies not only increases the overall productivity of the software-defined data center, but it can also accelerate the adoption of today’s modern operating models.

In recent years, a subset of the core pillars of the software-defined data center has seen a great deal of improvement with the help of automation. The same can’t be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructure from delivering operational value and efficiencies similar to those available with the compute and network pillars.

VMware’s software-defined storage technologies and its storage policy-based management framework (SPBM) deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
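
As a rough illustration of the idea behind SPBM (a hypothetical sketch, not VMware’s actual API): datastores advertise capabilities, a policy declares requirements, and the compliant placement targets are simply the datastores that satisfy every rule.

```python
# Hypothetical sketch of policy-based placement (not VMware's real SPBM
# API): datastores advertise capabilities, a policy states requirements.
datastore_capabilities = {
    "vsanDatastore": {"replication": True, "flash": True, "iops": 20000},
    "nfs-bronze":    {"replication": False, "flash": False, "iops": 2000},
}

def compliant_datastores(policy, datastores):
    """Return the datastores whose capabilities meet every policy rule."""
    def satisfies(caps, key, required):
        have = caps.get(key)
        if isinstance(required, bool):                 # feature flags: exact match
            return have is required
        return have is not None and have >= required   # numeric minimums

    return [name for name, caps in datastores.items()
            if all(satisfies(caps, k, v) for k, v in policy.items())]

# A 'gold' policy: replicated, flash-backed, at least 10,000 IOPS.
gold_policy = {"replication": True, "flash": True, "iops": 10000}
print(compliant_datastores(gold_policy, datastore_capabilities))
# -> ['vsanDatastore']
```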

Read on here

It’s time to “VMware” Storage

Good post by George Crump (thank you)

Before hypervisors like VMware ESX, Hyper-V and KVM came to market, data centers had few options for managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which, while it simplified operations and support, left each application at the mercy of the others in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors came to market, the latter choice was considered the best practice.

Read on here

How to upgrade vSphere 5.5 to version 6.0 – Part 1

Post by Marek Zdrojewski (thank you)

The latest release of VMware vSphere (version 6.0) has been generally available for a while now, so it is time to look at how to upgrade a vSphere 5.5 environment to version 6.0.

Before you begin, read the vSphere 6 upgrade guide, the vSphere 6 release notes, the Hardware and Guest OS Compatibility Guide and the Product Interoperability Matrix from VMware. Also, make sure your environment meets the software and hardware requirements described in the upgrade guide.
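
As a hedged example of the kind of pre-flight check that helps here, the following Python sketch uses the pyVmomi SDK to list the vCenter and ESXi host versions; the hostname and credentials are placeholders, and you should confirm everything reported is on 5.5 and on the compatibility lists before upgrading.

```python
# Pre-upgrade inventory check; assumes 'pip install pyvmomi' and uses
# placeholder hostname/credentials -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)
try:
    print("vCenter:", si.content.about.fullName)
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Each ESXi host reports its own product version string.
        print(host.name, "-", host.config.product.fullName)
    view.DestroyView()
finally:
    Disconnect(si)
```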

Read on here

Virtual Volumes (VVols) and Replication/DR

Good post by Cormac Hogan (thank you)

There have been a number of queries around Virtual Volumes (VVols) and replication, especially since the release of KB article 2112039 which details all the interoperability aspects of VVols.

In Q1 of the KB, the question is asked “Which VMware Products are interoperable with Virtual Volumes (VVols)?” The response includes “VMware vSphere Replication 6.0.x”.

Read on here

HP StoreOnce SQL Plug-in: Faster, efficient backup for Microsoft SQL Server databases

Post by Ian Blatchford (thank you)

Today’s BURA Sunday blog continues conversations started in recent posts from colleagues in HP Storage, covering storage infrastructure and protection to optimize Microsoft SQL Server deployments.

Parissa talked about how deploying SQL Server databases on HP 3PAR StoreServ 7450 saves you from the tradeoff between database performance and resiliency: all-flash storage means very low latency, and Peer Persistence between two distributed 3PAR StoreServ arrays provides resiliency against catastrophic site failure. Ashwin previewed the StoreOnce plug-in that enables DBAs to run backups directly from a SQL Server database to a StoreOnce appliance. He also described some of the benefits of using HP Data Protector for organizations that want to include SQL Server database protection in a centralized data protection process.

Read on here

Thinking different about storage

Good post by Enrico Signoretti (thank you)

Over the last few months I have had several interesting briefings with storage vendors. Now I need to stop, try to connect the dots, and think about what could come next.

It’s incredible to see how rapidly the storage landscape is evolving and becoming much smarter than in the past. This will change the way we store, use and manage data and, of course, the design of future infrastructures.

Read on here

How to safely use 8TB Drives in the Enterprise

Good post by George Crump (thank you)

After a hiatus of a few years, higher-capacity hard drives are coming to market. We expect 8TB drives to be readily available before the end of the year, with 10TB drives soon to follow. At the rate that capacity demands are increasing, those drives can’t get here soon enough. But these new extremely high-capacity disk drives are being met with some trepidation. There are concerns about performance, reliability and serviceability. Can modern storage systems build enough safeguards around these products for the enterprise data center to count on them?
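
To see where the trepidation comes from, here is a back-of-the-envelope Python sketch (my assumptions, not figures from the article) of the rebuild time for an 8TB drive and the probability of hitting an unrecoverable read error (URE) while rebuilding a RAID 5 set.

```python
# Back-of-the-envelope numbers (assumptions, not the article's figures).
capacity_bytes = 8e12        # one 8TB drive
rebuild_mb_s = 100           # assumed sustained rebuild throughput, MB/s
hours = capacity_bytes / (rebuild_mb_s * 1e6) / 3600
print(f"Best-case rebuild time: {hours:.1f} hours")   # ~22 hours

ure_per_bit = 1e-14          # common drive spec: one URE per 10^14 bits read
surviving_drives = 5         # e.g. a 6-drive RAID 5 set after one failure
bits_read = surviving_drives * capacity_bytes * 8
p_ure = 1 - (1 - ure_per_bit) ** bits_read
print(f"P(URE during rebuild): {p_ure:.0%}")          # roughly 96%
```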

Read on here
