VSAN 6.2 Upgrade – Failed to realign objects

Good post by Cormac Hogan (thank you)

A number of customers have reported difficulty when attempting to upgrade the on-disk format in VSAN 6.2. The upgrade to vSphere 6.0 U2 goes absolutely fine; it is only when they try to upgrade the on-disk format, to use new features such as Software Checksum and Deduplication and Compression, that they encounter this error.

Read on here

An overview of the new Virtual SAN 6.2 features

Good post by Cormac Hogan (thank you)

If you were wondering why my blogging has dropped off in recent months, wonder no more. I’ve been fully immersed in the next release of VSAN. Today VMware announced VSAN 6.2, the next version of VMware’s Virtual SAN product. It is almost 2.5 years since we launched the VSAN beta at VMworld 2013, and almost 2 years to the day since we officially GA’ed our first release of VSAN back in March 2014. A lot has happened since then, with 3 distinct releases in that 2-year period (6.0, 6.1 and now 6.2). For me the product has matured significantly in that time, with 3,000 customers and lots of added features. VSAN 6.2 is the most significant release we have had since the initial launch.

Read on here

More data services in VSAN 6.2

Good post by Andrea Mauro (thank you)

As announced some months ago, the new Virtual SAN (VSAN 6.2) adds new data services, making this solution richer than before. Version 6.1 was announced during the last VMworld edition with some interesting features, including a ROBO scenario.

But it was still limited in data services: better snapshot technology, better VMFS, but still some limits, with no deduplication, no compression, and no erasure coding at all.

Read on here

Designing Backup to replace Primary Storage


Good post by George Crump (thank you)

Users and application owners expect that the systems they use will never go down, and that if they do, they will be returned to operation quickly with little data loss. In our article “Designing Primary Storage to Ease the Backup Burden” we discussed how to architect a primary storage infrastructure that is able to help meet these challenges. We call this design Protected Primary Storage. But this design can be expensive, especially if it is applied to every application in the data center.

Read on here

Is Deduplication Useless on Archive Data?


Good post by George Crump (thank you)

One of the techniques that storage vendors use to reduce the cost of hard disk-based storage is deduplication. Deduplication is the elimination of redundant data across files. The technology is ideal for backup, since so much of a current copy of data is similar to the prior copy. The few extra seconds required to identify redundant data is worth the savings in disk capacity. Deduplication for primary storage is popular for all-flash arrays. While the level of redundancy is not as great, the premium price of flash makes any capacity savings important. In addition, given the excess performance of AFAs, the deduplication feature can often be added without a noticeable performance impact. There is one process, though, where deduplication provides little value: archive. IT professionals need to measure costs differently when considering a storage destination for archive.
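
To make the capacity argument concrete, here is a minimal, purely illustrative Python sketch of fixed-block deduplication (not any vendor’s actual implementation): each block is fingerprinted with a content hash, and only blocks that have not been seen before are stored.

import hashlib

def dedupe_blocks(data, block_size=4096):
    # Split the stream into fixed-size blocks, fingerprint each with SHA-256,
    # and store a block only the first time its digest is seen.
    store = {}    # digest -> unique block contents
    recipe = []   # ordered digests needed to reconstruct the original stream
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # redundant blocks are not stored again
        recipe.append(digest)
    return store, recipe

# Two nightly "backups" that differ only in their last block.
backup1 = b"A" * 16384 + b"B" * 4096
backup2 = b"A" * 16384 + b"C" * 4096

store_one, _ = dedupe_blocks(backup1)
store_two, _ = dedupe_blocks(backup1 + backup2)
print(len(store_one), "unique blocks for one copy;", len(store_two), "for two copies")

Real products use variable-length chunking and inline or post-process pipelines, but the capacity win rests on the same observation: successive backup copies share most of their blocks, whereas archive data, written once and rarely duplicated, gives the hash index little to find.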

Read on here

Today’s Storage: Same As It Ever Was

Good post by Stephen Foskett (thank you)

Data storage has always been one of the most conservative areas of enterprise IT. There is little tolerance for risk, and rightly so: Storage is persistent, long-lived, and must be absolutely reliable. Lose a server or network switch and there is the potential for service disruption or transient data corruption, but lose a storage array (and thus the data on it) and there can be serious business consequences.

Read on here

A Flash Storage Technical and Economic Primer

Good post by Scott D. Lowe (thank you)

Flash memory is a type of non-volatile memory storage which can be electrically erased and programmed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory that could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process; it needed to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the E2PROM was created. The E2PROM type of memory cell was electrically block erasable, but it was eight times the cost of the EPROM. The high cost of the E2PROM led to rejection from consumers who wanted the low cost of the EPROM coupled with the block erasable qualities of the E2PROM.

Read on here

SDS – The Missing Link – Storage Automation for Application Service Catalogs

Post by Rawlinson Rivera (thank you)

Automation technologies are a fundamental dependency for all aspects of the Software-Defined Data Center. The use of automation technologies not only increases the overall productivity of the software-defined data center, but it can also accelerate the adoption of today’s modern operating models.

In recent years, a subset of the core pillars of the software-defined data center has seen a great deal of improvement with the help of automation. The same can’t be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructures from delivering operational value and efficiencies similar to those available with the compute and network pillars.

VMware’s software-defined storage technologies and its Storage Policy-Based Management (SPBM) framework deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
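
As a purely conceptual sketch of what policy-based management buys you (this is not VMware’s SPBM API; the datastore names and capability keys below are hypothetical), the idea is that a policy declares capability requirements per application, and the framework resolves them to compliant storage:

# Toy model of policy-based storage placement; all names are illustrative.
datastores = {
    "vsanDatastore-01": {"failuresToTolerate": 1, "stripeWidth": 2, "checksum": True},
    "vsanDatastore-02": {"failuresToTolerate": 2, "stripeWidth": 1, "checksum": True},
    "legacy-nfs-01":    {"failuresToTolerate": 0, "stripeWidth": 1, "checksum": False},
}

# A policy states what the application needs, not where it should live.
gold_policy = {"failuresToTolerate": 2, "checksum": True}

def compliant(capabilities, policy):
    # A datastore is compliant if it meets or exceeds every requirement.
    for key, required in policy.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if offered != required:
                return False
        elif offered is None or offered < required:
            return False
    return True

matches = [name for name, caps in datastores.items() if compliant(caps, gold_policy)]
print("Datastores satisfying the gold policy:", matches)

The operational point is that an automation workflow or service catalog simply asks for “gold,” and the framework finds storage that satisfies it, instead of an administrator hand-picking datastores for every request.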

Read on here

HP StoreOnce SQL Plug-in: Faster, efficient backup for Microsoft SQL Server databases

Post by Ian Blatchford (thank you)

Today’s BURA Sunday blog continues conversations started in recent posts from colleagues in HP Storage, which have covered storage infrastructure and protection for optimizing Microsoft SQL Server deployments.

Parissa talked about how deploying SQL Server databases on HP 3PAR StoreServ 7450 would save you from the tradeoff between database performance and resiliency. All-flash storage means very low latency, and Peer Persistence between two distributed 3PAR StoreServ arrays provides resiliency against catastrophic site failure. Ashwin previewed the StoreOnce plug-in that enables DBAs to run direct backups from the SQL Server database to a StoreOnce appliance. He also described some of the benefits of using HP Data Protector for organizations that want to include SQL Server database protection as part of a centralized data protection process.

Read on here