VSAN 6.2 Upgrade – Failed to realign objects

Good post by Cormac Hogan (thank you)

A number of customers have reported experiencing difficulty when attempting to upgrade the on-disk format on VSAN 6.2. The upgrade to vSphere 6.0u2 goes absolutely fine; it is only when they try to upgrade the on-disk format, to use new features such as Software Checksum, and Deduplication and Compression, that they encounter this error.

Read on here

vSphere tackles the Hyperconverged Infrastructure World: VMware VSAN 6.2

Good Post by W. Curtis Preston (thank you)

VMware is releasing VSAN 6.2, the third major release of VSAN since its introduction in March of 2014. (Like other VMware companion products, the release number is tied to the vSphere release number it is associated with.) This release gives vSphere most if not all of the major features found in other hyperconverged infrastructure products.

Read on here


An overview of the new Virtual SAN 6.2 features

Good Post by Cormac Hogan (thank you)

If you were wondering why my blogging has dropped off in recent months, wonder no more. I’ve been fully immersed in the next release of VSAN. Today VMware has just announced the launch of VSAN 6.2, the next version of VMware’s Virtual SAN product. It is almost 2.5 years since we launched the VSAN beta at VMworld 2013, and almost 2 years to the day since we officially GA’ed our first release of VSAN way back in March 2014. A lot has happened since then, with 3 distinct releases in that 2 year period (6.0, 6.1 and now 6.2). For me the product has matured significantly in that 2 year period, with 3,000 customers and lots of added features. VSAN 6.2 is the most significant release we have had since the initial launch.

Read on here

More data services in VSAN 6.2

Good Post by Andrea Mauro (thank you)

As announced some months ago, the new Virtual SAN (VSAN 6.2) will add new data services, making this solution richer than before. Version 6.1 was announced during the last VMworld edition with some interesting features, including a ROBO scenario.

But it was still limited in data services: better snapshot technology and better VMFS, but still some limits, and no deduplication, no compression, no erasure coding at all.

Read on here

A Flash Storage Technical and Economic Primer

Good post by Scott D. Lowe (thank you)

Flash memory is a type of non-volatile memory storage which can be electrically erased and programmed. What was the event that precipitated the introduction of this new storage medium? Well, it started in the mid-1980s, when Toshiba was working on a project to create a replacement for the EPROM, a low-cost type of non-volatile memory which could be erased and reprogrammed. The problem with the EPROM was its cumbersome erasure process; it needed to be exposed to an ultraviolet light source to perform a complete erasure. To overcome this challenge, the E2PROM was created. The E2PROM type of memory cell was electrically block erasable, but it was eight times the cost of the EPROM. The high cost of the E2PROM led to rejection from consumers, who wanted the low cost of the EPROM coupled with the erasable qualities of the E2PROM.

Read on here

Flurry of new data storage technology can bring confusion

Post by Rich Castagna (thank you)

Confusion reigns in the storage world, as new data storage technology tries to find its place in the data center.

In a blizzard, it’s hard to see a single snowflake. And with the avalanche of new data storage technology that has swirled around data centers the past couple of years, it can be pretty tough to pick out that exemplary piece of engineering innovation and dexterity. They say no two snowflakes are alike (how “they” came to that conclusion, I’ll never know) and, similarly, this data storage maelstrom is marked by a staggering number of new products and product categories. In other words, it’s tough to figure out.

Read on here

A closer look at SpringPath

Good post by Cormac Hogan (thank you)

Another hyper-converged storage company has just emerged out of stealth. Last week I had the opportunity to catch up with the team from SpringPath (formerly StorVisor), based in Silicon Valley. The company has a bunch of ex-VMware folks on-board, such as Mallik Mahalingam and Krishna Yadappanavar. Mallik and Krishna were both involved in a number of I/O related initiatives during their time at VMware. Let’s take a closer look at their new hyper-converged storage product.

Read on here

Data Efficiency: Compression

Good post by Jesse St. Laurent (thank you)

In a previous post, we referenced the importance of delivering data efficiency as a core part of the solution and not as a bolt on. The first data efficiency technology we will review is compression. There are two types of compression: inline and post-processing. This post will review both types. I have attempted to create formulas to help clarify the differences and make it simpler to compare compression options, as well as other data efficiency technologies. Let’s set the formulas aside for a moment and come back to them later with a discussion about how they impact hyperconverged infrastructure.
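To make the inline vs. post-processing distinction concrete, here is a minimal Python sketch (not from the referenced post; the class and function names are illustrative only). Inline compression shrinks data on the write path before it reaches storage; post-processing stores data at full size first and compresses it later in a background pass.

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Ratio of original size to compressed size (>1.0 means savings)."""
    return len(data) / len(zlib.compress(data))

def inline_write(data: bytes) -> bytes:
    """Inline: compress before the data ever hits the storage device."""
    return zlib.compress(data)

class PostProcessStore:
    """Post-processing: raw writes land first; a later pass compresses them."""
    def __init__(self):
        self.blocks = []

    def write(self, data: bytes):
        self.blocks.append(data)  # full-size write hits storage first

    def background_compress(self):
        # Runs outside the I/O path, e.g. during idle periods.
        self.blocks = [zlib.compress(b) for b in self.blocks]

sample = b"hyperconverged infrastructure " * 100
print(f"ratio: {compression_ratio(sample):.1f}x")
```

The trade-off the formulas in the post try to capture: inline compression spends CPU on every write but never stores uncompressed data, while post-processing keeps write latency low at the cost of temporarily needing full-size capacity.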

Read on here

The State of Deduplication in 2015

Good post by George Crump (thank you)

At its core, deduplication is an enabling technology. First, it enabled disk based backup devices to become the primary backup target in the data center. Now it promises to enable the all-flash data center by driving down the cost of flash storage. Just as deduplication became the table stake for backup appliances, it is now a required capability for flash and hybrid primary storage.
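The core mechanism behind deduplication can be sketched in a few lines of Python (a simplified illustration, not any vendor's implementation; fixed-block chunking and the 4 KB block size are assumptions). Each block is identified by a content hash, and a block already present in the store is never written again:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

class DedupStore:
    """Fixed-block deduplication: identical blocks are stored only once."""
    def __init__(self):
        self.store = {}    # content digest -> block payload
        self.logical = 0   # bytes written by clients

    def write(self, data: bytes):
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.store.setdefault(digest, block)  # skip if already stored
            self.logical += len(block)

    def physical_bytes(self) -> int:
        return sum(len(b) for b in self.store.values())

s = DedupStore()
s.write(b"\x00" * BLOCK_SIZE * 8)  # eight identical zero-filled blocks
print(s.logical, s.physical_bytes())  # 32768 logical, 4096 physical
```

This is why deduplication changes the economics of flash: the more redundant the workload (backups, VDI clones), the higher the logical-to-physical ratio, and the lower the effective cost per usable gigabyte.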

Read on here