Protecting NFS storage on vSphere using Veeam virtual proxies

Good post by Luca Dell’Oca (thank you)

Since Patch 3 of Veeam Backup & Replication v7 (build 7.0.0.839) there has been a new mode to manage hotadd backups over NFS, available via a registry key. Per the original release notes:

“Intelligent load balancing can now be configured to give preference to backup proxy located on the same host using the EnableSameHostHotaddMode (DWORD) registry value.”

I’ve kept this post on hold for a while, since with the upcoming v9, DirectNFS will be a much better option than virtual proxies for backing up virtual machines running on NFS shares. But there are situations where this key may still be needed, for example when people still want to use virtual proxies against NFS. So, what is this key, and what can you do with it?
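For reference, enabling the mode comes down to creating the DWORD value named in the release notes on the backup server. A sketch of the corresponding .reg fragment follows, assuming the commonly documented Veeam registry location; verify the exact key path for your version before applying it:

```
Windows Registry Editor Version 5.00

; Assumed key path for Veeam Backup & Replication v7 — confirm against
; Veeam's documentation for your build before importing.
[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
"EnableSameHostHotaddMode"=dword:00000001
```

Setting the value to 1 tells the load balancer to prefer a proxy on the same host as the VM being backed up; restarting the Veeam services is typically needed for registry values to take effect.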

Read on here

VMware Metro Storage Cluster Overview

Good post by Derek Hennessy (thank you)

VMware Metro Storage Cluster

VMware Metro Storage Cluster (vMSC) allows vCenter to stretch across two data centers in geographically dispersed locations. In normal circumstances, in vSphere 5.5 and below at least, vCenter would be deployed in Linked Mode so that two vCenters could be managed as one. However, with vMSC it’s possible to have one vCenter manage all resources across two sites and leverage the underlying stretched storage and networking infrastructure. I’ve written previous blog posts on NetApp MetroCluster describing how a stretched storage cluster is spread across two disparate data centers. I’d also recommend reading a previous post on vMSC by Paul Meehan over on www.virtualizationsoftware.com. The idea behind this post is to provide the VMware view for the MetroCluster posts and to give a better idea of how MetroCluster storage links into virtualization environments.

Read on here

VMware VSAN… and the missed opportunity

Good post by Enrico Signoretti (thank you)

One of the most interesting (among the few) announcements at VMworld was about VSAN 6.1. The product is quickly maturing, and new features are being added version after version (here’s what’s new). And the product promises to get even better (with erasure coding and dedupe coming in the next version!).

Read on here

Hadoop Storage: DAS vs. Shared

Post by George Crump (thank you)

Hadoop is a software solution developed to solve the challenge of very rapidly analyzing vast, often disparate data sets. Also known as big data, the results of these analytics, especially when produced quickly, can significantly improve an organization’s ability to solve problems, create new products, and even cure diseases. One of the key tenets of Hadoop is to bring the compute to the storage instead of the storage to the compute. The fundamental belief is that the network between compute and storage is too slow, impacting time to results.

Read on here

Getting to know the Network Block Device Transport in VMware vStorage APIs for Data Protection

Post by Abdul Rasheed (thank you)

When you back up a VMware vSphere virtual machine using the vStorage APIs for Data Protection (VADP), one of the common ways to transmit data from the VMware datastore to the backup server is the Network Block Device (NBD) transport. NBD is a Linux-style module that attaches to the VMkernel and makes the snapshot of the virtual machine visible to the backup server as if the snapshot were a block device on the network. While NBD is quite popular and easy to implement, it is also the least understood transport mechanism in VADP-based backups.
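To see why the transport choice matters, a back-of-the-envelope sketch: since NBD moves snapshot data over the VMkernel management interface, the speed of that link bounds the backup window. The numbers below are illustrative assumptions of mine, not figures from the article:

```python
# Rough backup-window estimate for NBD transport: data flows over the
# VMkernel management network, so link speed caps throughput.

def transfer_hours(data_gb: float, link_gbit: float, efficiency: float = 0.9) -> float:
    """Hours to move data_gb over a link of link_gbit Gbit/s at the given efficiency."""
    throughput_gb_per_s = link_gbit / 8 * efficiency  # usable GB/s
    return data_gb / throughput_gb_per_s / 3600

# 1 TB of snapshot data over a 1 GbE vs a 10 GbE management link:
print(round(transfer_hours(1000, 1), 1))   # → 2.5 (hours)
print(round(transfer_hours(1000, 10), 1))  # → 0.2 (hours)
```

This is why NBD is often fine for small environments on 1 GbE but pushes larger shops toward 10 GbE management networks or alternative transports such as HotAdd or SAN.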

Read on here

VVOLs are more than just “per-VM” storage volumes

Good post by Luca Dell’Oca (thank you)

As I’ve been closely following the growth and evolution of this new technology for vSphere environments (and I’m still in search of a solution to play with VVOLs in my lab), I found an article in the blogosphere, and some additional comments on Twitter, that made me rethink a bit the real value of VVOLs.

Is it just “per-VM” storage?

The original article comes from one of my favorite startups, Coho Data. In it, Suzy Visvanathan explains how Coho, being an NFS-based storage platform, doesn’t really need to support VVOLs.

Read on here

SDS – The Missing Link – Storage Automation for Application Service Catalogs

Post by Rawlinson Rivera (thank you)

Automation technologies are a fundamental dependency to all aspects of the Software-Defined Data center. The use of automation technologies not only increases the overall productivity of the software-defined data center, but it can also accelerate the adoption of today’s modern operating models.

In recent years, a subset of the core pillars of the software-defined data center has seen a great deal of improvement with the help of automation. The same can’t be said about storage. The lack of management flexibility and capable automation frameworks has kept storage infrastructures from delivering operational value and efficiencies similar to those available with the compute and network pillars.

VMware’s software-defined storage technologies and its storage policy-based management framework (SPBM) deliver the missing piece of the puzzle for storage infrastructure in the software-defined data center.
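As a rough illustration of the SPBM idea (a toy sketch, not the VMware API): a policy lists required capabilities, and placement falls out of matching them against what each datastore advertises. All names and numbers below are made up for illustration:

```python
# Toy model of storage policy-based management: instead of an admin
# hand-picking LUNs, a VM's policy is matched against the capabilities
# each datastore advertises.

datastores = {
    "gold-ds":   {"replication": True,  "iops_limit": 10000},
    "bronze-ds": {"replication": False, "iops_limit": 1000},
}

def compliant(policy: dict, caps: dict) -> bool:
    """A datastore is compliant if it meets every capability the policy requires."""
    return all(
        caps.get(k) == v if isinstance(v, bool) else caps.get(k, 0) >= v
        for k, v in policy.items()
    )

policy = {"replication": True, "iops_limit": 5000}
matches = [name for name, caps in datastores.items() if compliant(policy, caps)]
print(matches)  # → ['gold-ds']
```

The point of the model: the consumer states intent (replicated, at least 5000 IOPS) and the framework resolves placement, which is what makes storage automatable alongside compute and network.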

Read on here

Why Virtual Volumes?

Post by Andrew Sullivan (thank you)

How many times in the last 3-4 years have you heard “Virtual Volumes”, “VVols”, “Storage Policy Based Management”, or any of the other terms associated with VMware’s newest software-defined storage technology?  I first heard about VVols in 2011, when I was still a customer, and the concept of no longer managing my virtual machine datastores, but rather simply consuming storage as needed with features applied as requested, was fascinating and exciting to me.

Read on here

The reverse wars – DAS vs NAS vs SAN

Post by Chin-Fah Heoh (thank you)

It has been quite an interesting two decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane, as the channel-like throughput of the Fibre Channel protocol, coupled with FC’s million-device addressing, obliterated parallel SCSI, which could only handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.
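The addressing gap is easy to quantify: Fibre Channel uses a 24-bit address (the FC_ID), which is where the “million-device” figure comes from. A quick check:

```python
# Fibre Channel's 24-bit FC_ID address space vs. a wide parallel SCSI bus.
fc_addresses = 2 ** 24  # 24-bit address → 16,777,216 possible addresses
scsi_devices = 16       # device IDs on a wide parallel SCSI bus

print(fc_addresses)                    # → 16777216
print(fc_addresses // scsi_devices)    # → 1048576 (roughly a million-fold gap)
```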

Read on here

Qumulo emerges with data-aware scale-out NAS

Good post by Dave Raffo (thank you)

Isilon founding engineers launch Qumulo Core, software designed to manage scale-out NAS with real-time analytics.

Qumulo came out of stealth today with what it describes as data-aware scale-out NAS with real-time analytics built in.

Qumulo Core was developed by many of the same developers who created Isilon scale-out NAS. Qumulo founders Peter Godman, Aaron Passey and Neal Fachan were responsible for dozens of Isilon patents. EMC acquired Isilon for $2.25 billion in 2010.

Read on here