New Microsoft Exchange 2013 on Virtual SAN 6.1 Reference Architecture

Good post by Rawlinson Rivera (thank you)

A new Microsoft Exchange 2013 on Virtual SAN 6.1 reference architecture is now available on the VMware Virtual SAN product Resource page.

This new VSAN-Exchange reference architecture walks through the validation of Virtual SAN’s ability to support Microsoft Exchange 2013 in a high-IOPS mailbox configuration with Exchange Database Availability Groups (DAGs). The reference architecture is based on a resilient design that covers VMware vSphere clustering technology and Exchange DAGs, as well as the data protection and recoverability design of Exchange Server 2013 with vSphere Data Protection and vSphere Site Recovery Manager.

Read on here

The Storage Requirements for 100% Virtualization

Post by George Crump (thank you)

After a rapid move from test to production, virtualization of existing servers in many companies seems to slow down. While it is true that most data centers have adopted a virtualize-first philosophy, getting those older, mission-critical workloads virtualized remains a thorny issue. These applications are often at the heart of an organization’s revenue or customer interaction and tend to be unpredictable in the resources they require. This is especially true when it comes to storage and networking.

Read on here

What is NVMe? And what does it mean for PCIe-SSD?

Good post by George Crump (thank you)

There are two constants in data center storage: the need for greater performance and the need for greater capacity. Flash-based storage devices have become the go-to option to address the first challenge, but application owners and users quickly move from initial euphoria with flash performance to demanding more. Since the NAND flash itself is essentially the constant in the equation, the surrounding infrastructure has to evolve to extract optimal performance from the technology. But achieving maximum performance often leads to proprietary architectures and designs. NVMe (Non-Volatile Memory Express) is a new industry standard that enables data centers to realize the full potential of flash without compatibility headaches.

Read on here

How can SD cards be faster than SSDs?

Good post by Robin Harris (thank you)

SD cards – the postage-stamp-sized flash cards in your camera – have no internal cache, little internal bandwidth, tiny CPUs, and slow I/O buses. But recent tests found that SD cards could be 200 times faster than an SSD. How???

Read on here

Throwing hardware at a software problem

Good post by Robin Harris (thank you)

Maybe software will eat the world, but sometimes the physical world gives software indigestion. That fact was evident at the Flash Memory Summit this month.

As mentioned in Flash slaying the latency dragon?, several companies were showing remote storage accesses – using NVMe and hopped-up networks – in the 1.5 to 2.5 µsec range. That’s roughly 500 times better than the ≈1 msec averages seen on today’s flash storage.
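A quick back-of-the-envelope check of that ratio (my own arithmetic sketch in Python, not from the original post; the 2 µsec figure is just the midpoint of the quoted range):

# Rough sanity check of the latency comparison quoted above.
remote_nvme_latency_us = 2.0          # midpoint of the quoted 1.5-2.5 microsecond range
typical_flash_latency_us = 1000.0     # ~1 millisecond, expressed in microseconds

speedup = typical_flash_latency_us / remote_nvme_latency_us
print(f"~{speedup:.0f}x lower latency")   # prints ~500x, matching the "roughly 500 times" claim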

Read on here

Samsung announces 16TB SSD

Post by Robin Harris (thank you)

Not just the world’s highest-capacity SSD, but the world’s highest-capacity drive of any type. The PM1633 uses Samsung’s new 48-layer V-NAND, itself a technical tour de force, and represents a new thrust in flash storage beyond performance: capacity.

Read on here

Native PCI Express Back-end Interconnect in FlashArray//m

Good post by Roland Dreier (thank you)

Many storage users are familiar with ALUA, or Asymmetric Logical Unit Access. This describes storage where some paths don’t work at all or deliver lower performance because of standby controllers, volumes owned by a particular controller, or other architectural reasons. The Pure Storage FlashArray provides symmetric access to storage: any I/O to any volume on any port always gets the same performance.
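To make that distinction concrete, here is a small illustrative Python sketch (the port names, path states, and latency figures are invented for illustration and are not taken from the post): with ALUA, an initiator has to confine I/O to the optimized path group, while on a symmetric array every path is equally usable.

# Toy model of ALUA vs. symmetric path access (illustrative only, not vendor code).

alua_paths = [
    {"port": "CT0.FC0", "state": "active/optimized",     "latency_us": 500},
    {"port": "CT1.FC0", "state": "active/non-optimized", "latency_us": 1500},
]

symmetric_paths = [
    {"port": "CT0.FC0", "state": "active/optimized", "latency_us": 500},
    {"port": "CT1.FC0", "state": "active/optimized", "latency_us": 500},
]

def usable_paths(paths):
    # An ALUA-aware initiator restricts I/O to the optimized path group;
    # on a symmetric array that group is simply "all paths".
    return [p for p in paths if p["state"] == "active/optimized"]

print(len(usable_paths(alua_paths)), "of", len(alua_paths), "ALUA paths carry full-speed I/O")
print(len(usable_paths(symmetric_paths)), "of", len(symmetric_paths), "symmetric paths carry full-speed I/O")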

Read on here


Docker: What do Storage Pros need to know?

Good post by George Crump (thank you)

Docker was created to solve the problems that organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer approach to the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.

Read on here


Thinking different about storage

Good post by Enrico Signoretti (thank you)

Over the last few months I have had several interesting briefings with storage vendors. Now I need to stop, connect the dots, and think about what could come next. It’s incredible to see how rapidly the storage landscape is evolving and becoming much smarter than in the past. This will change the way we store, use, and manage data and, of course, the design of future infrastructures.

Read on here

Data Efficiency: Compression

Good post by Jesse St. Laurent (thank you)

In a previous post, we referenced the importance of delivering data efficiency as a core part of the solution and not as a bolt-on. The first data efficiency technology we will review is compression. There are two types of compression: inline and post-processing. This post will review both types. I have attempted to create formulas to help clarify the differences and make it simpler to compare compression options, as well as other data efficiency technologies. Let’s set the formulas aside for a moment and come back to them later with a discussion about how they impact hyperconverged infrastructure.
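As a rough illustration of the difference (a minimal Python sketch using zlib; the original post’s formulas are not reproduced here): inline compression reduces data in the write path before it reaches persistent media, while post-processing writes the raw data first and compresses it later in the background.

import zlib

data = b"highly compressible log line\n" * 10_000   # stand-in for application writes

def inline_write(payload: bytes) -> bytes:
    # Inline: compress in the write path, so only the reduced form ever lands on media.
    return zlib.compress(payload)

def post_process(stored_raw: bytes) -> bytes:
    # Post-process: the raw data is written first at full size; a background
    # task compresses it later, so capacity is reclaimed only after it runs.
    return zlib.compress(stored_raw)

inline_stored = inline_write(data)
raw_stored = data                       # post-process path: initial footprint is uncompressed
compacted = post_process(raw_stored)

print(f"raw: {len(data)} bytes, inline: {len(inline_stored)} bytes, after post-process: {len(compacted)} bytes")

The trade-off the post goes on to explore follows from this: inline spends CPU on every write but never stores the full-size data, while post-processing defers that cost and temporarily consumes full capacity until the background pass runs.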

Read on here