The Storage Requirements for 100% Virtualization

Post by George Crump (thank you)

After a rapid move from test to production, virtualization of existing servers in many companies seems to slow down. While it is true that most data centers have adopted a virtualize-first philosophy, getting those older, mission-critical workloads virtualized seems to be a thorny issue. These applications are often at the heart of an organization’s revenue or customer interaction and tend to be unpredictable in the resources they require. This is especially true when it comes to storage and networking.

Read on here

Software-Defined Storage in OpenStack

Good post by Rawlinson Rivera (thank you)

It’s time to expand and showcase the strength of software-defined storage at the upcoming OpenStack Summit. Take a look at the abstract of the session in which I’m scheduled to participate as a speaker, and let’s get that bad boy voted in and into the schedule.

Getting the Bang for your Buck with Software-Defined Storage in OpenStack

OpenStack has become the standard infrastructure consumption layer for a variety of applications. These applications have different storage requirements and being able to provide a storage solution that matches these requirements is critical to ensure the adoption of OpenStack as the cloud platform of choice across a wide array of applications.

Read on here

Microsegmentation: How VMware Addresses the Container Security Issue

Post by Scott M. Fulton III (thank you)

Easily the most astonishing result from ClusterHQ’s most recent State of Container Usage survey [PDF] was that nearly three-fourths of 229 IT professional respondents said their data centers are running containers in a hypervisor-virtualized environment — that is to say, a container environment inside the safety of a separate virtualization layer. That figure was bolstered by about 61 percent of respondents saying that security remained a barrier to their data centers’ adoption of containers in production.

Read on here

Docker: What Do Storage Pros Need to Know?

Good post by George Crump (thank you)

Docker was created to solve the problems that organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer approach to the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest of the applications are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.

Read on here

The Benefits of a Cloud Integrated Hyper-converged Architecture

Post by George Crump (thank you)

Hyper-converged architectures (HCA) consolidate compute, storage and networking onto a single server and then, through software, aggregate these servers to create a shared pool of resources. These resources can then be allocated to virtual machines (VMs) based on their performance and capacity demands. The goal is to simplify the purchasing, implementation and operation of the data center by consolidating, or converging, it. The logical next step for HCA is to extend its capabilities to the cloud, allowing data centers of all sizes to achieve greater flexibility and resilience from disaster.

Read on here

It’s time to “VMware” Storage

Good post by George Crump (thank you)

Before hypervisors like VMware, Hyper-V and KVM came to market, data centers had few options when it came to managing the growth of their server infrastructure. They could buy one big server that ran multiple applications, which, while it simplified operations and support, meant that one application was at the mercy of the other applications in terms of reliability and performance. Alternatively, IT professionals could buy a server for each application as it came online, but this sacrificed operational efficiency and IT budget to the demands of fault and performance isolation. Until hypervisors came to market, the latter choice was considered the best practice.

Read on here

All-Flash Arrays vs. Performance Management

Post by George Crump (thank you)

Optimizing storage performance is almost an art. One of the earliest papers I wrote for Storage Switzerland was “Visualizing SSD Readiness”, which articulated how to determine if your application could benefit from implementing solid state disk (SSD). It also discussed how to determine which files of your application should be put on the SSD. Remember that in 2009 no one could imagine putting an entire application on SSD, let alone an entire data center! Now though, thanks to all-flash arrays, we can. But does that mean we can abandon performance management as a discipline?

Read on here

The Future of Backup is an Architecture Not an Application

Good post by George Crump (thank you)

While the applications that protect data have vastly improved over the last 20 years, they still often struggle to keep up with the technical challenges of data growth, shrinking backup and recovery windows and demands for greater disaster resilience. At the same time, user expectations have also risen.

Read on here

IBM System Storage DS8000 Easy Tier Application

The IBM Easy Tier Application is part of the overall Easy Tier offering. Initially, the overall Easy Tier function was designed to automate data placement throughout the storage system’s disk pools. It enables the system, automatically and without disruption to applications, to relocate data (at the extent level) across up to three drive tiers. The process is fully automated. Easy Tier also automatically rebalances extents among ranks within the same tier, removing workload skew between ranks, even within homogeneous and single-tier extent pools.

Introduced with the DS8000 Licensed Machine Code (LMC) 7.7.10.xx.xx, the Easy Tier Application feature enables storage administrators to manually assign distinct application volumes to a particular storage tier in the Easy Tier pool, disregarding Easy Tier’s advanced data migration function described above. Easy Tier Application provides an additional flexible option for clients that want certain applications to remain on a particular tier to meet performance and cost requirements.

This paper is aimed at those professionals who want to understand the Easy Tier Application concept and its underlying design. It also provides guidance and practical illustrations.

How Flash Changes System and Application Design


Wikibon Chief Analyst Dave Vellante examines how the wave of flash storage impacts the design of infrastructure across the compute and storage layers.