GPFS Native RAID

GPFS Native RAID integrates the functionality of an advanced storage
controller into the GPFS NSD server. Unlike an external storage controller,
where configuration, LUN definition, and maintenance are beyond the control
of GPFS, GPFS Native RAID takes ownership of a JBOD array to directly match
LUN definition, caching, and disk behavior to GPFS file system
requirements.

The features of GPFS Native RAID include:

- Software RAID: GPFS Native RAID runs on standard AIX disks in a
dual-ported JBOD array; it requires no external RAID storage controllers
or other custom hardware RAID acceleration.
- Declustering: GPFS Native RAID distributes client data, redundancy
information, and spare space uniformly across all disks of a JBOD. This
distribution reduces the rebuild (disk failure recovery process) overhead
compared to conventional RAID (see the declustering sketch after this list).
- Checksum: An end-to-end data integrity check, using checksums and
version numbers, is maintained between the disk surface and NSD clients.
The checksum algorithm uses version numbers to detect silent data
corruption and lost disk writes (see the checksum sketch after this list).
- Data redundancy: GPFS Native RAID supports highly reliable
2-fault-tolerant and 3-fault-tolerant Reed-Solomon based parity codes as
well as 3-way and 4-way replication (see the capacity comparison after
this list).
- Large cache: A large cache improves read and write performance,
particularly for small I/O operations.
- Arbitrarily sized disk arrays: The number of disks is not restricted to a
multiple of the RAID redundancy code width, which allows flexibility in the
number of disks in the RAID array.
- Multiple redundancy schemes: One disk array can support vdisks with
different redundancy schemes, for example Reed-Solomon and replication
codes.
- Disk hospital: A disk hospital asynchronously diagnoses faulty disks and
paths, and requests disk replacements based on past health records.
- Automatic recovery: GPFS Native RAID recovers seamlessly and
automatically from primary server failure.
- Disk scrubbing: A disk scrubber automatically detects and repairs latent
sector errors in the background.
- Familiar interface: Standard GPFS command syntax is used for all
configuration commands, including those for maintaining and replacing
failed disks.
- Flexible hardware configuration: Supports JBOD enclosures with multiple
disks physically mounted together on removable carriers.
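
The declustering sketch referenced above is a toy model, not GNR's actual placement algorithm (which uses deterministic, balanced layouts rather than random sampling); all parameters are illustrative assumptions. It shows why spreading every track's strips across all disks of the JBOD lets a rebuild draw on nearly every surviving disk instead of one small RAID group:

```python
import random

def decluster(num_disks: int, num_tracks: int, strips_per_track: int):
    """Toy placement: each track's strips go to a random set of disks, so
    data, redundancy, and spare space end up spread roughly uniformly."""
    return [random.sample(range(num_disks), strips_per_track)
            for _ in range(num_tracks)]

def rebuild_helpers(placement, failed_disk: int) -> int:
    """How many surviving disks hold strips needed to rebuild the failed disk."""
    helpers = set()
    for disks in placement:
        if failed_disk in disks:
            helpers.update(d for d in disks if d != failed_disk)
    return len(helpers)

# Illustrative numbers: a 48-disk JBOD, 10,000 tracks, 11 strips per track
# (e.g. an 8-data + 3-parity code). Declustered, the rebuild of one disk is
# shared by nearly all 47 survivors; in a conventional 11-disk RAID group it
# would fall entirely on the 10 other disks of that group.
placement = decluster(num_disks=48, num_tracks=10_000, strips_per_track=11)
print(rebuild_helpers(placement, failed_disk=0))   # typically prints 47
```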
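
The checksum sketch referenced above is a minimal illustration of the idea, not GNR's on-disk format: pairing a checksum with a version number lets a reader distinguish silent corruption from a lost (dropped) disk write. CRC32 stands in here for whatever checksum the product actually uses.

```python
import zlib
from dataclasses import dataclass

@dataclass
class StoredBlock:
    data: bytes
    version: int    # incremented on every successful write
    checksum: int   # covers both the data and the version number

def write_block(data: bytes, new_version: int) -> StoredBlock:
    checksum = zlib.crc32(data + new_version.to_bytes(8, "little"))
    return StoredBlock(data, new_version, checksum)

def read_block(block: StoredBlock, expected_version: int) -> bytes:
    stored = zlib.crc32(block.data + block.version.to_bytes(8, "little"))
    if stored != block.checksum:
        raise IOError("checksum mismatch: silent data corruption on the media")
    if block.version != expected_version:
        # Data and checksum agree, but they belong to an older write:
        # the disk dropped the newer write and returned stale data.
        raise IOError("version mismatch: lost or misdirected disk write")
    return block.data
```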
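
The capacity comparison referenced above gives a rough feel for the cost of each redundancy scheme; the 8-strip data width for the Reed-Solomon codes is an assumption used only for illustration.

```python
# Fraction of raw capacity available for user data under each scheme
# (data strips divided by total strips per track).
codes = {
    "8+2p Reed-Solomon (2-fault-tolerant)": 8 / 10,
    "8+3p Reed-Solomon (3-fault-tolerant)": 8 / 11,
    "3-way replication (2-fault-tolerant)": 1 / 3,
    "4-way replication (3-fault-tolerant)": 1 / 4,
}
for name, efficiency in codes.items():
    print(f"{name}: {efficiency:.0%} of raw capacity holds user data")
```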

Get the “GPFS Native RAID Administration and Programming Reference” here

UPDATE (29.2.2012):

Get the slides of the LISA talk on GNR here and watch the video below for an explanation of the slides. (Thanks to Veera Deenadhayalan from the IBM Almaden Research Center for providing this information.)

6 Comments

  1. Great post….

  2. Ahren Simmons / November 27, 2012

    Does this mean you no longer need a shared disk solution (i.e. SAN) for GPFS? Or do all Nodes/NSDs still need shared disk access to each JBOD?

  3. Hi Ahren,
    Yes, this would be done by GPFS itself (no shared disk access needed). Please be aware that this is still work in progress.
    -Roger

