Post by George Crump (thank you)
High Performance Computing (HPC) is a unique environment that places special demands on the storage infrastructure. These environments typically have dozens, if not hundreds, of compute nodes, each generating its own sequential I/O stream. When those streams converge on the shared storage system, the combined workload is effectively randomized. This randomized-sequential workload is exhausting the capabilities of legacy NAS architectures, leading HPC storage designers to seek an alternative: a storage architecture that delivers high performance, scales to very large environments, and is cost effective.
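To see why many sequential streams look random to shared storage, consider a small simulation. The sketch below is purely illustrative (the node counts, block ranges, and helper names are invented for this example): each simulated node issues perfectly sequential block offsets, but once the streams are interleaved at the storage layer, almost no back-to-back requests are contiguous.

```python
import random

def node_stream(node_id, start_block, num_requests):
    # Each compute node issues a purely sequential run of block offsets.
    return [(node_id, start_block + i) for i in range(num_requests)]

def merge_at_storage(streams, seed=0):
    # The shared storage sees one interleaved queue: at each step an
    # arbitrary node's next request arrives.
    rng = random.Random(seed)
    cursors = [0] * len(streams)
    merged = []
    live = [i for i, s in enumerate(streams) if s]
    while live:
        i = rng.choice(live)
        merged.append(streams[i][cursors[i]])
        cursors[i] += 1
        if cursors[i] == len(streams[i]):
            live.remove(i)
    return merged

def sequential_fraction(requests):
    # Fraction of back-to-back requests whose block offsets are contiguous.
    pairs = zip(requests, requests[1:])
    hits = sum(1 for (_, a), (_, b) in pairs if b == a + 1)
    return hits / max(len(requests) - 1, 1)

# 64 nodes, each reading 200 blocks from its own region of the volume.
streams = [node_stream(n, n * 10_000, 200) for n in range(64)]
merged = merge_at_storage(streams)

# Every per-node stream is 100% sequential...
assert all(sequential_fraction(s) == 1.0 for s in streams)
# ...but the merged stream the storage system sees is almost entirely random.
print(f"sequential fraction at storage: {sequential_fraction(merged):.3f}")
```

With 64 interleaved nodes, the chance that two consecutive requests at the storage layer come from the same node is roughly 1 in 64, so the merged stream is close to fully random even though every client is strictly sequential. This is the access pattern that defeats the prefetch and caching assumptions legacy NAS architectures rely on.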
Read on here