Non-Volatile Memory Express (NVMe) technology is starting to hit the market. It provides a brand new data transport protocol to replace aging storage protocols developed in the '90s, which bottlenecked processors by forcing I/O through shallow command queues. NVMe increases the speed of data transfer and removes these bottlenecks, expanding queuing capability by several orders of magnitude.
“HPC could sometimes be held back by slow I/O performance, which caused expensive compute resources and memory to be consumed,” said Greg Schulz, an analyst with Server and StorageIO Group. “With HPC emerging from a classic compute-intensive model to broader high productivity computing, the fact that NVMe reduces wait time while increasing the amount of effective work enables higher profitability compute.”
High performance computing (HPC), at its core, seeks to minimize or eliminate reliance on disk-based storage in order to crunch data at a faster pace. To do this, HPC architects must employ the fastest multi-core processors on the market, preferably backed by large quantities of memory.
In-memory architectures are recent innovations that provide the highest level of performance to HPC applications. If cost were no object, everything would be done in memory. Because it isn't, many implementers augment processing and memory with flash, either in the form of solid state drives (SSDs) or by attaching flash directly to the processor over PCIe.
PCIe itself provides ample bandwidth and low latency for HPC, with PCIe Gen 3 delivering about 7.87 Gbps of effective bandwidth per lane. The roadblock is the protocol layered on top of it: before NVMe, PCIe flash relied on the aging AHCI/SATA and SAS protocols, which were designed for relatively sluggish hard drives. As a result, long queues develop when many cores issue I/O at once. SATA allows for only one command queue that can hold up to 32 commands. NVMe, on the other hand, supports 65,536 (64K) queues with 64K commands per queue, enabling far more work to be completed in a smaller timeframe.
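The scale of that queuing gap can be illustrated with a quick back-of-the-envelope calculation using the figures above (a rough sketch of command capacity only, not a measurement of real-world throughput):

```python
# Illustrative comparison of outstanding-command capacity, using the
# queue limits cited above for SATA/AHCI and NVMe.

AHCI_QUEUES = 1              # SATA/AHCI: a single command queue
AHCI_QUEUE_DEPTH = 32        # up to 32 commands in that queue

NVME_QUEUES = 64 * 1024      # NVMe: up to 64K I/O queues
NVME_QUEUE_DEPTH = 64 * 1024 # up to 64K commands per queue

sata_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH   # 32 commands in flight
nvme_outstanding = NVME_QUEUES * NVME_QUEUE_DEPTH   # 4,294,967,296 commands

print(f"SATA/AHCI outstanding commands: {sata_outstanding:,}")
print(f"NVMe outstanding commands:      {nvme_outstanding:,}")
print(f"Ratio: {nvme_outstanding // sata_outstanding:,}x")
```

The theoretical capacities are roughly eight orders of magnitude apart; in practice, the benefit is that each core on a multi-core processor can be given its own deep queue rather than contending for a single shallow one.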
“The storage I/O capabilities of flash can now be fed across PCIe faster to enable modern multi-core processors to complete more useful work in less time,” said Schulz.
According to non-profit vendor collective NVM Express, Inc., NVMe is an "optimized, high-performance, scalable host controller interface with a streamlined register interface and command set designed for non-volatile memory based storage." The first NVMe products to arrive on the shelves have demonstrated up to six times greater 4KB random and sequential read/write performance than SATA hard disks and SATA-based solid state drives (SSDs). This is a big deal for any HPC application that has to interact with any form of storage, whether PCIe-based flash, SSD or hard disk drive (HDD).
“NVMe helps the HPC community avoid clunky workarounds that tried to leverage many SATA (or SAS) SSDs, or multiple HDDs in parallel configured for performance,” said Schulz. “Therefore, NVMe allows more I/Os and larger I/Os to be processed concurrently and in parallel while off-loading CPUs to do more actual compute.”
Others assert that NVMe provides the HPC community with one more tool as it perpetually pushes the limits of available technology.
“With the NVMe standard, HPC applications are provided with unprecedented IOPS in a simplified interface and with lower power consumption,” said Janene Ellefson, enterprise SSD product marketing manager at Micron Technology. “The key component that NVMe brings to the table for HPC is the simplicity of the interface, and the fact that it lowers latency, which is a critical performance and efficiency element.”
Here are a few of the benefits of the NVMe standard:

- Massively parallel queuing: up to 64K queues with 64K commands each, versus a single 32-command queue for SATA
- Lower latency and reduced CPU overhead, freeing processors to do more actual compute
- A streamlined register interface and command set designed specifically for non-volatile memory
- Lower power consumption
In the not-so-distant future, don’t be surprised if HPC vendors and supercomputing sites begin to take advantage of NVMe, as it has proved to be a viable choice to clear some of the roadblocks in the way of achieving the highest possible levels of compute.
Drew Robb is a contributing author.