Most of us know what an SSD is: a solid-state drive, present in most modern computers and laptops, giving individuals and organizations speedy access to large amounts of data with greater durability and lower energy consumption than traditional HDDs.
In comes something called the nonvolatile memory express protocol, or NVMe for short. An NVMe SSD can – wait a second. If we’ve already come such a long way with how SSDs replaced HDDs and fixed all their problems, what is the NVMe SSD’s deal?
Defining the NVMe
With the rise of data science and big data, and the ever-growing popularity of payment apps and mobile payment services, the need for speed (instant, simultaneous access to massive amounts of data) has never been greater.
Enter NVMe, a communications standard that has turned SSDs into premium, fast-access storage, developed jointly by big names such as Dell, Intel, Samsung, SanDisk, and Seagate.
Before the advent of this technology, SSDs relied on the SAS (Serial Attached SCSI) and SATA (Serial Advanced Technology Attachment) interfaces. An NVMe drive instead communicates over the PCI Express bus, either directly through a PCIe slot or via the much newer U.2 and M.2 connectors, which have largely superseded SAS and SATA.
In more technical terms, we can describe NVMe as a highly scalable, high-performance storage protocol, optimized for non-uniform memory access (NUMA) and built on high-speed PCIe lanes that can deliver several times the transfer speeds you’d be used to with the SATA interface. It supports up to 64K I/O queues, each up to 64K entries deep, where SAS and SATA each offer only a single queue (254 entries for SAS, 32 for SATA).
It allows the host to connect to the memory subsystem and create these queues, based on the expected workload and how the system is configured, up to the maximum allowed by the controller.
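To get a feel for why the queue model matters, here is a back-of-the-envelope sketch comparing the maximum number of commands each interface can keep in flight at once. The limits are the protocol maximums quoted above; real drives expose far fewer queues in practice.

```python
# Illustrative comparison of maximum outstanding commands per interface.
# Figures are the protocol maximums cited in the article: NVMe allows up
# to 64K queues of 64K entries each, while SAS and SATA use a single
# queue of 254 and 32 entries respectively.

INTERFACES = {
    "SATA": {"queues": 1, "entries_per_queue": 32},
    "SAS":  {"queues": 1, "entries_per_queue": 254},
    "NVMe": {"queues": 64 * 1024, "entries_per_queue": 64 * 1024},
}

def max_outstanding_commands(spec: dict) -> int:
    """Total commands that can be queued at once: queues * depth."""
    return spec["queues"] * spec["entries_per_queue"]

for name, spec in INTERFACES.items():
    print(f"{name}: {max_outstanding_commands(spec):,} outstanding commands")
```

The difference is not incremental: NVMe’s theoretical command capacity is billions of outstanding requests versus a few dozen for SATA, which is exactly what heavily parallel workloads need.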
These connectors allow for more “direct” communication than the previous interfaces, which translates to lower latency, more efficient power usage, and greater IOPS. This makes NVMe SSDs stand out amongst the best SSDs for gaming, even becoming an industry standard for gaming consoles.
NVMe Over Fabrics
The name non-volatile memory (express protocol) not only sounds cool enough to be a Mission Impossible film but also tells us that the applications of this protocol can be varied and extensive.
Big businesses and enterprises run their performance-critical applications on NAS or SAN implementations over distributed infrastructure. For this, the NVMe protocol can be extended across different network fabrics – even Ethernet.
Using remote direct memory access, NVMe over Fabrics can reduce the data handling overhead inherent in more traditional distributed storage connections. The extension is designed to add no more than about 10 microseconds of latency to a storage system, as compared to a directly connected NVMe SSD.
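A quick worked example puts that overhead in perspective. The 10 microsecond figure is the added fabric latency quoted above; the direct-attached read latency used here (around 100 microseconds for a flash read) is an assumed, illustrative figure, not a measured one.

```python
# Rough illustration of the NVMe over Fabrics latency overhead.
FABRIC_OVERHEAD_US = 10        # added latency target for NVMe-oF (from the article)
DIRECT_READ_LATENCY_US = 100   # ASSUMED typical direct-attached flash read latency

total = DIRECT_READ_LATENCY_US + FABRIC_OVERHEAD_US
overhead_pct = 100 * FABRIC_OVERHEAD_US / DIRECT_READ_LATENCY_US
print(f"Remote read: ~{total} us (~{overhead_pct:.0f}% over direct-attached)")
```

In other words, under these assumptions a remote NVMe read costs only about a tenth more than a local one, which is why NVMe-oF is viable for performance-sensitive distributed storage.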
The NVMe protocol is designed to work with anything dealing with nonvolatile memory, including other forms of NVM technology such as persistent memory based on phase-change memory (PCM) or spin-transfer torque MRAM.
This is where NVMe’s true potential shines through, something we get to in this next section.
What Is NVMe SSDs’ Big Selling Point?
Enterprises, as we’ve mentioned, rely on huge icebergs of data every second of every day. Enterprises now are ecosystems of cloud and edge data, requiring intense computation and giving rise to new challenges daily. SATA-based SSDs, even the best of them, simply cannot keep up with Big Data or Fast Data.
NVMe, on the other hand, seems almost purpose-built for reducing bottlenecks, not only in today’s widely implemented scale-up database apps but also in the latest computing architectures, such as edge- and cloud-based networks.
It achieves this not only through its infrastructure but through the unique features that infrastructure gives rise to. The combining, multipathing, and virtualization of I/Os; a clear delineation between ownership and prioritization processes; and, as mentioned before, multiple queues all make NVMe uniquely equipped to help enterprises enliven top-line growth while reducing total cost of ownership, handling more rigorous workloads with a smaller infrastructure footprint.
Our NVMe Recommendations
NVMe performance is determined by a combination of different factors, ranging from “how much” NAND you have and what type, to the controller, and the number of PCIe lanes. While our specific recommendations are at the end of this article, here are some fast tips to keep in mind for a better understanding of what you might need:
- More NAND chips give the controller more paths and more destinations across which to distribute and store data in parallel. Even when they share the same model number, smaller-capacity drives tend to be slower than their larger counterparts.
- While vendors can make almost any type of NAND (with the sole exception of SLC) behave like its faster predecessor by writing fewer bits per cell until that cache is used up, there is a hierarchy to NAND speed: SLC is the fastest, followed by MLC, then TLC, with QLC the slowest.
- And, finally, NVMe SSDs using four PCIe lanes (x4) are faster than x2 drives.
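That last point is simple arithmetic on lane bandwidth. As a sketch, here is the theoretical throughput for PCIe 3.0, the generation most NVMe drives of this era use (8 GT/s per lane with 128b/130b encoding); real-world drive speeds will come in below these ceilings.

```python
# Theoretical PCIe 3.0 throughput: 8 GT/s per lane, 128b/130b encoding.
GT_PER_S = 8e9            # transfers per second per lane
ENCODING = 128 / 130      # 128b/130b line-coding efficiency

bytes_per_lane = GT_PER_S * ENCODING / 8   # ~985 MB/s per lane

for lanes in (2, 4):
    gbps = lanes * bytes_per_lane / 1e9
    print(f"PCIe 3.0 x{lanes}: ~{gbps:.1f} GB/s theoretical")

# For comparison, SATA III tops out at 6 Gb/s, roughly 0.6 GB/s.
```

So an x4 drive has roughly twice the theoretical ceiling of an x2 drive, and either comfortably exceeds what SATA can offer.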
For specific product recommendations, the Samsung 970 Pro M.2 NVMe SSD is considered one of the best among the top line, with the Adata XPG SX8200 Pro being the best “budget” product. For most people, however, the Western Digital Black SN750 will do.