Feeding the GPU Beast – How to Leverage NVMe™ for AI & ML

Source: https://blog.westerndigital.com/gpu-beast-nvme-ai-ml-workloads/

Following the launch of the new Ultrastar Serv24-4N, our four-node storage server that delivers extremely fast, dense and efficient NVMe storage, we asked our partners at Excelero to share their insights into how to best leverage NVMe for AI & ML workloads.

Author: Tom Leyden, VP of Marketing at Excelero

In today’s digitised world, industry leaders are more effective in every aspect of data analytics: they run state-of-the-art analytics applications, powered by GPUs and high-performance storage infrastructures, to capture, store and analyse more data faster – and make better business decisions.

AI and ML use cases have exploded over the past few years, enabled by four major technology evolutions:

1. NEW SENSOR AND IOT TECHNOLOGIES that capture everything from images to temperature, heart rate, and more have proliferated – adding ever-greater data volumes.

2. NEW NETWORKING TECHNOLOGIES enable high-bandwidth data transfers, low latency, high IOPS, and smart offloads.

3. THE RISE OF POWERFUL GPU TECHNOLOGIES has lowered the cost of compute on massive data sets and made parallel processing faster and much more powerful.

4. NEXT-GENERATION STORAGE INTERFACES AND PROTOCOLS SUCH AS NVME bring flash performance and architectural features that are well suited to these new computational engines.

Feeding the GPU Beast

The biggest advantage of modern GPU computing also creates its biggest challenge: GPUs have an amazing appetite for data. Current GPUs can process up to 16GB of data per second. NVIDIA’s latest DGX-2™ system packs as many as 16 GPUs, yet has only 30TB (8 x 3.84TB) of local NVMe and is not optimised to use it efficiently.
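
As a rough back-of-envelope sketch of that gap (the 16GB/s per-GPU figure is the one quoted above; the ~3GB/s per-drive read bandwidth is an assumed figure for a typical enterprise NVMe SSD, not a measurement):

    # Back-of-envelope: can 8 local NVMe drives feed 16 hungry GPUs?
    GPUS = 16
    GPU_INGEST_GBPS = 16          # GB/s per GPU, as quoted above
    DRIVES = 8
    DRIVE_READ_GBPS = 3.0         # GB/s per drive - assumed typical NVMe read speed

    demand = GPUS * GPU_INGEST_GBPS     # 256 GB/s of potential GPU appetite
    supply = DRIVES * DRIVE_READ_GBPS   # ~24 GB/s from the local flash
    print(f"GPU demand {demand} GB/s vs local NVMe supply {supply} GB/s "
          f"({demand / supply:.0f}x shortfall)")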

GPU servers from other vendors typically dedicate few PCIe lanes to local flash (NVMe or otherwise), meaning that even the lowest-latency storage option in these servers is a severe bottleneck, or simply offers too little capacity for the GPUs. Starving the GPUs with slow storage, or wasting time copying data, squanders expensive GPU resources and hurts the overall ROI of AI and ML use cases.

Leveraging NVMe for AI & ML

NVMe flash offers great benefits for specific AI use cases such as training a machine learning model. Machine learning involves two phases – training a model based on what is learned from the dataset, and running the model (inference). Training a model is the most resource-hungry stage.

The modern datasets used for model training can be huge – MRI scans, for example, can reach terabytes each, and a learning system may use tens or hundreds of thousands of images. Even if the training itself runs from RAM, that memory must be fed from non-volatile storage, which therefore has to support very high bandwidth.
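
To put illustrative numbers on that requirement (every input below is an assumption chosen for the example, not a benchmark):

    # How much storage bandwidth does one training epoch need?
    images = 100_000         # images in the training set (assumed)
    image_mb = 100           # MB per image, e.g. a large medical scan (assumed)
    epoch_minutes = 10       # target time to stream the full dataset once (assumed)

    dataset_gb = images * image_mb / 1000             # 10,000 GB = 10 TB
    required_gbps = dataset_gb / (epoch_minutes * 60)
    print(f"{dataset_gb / 1000:.0f} TB per epoch -> "
          f"{required_gbps:.0f} GB/s sustained read bandwidth")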

In addition, paging out old training data and bringing in new data should happen as fast as possible to keep the GPUs from sitting idle. This requires low latency as well, and NVMe is the only storage protocol that delivers both this level of bandwidth and this level of latency.
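
One common way to hide that paging time is double buffering: while the GPUs work on the current batch, the next batch is already being read from NVMe in the background. A minimal Python sketch (load_batch and train_step are hypothetical placeholders for the real I/O and GPU compute):

    from concurrent.futures import ThreadPoolExecutor

    def load_batch(path):
        # Placeholder: read one batch of training data from NVMe.
        with open(path, "rb") as f:
            return f.read()

    def train_step(batch):
        # Placeholder: run the GPU compute for one batch.
        pass

    def train(paths):
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(load_batch, paths[0])   # prefetch first batch
            for nxt in paths[1:]:
                batch = pending.result()                  # wait for in-flight read
                pending = pool.submit(load_batch, nxt)    # start the next read...
                train_step(batch)                         # ...while the GPU computes
            train_step(pending.result())                  # final batch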

However, there are limits to how many local NVMe drives can be used, often dictated by how many PCIe lanes are actually left for NVMe drives once the GPUs/SoCs have claimed theirs. Additionally, for checkpointing, the many local NVMe drives need to be synchronised to produce a usable snapshot – see the sketch below.
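
A simple way to turn writes spread across several local drives into a usable snapshot is to flush every checkpoint shard to stable storage before publishing a small manifest that marks the checkpoint complete; a crash mid-write leaves no manifest, so the partial checkpoint is simply ignored. A minimal sketch (the mount points and sharding scheme are hypothetical):

    import json
    import os

    def write_checkpoint(shards, mounts, step):
        # shards: one bytes object per drive; mounts: one mount point per NVMe drive.
        paths = []
        for i, (data, mount) in enumerate(zip(shards, mounts)):
            path = os.path.join(mount, f"ckpt-{step}-shard{i}.bin")
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force the shard onto its drive
            paths.append(path)
        # Publish the manifest only after every shard is durable;
        # a checkpoint without a manifest is treated as incomplete.
        manifest = os.path.join(mounts[0], f"ckpt-{step}.json")
        with open(manifest, "w") as f:
            json.dump({"step": step, "shards": paths}, f)
            f.flush()
            os.fsync(f.fileno())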

NVMesh for AI & ML

Fortunately, GPU servers also have massive network connectivity: they can ingest as much as 48GB/s of bandwidth via 4-8 x 100Gb ports. That network capacity plays a key part in one way of solving this challenge.

Excelero’s NVMesh is a Software-Defined Block Storage solution that features Elastic NVMe, a distributed block layer that allows AI & ML workloads to utilise pooled NVMe storage devices across a network at local speeds and latencies. NVMe storage resources are pooled with the ability to create arbitrary, dynamic block volumes that can be utilised by any host running the NVMesh block client.

What this means is that customers can maximise the utilisation of their GPUs by leveraging that massive network connectivity and the low-latency, high-IOPS/bandwidth benefits of NVMe in a distributed, linearly scalable architecture.
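
Because an NVMesh volume appears on the host as an ordinary block device, the "local speeds" claim can be sanity-checked with a plain sequential-read measurement. A minimal, Linux-only sketch (the device path /dev/nvmesh/vol0 is hypothetical; O_DIRECT bypasses the page cache so the device, not RAM, is measured, and needs an aligned buffer, which mmap provides):

    import mmap
    import os
    import time

    DEV = "/dev/nvmesh/vol0"   # hypothetical pooled NVMesh block volume
    BLOCK = 1 << 20            # 1 MiB per read
    TOTAL = 4 << 30            # read 4 GiB in total

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)             # page-aligned buffer for O_DIRECT
    done = 0
    start = time.perf_counter()
    while done < TOTAL:
        n = os.readv(fd, [buf])            # direct read straight from the volume
        if n == 0:                         # end of device
            break
        done += n
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"Sequential read: {done / elapsed / 1e9:.2f} GB/s")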

We built NVMesh to give customers maximum flexibility in designing storage infrastructures. Being 100% software-defined, NVMesh supports any Western Digital NVMe solution, including the new Ultrastar Serv24-4N, a performance-optimised platform for Software-Defined Storage (SDS).

Chipset, core count, and power can be customised to match varying workload and performance requirements. Combined with the Ultrastar Serv24-4N, NVMesh eliminates the compromise between performance and practicality, allowing GPU-optimised servers to access scalable, high-performance NVMe flash storage pools as if they were local flash. This ensures efficient use of both the GPUs themselves and the associated NVMe flash. The end result is higher ROI, easier workflow management and faster time to results.

