The world's most powerful deep learning system for the most complex AI challenges.
In response to the rapidly growing demands of modern AI workloads, from ever-larger deep neural networks to algorithms that automatically detect features in complex data, deep learning has transformed the computing landscape. Paving the way for modern AI, the NVIDIA® DGX-2™ is recognised as 'the world's most powerful deep learning system'. With unprecedented levels of compute, the platform is targeted at deep learning and builds on the processing power of its equally impressive predecessor, the DGX-1. It is the first server to usher in the SXM3 form factor, letting you experience new levels of AI speed and scale, and the first petaFLOPS-class system to combine 16 fully interconnected GPUs for 10X the deep learning performance, with ground-breaking GPU scale that allows you to train models 4X bigger on a single node.
Perfect for leading-edge research demands, NVSwitch lets you leverage model parallelism with new levels of inter-GPU bandwidth: the networking fabric in the DGX-2 delivers 2.4TB/s of bisection bandwidth, a 24X increase over prior generations. Two of the fastest CPUs available, from Intel's Xeon Platinum (Skylake) generation, together with triple the memory of the DGX-1, provide enough CPU power to stream data to the GPUs and avoid bottlenecks in deep learning. The DGX-2 is powered by DGX software, enabling simplified deployment at scale without the usual scaling costs and complexities, while still responding to business imperatives. With an accelerated deployment model purpose-built for ease of scale, businesses can spend more time driving insights and less time building complex infrastructure.
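To make the model-parallelism idea concrete, here is a minimal, illustrative sketch in plain Python. The "devices" below are just stand-ins for GPUs: the weight matrix of a layer is sharded by output rows, each shard computes its partial result independently, and the results are gathered (the role the NVSwitch fabric plays in the DGX-2). This is a conceptual example only, not NVIDIA's implementation.

```python
# Illustrative sketch of model parallelism: a layer's weight matrix is
# sharded across "devices" (stand-ins for GPUs), each device computes
# its partial output, and the results are gathered over the fabric.

def matvec(weights, x):
    """Multiply a weight matrix (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# Full 4x2 weight matrix (4 outputs, 2 inputs) and an input vector.
W = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]

# Shard the rows across two "devices" -- neither holds the full model.
shard0, shard1 = W[:2], W[2:]

# Each device computes its partial output independently ...
y0 = matvec(shard0, x)
y1 = matvec(shard1, x)

# ... and the results are concatenated (an all-gather in practice).
y_parallel = y0 + y1

assert y_parallel == matvec(W, x)  # matches the single-device result
```

Because each shard only needs its own slice of the weights, a model too large for any one GPU's memory can still be trained, provided the gather step has enough inter-GPU bandwidth, which is exactly what the 2.4TB/s fabric is for.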
Designed to train what was previously thought impossible, the DGX-2 delivers new levels of AI speed and scale. Spend less time on optimisation and focus your resources on discovery – 'with every NVIDIA system, get started fast, train faster and remain faster with an integrated solution that includes software tools and NVIDIA expertise'.
CPU Quantity (Maximum): 2 x Intel Xeon Platinum
GPU: 16 x NVIDIA Tesla V100 (SXM3)
GPU Memory: 512GB total
Networking: 8 x 100Gb/sec InfiniBand/100GigE, Dual 10/25Gb/s Ethernet
Operating Temperature: 5°C to +35°C
Operating System: Ubuntu Linux
OS Storage: 2 x 960GB NVMe SSDs
Internal Storage: 30TB (8 x 3.84TB) NVMe SSDs
To help our clients make informed decisions about new technologies, we have opened up our research and development facilities, and we actively encourage customers to try the latest platforms using their own tools and, where needed, alongside their existing hardware. Remote access is also available.
Join Boston, our sponsors and the Centre for High Performance Computing (CHPC) for our 2nd annual HPC roadshow, this time coming to you digitally. We invite you to join us as we explore the current state of High Performance Computing and detail our plans for the future including an exciting announcement during our keynote.