SC16 – Cray Systems Ready to Power Deep Learning
November 25, 2016

At the 2016 Supercomputing Conference in Salt Lake City, supercomputer manufacturer Cray announced new deep learning capabilities across its line of supercomputing and cluster systems. With validated deep learning toolkits and some of the most scalable supercomputing systems in the industry, Cray customers can now run deep learning workloads at their fullest potential – at scale on a Cray supercomputer.

“The convergence of supercomputing and big data analytics is happening now, and the rise of deep learning algorithms is evidence of how customers are increasingly using high performance computing techniques to accelerate analytics applications,” said Steve Scott, senior vice president and chief technology officer at Cray. “Training problems look very much like classical supercomputing problems. We believe that with our Cray Programming Environment, validated toolkits, and the latest processing technologies, we have the right combination of hardware and software expertise to help our customers efficiently execute deep learning workloads now and in the future.”

Cray has validated and made available several deep learning toolkits on Cray XC and Cray CS-Storm systems to simplify the transition to running deep learning workloads at scale. These toolkits include (a brief training sketch follows the list):

  • Microsoft Cognitive Toolkit (previously CNTK)
  • TensorFlow
  • NVIDIA DIGITS (Deep Learning GPU Training System)
  • Caffe
  • Torch
  • MXNet
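
To give a sense of what running one of these toolkits looks like, here is a minimal TensorFlow training sketch. It is a generic, single-node illustration and not Cray-specific code: the model, dataset, and hyperparameters are illustrative choices, and on a Cray XC or CS-Storm system the same script would typically be launched through the site's job scheduler and launcher, details the announcement does not cover.

    # Minimal sketch: train a small classifier with TensorFlow, one of
    # the validated toolkits listed above. Everything here (model size,
    # dataset, epochs) is an illustrative choice, not Cray's.
    import tensorflow as tf

    # Load MNIST and flatten images into 784-dimensional float vectors.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

    # A small fully connected network for 10-way digit classification.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Train for a couple of epochs; on a GPU system such as CS-Storm,
    # TensorFlow places the heavy computation on the accelerator.
    model.fit(x_train, y_train, epochs=2, batch_size=256)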

Additionally, the Cray CS-Storm system – a dense, accelerated GPU cluster supercomputer that offers 850 GPU teraflops in a single rack – now supports the NVIDIA Tesla P100 for PCIe data center accelerator and the NVIDIA Tesla M40 deep learning training accelerator. And with the addition of the NVIDIA Tesla P100 to the Cray XC50 supercomputer, Cray now has a variety of scalable systems well suited for running a wide array of emerging deep and machine learning applications.

PGS, a leading marine geophysical company, is running machine learning algorithms on its Cray XC40 supercomputer, nicknamed “Abel.” Machine learning techniques such as regularization and steering can be applied to a significant computational problem in seismic exploration: Full Waveform Inversion (FWI), a methodology that seeks to build high-resolution, high-fidelity representations of the subsurface in the ultra-deep Gulf of Mexico.
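
To make the role of regularization concrete, here is a toy sketch of a regularized linear inversion in Python with NumPy. It is a deliberately simplified analogy, not PGS's FWI workflow: real FWI fits a nonlinear wave-equation forward model, whereas this example recovers a 1-D model from an under-determined linear operator, with Tikhonov regularization stabilizing the otherwise ill-posed least-squares fit.

    # Toy illustration (an assumed analogy, not PGS's method): recover a
    # 1-D "model" m from under-determined data d = G @ m + noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n_model, n_data = 50, 30
    G = rng.normal(size=(n_data, n_model))  # forward operator: fewer data than unknowns
    m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n_model))
    d = G @ m_true + 0.01 * rng.normal(size=n_data)

    # Tikhonov-regularized least squares:
    #   m_est = argmin_m ||G m - d||^2 + lam * ||m||^2
    # The penalty term makes the normal equations well-conditioned.
    lam = 0.1  # regularization weight (illustrative value)
    m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d)

    print("relative error:",
          np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))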

“This class of problems is notoriously hard,” said Dr. Sverre Brandsberg-Dahl, global chief geophysicist for Imaging and Engineering at PGS. “It is a multidimensional, ill-posed optimization problem that is far from automated and requires significant intervention from skilled experts – sometimes more art than science. Our Cray XC40 system was able to learn how to best steer refracted and diving waves for deep model updates, and how best to reproduce the sharp salt boundaries in the Gulf of Mexico. Machine learning at scale on our Cray supercomputer showed dramatic improvement in the quality of the inversion process compared to current state-of-the-art FWI.”
