Stanford Seminar – HPC Opportunities in Deep Learning by Greg Diamos, Baidu
March 07, 2017

In the last few years, deep learning has fueled significant progress in computer vision, speech recognition, and natural language processing. We have seen a computer beat the world champion in Go with help from deep learning, and a single deep learning algorithm learn to recognize two vastly different languages, English and Mandarin. At Baidu, we think that this is just the beginning, and high performance computing is poised to help.

It turns out that deep learning is compute-limited, even on the fastest machines in the world. This talk will provide empirical evidence from our Deep Speech work that application-level performance (e.g. recognition accuracy) scales with data and compute, transforming some hard AI problems into problems of computational scale.
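As a rough sketch of what "scales with data and compute" can look like, the snippet below evaluates a hypothetical power-law learning curve, error(n) = a·n^(−b). The constants a and b here are illustrative assumptions, not values measured in the Deep Speech experiments; in practice they would be fit to observed learning curves.

```python
# Hypothetical power-law scaling of test error with training set size:
#   error(n) = a * n**(-b)
# The constants a and b are illustrative only; real values must be
# fit to measured learning curves.
def test_error(n_examples, a=1.0, b=0.3):
    return a * n_examples ** (-b)

for n in [1e5, 1e6, 1e7, 1e8]:
    print(f"{int(n):>10} examples -> error ~ {test_error(n):.3f}")
```

Under a curve like this, halving the error is a matter of collecting (and training on) predictably more data, which is what turns an accuracy problem into a computational-scale problem.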

The talk will describe the performance characteristics of Baidu's deep learning workloads in detail, using the recurrent neural networks in Deep Speech as a case study. It will cover the challenges to further performance improvements, describe techniques that have allowed us to sustain 250 TFLOP/s when training a single model on a cluster of 128 GPUs, and discuss straightforward improvements that are likely to deliver even better performance.
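A quick back-of-envelope check on that figure: 250 TFLOP/s spread over 128 GPUs is roughly 2 TFLOP/s sustained per device. The per-GPU peak used below is an assumed number for a GPU of that era, included only to show how utilization would be computed.

```python
# Back-of-envelope: per-GPU sustained throughput and utilization.
sustained_tflops = 250.0   # cluster-wide sustained figure from the talk
num_gpus = 128
peak_per_gpu = 6.0         # assumed single-precision peak (TFLOP/s); era-dependent

per_gpu = sustained_tflops / num_gpus
print(f"Sustained per GPU: {per_gpu:.2f} TFLOP/s")
print(f"Utilization vs. assumed peak: {per_gpu / peak_per_gpu:.0%}")
```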

Our three big hammers are improving algorithmic efficiency, building faster and more power-efficient processors, and strong-scaling training to larger clusters. The talk will conclude with open problems in these areas, and suggest directions for future work.
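As a toy illustration of the strong-scaling hammer, the sketch below models a synchronous data-parallel training step as per-GPU compute plus ring all-reduce communication, which moves about 2(p−1)/p times the gradient size per GPU. All constants are assumptions chosen for illustration, not measurements from Baidu's cluster.

```python
# Toy strong-scaling model for synchronous data-parallel SGD.
#   step_time(p) = compute_time / p + allreduce_time(p)
# Ring all-reduce moves 2*(p-1)/p * gradient_bytes per GPU.
# All constants below are illustrative assumptions.

GRAD_BYTES = 400e6        # e.g., 100M float32 parameters
BANDWIDTH = 5e9           # assumed effective link bandwidth, bytes/s
COMPUTE_TIME_1GPU = 2.0   # assumed seconds per step on one GPU

def step_time(p):
    compute = COMPUTE_TIME_1GPU / p
    comm = 2 * (p - 1) / p * GRAD_BYTES / BANDWIDTH
    return compute + comm

for p in [1, 8, 32, 128]:
    speedup = step_time(1) / step_time(p)
    print(f"{p:>3} GPUs: step {step_time(p)*1e3:6.1f} ms, "
          f"speedup {speedup:5.1f}x, efficiency {speedup / p:.0%}")
```

In a model like this, compute time shrinks with GPU count while communication time approaches a constant, so past some cluster size the step becomes communication-bound; this is why the three hammers above are complementary rather than interchangeable.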

Greg Diamos
