Jen-Hsun Huang showed the Tesla M40 and M4 boards, dedicated to deep learning training and inference respectively. According to Jen-Hsun, the M40 and M4 are not only energy-efficient but also universal: they can be used for transcoding and image processing as well as deep learning. These two products have become NVIDIA's fastest-growing business, adopted by internet service providers across the world.
He then went on to present what he dubbed "the most advanced hyperscale data center GPU ever built": the Tesla P100, the first GPU built on NVIDIA's 11th-generation Pascal architecture, designed to blaze new frontiers for deep learning applications in hyperscale data centers. It's got muscle: 15 billion transistors and 20 teraflops of half-precision performance.
Besides its brand-new Pascal architecture, the P100 features NVLink, which delivers a 5x increase in interconnect bandwidth across multiple GPUs, or 160 gigabytes per second. Third, its 16nm FinFET fabrication technology: this is the world's largest FinFET chip ever built, with 15 billion transistors and huge amounts of memory.
Fourth, HBM2 stacked memory, which unifies processor and data in a single package: there are 4,000 wires connecting Pascal to the memory stacks around it. And on top of these groundbreaking features, it relies on AI algorithms that are the result of thousands of engineers working for several years.
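As a rough sanity check of the 5x NVLink claim, the figures line up if one takes PCIe 3.0 x16 as the baseline. Note the PCIe baseline of roughly 32 GB/s bidirectional is our assumption, not a figure from the keynote:

```python
# Back-of-the-envelope check of the "5x" interconnect bandwidth claim.
# Assumption: the baseline is PCIe 3.0 x16, ~32 GB/s bidirectional.
# The P100 exposes four NVLink links at 40 GB/s bidirectional each.
pcie3_x16_gbps = 32        # assumed baseline bandwidth, GB/s
nvlink_links = 4
nvlink_per_link_gbps = 40  # bidirectional bandwidth per link, GB/s

nvlink_total = nvlink_links * nvlink_per_link_gbps
print(nvlink_total)                   # aggregate NVLink bandwidth: 160
print(nvlink_total / pcie3_x16_gbps)  # speed-up over the baseline: 5.0
```

That aggregate of 160 GB/s matches the figure quoted on stage.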
At launch, NVIDIA's P100 is already being endorsed by a string of big names, vouching for Pascal and the breakthroughs it will enable:
- One of the biggest is deep learning guru Yann LeCun, director of AI Research at Facebook.
- Another is Baidu chief scientist Andrew Ng, who keynoted at GTC last year: "AI computers are like space rockets. The bigger the better. Pascal's throughput and interconnect will make the biggest rocket we've seen."
- A third is Microsoft's chief speech scientist, Xuedong Huang.
Servers with the P100 are being built by IBM, Hewlett Packard Enterprise, Dell and Cray. NVIDIA plans to ship the P100 "soon": it will show up first in the cloud, then from OEMs by Q1 next year.
© HPC Today 2019 - All rights reserved.