After Machine Learning in 2014, Deep Learning was the main theme of NVIDIA’s sixth GPU Technology Conference. 4,000 attendees from 50 countries came to listen, learn and try out the many ways to use GPUs, and to catch a glimpse of what is coming next. The annual event brought an entire ecosystem to the San Jose McEnery Convention Center for 500 sessions across thirty themes, the main ones being Cloud Computing and HPC/Supercomputing, Automotive, Augmented and Virtual Reality, Computer Vision, and Machine Learning/Deep Learning. In his opening keynote, Jen-Hsun Huang, CEO and founder of NVIDIA, presented the event’s four major announcements, all closely tied to deep learning: a new GPU, a DevBox, a roadmap and an automatic car steering system.
Why is deep learning in the spotlight this year? Because a computer recently beat a human at image recognition for the first time. For JHH, this is only the beginning: it will soon become commonplace. Getting there requires one thing above all: speed. Without speed and greater efficiency, it is impossible to recognize an image in a reasonable time, and impossible to take the next technological step. Here, that step becomes reality through, among other things, the TITAN X, the latest in NVIDIA’s range of GPUs. For $999, it can train AlexNet, a deep neural network that enables a computer to recognize images, in under three days. To measure the gap with yesterday’s GPUs: the same training takes 43 days on 16 Xeon CPU cores, and six with the original TITAN GPUs. It was also an opportunity for NVIDIA to release a turnkey solution in the form of an ultra-fast DevBox. Deep learning is now democratizing thanks to the combination of high-level algorithms running in neural networks and the ability, with the advent of big data, to process data sets of virtually unlimited size, all coupled with the power of the latest hardware. Deep learning has thus moved to the heart of the concerns of the largest technology companies: IBM, Microsoft, Google, Baidu, Facebook, etc., which also made the trip to present their current research in this direction.
The same interest comes from research centers of all kinds, including medical laboratories, for which these advances open up new possibilities for research, testing and human genome computation. The algorithms needed for deep learning are steadily improving as the technology reaches the level required to become a practical reality. The time was ripe for NVIDIA to offer an “all inclusive” solution. The announced high-speed box includes a software development kit, DIGITS, intended to ease the implementation of deep learning: it lets users process data on the box’s GPUs, configure a deep neural network simply, supervise the different stages while the deep learning algorithm runs and, of course, visualize the data. Dubbed the DIGITS DevBox, it naturally contains TITAN X GPUs, is orchestrated by a dedicated Linux operating system and includes a generic deep learning algorithm. The target is, of course, researchers in the relevant fields who, for $15,000, will be able to focus on their research. The GPU Technology Conference was also an opportunity for NVIDIA’s CEO to unveil the roadmap for future GPU architectures. The PASCAL architecture, 10 times faster than Maxwell, will have 3 times the computing power, while 3D memory brings many improvements and the NVLink bus opens new opportunities for GPU interconnection.
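To make the DIGITS workflow concrete — load a dataset, configure a network, run training and monitor the loss — here is a minimal sketch of the kind of training loop such a tool automates. This is a toy NumPy example under assumed placeholder data, not NVIDIA’s DIGITS code or AlexNet itself.

```python
import numpy as np

# Toy stand-in for the workflow DIGITS automates: data in, a configured
# neural network, a supervised training run, and a loss curve to inspect.
rng = np.random.default_rng(0)

# Placeholder "image" dataset: 256 samples, 64 features, 4 classes.
X = rng.standard_normal((256, 64))
hidden_rule = rng.standard_normal((64, 4))
y = np.argmax(X @ hidden_rule, axis=1)     # labels from a hidden linear rule

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.standard_normal((64, 32)) * 0.1
W2 = rng.standard_normal((32, 4)) * 0.1
lr = 0.1
losses = []
for epoch in range(200):
    h = np.maximum(0, X @ W1)              # ReLU hidden layer
    p = softmax(h @ W2)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    losses.append(loss)
    # Backpropagation of the cross-entropy gradient.
    g = p.copy()
    g[np.arange(len(y)), y] -= 1
    g /= len(y)
    dW2 = h.T @ g
    dh = (g @ W2.T) * (h > 0)
    dW1 = X.T @ dh
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In DIGITS proper, each of these steps — dataset ingestion, network configuration, training supervision and loss visualization — is handled through a GUI rather than hand-written code, with the heavy lifting offloaded to the TITAN X GPUs.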
Finally, on the road, NVIDIA offers Drive PX, a self-driving car computer that brings deep learning to the ADAS (advanced driver assistance systems) already used in cars. Available next May, it will let specialists in these systems improve obstacle detection capabilities for cars.
© HPC Today 2019 - All rights reserved.