NVIDIA really loves Deep Learning
September 29, 2016

Lately, it seems that every announcement NVIDIA makes is about Deep Learning. The potentially life-changing revolutions emerging from this technology matter greatly to the GPU manufacturer, and the least we can say is that the company has been working hard in this area. By the look of it, that isn’t going to stop any time soon, which is definitely good news. The company’s latest press releases have all been on this specific subject, and NVIDIA does not lack examples when it comes to exploiting Deep Learning’s potential. Here are three examples of NVIDIA Deep Learning technology in use.

“Person of interest”
If you’ve ever watched the show, “The Machine” is pretty much what NVIDIA CEO Jen-Hsun Huang described at GTC China last week. By 2020, surveillance cameras across the globe will capture 100 trillion images an hour. To process such a volume of images, human monitoring and traditional methods won’t be up to the task, and that’s where Deep Learning offers a real advantage.

Deep learning-powered AI computers will be able to watch, listen and even understand what’s on screen at phenomenal speeds. According to NVIDIA, this will result in AI Cities able to recognize when there’s an accident, a pet or child is lost, or someone is experiencing harm. An AI City will be a thinking robot, with many eyes, programmed to help keep people safe. For this purpose, NVIDIA is already providing its Jetson TX1 technology that can easily add Deep Learning capabilities at very low power. Features such as multi-object classification, facial recognition and behavior analysis will be able to operate on multiple channels in real time.

Growing well
Evaluating whether a child is growing correctly is a difficult task for doctors. Techniques based on hand x-rays have been around for more than 75 years and rely on the skill of the radiologist. But once again, GPU-accelerated deep learning aims to change that. Researchers at Massachusetts General Hospital’s new Clinical Data Science Center are testing an automated bone-age analyzer they’ve created. It speeds the diagnosis of children’s growth problems and is nearly as accurate as human radiologists.

The method was classical: the research team trained a neural network on about 7,400 x-rays and the corresponding radiologist reports. Training time was reduced by using the cuDNN version of the Caffe deep learning framework and the NVIDIA DIGITS DevBox deep learning appliance, equipped with four TITAN X GPUs and DIGITS deep learning training software. “Without GPUs, I wouldn’t have been able to get the performance I needed, and I wouldn’t have been able to develop such an accurate algorithm,” said Synho Do, assistant medical director for advanced health technology, research and development for the Massachusetts General Hospital physicians’ group. Performance should improve further: earlier this month, the Mass General center became one of the first research institutions to receive the NVIDIA DGX-1 deep learning system.
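The underlying recipe is standard supervised learning: pairs of inputs (x-rays) and labels (radiologist readings), a model, and gradient-based training. As a rough, purely illustrative sketch with synthetic data (none of the variables or numbers below come from the MGH project):

```python
import numpy as np

# Illustrative sketch of the supervised setup described above: learn to
# predict bone age from image-derived features. Toy data throughout; the
# MGH team actually trained a deep CNN with Caffe/cuDNN on ~7,400 real
# hand x-rays, which this tiny linear model only stands in for.
rng = np.random.default_rng(0)
n_samples, n_features = 200, 8
X = rng.normal(size=(n_samples, n_features))       # stand-in for image features
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # stand-in for reported bone ages

# Plain gradient descent on mean-squared error; in the real pipeline a
# GPU framework performs this optimization over millions of parameters.
w = np.zeros(n_features)
lr = 0.05
for _ in range(500):
    grad = 2.0 / n_samples * X.T @ (X @ w - y)
    w -= lr * grad

mse = float(np.mean((X @ w - y) ** 2))  # close to the injected noise level
```

The point of GPU acceleration is precisely that the gradient step above, trivial for eight parameters, becomes the bottleneck when the model is a deep convolutional network.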

Adding accuracy to cancer diagnostics
When it comes to cancer diagnoses, reliability is definitely the main requirement. But keeping up with the massive flow of research data is a challenge for scientists, and the variety of methods used to analyze that data makes reliable predictions difficult to come by.

A team from Harvard Medical School’s Beth Israel Deaconess Medical Center (BIDMC) tackled this issue using deep learning, in the 2016 Camelyon Grand Challenge. The competition aimed to determine how algorithms could help pathologists better identify cancer in lymph node images. The team’s results were dramatic, dropping the human error rate in diagnosis by 85% when aiding a pathologist’s efforts with GPU-powered deep learning analysis. Led by Andrew Beck, associate professor of pathology and director of bioinformatics at BIDMC’s Cancer Research Institute, the team looked into the latest advances in artificial intelligence to help pathologists with their overwhelming tasks. “The goal was to build a computational system to assist in the identification of metastatic areas of cancer in lymph nodes,” says Beck. “Our guiding hypothesis has been that pathologists working with computers will outperform pathologists operating alone.” For this project, the team used NVIDIA Tesla K80 GPUs with the cuDNN-accelerated Caffe deep learning framework, which Beck said significantly sped up the process of training their computational models.

The results were impressive, as the team’s system successfully identified cancer 92 percent of the time (compared to a 96 percent success rate for human pathologists). It also validated the team’s hypothesis: human analysis combined with deep learning results achieved a 99.5 percent success rate. Human performance can be improved when paired with AI systems, signaling an important advance in identifying and treating cancer. The team has continued to improve its algorithms, and its most recent entry achieved 97.1 percent accuracy on the evaluation set, surpassing human pathologists’ performance. “The implications of this work are large, suggesting that in the future we’ll see more examples of AI being used with traditional pathology to make diagnoses more accurate, standardized and predictive,” Beck said.
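The reported figures can be checked with a quick back-of-the-envelope calculation, using the success rates quoted above (the article rounds the resulting reduction to roughly 85%):

```python
# Back-of-the-envelope check of the Camelyon figures quoted above.
# Illustrative arithmetic only; the exact evaluation protocol is not
# described in this article.
human_accuracy = 0.96       # pathologist alone
combined_accuracy = 0.995   # pathologist aided by the deep learning system

human_error = 1 - human_accuracy        # 4% error rate
combined_error = 1 - combined_accuracy  # 0.5% error rate

# Relative reduction in the human error rate when aided by the AI:
reduction = (human_error - combined_error) / human_error
print(f"human error rate drops by {reduction:.1%}")  # about 87.5%, quoted as ~85%
```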

Faced with such tremendous challenges, the potential of Deep Learning appears almost limitless, and NVIDIA seems genuinely dedicated to making those changes happen.

© HPC Today 2017 - All rights reserved.
