NVIDIA has announced the Tesla M4 and M40, two cards dedicated to accelerating intensive machine learning workloads. The M40 is aimed at researchers, helping them create new neural networks for each application they want to drive with artificial intelligence. The M4 is a low-power accelerator designed to deploy these neural networks across the data center. Both cards are backed by a dedicated library for GPU-accelerated processing. Together, they enable developers to use the Tesla accelerated computing platform to run machine learning in the data center. “The race to artificial intelligence is on,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “Machine learning is undoubtedly one of the most important developments in computing today, on a par with the computer, the Internet and the cloud. Machine learning is the grand computing challenge of our generation. We created the Tesla HyperScale accelerator line to give machine learning algorithms 10 times the computing power. The time and cost savings for data centers are enormous.”
Accelerating Web applications
These new hardware and software products are specifically designed to accelerate web applications that integrate artificial intelligence capabilities. Recent advances make it possible to use machine learning techniques to build smarter services and applications. Machine learning is used to achieve accurate speech recognition. It also enables automatic recognition of objects in videos and photos, making them searchable later, and facial recognition in videos or pictures even when the face is partially obscured. Finally, it powers services that learn a user's tastes and personal interests, organize schedules, surface relevant information and respond precisely to voice commands. The challenge is to obtain the enormous computing power needed both to train these neural networks and to process information fast enough to respond immediately to billions of consumer requests.
Tesla M40: Designed for Data Scientists
With 3072 stream processors delivering 7 TFlops of computing power, the NVIDIA Tesla M40 GPU accelerator lets data scientists save days or even weeks when training deep neural networks on huge amounts of data while achieving higher accuracy. According to NVIDIA, it slashes training time compared with an 8-CPU system: 1.2 days versus 10 days for a typical AlexNet training run. Support for NVIDIA GPUDirect enables fast training of multi-node neural networks.
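A quick back-of-the-envelope check of the speedup implied by the quoted AlexNet figures (the "8 CPU processors" baseline and both training times are taken from the announcement above):

```python
# Speedup implied by NVIDIA's quoted AlexNet training times.
cpu_days = 10.0   # training time on the 8-CPU baseline system
gpu_days = 1.2    # training time on the Tesla M40

speedup = cpu_days / gpu_days
print(f"Speedup: {speedup:.1f}x")  # prints: Speedup: 8.3x
```

So the "1.2 days against 10 days" claim corresponds to roughly an 8x reduction in training time.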
Tesla M4 for the datacenter
The NVIDIA Tesla M4 GPU is a low-power accelerator created for hyperscale environments, enabling responsive services and demanding web applications such as video transcoding, image and video processing, and machine learning inference. According to NVIDIA, the M4 can transcode and analyze up to 5 times more simultaneous video streams than a conventional CPU. On the power side, the Tesla M4 draws 50 to 75 watts and delivers up to 10 times better energy efficiency than a CPU for video processing and machine learning algorithms. The M4 has 1024 stream processors and 2.2 TFlops of computing power, with 4 GB of onboard memory versus 12 GB on the M40.
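The quoted specs make it easy to estimate the M4's raw energy efficiency at both ends of its configurable power range; a minimal sketch using only the numbers above (peak TFlops over board power, ignoring real-world utilization):

```python
# Energy efficiency implied by the Tesla M4 specs: 2.2 TFlops
# at a configurable 50-75 W board power.
tflops = 2.2
for watts in (50, 75):
    gflops_per_watt = tflops * 1000 / watts
    print(f"{watts} W -> {gflops_per_watt:.0f} GFLOPS/W")
# prints: 50 W -> 44 GFLOPS/W
# prints: 75 W -> 29 GFLOPS/W
```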
NVIDIA Hyperscale Suite, a dedicated software kit
The new NVIDIA HyperScale software suite provides tools for both developers and data center managers, designed specifically for deploying web services:
- cuDNN – the widely used library of primitives for the deep convolutional neural networks behind artificial intelligence applications.
- GPU-accelerated FFmpeg multimedia software – for accelerated video transcoding and processing.
- NVIDIA GPU REST Engine – for easily creating and deploying accelerated web services with high throughput and low latency, from dynamic image resizing to accelerated search and image classification.
- NVIDIA image resizing engine – a GPU-accelerated service exposed through a REST API that resizes images 5 times faster than a CPU.
Alongside the latest demonstration of its Tesla accelerated computing platform, NVIDIA announced a collaboration with Mesosphere to add GPU support to the Mesosphere Datacenter Operating System (DCOS), which is built on Apache Mesos. This strategic partnership will make it easier for web service companies to build and deploy next-generation accelerated applications in their data centers. The Tesla M40 accelerator and the HyperScale Suite are available now, while the Tesla M4 accelerator will ship in the first quarter of 2016.
© HPC Today 2019 - All rights reserved.