Intel is placing its pieces on the deep learning chessboard
September 14, 2016

The second-generation Xeon Phi, codenamed Knights Landing, was unveiled only three months ago, and Intel chose IDF 2016 to start talking about the third generation, codenamed Knights Mill. Once again, the key phrase is “Deep Learning”. The new CPU will hit the market next year and is expected to improve energy efficiency, offer better scale-out, and provide more flexible memory capacity. Details are still scarce, but this generation should be complementary to the current one and to the next (Knights Hill), as it will focus on deep learning applications. With this computing area expanding rapidly, Intel had to tweak its Xeon Phi to optimize its deep learning abilities.

In fact, even if Intel processors are widely used for deep learning, they are not the best for the most intensive calculations. In those cases, NVIDIA GPUs perform better, as they support half-precision (FP16) arithmetic, which Knights Landing Xeon Phi chips do not. With that in mind, it is important to understand that for machine learning, more data at lower precision is often more efficient than a smaller amount of more precise data.

Less precision but more data
So, one of the characteristics of the Knights Mill generation is enhanced support for variable precision. Moving to half precision would effectively at least double performance, compared to the current generation, on machine learning workloads that are sensitive to dataset size and memory bandwidth. In its competition with NVIDIA, this strategic move might be enough for Intel to close the performance gap on workloads that take advantage of half-precision floating point.
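The "at least double" figure follows directly from storage size: an FP16 value occupies 2 bytes versus 4 for FP32, so a bandwidth-bound workload can move twice as many values per transfer. A minimal NumPy sketch (purely illustrative, not tied to any Intel hardware; the array size is an arbitrary example) makes the trade-off concrete:

```python
import numpy as np

# One million parameters, first in single precision (FP32),
# then converted to half precision (FP16).
n_params = 1_000_000
fp32 = np.ones(n_params, dtype=np.float32)
fp16 = fp32.astype(np.float16)

# Memory footprint: FP16 halves it, which is also the ceiling on the
# speedup for workloads limited by memory bandwidth rather than compute.
print(fp32.nbytes)                  # 4000000 bytes
print(fp16.nbytes)                  # 2000000 bytes
print(fp32.nbytes // fp16.nbytes)   # 2

# The cost is precision: machine epsilon is ~1e-3 for FP16
# versus ~1e-7 for FP32.
print(np.finfo(np.float16).eps)
print(np.finfo(np.float32).eps)
```

Deep learning training tolerates that precision loss well, which is why trading digits for throughput pays off on these specific workloads but not on general HPC codes.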

The Knights Mill CPU might be inspired by the recently acquired Nervana technology: Nervana originally planned to sell, in 2017, a dedicated hardware-software stack for deep learning, up to ten times faster than current GPUs (taking the same approach as Google: a processor developed specifically for these tasks). Diane Bryant, executive VP and GM of Intel’s Data Center Group, was not explicit about it, mentioning only that “bringing together the engineers that create Xeon and Xeon Phi with the talented Nervana Systems engineers and their deep learning engine can accelerate solutions”. However, such optimizations cannot always be made without cutting other features, for example by eliminating circuits for floating-point operations at higher precision than these workloads need. One thing is sure: the fight between Intel and NVIDIA to rule the deep learning world is probably about to take a new turn.

© HPC Today 2017 - All rights reserved.