High Speed Computing: Where Is the Industry Heading?
August 27, 2015

While we thought the supercomputing landscape had gone relatively quiet, we did not count on the vitality of a growing market likely to open new architectural pathways, goading established players to go further. A round-up.

Since the beginning of this year, the hardware-acceleration landscape has evolved significantly, with players able to innovate on fronts other than raw power. Energy consumption, and the performance-per-watt ratio, must now also be taken into account, as evidenced by the recent upheaval at the top of the Green500 (see the interview with AMD’s Jean-Christophe Baratault in this issue). In both cases, the challenge for designers of acceleration solutions is to keep gaining in efficiency. That challenge is regularly raised by the famous Moore’s law, but also by real architectural progress. And where a solution once needed years in the making, the time has come for agility, with short iterative cycles and micro-architectural developments that have a real, measurable impact from one CPU generation to the next.

The hardware is not everything
When it comes to acceleration, hardware alone is not enough. The software side is at least as important, as evidenced by the evolution of several programming paradigms, such as NVIDIA’s CUDA, and open implementations like OpenMP and OpenCL, which promote cross-pollination of ideas and help create ever more sophisticated algorithms.

CPU against GPU?
The last two years have been marked by a rather virulent opposition between the CPU and GPU approaches. The GPU side, advocated by AMD and Nvidia, requires revising code to take advantage of the parallel architecture. On the other side, Intel favored the CPU approach, banking on x86 compatibility. So far the two approaches are not mutually exclusive, as illustrated by the solution adopted by TACC (Texas Advanced Computing Center) for its Stampede supercomputer, which combines 8-core Xeon E5-2680 processors, Xeon Phi coprocessors and Nvidia Kepler K20 GPUs. The best of both worlds!

Is there a winner?
One is tempted to answer yes and no. In reality, it is not the acceleration platform alone that makes the difference; the code being executed also drives performance, notably the portions rewritten to take full advantage of the hardware on which they run. Some highly iterative workloads benefit from being optimized for a GPU, while others gain efficiency on Xeon Phi architectures.

Some of the newest directions we present in this review are of the “many-core” type. The Kalray manycore approach is worth considering carefully. And just recently, the OpenPOWER consortium sprang a surprise with the architectural changes we describe later. One thing is for sure: whereas performance was once a matter of clock frequency, it is now the result of a subtle alchemy of massively parallel processors, architectural innovation and execution, combined with a software environment that, all together, strengthens High Performance Computing.

© HPC Today 2019 - All rights reserved.
