Verbatim: Duncan Poole, President – OpenACC
April 11, 2014

[Figure: current compiler vendors supporting OpenACC]

Apart from CAPS Entreprise, which supports Intel Xeon Phi with its OpenACC source-to-source compiler, do you know why NVIDIA-PGI or Cray do not support that target?

PGI has actually demonstrated support for all three accelerator families: NVIDIA, AMD and Intel. I don’t know the schedule for official support of Intel Xeon Phi, but I have seen the demo. Cray, which builds systems with both kinds of accelerators, also has customers who need the compiler to work on both NVIDIA and Intel boards. The question to ask them is therefore what their release date is going to be…

Can you evaluate how broadly OpenACC has been adopted so far?

OpenACC is widely used in academia, in government organizations and in application domains such as climate and weather, molecular modeling or computational chemistry. But I don’t have any figures on the number of users. I represent the standard, not one compiler team or another, so I don’t deal directly with customers, with how they buy the compilers or with how they use them in the codes they develop. I have to be neutral and I have to like everyone’s efforts. Sometimes there are differences, and I have to be careful not to draw comparisons between the members. It’s their job to differentiate themselves.

How can OpenACC further expand its number of users and programmers?

We definitely have a number of efforts to expand OpenACC. Oscar Hernandez at ORNL, one of the directors of OpenACC, is in charge of developer adoption. We have a number of academic members, who can join easily as it costs them almost nothing. Some of these academic members have their own research compilers that they use to figure out the best way to use directives as well. For example, Barbara Chapman at the University of Houston has been involved in this kind of standard for decades. She also chairs cOMPunity, the community of OpenMP researchers and developers in academia and industry. Her group develops the OpenUH compiler at the University of Houston, which is a good example of an open-source academic tool. They have developed an OpenACC test suite and, along with PGI and Cray, they also contribute to SPEC ACCEL, a benchmark in the SPEC suite that now uses OpenACC.

But since you mentioned developers, I’d say there are two reasons to meet with them. One is to discuss the standard and how we push it forward, to address things like better programming models for C++ on accelerators, or how we measure performance on accelerators and what metrics profiling tools like Vampir or TAU should use. The other is to help them figure out how to port the codes for their own problems. This is typically what we do in the climate or oil-and-gas workshops, for instance, hosted by our partners such as government agencies or universities.

What are the next key features the OpenACC consortium is working on today?

The big one is to bring support for deep copy of derived types. We are also improving the debugging support so that cluster debuggers like Allinea DDT and TotalView can work better. And as mentioned earlier, we have this GCC effort going on. I would really love to announce a new draft published for comments at ISC14 in June, but I can’t confirm that as it depends on the work of the technical committee. For sure, it will be published by the end of the year. PGI, Cray and CAPS all work with Allinea, RogueWave and the Vampir and TAU folks, so it is possible for them to do incremental work in this area and then finalize it before the standard is released.
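As an illustration (an editorial addition, not part of the interview), here is a minimal C sketch of the kind of manual flattening that deep-copy support is meant to eliminate: a derived type holding a pointer currently forces the programmer to expose the member and its extent explicitly in the data clauses, whereas a deep-copy clause on the aggregate could handle both in one step. The type and function names are hypothetical.

    /* A derived type whose member points to dynamically allocated data. */
    typedef struct {
        int     n;
        double *values;
    } field_t;

    void scale(field_t f, double alpha)
    {
        /* Today: flatten the aggregate by hand so the data clause can
           name the pointed-to array and its extent explicitly. */
        int     n = f.n;
        double *v = f.values;

        #pragma acc parallel loop copy(v[0:n])
        for (int i = 0; i < n; ++i)
            v[i] *= alpha;

        /* With deep-copy support, a single clause on f could move the
           struct and the array it points to together. */
    }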

What about the future of OpenACC if accelerators are integrated directly on the die, as is the case with the AMD Fusion processors, so that there is no longer any need to manage data movements? Can OpenACC evolve into a programming model that distributes workloads among cores with specific data placement features?

Yes, there is room for hints that the programmer provides to describe where data is going to live and to make sure it is closest to the computational element involved. Also, as we are supposed to be portable, we need to cover not only these new integrated devices but also the ones that plug in in a more traditional fashion, given that we are still in a substantially x86 world.
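To make the portability point concrete (an editorial sketch, not from the interview): the same OpenACC data region that drives host-to-device transfers on a discrete board can be treated as a pure locality hint on an integrated, shared-memory device, so the source code does not have to change. Function and variable names are illustrative.

    void smooth(float *a, float *b, int n)
    {
        /* On a discrete accelerator these clauses trigger real copies;
           on an integrated device sharing memory with the CPU they can
           simply tell the compiler which data the kernel touches. */
        #pragma acc data copyin(a[0:n]) copy(b[0:n])
        {
            #pragma acc parallel loop
            for (int i = 1; i < n - 1; ++i)
                b[i] = 0.25f * (a[i-1] + 2.0f * a[i] + a[i+1]);
        }
    }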

In your opinion, is OpenACC going to be used with the forthcoming exascale machines?

Yes it will be. Now, it depends on what your timeframe is. Suppose that, maybe four years from now, accelerators start to look just like CPUs: then there will be less of a reason to have a difference, less of a reason to have two standards. OpenACC’s raison d’être is to give good performance on accelerators, and it may be that the features being developed in the OpenACC compilers will continue to deliver good performance in that timeframe. We are not trying to be OpenMP. For all kinds of good reasons, you can mix OpenMP and OpenACC, and people will.
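As a closing illustration (an editorial addition, not from the interview), here is a minimal C sketch of the kind of mixing he refers to: OpenMP threads handle parallel work on the host CPU cores while an OpenACC region offloads a data-parallel loop to the accelerator. The function and array names are hypothetical.

    void step(double *host_work, double *dev_work, int n)
    {
        /* Host side: spread this loop across the CPU cores with OpenMP. */
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            host_work[i] = 0.5 * (host_work[i] + 1.0);

        /* Device side: offload this loop to the accelerator with OpenACC. */
        #pragma acc parallel loop copy(dev_work[0:n])
        for (int i = 0; i < n; ++i)
            dev_work[i] *= 2.0;
    }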
