Physicists at Argonne National Laboratory are using the Mira supercomputer to simulate particle collision experiments at the Large Hadron Collider (LHC), with the goal of interpreting future LHC data. Researchers at the Argonne Leadership Computing Facility (ALCF) helped the team optimize its code for the supercomputer, allowing it to simulate billions of particle collisions faster than ever.
Billions of particle collisions per second analyzed
The Large Hadron Collider (LHC) at CERN is the world’s most powerful particle accelerator. There, scientists trigger billions of particle collisions per second in their quest to understand the fundamental structure of matter. With each particle collision producing about one megabyte of data, the LHC facility, located on the border of France and Switzerland, generates a massive amount of data. Even after filtering out approximately 99% of it, scientists are left with about 30 petabytes (30 million gigabytes) of data per year to analyze across a wide range of physics studies, including investigations of the Higgs boson and dark matter.

To meet the considerable challenge of interpreting this volume of data, researchers at the U.S. Department of Energy’s Argonne National Laboratory have demonstrated the simulation of particle collisions on Mira, the lab’s 10-petaflops IBM Blue Gene/Q supercomputer. “Simulating collisions is essential to help us understand the response of the particle detectors,” said lead researcher Tom LeCompte, an Argonne physicist and former physics coordinator of the LHC’s ATLAS experiment, one of its four particle detectors. “Differences between the simulated data and the experimental data can lead us to discover signs of new physics.”
This work marks the first time a supercomputer has been used to perform massively parallel simulations of LHC particle collisions. It demonstrates that supercomputers can help drive future discoveries at the LHC by accelerating the pace at which simulated data can be produced, and it shows how leadership-class computing resources can support large-scale physics experiments.

Since 2002, scientists at the LHC have relied on the Worldwide LHC Computing Grid for all of their data-processing and simulation needs. Connecting thousands of computers and storage systems across 41 countries, this international grid computing infrastructure allows data to be accessed and analyzed in near real time by an international community of more than 8,000 physicists working on the four major LHC experiments.

“Grid computing has been a success for the LHC, but it has its limits,” LeCompte said. “The first is that some LHC event simulations are so complex that they would take weeks to complete. The second is that the LHC’s computing needs are expected to grow roughly tenfold in the coming years.”

To investigate supercomputers as a possible tool for the LHC, LeCompte sought and obtained computing time at the ALCF. His project focuses on simulating the ATLAS events that are most difficult to handle with CERN’s existing grid computing infrastructure. Although the volume and nature of LHC data make it seem a natural fit for one of the world’s fastest supercomputers, extensive work was required to adapt an existing LHC simulation code to Mira’s massively parallel architecture.
With the help of ALCF researchers Tom Uram, Hal Finkel, and Venkat Vishwanath, the Argonne team transformed ALPGEN, a Monte Carlo-based application that generates events in hadron collisions, from a single-threaded simulation code into a massively multithreaded code that could run efficiently on Mira. By improving the code’s I/O performance and reducing its memory use, they adapted ALPGEN to the Mira system and made it run 23 times faster than the original. This optimization work allowed the team to simulate millions of LHC collision events in parallel. “By running these jobs on Mira, the team completed the equivalent of two years of ALPGEN simulations in a matter of weeks, freeing up CERN’s LHC grid computing infrastructure to perform other tasks,” said Uram.
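The article does not describe ALPGEN’s internals, but the general pattern behind turning a single-threaded Monte Carlo event generator into a massively parallel one can be sketched: shard event generation across workers, give each worker an independent random seed, and merge the per-worker batches at the end. The following is a minimal toy illustration of that pattern, not ALPGEN itself; all function names and the “event” model are invented for this sketch.

```python
"""Toy sketch of sharded Monte Carlo event generation.

This is NOT ALPGEN: a real event generator computes matrix elements
and parton kinematics, and on Mira the workers would be thousands of
compute ranks rather than a thread pool. The names here are
illustrative only.
"""
from concurrent.futures import ThreadPoolExecutor
import random

def generate_batch(seed, n_events):
    # Each worker gets its own seeded RNG so the event streams stay
    # statistically independent and reproducible across workers.
    rng = random.Random(seed)
    # Here an "event" is just a sampled energy-like value drawn from
    # an exponential distribution, standing in for real physics.
    return [rng.expovariate(1.0) for _ in range(n_events)]

def run_parallel(n_workers=4, events_per_worker=1000):
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        batches = pool.map(generate_batch,
                           range(n_workers),              # distinct seeds
                           [events_per_worker] * n_workers)
        # Merge per-worker batches into one event sample, analogous
        # to gathering per-rank output files into a single dataset.
        return [e for batch in batches for e in batch]

if __name__ == "__main__":
    events = run_parallel()
    print(len(events))  # 4 workers x 1000 events = 4000
```

Because every worker owns a distinct seed, the merged sample is deterministic for a given configuration, which is the property that lets a sharded run be validated against the original serial code.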
If supercomputers like Mira become better integrated into the LHC workflow, LeCompte hopes that many more simulations can eventually be moved onto them, relieving pressure on the LHC computing infrastructure and significantly accelerating the production of simulated data to compare with collected data. To move in that direction, his team plans to broaden the range of codes that can run on Mira; the next candidates are Sherpa, another event-generation code, and Geant4, a code that simulates the passage of particles through matter. “We also plan to help other physics groups use supercomputers such as Mira,” LeCompte said. “In our experience, it takes a year or two to adapt and rewrite a code, and another year to run it at large scale.”
© HPC Today 2019 - All rights reserved.