2020: Targeting Exascale
January 25, 2016

Originally, the LHCb detector was designed for operation with moderate luminosity and low pileup. However, in 2010, the collaboration opted for “luminosity levelling”, a novel solution which allows the experiment to adapt automatically to normal variations in luminosity which occur during an LHC run. In this way, the detector operates optimally at all times. “For the second LHC run, we will have to redefine the luminosity, to adapt to the conditions at the new energy of 13 TeV,” explains Patrick Koppenburg, physics coordinator of the LHCb collaboration. “However, the most important experimental challenge for us will be the new trigger system.”

The trigger rapidly sorts the most interesting data from the data that can be discarded without a significant loss of information. The zero-level (i.e. the first level) trigger system of LHCb “only” lets through one sixteenth of the initial data, but even that is too much to be stored permanently. “During the first run, the data which got through the zero-level trigger was treated using the Hlt1 and Hlt2 algorithms, almost in real-time,” says Patrick Koppenburg. “Then we observed that on certain crucial parameters such as lifetime acceptance, differences started appearing between the values of the trigger system and those calculated after analysis. So from January 2015, we will temporarily store to disk all the data filtered by Hlt1. Then, we’ll run Hlt2 on the data after having calibrated the detector. This procedure will enable us to eliminate most discrepancies at source, as the reconstruction of the data will be the same in the trigger system as in the final analysis.”
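
To make the two-stage idea concrete, here is a minimal Python sketch of a deferred software trigger: a cheap first stage filters the stream and buffers surviving events to disk, and a fuller second stage runs later, once calibration constants are available. The event fields, thresholds and function names are illustrative assumptions, not LHCb software.

```python
# Illustrative sketch of a deferred two-stage software trigger:
# stage 1 filters in (near) real time and buffers accepted events to disk;
# stage 2 runs later, after detector calibration, using the same selection
# that the final analysis will use.  All fields and cuts are invented.

import json
import random
import tempfile
from pathlib import Path

def hlt1_pass(event):
    """Cheap, fast first-stage selection (placeholder cut)."""
    return event["pt"] > 1.5               # GeV, illustrative threshold

def hlt2_pass(event, calibration):
    """Full second-stage selection, applied after calibration."""
    corrected_mass = event["mass"] * calibration["scale"]
    return 5.0 < corrected_mass < 5.6      # placeholder mass window, GeV

buffer_dir = Path(tempfile.mkdtemp())

# Stage 1: filter the incoming stream and buffer survivors to disk.
events = [{"pt": random.expovariate(1.0), "mass": random.gauss(5.3, 0.3)}
          for _ in range(10_000)]
buffered = [e for e in events if hlt1_pass(e)]
(buffer_dir / "hlt1_output.json").write_text(json.dumps(buffered))

# Later: calibrate the detector, then run stage 2 on the buffered data.
calibration = {"scale": 1.002}             # placeholder calibration constant
stored = [e for e in json.loads((buffer_dir / "hlt1_output.json").read_text())
          if hlt2_pass(e, calibration)]
print(f"{len(events)} events -> {len(buffered)} after Hlt1 -> {len(stored)} kept")
```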

The expectations of the LHCb collaboration for the second LHC run are focused on two topics: “In the first months, running at low luminosity, we’ll do cross-section measurements at 13 TeV of the production of charm, the B particle, and the Z and W particles and, more generally, we’ll measure the charged forward multiplicity,” Patrick Koppenburg concludes. “Then we will continue to accumulate statistical data for our precision studies of b and charm physics.”

PERFORMANCE IMPROVEMENTS
The experiments at the Large Hadron Collider (LHC) will start taking data at the new energy frontier of 13 teraelectronvolts (TeV) – nearly double the energy of collisions in the LHC’s first three-year run. These collisions, which will occur up to 1 billion times every second, will send showers of particles through the detectors.

With every second of run-time, gigabytes of data will come pouring into the CERN Data Centre to be stored, sorted and shared with physicists worldwide. To cope with this massive influx of Run 2 data, the CERN computing teams focused on three areas: speed, capacity and reliability.

“During Run 1, we were storing 1 gigabyte per second, with the occasional peak of 6 gigabytes per second,” says Alberto Pace, who leads the Data and Storage Services group within the IT Department. “For Run 2, what was once our ‘peak’ will now be considered average, and we believe we could even go up to 10 gigabytes per second if needed.”
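
Taken at face value, these rates translate into sizeable daily and yearly volumes. A simple back-of-envelope calculation (assuming, purely for illustration, that each rate were sustained around the clock):

```python
# Back-of-envelope data volumes implied by the quoted transfer rates.
# Simple arithmetic for illustration, not official CERN figures.

SECONDS_PER_DAY = 86_400

for label, rate_gb_per_s in [("Run 1 average", 1),
                             ("Run 1 peak / Run 2 average", 6),
                             ("Run 2 ceiling", 10)]:
    per_day_tb = rate_gb_per_s * SECONDS_PER_DAY / 1_000     # TB per day
    per_year_pb = per_day_tb * 365 / 1_000                   # PB per year
    print(f"{label:28s}: {per_day_tb:7.1f} TB/day, "
          f"~{per_year_pb:5.1f} PB/year if sustained")
```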

At CERN, most of the data is archived on magnetic tape using the CERN Advanced Storage system (CASTOR), while the rest is stored on the EOS disk pool system, which is optimized for fast analysis access by many concurrent users. Magnetic tape may be seen as an old-fashioned technology, but it is in fact a robust storage medium, able to hold huge volumes of data, which makes it ideal for long-term preservation. The computing teams have improved the CASTOR software, allowing CERN’s tape drives and libraries to be used more efficiently, with no lag times or delays. This allows the Data Centre to increase the rate at which data can be moved to tape and read back.

REDUCING THE RISK OF DATA LOSS
Reducing the risk of data loss – and the massive storage burden associated with this – was another challenge to address for Run 2. The computing teams introduced a data ‘chunking’ option in the EOS storage disk system. This splits the data into segments and enables recently acquired data to be kept on disk for quick access. “This allowed our online total data capacity to be increased significantly,” Pace continues. “We have 140 petabytes of raw disk space available for Run 2 data, divided between the CERN Data Centre and the Wigner Data Centre in Budapest, Hungary. This translates to about 60 petabytes of storage, including back-up files.” 140 petabytes (which is equal to 140 million gigabytes) is a very large number indeed – equivalent to over a thousand years of full HD-quality video.
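
A rough illustration of how 140 petabytes of raw disk space shrinks to roughly 60 petabytes of usable storage once every file is kept in two copies; the replica count and operational headroom below are assumptions made for the example, not published EOS parameters:

```python
# Rough illustration of raw vs. usable disk capacity when each file is
# kept in two replicas across the CERN and Wigner data centres.
# Replica count and headroom fraction are assumptions for illustration.

raw_pb = 140          # raw disk space quoted for Run 2
replicas = 2          # assumed: one copy per data centre
headroom = 0.15       # assumed fraction reserved for operations and buffers

usable_pb = raw_pb / replicas * (1 - headroom)
print(f"{raw_pb} PB raw -> ~{usable_pb:.0f} PB usable with {replicas} replicas")
# -> roughly 60 PB, consistent with the figure quoted above
```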

DATA SLICING AND REPLICATION
In addition to the regular “replication” approach – whereby a duplicated copy is kept for all data – experiments will now have the option to scatter the data across multiple disks. This “chunking” approach breaks the data into pieces, and reconstruction algorithms ensure that content is not lost even if multiple disks fail. This not only decreases the probability of data loss, but also cuts in half the space needed for back-up storage. Finally, the EOS system has been further improved, with the goal of more than 99.5% availability for the duration of Run 2. From faster transfer rates to new storage solutions, CERN is well prepared for the challenges of Run 2.
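
To see why chunking with reconstruction both tolerates multiple disk failures and roughly halves the space needed for back-ups compared with plain replication, here is a small illustrative comparison in Python; the (k, m) erasure-coding parameters are invented for the example and are not EOS’s actual configuration:

```python
# Compare the storage overhead of plain replication with an
# erasure-coded ("chunking") layout.  The (k, m) values below are
# illustrative only; they are not the actual EOS configuration.

def replication_overhead(copies):
    """Extra space as a fraction of the original data."""
    return copies - 1                      # e.g. 2 copies -> 100% extra

def erasure_overhead(k, m):
    """k data chunks plus m parity chunks; survives any m disk failures."""
    return m / k                           # e.g. 10+4 -> 40% extra

print("2-way replication :", f"{replication_overhead(2):.0%} extra space,",
      "survives 1 disk failure per file")
print("10+4 erasure code :", f"{erasure_overhead(10, 4):.0%} extra space,",
      "survives up to 4 disk failures per file")
```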

STORAGE IMPROVEMENTS FOR FUTURE NEEDS
The transition to Run 2 brought new requirements for the processing and preservation of the data. Up to 1 billion collisions occur each second, generating avalanches of particles in the detectors, and every second of operation of the LHC and its experiments produces several gigabytes of data which need to be stored, sorted and shared with physicists worldwide. In order to cope with this massive influx of data, CERN’s Data and Storage Services group focused on three aspects: speed, capacity and reliability. “During the first operating period, we stored 1 gigabyte per second, with occasional peaks of 6 gigabytes per second,” says Alberto Pace, who leads the Data and Storage Services group within CERN’s IT Department. “For the second operating period, 6 gigabytes per second is now considered an average, and we believe we can go up to 10 gigabytes per second if necessary.” At CERN, the bulk of the data is archived on magnetic tape through the improved CASTOR storage system, and the rest is stored on EOS, a shared disk pool with a filesystem optimized to allow quick access by a large number of concurrent users.

Tape technology may seem outdated. Yet it is a robust storage solution for recording huge amounts of data and is therefore ideal for long-term preservation. The teams improved the CASTOR storage system software, so that the tape drives and magnetic tape libraries can be used more effectively, without downtime or delay. The Data Centre can thus increase the pace at which data is transferred to and read back from tape.

ROADMAP DEVELOPMENTS AND EXASCALE IN 2020
CERN’s infrastructure has also evolved during Run 2 of the particle accelerator. In June 2015, Dirk Düllmann, deputy leader of the Data and Storage Services group in CERN’s IT Department, detailed the development roadmap towards the exascale goal for 2020. Dirk Düllmann provides storage services and develops data management frameworks for the physics community at CERN; he is responsible for the development and evolution of CERN’s storage components and of the high-performance disk pools used to analyse LHC data. The volume of data generated by CERN is projected to grow by a factor of 14 by 2020, to a total of over 200 petabytes. As a comparison, Google’s search engine handles 100 petabytes per year, and Facebook 180 petabytes per year.
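
A quick illustrative check of what a fourteen-fold increase means as a compound annual growth rate; the time spans are assumptions for illustration:

```python
# What does a 14x growth in stored data imply as a compound annual
# growth rate?  The time spans below are assumptions for illustration.

target_factor = 14

for years in (4, 5, 6):
    cagr = target_factor ** (1 / years) - 1
    print(f"x{target_factor} over {years} years -> ~{cagr:.0%} growth per year")
```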

FROM 2020 TO 2030: AFTER THE LHC, THE HL-LHC
The current phase, known as Run 2, extends until the end of 2017, with a new shutdown planned for 2018 before Run 3 resumes operations from 2019 to 2021. 2022 will be a pivotal year, as the planned developments will take the LHC towards the HL-LHC (High Luminosity LHC) phase, expected to last until 2030. These developments go hand in hand with a substantial increase in energy needs, with power consumption rising from 8 megawatts to 20 megawatts. System memory will be increased to 64 petabytes, and the LHC’s storage capacity should reach a staggering 1 exabyte (1,000 petabytes). Alongside these quantitative changes, the reliability challenge will grow by a factor of 7, with the mean time to interrupt (MTTI) expected to drop from 7 days to 1 day.
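
To put the MTTI figures in perspective, a small illustrative calculation of how often interruptions would occur under continuous, year-round operation:

```python
# Illustrative: interruptions expected per year of continuous operation
# for the two mean-time-to-interrupt (MTTI) figures quoted above.

for label, mtti_days in [("MTTI of 7 days", 7), ("MTTI of 1 day", 1)]:
    print(f"{label}: ~{365 / mtti_days:.0f} interruptions per year")
```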

ROOT, THE DATA ANALYSIS FRAMEWORK
This framework now has the following features:

  • Scalable, efficient, machine-independent format
  • Orthogonal object model
  • Object serialization
  • Automatic schema evolution
  • Object versioning
  • Integrated data compression
  • Easily tunable granularity and grouping
  • Remote access
  • HTTP, HDFS, Amazon S3, CloudFront and Google Storage compliant
  • Self-describing file format
  • ROOT I/O is used to store all the LHC data
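
To give a flavour of what these features mean in practice, here is a minimal PyROOT sketch that serializes an object into a compressed, self-describing ROOT file and reads it back; the file name, histogram contents and compression level are illustrative choices:

```python
# Minimal PyROOT example: write an object into a compressed,
# self-describing ROOT file and read it back.  File name, histogram
# contents and compression level are illustrative.

import ROOT

# Write: the file format is self-describing, so the reader needs no schema.
out = ROOT.TFile("example.root", "RECREATE")
out.SetCompressionLevel(5)                       # integrated data compression
h = ROOT.TH1F("mass", "Toy mass spectrum;m [GeV];entries", 100, 4.5, 6.0)
rng = ROOT.TRandom3(42)
for _ in range(100_000):
    h.Fill(rng.Gaus(5.28, 0.05))                 # toy data
h.Write()                                        # object serialization
out.Close()

# Read back: TFile.Open also accepts remote URLs (root://, http://, ...).
f = ROOT.TFile.Open("example.root")
h2 = f.Get("mass")
print("entries read back:", int(h2.GetEntries()))
f.Close()
```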

CONCLUSION
Since its inception, CERN has had a long tradition of deploying large-scale storage systems for the worldwide scientific community. During the first period of operation of the LHC, CERN passed the 100-petabyte mark and contributed to the rapid confirmation of the Higgs boson and to many other results from LHC operation. Over that first run, the deployment, storage and data management models evolved significantly, and CERN’s infrastructure has been continuously modernized and optimized to keep meeting new needs as they arise.
