By David Turek
VP, Exascale Computing, IBM
I’m not usually a big fan of anniversaries (except my wedding day, of course), but I make an exception when it comes to IBM’s collaboration with the US Government on supercomputing.
Today is the 20th anniversary of the Accelerated Strategic Computing Initiative–a Department of Energy program that has safeguarded America’s nuclear weapon arsenal and, at the same time, helped IBM assert ongoing leadership in this most demanding of computer domains.
With help from National Laboratories scientists, teams of IBMers have produced five generations of supercomputers–repeatedly ranking among the fastest machines in the world. The journey led us to where we are today: developing a sixth generation of computers, data-centric systems designed from the ground up for the era of big data and cognitive computing.
The program was also instrumental in IBM’s rebound after the company’s near-collapse in the early 1990s.
I remember the day the original ASCI contract was signed. IBM and DOE people had gathered in a conference room at the IBM headquarters north of New York City. Unexpectedly, Lou Gerstner, IBM’s then-new CEO, popped in and gave off-the-cuff remarks. I remember him saying, “IBM is all about solving hard problems. This is the hardest problem there is. We’re all in.”
I was sitting in a chair and he was standing behind me. He put his hands on my shoulders and said, “Here’s the guy who will do it.”
The task of creating computers capable of simulating nuclear explosions, so that nations no longer have to test actual bombs, turned out to be difficult indeed.
The first years were the toughest.
I had been with IBM for 20 years by then and had experience in both hardware and software development. Most relevantly, I had been involved in an effort to transform IBM mainframes into supercomputers. That didn’t pan out, but in the process we learned a lot about what it would take to build high-performance computers. We had relaunched our supercomputing effort with a new technology strategy just before we engaged with the Department of Energy.
To ramp up the ASCI project development team quickly, I cherry-picked people from IBM’s offices and labs all over the Hudson Valley. Some of them were green, in their 20s, but they had the nerve to rethink computing.
We made a series of radical choices. We adapted processors and systems technologies that IBM had developed for its scientific workstation business. UNIX would be the operating system. We had to invent new networking to hook all the processors together. And we were one of the first groups at IBM to use open source software. We had to move too quickly to code everything ourselves.
We also had to develop a new process for developing and manufacturing such complex systems–with thousands, and later millions, of processors.
With each new generation, the requirements increased dramatically. The first machines produced 3 teraflops of computing performance, or 3 trillion floating point operations per second. The current generation produces 20 petaflops, or 20 quadrillion operations per second. That meant we had to invent not just individual technologies but whole new approaches to computing.
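For scale, the jump described above works out to a speedup of roughly 6,700x between the first and current generations. A quick back-of-the-envelope calculation (the SI prefixes are standard; the only inputs are the figures quoted above):

```python
# Growth in peak performance across ASCI generations, using the
# figures quoted in the text: 3 teraflops for the first machines,
# 20 petaflops for the current generation.

TERA = 10**12   # trillion
PETA = 10**15   # quadrillion

first_gen_flops = 3 * TERA      # 3 teraflops
current_gen_flops = 20 * PETA   # 20 petaflops

speedup = current_gen_flops / first_gen_flops
print(f"Performance grew by a factor of about {speedup:,.0f}x")
```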
For instance, in the early 2000s, IBM Research and scientists at Lawrence Livermore National Laboratory teamed up to create a new supercomputing architecture that harnessed millions of simple, low-powered processors. The first systems based on this architecture, called Blue Gene/L, were incredibly energy efficient and exceeded the performance of Japan’s Earth Simulator by more than a factor of 10, helping the US recapture leadership in supercomputing.
Today, we’re developing yet another generation of supercomputers for the National Laboratories. They’re based on the principle that the only way to efficiently handle today’s enormous quantities of data is to rethink computing once again. We have to bring the processing to the data rather than follow the conventional approach of transmitting all of the data to central processing units.
When we first proposed this solution, we were practically laughed out of the room. But, today, data-centric computing is becoming accepted across the tech industry as the way to go forward.
Through the ASCI project, I learned lessons that I think are critical for any large-scale development project in the computer industry. First, you must assemble an integrated team of specialists in all of the hardware and software technologies. You can’t negotiate to get the technologies and skills you need from a half dozen vice presidents who have their own priorities. Second, you must see the big picture. Don’t think of a server computer in isolation. Plan so you can integrate servers and other components in large systems capable of taking on the most demanding computing tasks.
I guess there’s one more critical lesson I learned from this tremendous experience: recruit bright and fearless people and ask them to do nearly impossible things. Chances are, they’ll rise to the challenge.
© HPC Today 2019 - All rights reserved.