With the advent of high-speed internet links (100 Gbps) – the first transatlantic link of this kind, ANA100, was installed last year, and five or six more are expected before the end of 2014 – it may finally be possible to connect some of the world's main computing centers directly. The goal: to aggregate their power into a worldwide supercomputer of virtually limitless capacity. This, at any rate, is the initiative that the A*STAR group and Obsidian Strategics revealed to a select group of analysts during ISC14.
Of course, high-speed lines alone won't be enough, which is why the group has also extended the InfiniBand protocol so that applications can be deployed seamlessly across this global system. Testing InfiniBand over regional long-distance links, notably between NASA's Ames and Goddard Space Flight Centers, the team demonstrated a 30x improvement in bandwidth. In addition to extending the protocol, A*STAR (via the InfiniCortex project) is developing mathematical tools and software for interconnecting arbitrary supercomputer topologies in what it calls a "Galaxy". The main idea is to define a topology with the smallest possible diameter and the smallest possible number of links. In graph terms, imagine a set of sub-graphs representing the topologies of each of the machines involved, combined into a single – and therefore optimized – graph representing the Galaxy as a whole.
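To make the diameter-versus-link-count trade-off concrete, here is a minimal sketch (not the InfiniCortex tooling itself, and with toy ring topologies standing in for real machine interconnects): two sub-graphs are merged into one "Galaxy" graph, and a plain BFS measures how its diameter changes as long-haul links are added.

```python
from collections import deque

def bfs_eccentricity(adj, src):
    """Longest shortest-path distance from src (graph assumed connected)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(adj):
    """Graph diameter: the largest eccentricity over all nodes."""
    return max(bfs_eccentricity(adj, node) for node in adj)

def union(*graphs):
    """Merge several adjacency-set graphs into one (disjoint node names)."""
    merged = {}
    for g in graphs:
        for node, neighbors in g.items():
            merged.setdefault(node, set()).update(neighbors)
    return merged

# Two toy 4-node ring topologies standing in for two supercomputers.
ring_a = {f"a{i}": {f"a{(i - 1) % 4}", f"a{(i + 1) % 4}"} for i in range(4)}
ring_b = {f"b{i}": {f"b{(i - 1) % 4}", f"b{(i + 1) % 4}"} for i in range(4)}

galaxy = union(ring_a, ring_b)
# One long-haul link joining the two machines...
galaxy["a0"].add("b0"); galaxy["b0"].add("a0")
print(diameter(galaxy))  # -> 5: every a-to-b path funnels through a0-b0
# ...versus a second link placed at the opposite corners.
galaxy["a2"].add("b2"); galaxy["b2"].add("a2")
print(diameter(galaxy))  # -> 3: one extra link nearly halves the diameter
```

The optimization problem the article alludes to is exactly this: choosing where to place a small number of expensive inter-site links so that the diameter of the combined graph is as small as possible.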
Three test phases are currently underway. The first is being conducted across three centers via Singapore's SingAREN national R&E network. The second connects A*STAR's Singapore Computational Resource Centre to the Tokyo Institute of Technology, which hosts the Tsubame-KFC machine. Finally, the third phase, scheduled for September, will test longer distances between A*STAR, Oak Ridge's Titan, and Stony Brook University in New York. The group aims to present conclusive results at SC14 in November, by which time the trans-Pacific 100 Gbps line should be operational.
Clearly, this global supercomputer will not suit every scientific application, particularly those that require significant data exchange between nodes. It should, however, be able to tackle problems involving heavily discretized data, such as genomics, where DNA sequences can be decoded in parallel.
© HPC Today 2017 - All rights reserved.