Optical processing: a paradigm shift for CFD?
September 27, 2014

Now in prototype form, revolutionary new technology from UK start-up Optalysys may cut HPC and Big Data applications such as CFD down to size – delivering a step change in computing power at a fraction of current costs.

Testing the limits of Moore’s Law, and the capabilities of most modern supercomputers, HPC and Big Data applications such as CFD require enormous volumes of data to be processed and analyzed within realistic timeframes – minutes rather than days.

But with processing time dominated by memory access operations rather than processor clock speeds, computing power is invariably a limiting factor: despite the use of large distributed processor arrays and supercomputers, the demands of these data-intensive applications frequently outstrip the capabilities of traditional electronic computing, stalling projects for protracted periods while gigabytes of data are downloaded.

There is, however, light at the end of the tunnel (quite literally) thanks to UK-based firm Optalysys, whose revolutionary new optical processor has the potential to deliver the equivalent of a quintillion floating point operations per second (an exaflop) in a traditional office setting – via a standard ‘desktop’-sized computer and a conventional mains supply.

The constraints

Initially, Optalysys is focusing its attention on CFD: an essential tool for a quarter of a million engineers and scientists around the world, which is used in a range of sectors – from R&D and manufacturing to environmental and medical applications.

Optalysys’ Chairman James Duez explains, “The speed of processing needed to create CFD models is often constrained by current electrical computing capabilities. The speeds that optical processing can achieve could eliminate this problem. So the sector represents a logical target for us.”

Company founder and CEO Dr Nick New continues, “The largest CFD models, such as weather systems and turbulent airflows, are based on Direct Numerical Simulation (DNS) – applying mathematical operators such as linear algebra, derivative functions, matrix inversion and interpolation. Such is the scale and complexity of these models that even the most powerful supercomputers can take days, weeks – or even months – to simulate airflow over a relatively small area. To take just one example, a 32,000 time-step submarine simulation by the DAAC [the Distributed Active Archive Centers, part of NASA’s Earth Observing System Data and Information System] took 45 hours to complete, despite running on a Cray XE6 using 12,288 processors.”

Advantage: optical

The spectral derivative functions used in high-end CFD models are based on Fourier Transform (FT) operations which, using Optalysys’ optical process, can be calculated with a combination of low-power (mW) laser light, a lens, an input device and a CMOS (complementary metal-oxide-semiconductor) sensor to capture the results.
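To make the connection concrete, the short sketch below computes a spectral derivative with a digital FFT (NumPy). It illustrates the FT-based operation described above, not Optalysys’ optical implementation; the grid size and test function are illustrative assumptions.

    import numpy as np

    # Spectral derivative via FFT: differentiate u(x) by multiplying its
    # Fourier transform by i*k, then transforming back. This is the
    # FT-based derivative used in spectral CFD methods, computed here
    # digitally rather than optically.
    n = 256                                     # grid points (illustrative)
    L = 2 * np.pi                               # periodic domain length
    x = np.linspace(0.0, L, n, endpoint=False)
    u = np.sin(3 * x)                           # test field u(x) = sin(3x)

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    du = np.fft.ifft(1j * k * np.fft.fft(u)).real

    # Agrees with the analytic derivative 3*cos(3x) to machine precision
    print(np.max(np.abs(du - 3 * np.cos(3 * x))))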

According to Dr New, numerical data is entered using liquid crystal micro-displays known as SLMs [Spatial Light Modulators, produced in large volumes for 4K-resolution projectors, with pixel sizes of around eight microns]. The more SLM pixels employed, the higher the resolution of the mathematical process – and the greater the optical advantage. Frame speeds can exceed 1 kHz, allowing data to be entered at extremely high rates.
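As a rough back-of-envelope check, those figures imply an input bandwidth in the billions of values per second. The panel geometry below (4096 × 2160 pixels) is an assumption for illustration; the article quotes only “4K resolutions” and frame speeds above 1 kHz.

    # Back-of-envelope input bandwidth for an SLM-based input stage.
    pixels = 4096 * 2160         # assumed 4K-class SLM pixel count
    frame_rate_hz = 1_000        # "frame speeds can exceed 1 kHz"

    values_per_second = pixels * frame_rate_hz
    print(f"{values_per_second:.2e} values entered per second")  # ~8.8e+09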
