An inside look at the UberCloud HPC Experiment
March 13, 2014

Team 1 – Structural analysis model using HPC in the cloud. The picture shows a concrete anchor bolt tension capacity simulation with 1.9 million degrees of freedom.

Six HPC Cloud Use Cases from the UberCloud HPC Experiment

As a glimpse into the wealth of practical use case results so far, we chose to present six of the 125 UberCloud Experiment results, demonstrating the wide spectrum of CAE applications in the cloud (more can be found on the Teams pages of the UberCloud website).

Team 1: Heavy-Duty Structural Analysis Using HPC in the Cloud

This first team consisted of engineer Frank Ding from Simpson Strong-Tie, software provider Matt Dunbar with Abaqus software from SIMULIA, resource provider Steve Hebert from Nimbix and team expert Sharan Kalwani, at the time of this experiment with Intel. The team’s simulations range from anchorage tensile capacity and steel and wood connector load capacity to special moment frame cyclic pushover analysis. The HPC cluster at Simpson Strong-Tie is modest (32 cores), so when emergencies arise, the need for cloud bursting is critical. Other challenges are handling sudden large data transfers and performing visualization to ensure that the design simulation is proceeding along the correct lines.
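A cloud-bursting policy of this kind can be pictured as a simple scheduling rule: run locally while the in-house cluster has capacity, otherwise send the job to the cloud. The sketch below is purely illustrative; the queue-query and submit functions are hypothetical stubs and do not represent Simpson Strong-Tie’s actual workflow.

```python
# Illustrative cloud-bursting rule: run locally while the 32-core cluster
# has capacity, otherwise dispatch the job to a cloud resource provider.
# query_local_free_cores() and the submit_* functions are hypothetical stubs.

LOCAL_CORES = 32  # size of the in-house cluster mentioned in the text

def query_local_free_cores() -> int:
    """Stub: would ask the local scheduler (e.g. via its CLI) for idle cores."""
    return 8

def submit_local(job_script: str, cores: int) -> str:
    """Stub: would submit to the on-premise queue."""
    return f"local job for {job_script} on {cores} cores"

def submit_cloud(job_script: str, cores: int) -> str:
    """Stub: would stage input data and submit to the cloud provider."""
    return f"cloud job for {job_script} on {cores} cores"

def burst_submit(job_script: str, cores_needed: int) -> str:
    # Burst to the cloud only when the local cluster cannot host the job.
    if cores_needed <= query_local_free_cores():
        return submit_local(job_script, cores_needed)
    return submit_cloud(job_script, cores_needed)

if __name__ == "__main__":
    print(burst_submit("anchor_tension.inp", 64))
```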

Team 2: Simulating a New Probe Design for a Medical Device

The end user’s corporation is one of the world’s leading analytical instrumentation companies. They use computer-aided engineering for virtual prototyping and design optimization of sensors and antenna systems used in medical imaging devices. The other participants in this team were software provider Felix Wolfheimer from Computer Simulation Technology (CST) and team expert Chris Dagdigian from The BioTeam, Inc. The team used Amazon AWS cloud resources. From time to time, the end user needs large compute capacity in order to simulate and refine potential product changes and improvements. The periodic nature of the computing requirements makes it difficult to justify the capital expenditure for complex assets that would likely end up sitting idle for long periods of time. To date, the company has invested in a modest amount of internal computing capacity sufficient to meet base requirements. Additional computing resources would allow the end user to greatly expand the sensitivity of current simulations and may enable new product and design initiatives previously written off as “untestable”.

A hybrid cloud-bursting architecture permits local computing resources residing at the end-user site to be used alongside Amazon cloud-based resources. The project explored the scaling limits of Amazon EC2 instances, with scaling runs designed to test computing-task distribution via the Message Passing Interface (MPI); the use of MPI allows different EC2 instance type configurations to be leveraged. The team also tested the Amazon EC2 Spot Market, in which cloud-based assets can be obtained from an auction-like marketplace at significant cost savings over traditional on-demand hourly prices.
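The idea behind the MPI-based scaling runs can be illustrated with a minimal mpi4py sketch that spreads independent work items across ranks and gathers the results. It assumes mpi4py is installed on the instances and is only a generic illustration, not the team’s actual CST simulation setup.

```python
# Minimal MPI task-distribution sketch (mpi4py): each rank takes every
# size-th work item, computes locally, and rank 0 gathers the results.
# Run with, for example:  mpirun -np 8 python scaling_probe.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# A stand-in workload: 1000 independent "simulation" tasks.
tasks = list(range(1000))
my_tasks = tasks[rank::size]                  # round-robin distribution across ranks

local_result = sum(t * t for t in my_tasks)   # placeholder computation
results = comm.gather(local_result, root=0)   # collect partial results on rank 0

if rank == 0:
    print(f"{size} ranks, total = {sum(results)}")
```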

Team 8: Flash Dryer Simulation with Hot Gas Used to Evaporate Water from a Solid

This team consisted of end-user Sam Zakrzewski from FLSmidth, Wim Slagter from ANSYS as software provider, Marc Levrier from Serviware/Bull as resource provider and HPC expert Ingo Seipp from Science + Computing. In this project, Computational Fluid Dynamics (CFD) multiphase flow models were used to simulate a flash dryer with CFD tools that are part of the end-user’s extensive CAE portfolio. On the in-house server, the flow model took about five days for a realistic particle-loading scenario. ANSYS CFX 14 was used as the solver. Simulations for this problem used 1.4 million cells, five species and a time step of 1 millisecond for a total time of 2 seconds. A cloud solution allowed the end user to run the models faster and increase the turnover of sensitivity analyses. It also allowed the end user to focus on engineering aspects instead of spending valuable time on IT and infrastructure problems.
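As a rough back-of-envelope check of the figures quoted above (1.4 million cells, five species, a 1 ms time step over 2 seconds of simulated time), the short sketch below counts the time steps and cell-species updates in one transient run. It says nothing about the actual per-update cost of the CFX solver.

```python
# Back-of-envelope size of one transient run, using the figures from the text.
cells = 1.4e6        # mesh size
species = 5          # transported species
dt = 1e-3            # time step [s]
t_end = 2.0          # simulated time [s]

steps = round(t_end / dt)               # 2000 time steps
updates = steps * cells * species       # cell-species updates per run
print(f"{steps} time steps, ~{updates:.1e} cell-species updates per run")
```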

Team 40: Simulation of Spatial Hearing

This team consisted of an anonymous engineer end-user from a manufacturer of consumer products, software providers Antti Vanne, Kimmo Tuppurainen and Tomi Huttunen from Kuava, and HPC experts Ville Pulkki and Marko Hiipakka from Aalto University in Finland. The resource provider was Amazon AWS. The human perception of sound is a personal experience. Spatial hearing, i.e. the ability to distinguish the direction of a sound, depends on the individual shape of the torso, head and pinna, which is characterized by the so-called head-related transfer function (HRTF). To produce directional sounds via headphones, one needs HRTF filters that “model” sound propagation in the vicinity of the ear. These filters can be generated using computer simulations, but to date, the computational challenges of simulating HRTFs have been enormous. This project investigated the fast generation of HRTFs using simulations in the cloud. The simulation method relied on an extremely fast boundary element solver.
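To make the role of the HRTF filters concrete, the sketch below applies a pair of head-related impulse responses (HRIRs) to a mono signal by convolution, which is the standard way such filters render a directional sound over headphones. The HRIRs here are random placeholders; in the project they would come from the boundary element simulation, and this is not Kuava’s solver itself.

```python
# Illustrative use of HRTF filters: convolve a mono source with a
# left/right head-related impulse response (HRIR) pair to obtain a
# binaural signal for headphones. The HRIRs below are random placeholders.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                                   # sample rate [Hz]
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440.0 * t)          # 1 s test tone

rng = np.random.default_rng(0)
decay = np.exp(-np.arange(256) / 32.0)        # crude exponential envelope
hrir_left = rng.standard_normal(256) * decay
hrir_right = rng.standard_normal(256) * decay

binaural = np.stack([
    fftconvolve(mono, hrir_left, mode="full"),
    fftconvolve(mono, hrir_right, mode="full"),
], axis=1)
print(binaural.shape)                         # (fs + 255, 2) samples
```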

Team 58: Simulating Wind Tunnel Flow Around Bicycle and Rider

The team consisted of end-user Mio Suzuki from Trek Bicycle, software provider and HPC expert Mihai Pruna from CADNexus and resource provider Kevin Van Workum from Sabalcore Computing. The CAPRI to OpenFOAM Connector and the Sabalcore HPC Computing Cloud infrastructure were used to analyze the airflow around bicycle design iterations from Trek Bicycle. The goal was to establish a greater synergy among iterative CAD design, CFD analysis and the HPC cloud. Automating iterative design changes in CAD models coupled with CFD significantly enhanced the engineers’ productivity and enabled them to make better decisions. Using a cloud-based solution to meet the HPC requirements of computationally intensive applications decreased the turnaround time in iterative design scenarios and significantly reduced the overall cost of the design.
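The automated design loop can be pictured as a parameter sweep that regenerates the geometry, remeshes it and runs the CFD solver for each variant. The sketch below uses a hypothetical geometry-export stub, case layout and design parameter; snappyHexMesh and simpleFoam are standard OpenFOAM utilities run in batch, but this is not the actual CAPRI to OpenFOAM Connector workflow.

```python
# Illustrative CAD -> mesh -> CFD sweep over design variants.
# export_geometry(), the case directory layout and the "handlebar drop"
# parameter are hypothetical stand-ins for the real connector-driven loop.
import subprocess
from pathlib import Path

def export_geometry(case_dir: Path, handlebar_drop_mm: float) -> None:
    """Stub: would drive the CAD system (via a connector such as CAPRI)
    to regenerate the bicycle surface for this design variant."""
    (case_dir / "constant" / "triSurface").mkdir(parents=True, exist_ok=True)

def run(cmd: list[str], case_dir: Path) -> None:
    subprocess.run(cmd, cwd=case_dir, check=True)

def simulate_variant(base_case: Path, drop_mm: float) -> None:
    case_dir = base_case.with_name(f"{base_case.name}_drop{drop_mm:g}")
    # a copy of the template OpenFOAM case is assumed to exist already
    export_geometry(case_dir, drop_mm)
    run(["snappyHexMesh", "-overwrite"], case_dir)   # mesh the new geometry
    run(["simpleFoam"], case_dir)                    # steady RANS flow solution

if __name__ == "__main__":
    for drop in (100.0, 110.0, 120.0):               # hypothetical design sweep
        simulate_variant(Path("bike_case"), drop)
```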

Team 118: Conjugate Heat Transfer to Support the Design of a Jet Engine in the Cloud

This team consisted of the end user, thermal analyst Hubert Dengg from Rolls-Royce Germany, the resource providers Thomas Gropp and Alexander Heine from CPU 24/7, software provider ANSYS, Inc. represented by Wim Slagter, and HPC/CAE expert Marius Swoboda from Rolls-Royce in Germany.

The aim of this HPC experiment was to link the commercial CFD code ANSYS Fluent with an in-house finite element (FE) code. This conjugate heat transfer process is very demanding in terms of computing power, especially when 3D CFD models with more than 10 million cells are required. Consequently, it was expected that using cloud resources would have a beneficial effect on computing time.

Two main challenges had to be addressed: bringing software from two different providers with two different licensing models together, and getting the Fluent process to run on several machines when called from the FE software. After a few adjustments, the coupling procedure worked as expected.
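The second challenge, starting Fluent on several machines from within the FE code’s coupling step, essentially amounts to launching the solver in batch mode with a journal file and a list of hosts. The sketch below shows one way such a call could look from a Python coupling driver; the journal file name, host file and core count are assumptions, not the team’s actual configuration.

```python
# Illustrative batch launch of ANSYS Fluent across several machines, as a
# coupling driver might do between FE iterations. The journal file, host
# file and core count are placeholders.
import subprocess

def run_fluent_step(journal: str, hosts_file: str, n_procs: int) -> None:
    cmd = [
        "fluent", "3ddp",          # 3-D double-precision solver
        "-g",                      # no GUI, batch mode
        f"-t{n_procs}",            # number of parallel processes
        f"-cnf={hosts_file}",      # machine/host list for the parallel run
        "-i", journal,             # journal (script) driving this CFD step
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_fluent_step("coupling_step.jou", "hosts.txt", 64)
```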

From the end user’s point of view, there were multiple advantages to using the external cluster resources, especially when the in-house computing resources were already at their limit. Running bigger models in the cloud (in a shorter time) gave more precise insights into the physical behavior of the system. The end user also benefited from the HPC cloud provider’s knowledge of how to set up a cluster, run applications in parallel based on MPI, create a host file, handle the FlexNet licenses and prepare everything needed for turn-key access to the cluster.
