6 – The Goals of IT and HPC Are Also Different
CIOs who are new to HPC often make the mistake of treating it like a typical IT function. This misperception can lead to lost productivity and to conflict between the CIO’s office and the company’s HPC staff.
I recently interviewed Jim Barrese, chief technology officer of eBay’s PayPal subsidiary, which adopted HPC not long ago for real-time detection of online consumer fraud. In that interview, he advised CIOs to “clearly understand that HPC is not a mass consumption technology where we enable everyone in our organization with it. This is a deep engineering function. It’s custom built and includes writing software to solve cutting-edge problems… Think of HPC not as an IT function but as a competitive business advantage. There’s a hard link between HPC and PayPal’s top line and bottom line.”
In important respects, HPC is different from general IT deployments. IT is generally about provisioning – equipping each of the company’s knowledge workers with the basic computing tools they need to perform their jobs productively, and providing as little beyond that as possible to stay within the budget. HPC, on the other hand, is about enablement – providing a small subset of specialized knowledge workers with the most powerful computational tools the company can afford. A typical IT worker’s desktop or laptop system is capable of fully supporting the worker’s computing requirements, while there is often no limit to the amount of computing power an HPC user could exploit on the company’s behalf.
7 – Key IT Datacenter Technologies Have Trickled Down from HPC
There is a perennial debate between those who argue that key IT technologies “bubble up” from the low end, such as embedded and desktop devices, and those who counter that key technologies “trickle down” from the high end, especially HPC. In reality, of course, both arguments are correct. Technological innovation is bidirectional, flowing up and down. For example:
– Standard x86 processors bubbled up into HPC from the market for desktop/laptop computers.
– Conversely, x86-based clusters were born in the HPC market and later trickled down into enterprise IT datacenters.
– The Linux operating system is an HPC innovation that helped make clusters dominant in HPC. Linux clusters later began moving into financial services firms and other commercial datacenters.
– Grid computing and cloud computing are two more important technologies that have trickled down from HPC to mainstream commercial markets.
– On the bubble-up front, several processor and coprocessor types have been making their way into HPC from the embedded and consumer markets, including GPUs and ARM- and Atom-based devices.
8 – HPC Systems Now Start at Under $10,000
Decades ago, entry pricing for a supercomputer was in the $25 million to $30 million range. Thanks to the transition to clusters based on industry-standard technologies, pricing for HPC systems now starts at less than $10,000. With entry prices this low, HPC systems have become affordable for many more companies than ever before.
9 – Commercial Firms Are Also Adopting HPC for Challenging Big Data Problems
High-performance data analysis (HPDA) is the term IDC coined to describe the convergence of the established data-intensive HPC market and the high-end commercial analytics market that is starting to move up to HPC resources. The financial industry has been running analytics on HPC systems at least since the late 1980s. But newer methods, from MapReduce/Hadoop to graph analytics, have greatly expanded the opportunities for HPC-based analytics. A 2013 IDC worldwide study showed that 67% of HPC sites are running HPDA workloads. IDC forecasts that revenue for HPC servers acquired primarily for HPDA use will grow from $739 million in 2012 to $1.4 billion in 2017. Revenue for the whole HPDA ecosystem, including servers, storage, interconnects, software, and services, should be roughly double the server figure alone.
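The MapReduce pattern named above can be sketched in a few lines. This is an illustrative word-count toy, not Hadoop's actual implementation; the fraud-themed sample records are invented for the example:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (key, value) pairs -- here, a count of 1 per word occurrence.
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently -- this is the
    # step that parallelizes cleanly across many nodes.
    return {key: sum(values) for key, values in groups.items()}

records = ["fraud alert fraud", "alert cleared"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts == {"fraud": 2, "alert": 2, "cleared": 1}
```

The appeal for HPDA workloads is that the map and reduce steps touch each record or key independently, so the same three-phase structure scales from one laptop to thousands of cluster nodes.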
10 – There Is More on Tap from HPC
One of the next important developments IDC expects to come out of the HPC market is more capable network technologies to speed communications between cores, processors, servers, and nodes. This development should help to address the so-called memory wall, the growing gap between escalating peak processor speeds and the lagging ability of memory systems and internal networks to feed processors with enough data to keep them busy. Improving network bandwidths and latencies should be especially important for challenging Big Data tasks faced by businesses and government organizations alike.
What Does This Mean?
As more companies of all sizes in more markets learn to exploit HPC (and its close relative, HPDA) to speed and improve innovation, competitors lacking this advantage will fall behind. Successful CIOs will therefore need to gain a basic understanding of HPC and ensure that their organizations carefully consider whether to adopt this technology.
© HPC Today 2020 - All rights reserved.