HPC Challenges That Must Be Overcome


By Paul Williams | April 11, 2015 | HPC, exascale, supercomputing



Organizations operating HPC environments in the U.S. need the government to help supply more power, math, and money to ensure they keep pace with international competitors, experts recently told a federal hearing.

In a panel discussion before the House Subcommittee on Energy (video replay available), vendors and academics discussed how the Department of Energy's Advanced Scientific Computing Research (ASCR) program could better support their needs. This includes funding, of course, and those issues were covered in an HPCWire article. However, panelists also discussed technical issues that may keep American supercomputing environments from reaching their full potential.

The energy required for HPC leadership

One of the most critical technology challenges currently affecting HPC involves the massive amounts of energy needed to power supercomputers. David Turek, vice president for technical computing at IBM, noted to the House subcommittee the need for increased innovation in supercomputer energy efficiency.

"We forecast that absent serious and sustained innovation in energy efficiency, the cost for electricity for the fastest computer at the end of this decade or the next could approach $100 million per year," said Turek. "Clearly, the growing operating costs for supercomputers, left unchecked, could put a material damper on adoption and propagation throughout the economy and could have a serious downside effect on the nature and rate of economic and scientific innovation."
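Turek's figure is easy to sanity-check with a back-of-envelope calculation. The sketch below uses assumed inputs that do not come from the hearing: a hypothetical machine drawing roughly 100 MW and an industrial electricity rate of about $0.10 per kWh.

```python
# Back-of-envelope estimate of a supercomputer's annual electricity bill.
# Both inputs are illustrative assumptions, not figures from the hearing.

def annual_electricity_cost(power_mw, usd_per_kwh):
    """Annual cost in USD, assuming the machine runs at full power year-round."""
    hours_per_year = 24 * 365
    kwh_per_year = power_mw * 1000 * hours_per_year  # MW -> kW, then kWh
    return kwh_per_year * usd_per_kwh

# A hypothetical ~100 MW machine at $0.10/kWh:
cost = annual_electricity_cost(power_mw=100, usd_per_kwh=0.10)
print(f"${cost / 1e6:.1f} million per year")  # -> $87.6 million per year
```

At those assumed numbers the bill already lands in the ballpark Turek describes, which is why efficiency gains, not just bigger power budgets, dominate the discussion.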

Turek hopes for an era in which HPC research scientists can leverage supercomputing power from a computer at their desk, with an energy footprint similar to that of a normal PC. "The ubiquity of this caliber of tool in the hands of large numbers of people would be a tremendous catalyst to accelerating the pace of innovation in the economy," noted Turek.

Updating algorithms to better support exascale computing

Multiple panel participants noted that many of the algorithms used in supercomputing research today were developed decades ago. Dr. Roscoe Giles, chairman of the DOE Advanced Scientific Computing Advisory Committee, argued that these algorithms and the programming environments built around them must be updated to better support computing at the exascale level. Giles also hopes that updated networking and data transfer technology will improve the computing environments used by researchers, letting them accomplish more with less, an opinion echoed by Turek.

According to Giles, the Department of Energy must also invest in robust and scalable mathematical algorithms, operating systems, runtime systems, and tools for managing the data that will be generated and processed.

Of course, HPC users can't wait for the public sector to fill these gaps, but as awareness of these challenges grows, it may help inform the policies developed to supply supercomputing talent and other resources. To learn more, read the full charter for this initiative, Supercomputing and American Technology Leadership.



Paul Williams has written for Information Week, Electric Cloud, and many other publications. He lives in Columbus, OH.
