Why the U.S. National Strategic Computing Initiative (NSCI) Matters


By Kristin Hansen | September 15, 2015 | HPC, NSCI, IDC
A centerpiece of last week’s IDC HPC User Forum agenda was the September 10 morning session dedicated to multiple expert panel discussions on the U.S. National Strategic Computing Initiative (NSCI). The NSCI, created by executive order on July 29, 2015, defines a multi-agency framework for furthering U.S. “economic competitiveness and scientific discovery” through orchestrated advances in high performance computing (HPC). Pursuing the objectives outlined by the NSCI will, over the next several years, have important consequences for HPC users, HPC vendors, and citizens at large.

The NSCI: An “Apollo Project” For Our Time

While launched to relatively little media or public fanfare, the NSCI can reasonably be compared to other sweeping, government-led initiatives designed to accelerate economic, scientific, or other advances in a specific sector. Various panel speakers at the IDC event cited the U.S. government’s role in expanding rural electrification, fostering space exploration, birthing atomic energy, and – more recently – shaping the Internet as apt comparisons for the NSCI’s vast, transformative potential in the years ahead.

What, exactly, is the NSCI expected to transform?  In a nutshell: the world’s capacity to calculate, analyze, and ultimately address some of the most pressing challenges we face as a global citizenry.  Think climate change, cancer and other serious illnesses, and economic disparity with its attendant threats to collective security and well-being.  Naturally, many purpose-built supercomputers and other high-performance, clustered systems all over the world are already engaged in important efforts to tackle these and other complex problems.  They perform calculations, run simulations, model outcomes, and consume massive levels of computational power (think “petaflops”) in the process.  These systems, prevalent in academic institutions, government agencies, and regional or national supercomputing centers, can be thought of as “computing intensive” in their theoretical problem-solving approaches.

Meanwhile, across various commercial sectors, a different kind of computational discipline is taking hold, as organizations grapple with discerning meaning from reams and reams of actual market and consumer data. As more and more industries make the painful shift to “big data,” a.k.a. data-driven decision-making and analysis, they are typically cobbling together large data warehouses with the capacity to store, filter, and analyze enormous volumes of data (think “petabytes”) in real time or near real time. These “data intensive” environments can look very different from their “computing intensive” counterparts, tending to be built from off-the-shelf commodity components rather than developed as purpose-built systems.

Combining the best of computational and data-driven approaches

The NSCI lists five key objectives (read the document here in its entirety), which collectively seem to underscore one overarching goal: to revolutionize our problem-solving capabilities by combining the best attributes of today’s “computing intensive” and “data intensive” architectures.  On the one hand, systems that can perform complex modeling and simulation to derive insightful theoretical outcomes.  On the other hand, architectures fast and nimble enough to process and respond to massive volumes of real – rather than theoretical – information.  To cite just one example, think of how climate projections could be enhanced by routinely applying the latest modeling techniques alongside decades, or even centuries, of actual trended data.  

Supercomputing for the masses

What’s another way of saying all this? In essence, the NSCI will accelerate the convergence of high performance computing (HPC) and big data. Initially, a handful of federal agencies -- the Department of Defense, the Department of Energy, and the National Science Foundation, to name a few -- will spearhead broader interagency initiatives to simplify and standardize supercomputing platforms, expand the availability of technical training and skills, sweep away technical roadblocks (e.g., the practical limits of Moore’s Law), and achieve resulting leaps and bounds in computing performance (i.e., exascale). Most of these initiatives will focus on enhancing performance at the very upper end of the supercomputing spectrum: the “top” of the Top 500.

Over time, however, as with other broad government initiatives, several “trickle down” benefits are expected to emerge across broader segments of the economy. The fifth stated objective of the NSCI, to “ensure that the benefits of the research and development advances are, to the greatest extent, shared between the United States Government and industrial and academic sectors,” speaks directly to this principle.

As supercomputing becomes more powerful, more standardized, better staffed, and more capable of embracing a blend of computational and data-driven approaches to problem solving, all sectors and tiers of the economy stand to benefit.  The convergence of HPC and big data – of “petaflops” and “petabytes” – will bring supercomputing to the masses, enabling more and more of us to participate in solving the world’s biggest challenges.  


To learn more about Bright Computing’s management capabilities for HPC and big data clusters, click here.  

To watch extended video coverage of the IDC HPC User Forum panel on the National Strategic Computing Initiative, provided by InsideHPC, click here.
