New Parameters, New Atmospheres: High Performance Computing (HPC) in the Cloud


By Lionel Gibbons | July 26, 2016 | Cloud, HPC



In the world of HPC, demand has grown rapidly. Demand for cloud computing has grown just as dramatically. Not surprisingly, the two growth areas are converging.

The movement of HPC into the cloud has been hampered in some ways because it is still much cheaper to run big computing jobs on dedicated equipment. As Valerie Silverthorne points out, "Most enterprises that need HPC already have it -- and the necessary expertise to run it -- in-house." According to a March 2016 article, "the feedback that X-ISS (an HPC consulting company) is getting from its customers is that for regular, predictable workloads, the cloud is still at least twice as expensive as in-house solutions." Other issues also impede the move of HPC to the cloud for many enterprises, such as unsuitable licensing arrangements and technical hurdles.

On the other hand, in scientific research and development, where test scenarios often call for the short-term use of enormously increased capacity, HPC in the cloud has been a great boon. The same is true for smaller software companies and others that need HPC capabilities but don't have them in-house. "With its cheap costs, public cloud has democratised access to supercomputing or HPC, which used to be financially unviable for many organisations and limited to those who could spend tens of millions on hardware...HPC software company Cycle Computing, for example, would have had to spend $68M to run a supercomputer of the size and scale it needed in a traditional IT model, says Massingham. The AWS (Amazon Web Services) bill for the system was $33,000," according to an article by Archana Venkatraman.
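The scale of that gap is worth pausing on. Using only the two figures quoted above (the numbers are the article's; the calculation is simply illustrative arithmetic for a one-off run, not a general cost model):

```python
# Cost comparison from the figures quoted above: Cycle Computing's
# estimated $68M for a traditionally provisioned supercomputer versus
# the reported ~$33,000 AWS bill for the same workload.

ON_PREM_COST = 68_000_000   # traditional IT estimate, USD
CLOUD_COST = 33_000         # reported AWS bill, USD

ratio = ON_PREM_COST / CLOUD_COST
print(f"For this one-off run, cloud was roughly {ratio:,.0f}x cheaper")
```

For bursty, short-lived workloads like this one, the comparison is lopsided precisely because the on-premises figure includes hardware that would sit idle most of the time.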

Big banks and insurers also use HPC in the same on-and-off manner for estimation and forecasting calculations. In the case of Bankinter, Spain's sixth-largest bank, "With supercomputers on the cloud, the bank has brought down the average time for running simulations from 23 hours to 20 minutes. It estimates it would cost 100 times more in hardware alone if it chose to exit the cloud," according to the same article.
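The speedup implied by Bankinter's reported numbers is easy to work out (again, the figures are the article's; this is just the arithmetic):

```python
# Bankinter's reported improvement: simulations that took 23 hours
# now run in 20 minutes on cloud supercomputers.

old_minutes = 23 * 60   # 23 hours expressed in minutes
new_minutes = 20

speedup = old_minutes / new_minutes
print(f"Speedup: {speedup:.0f}x")  # 1380 / 20 = 69x
```

A roughly 69x reduction in turnaround time changes how often a risk model can be re-run, which is often worth more to a bank than the raw hardware saving.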

Finally, for those who build and operate data centers, one set of developments in particular holds great promise. The article cited earlier discusses Bright OpenStack, which "is open-source software for cloud computing that controls pools of hardware resources for processing, storage, and networking throughout a data centre." According to Matthijs van Leeuwen, chief executive of Bright Computing, "As soon as we started playing with OpenStack, we realized that this would be a fantastic tool to allow our developers to reserve resources and build a cluster-as-a-service model. They start them up – all virtualized – and then shut them down. Having used it for one and a half years, it's become stable and versatile."

In the same article, Van Leeuwen goes on to say, "With Bright OpenStack they can build a data-centre infrastructure where they can offer their own internal users a choice: today, do they want to run an HPC application, a big data application, or other workloads? They can also offer a choice of infrastructure: does this user want to run bare metal; does the other user want to run a virtual machine; and yet another user in a container?" Also from the same article: "the latest release of the software, in January, offers a scenario for bursting not just from bare metal, but also from a private to the public cloud." The possibilities opened up by such versatility are practically limitless, allowing for whatever customized arrangement meets your data center's requirements most efficiently and cost-effectively.
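The "bursting" idea Van Leeuwen describes can be sketched as a simple scheduling decision: satisfy as much of a job's node request as possible from the private pool, and send only the overflow to the public cloud. The function below is a toy illustration of that split; the name and parameters are ours for the example and do not represent Bright OpenStack's actual interfaces.

```python
# Toy sketch of a private-to-public-cloud bursting decision.
# plan_burst is illustrative only, not a real Bright OpenStack API.

def plan_burst(requested_nodes: int, private_free: int) -> dict:
    """Split a node request between the private pool and a public-cloud burst."""
    private = min(requested_nodes, private_free)   # use in-house capacity first
    public = requested_nodes - private             # overflow goes to the cloud
    return {"private": private, "public_burst": public}

# A 48-node job against a private pool with 32 free nodes:
print(plan_burst(48, 32))  # {'private': 32, 'public_burst': 16}
```

Real schedulers weigh far more than free node counts (data locality, licensing, egress costs), but the use-local-first, burst-the-rest shape is the core of the scenario described above.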

To talk more about HPC in the cloud, please contact us. Thank you.
