Is OpenStack Blinded by Science?
If this is November, this must be SC. Yes, the annual Supercomputing conference is upon us, and I’ve already had several conversations about the role OpenStack should play in scientific computing. 

Running scientific workloads on bare-metal servers has long been the accepted way to do things. Lately, though, the need to squeeze the most out of every piece of hardware, and to handle a broader range of new workloads, is making virtualization more compelling.

If you’ve been thinking about how OpenStack could play a role in your scientific computing infrastructure, I suggest you visit the OpenStack Foundation’s website on using OpenStack for Scientific Research.

Last year, the foundation published a book that addressed many of the concerns researchers and sysadmins have about whether OpenStack is up to the task of providing infrastructure for scientific computing. This year brings version 2 of “The Crossroads of Cloud and HPC: OpenStack for Scientific Research,” which explores OpenStack cloud computing for scientific workloads.

In the book, nine research organizations share how they use OpenStack for HPC-specific scenarios. It covers all of the hot-button topics in depth, with case studies on:

  • virtualization
  • network fabrics
  • high-performance data
  • workload management
  • infrastructure management

So if you are exploring the benefits of cloud and want to know how to bring those benefits to HPC workloads, check out this free book. It will dispel the misinformation surrounding cloud and HPC and put you in a position to make the right decision for your organization.