What's new in Bright 9.1


Simplify building and managing clusters from edge to core to cloud

Welcome to Bright Cluster Manager 9.1, the latest version of our market-leading software for building HPC, data science, and private cloud Linux clusters. 9.1 delivers more cluster management capabilities from edge to core to cloud, combining built-in expertise, integrated cluster build, management, and monitoring, and platform independence. Release 9.1 takes simplified cluster management to a whole new level. Here are some of the key features in 9.1:

Ansible Module

  • Bright Cluster Manager 9.1 provides an integration with Ansible, enabling administrators to apply their existing Ansible skills and knowledge to build and manage Bright clusters using the familiar Ansible “playbook” approach.
  • With this integration, Bright helps organizations that have standardized on Ansible gain the power and flexibility that Bright Cluster Manager provides, using their configuration tool of choice.
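
The playbook approach described above might look like the following sketch. This is illustrative only: the collection and module names shown (brightcomputing.bcm, category, node) are assumptions rather than confirmed API; consult the Bright 9.1 Ansible documentation for the actual modules and parameters.

```yaml
# Hypothetical playbook: collection and module names are assumptions.
- name: Configure a Bright cluster
  hosts: head_node
  tasks:
    - name: Ensure a node category with the desired software image exists
      brightcomputing.bcm.category:        # hypothetical module name
        name: compute-gpu
        software_image: gpu-image
        state: present

    - name: Register a compute node in that category
      brightcomputing.bcm.node:            # hypothetical module name
        name: node001
        category: compute-gpu
        state: present
```

A playbook like this would be run with ansible-playbook in the usual way, so desired cluster state lives in version-controllable YAML alongside the rest of an organization's Ansible content.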

Cluster-as-a-Service for VMware

  • Bright Cluster Manager 9.1 provides an integration with vSphere that allows high-performance clusters to be created and managed without requiring vSphere skills and knowledge.
  • This integration allows organizations to fully utilize their vSphere infrastructure (i.e., 24x7x365), saving money, letting teams quickly spin up Clusters-as-a-Service, and accelerating innovation by increasing the capacity of the high-performance infrastructure that drives it.
  • Tapping into available vSphere capacity for high-performance needs also reduces the need for organizations to use public cloud infrastructure, further cutting costs and eliminating the need to move data into the public cloud.


OpenShift Integration

  • Bright Cluster Manager 9.1 includes an integration with OpenShift that allows organizations to use Bright to manage the underlying infrastructure in place of CoreOS. With Bright providing the cluster management for OpenShift, organizations that already use Bright to manage other clusters throughout their business - from edge to core to cloud - can leverage that expertise to manage their OpenShift infrastructure, gaining the increased capability, flexibility, and extensibility that Bright provides.

Auto-scaling Enhancements

  • Bright Cluster Manager can automatically increase or decrease the number of nodes (servers) available to an HPC workload manager or to Kubernetes in a cluster, regardless of whether those nodes are physical, virtual, on-premises, in the public cloud, or at the edge. The allocation of nodes can be determined by demand and by policy.
  • Bright Cluster Manager 9.1 includes enhancements to auto-scaling that optimize performance, provide more granular control, and improve ease of use. Features include:
       - Faster reallocation and re-purposing of nodes that share a common software image
       - Ability to control and prioritize which cluster resources can be used by different jobs
       - Ability to leverage priorities and job resource requirements from workload managers
       - Easier setup of auto-scaling via a wizard
  • With Bright Cluster Manager’s auto-scaling capability, organizations can create a flexible and dynamic high-performance computing infrastructure that spans from edge-to-core-to-cloud and can adapt to changing needs of both users and the organization.
  • Computing resources can be redirected and aggregated to support high-priority jobs when needed, and virtual resources from the public cloud or VMware can be added and removed to meet peak demands.
  • Bright’s auto-scaling capability eliminates silos in high-performance infrastructure, increases and optimizes overall compute utilization, and improves and accelerates responsiveness to end users’ needs.
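
The demand- and policy-driven allocation described above can be pictured with a minimal sketch. This is not Bright's actual algorithm - just a conceptual model in which queue demand proposes a node count and a policy clamps it between configured bounds:

```python
def desired_nodes(queued_jobs, nodes_per_job, policy):
    """Compute how many nodes an auto-scaler should target.

    Conceptual sketch only -- not Bright Cluster Manager's actual
    implementation. Demand (queued work) proposes a node count, and
    the policy's floor and ceiling clamp it, mirroring allocation
    that is determined by both demand and policy.
    """
    demand = queued_jobs * nodes_per_job           # demand-driven part
    floor, ceiling = policy["min_nodes"], policy["max_nodes"]
    return max(floor, min(demand, ceiling))        # policy-driven clamp

# Example: 10 queued jobs needing 2 nodes each, policy allows 4..16 nodes
print(desired_nodes(10, 2, {"min_nodes": 4, "max_nodes": 16}))  # -> 16
```

A real scaler would also weigh job priorities and per-job resource requirements from the workload manager, as the feature list above notes.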

Jupyter Integration

  • Bright Cluster Manager’s integration with Jupyter accomplishes several important things that make Jupyter a more effective and powerful tool for users:
       - Bright makes it easy for Jupyter Notebooks to run on a cluster, expanding the scope of work that can be performed through Jupyter by increasing the resources available within the notebook. This is especially important for areas like machine learning, where the scope of work escalates quickly as data is used to train models.
       - Bright’s integration with Jupyter provides a point-and-click interface for users who are not familiar with the complexity of submitting jobs to a cluster.
       - Bright’s integration with Jupyter Notebook includes support for HPC workload managers, making it possible for HPC practitioners to leverage Jupyter’s power and ease of use.
       - By running Jupyter Notebooks on Bright-managed clusters, users also have control over the environment their notebook runs in, such as where and how their kernels are run, the number of tasks, the consumable resources, the job name prefix, the directory the kernel runs in, and which queue the job is submitted to.
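
The kernel-environment controls listed above can be illustrated with a small sketch. The function below renders a generic Slurm-style batch script from the parameters the text mentions (queue, task count, job name prefix, working directory); it is a hypothetical illustration of running a kernel under a workload manager, not Bright's actual kernel template mechanism:

```python
def render_kernel_job(queue, ntasks, job_name_prefix, workdir):
    """Render a batch script that would launch a Jupyter kernel under an
    HPC workload manager.

    Illustrative only: Bright's Jupyter integration generates kernel
    definitions internally; this sketch just shows the kind of knobs
    (queue, tasks, name prefix, working directory) the text says users
    can control.
    """
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --partition={queue}",
        f"#SBATCH --ntasks={ntasks}",
        f"#SBATCH --job-name={job_name_prefix}-kernel",
        f"#SBATCH --chdir={workdir}",
        # Launch the IPython kernel against the connection file Jupyter
        # hands the job (placeholder environment variable).
        "python -m ipykernel_launcher -f $KERNEL_CONNECTION_FILE",
    ])

print(render_kernel_job("gpu", 4, "jupyter", "/home/alice/project"))
```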

Redfish automated BIOS management

  • Bright Cluster Manager 9.1 introduces support for Redfish BIOS management. This allows Bright administrators to view and update BIOS parameters on any or all nodes of the cluster that support the Redfish specification, from the command line or from the Bright View GUI.
  • This reduces the complexity and effort that admins face in updating BIOS settings across the cluster.
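
Redfish is a DMTF standard REST API, so the underlying operation Bright automates here can be sketched: pending BIOS changes are PATCHed to a system's Bios Settings resource as a JSON body with an Attributes object. The helper below only builds the URL and body (the BootMode attribute is an example; real attribute names come from the vendor's registry), and says nothing about how Bright drives this internally:

```python
import json

def bios_settings_patch(system_id, attributes):
    """Build the target URL and JSON body for a Redfish BIOS update.

    Per the Redfish convention, pending BIOS changes are PATCHed to the
    Bios "Settings" resource and applied on the next reboot. This sketch
    covers only request construction, not authentication or transport.
    """
    url = f"/redfish/v1/Systems/{system_id}/Bios/Settings"
    body = json.dumps({"Attributes": attributes})
    return url, body

# Example: request UEFI boot mode on system "1" (attribute name is
# illustrative; available attributes vary by vendor).
url, body = bios_settings_patch("1", {"BootMode": "Uefi"})
print(url)  # -> /redfish/v1/Systems/1/Bios/Settings
```

Sending the same PATCH to every Redfish-capable node is exactly the repetitive work that Bright's automation removes.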

Bright View UI Redesign

  • The Bright View graphical admin interface has been redesigned to improve navigation and usability, making it more efficient for admins to use.

Offloadable monitoring

  • In Bright Cluster Manager 9.1, the cluster monitoring function for the entire system can be “offloaded” from the system’s head node to a set of dedicated servers that perform system monitoring exclusively, freeing the head node to perform its other duties and allowing the system to continue scaling. If a dedicated monitoring node fails, the remaining monitoring nodes take over monitoring of the orphaned compute nodes until the failed monitoring node is reinstated.
  • As a result, Bright Cluster Manager can build and manage clusters of 50,000 to 100,000 nodes, depending on how many dedicated monitoring nodes are used. In addition, the dedicated monitoring nodes automatically back up monitoring data from the system, ensuring that monitoring data isn’t lost if the system goes down.



"We are excited about the new features in 9.1. Our development team has been hard at work implementing a host of features designed to better extend on-premise clusters to the public cloud and edge, improve ease of use, lower administrative costs, and increase standardization across the enterprise."

-Martijn de Vries, CTO



Bright Cluster Manager 9.1 Resources

PRESS RELEASE: Bright Cluster Manager Version 9.1

Bright Computing has announced the latest version of its Bright Cluster Manager (BCM) software. Version 9.1 further simplifies building and managing clusters from edge to core to cloud.


Learn how Bright automates the process of building and managing modern high-performance Linux clusters - eliminating complexity, enabling flexibility, and supporting scalability.


WEBINAR: Bright Cluster Manager 9.1

Bright Cluster Manager 9.1 is now generally available. This webinar will dive into the key features and functionality in 9.1.



Bill Wagner discusses Bright Cluster Manager 9.1 with InsideHPC.



Bright Cluster Manager 9.1 Demos

See Bright Cluster Manager in action! Watch these demos to learn how Bright Cluster Manager offers an integrated solution for building and managing clusters that reduces complexity, accelerates time to value, and provides enormous flexibility:

Ansible Module

Bright Auto Scaler

Bright View UI Redesign


Jupyter - Live Demo

Offloadable Monitoring