Realize the Massive Potential of your AI Infrastructure

With an ever-expanding array of use cases and solutions, Artificial Intelligence (AI) has rapidly become an essential business and research tool for enterprise, academic, and government users. Organizations everywhere are investing heavily in developing these solutions, and while AI capabilities are adding significant value, they are also proving to be some of the most demanding workloads in modern computing. It should come as no surprise that the high-performance computing (HPC) clusters required to run AI workloads place significant strain on traditional (and in some cases legacy) IT infrastructure, as businesses struggle to keep pace with ever-growing volumes of hot and warm data.

What if I told you that, for a growing number of companies operating in the AI arena, there is a solution that makes AI compute capacity readily accessible across business units and data science teams? During ISC 2019, we met up with Rich Brueckner from insideHPC to discuss this new solution, which features NVIDIA DGX servers managed by Bright Cluster Manager. Alongside Carlo Ruiz, Head of AI Data Center Solutions at NVIDIA, our CEO, Bill Wagner, discussed how this shared-infrastructure approach, owned and managed by IT (i.e., AI as a Service), has become a preferred strategy for enterprises committed to AI development. You can check out the full interview here: insideHPC ISC interview

NVIDIA DGX servers, delivered as a service and managed by Bright Cluster Manager, provide the technology and support that today's most demanding AI workloads require. The solution balances increased concurrency for data science workloads, the massive compute those workloads demand, and seamless management of all resources across the HPC cluster. With the NVIDIA/Bright solution, HPC users benefit from:

  • Solid scale-up architecture for handling data-/compute-intensive workloads
  • Easy cluster setup and provisioning
  • End-to-end cluster monitoring, health checking, and automated updates
  • Automated deployment and configuration of HPC workload managers, Kubernetes, machine learning/deep learning frameworks and libraries, and NGC containers (a sample workload is sketched after this list)
  • Ability to run all the above HPC workloads on the same DGX cluster
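To make the kind of deep learning workload such a cluster hosts more concrete, here is a minimal multi-GPU training sketch using PyTorch DistributedDataParallel. It is a hypothetical illustration, not part of the Bright or NVIDIA tooling: the model, data, and hyperparameters are placeholders, and the launch method (torchrun inside a cluster job allocation) is an assumption.

```python
# Minimal PyTorch DistributedDataParallel sketch for a multi-GPU (e.g., DGX) node.
# Hypothetical example: model, data, and hyperparameters are placeholders.
# Launch with: torchrun --nproc_per_node=8 train_ddp.py  (inside a cluster job allocation)
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and optimizer; a real job would build its own network and data pipeline.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        # Synthetic batch stands in for a real dataset sharded by a DistributedSampler.
        inputs = torch.randn(64, 1024, device=local_rank)
        targets = torch.randint(0, 10, (64,), device=local_rank)

        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # gradients are all-reduced across GPUs via NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a Bright-managed DGX cluster, a script like this would be submitted through the configured workload manager; the cluster manager's role is to keep the drivers, communication libraries, and framework environments consistent across nodes so the same job runs the same way everywhere.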

As organizations continue to leverage GPUs for HPC applications and host AI workloads on existing HPC clusters, a growing trend is to add DGX servers to these clusters alongside traditional CPU-based Linux servers. Because Bright Cluster Manager is platform independent, combining DGX servers with servers from another vendor is straightforward, resulting in a single unified cluster that Bright can centrally manage and monitor (a sketch of how such a mixed cluster looks to a scheduler follows the list below). For both DGX POD and DGX servers added to existing clusters, the combined power of the DGX system with Bright Cluster Manager translates to:

  • Faster time to value for your AI projects
  • Bigger AI projects and larger models through effective DGX clustering
  • Reduced complexity, reduced administrative burden, and maximum resource utilization
  • Flexibility and extensibility, from desk to data center, to cloud
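To illustrate the "single unified cluster" idea: when Kubernetes is deployed across mixed DGX and CPU-only nodes, the GPU nodes advertise their devices through the standard nvidia.com/gpu resource exposed by the NVIDIA device plugin. The sketch below is a hypothetical example using the official Kubernetes Python client (not part of Bright's tooling); it lists every node in such a cluster alongside its allocatable GPUs, showing both node types visible to one scheduler.

```python
# Hypothetical sketch: list nodes in a mixed DGX/CPU Kubernetes cluster and show
# how many GPUs each one advertises via the NVIDIA device plugin.
# Requires the official client:  pip install kubernetes
from kubernetes import client, config

def list_cluster_nodes():
    # Uses the local kubeconfig; inside a pod, config.load_incluster_config() would apply.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        name = node.metadata.name
        allocatable = node.status.allocatable or {}
        cpus = allocatable.get("cpu", "0")
        # GPU nodes (e.g., DGX servers) expose "nvidia.com/gpu"; CPU-only nodes omit it.
        gpus = allocatable.get("nvidia.com/gpu", "0")
        print(f"{name}: cpu={cpus} gpu={gpus}")

if __name__ == "__main__":
    list_cluster_nodes()
```

A GPU-aware workload then simply requests nvidia.com/gpu in its resource specification and lands on a DGX node, while CPU-only jobs schedule onto the rest of the cluster, so one pool of infrastructure serves both kinds of work.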

Once considered the exclusive domain of elite scientific organizations, AI has moved into the mainstream and is now viewed as an essential business and research tool by agencies, companies, and institutions worldwide. For those looking for a highly flexible, scalable, and extensible AI compute solution, a shared infrastructure built on clusters of high-performance NVIDIA DGX servers delivered as a service and managed by Bright Cluster Manager is the solution of choice.

To learn more about our joint solution with NVIDIA, please download our white paper:

Realize the Massive Potential of Your AI Infrastructure: Bright Cluster Manager on NVIDIA DGX Systems