the Bright blog


Bill Wagner

Recent Posts

The future of high-performance computing infrastructure can be summed up in one word: “Different”

By Bill Wagner | January 07, 2020

At the beginning of each new year, many articles are written about what lies ahead in the coming year, especially in the fast-changing, ever-evolving world of tech. When the new year also marks the beginning of a new decade, the number of articles multiplies accordingly.


Vendors put all of their chips on the table at SC19

By Bill Wagner | December 03, 2019

One of the key things that solidified for me coming out of the Supercomputing 2019 conference in Denver this year is how different high-performance computing infrastructure is going to look in a few years. Here are just a handful of the many chip-related announcements at SC that foreshadow what the future holds:


Bright Easy8 - Sometimes there is a “free lunch”

By Bill Wagner | November 20, 2019

On November 19th at the Supercomputing Conference in Denver, Bright Computing announced a free version of Bright Cluster Manager software for clusters of up to 8 nodes, which we're referring to as "Easy8". This is the same award-winning, full-featured Bright product in use by thousands of organizations, with free support provided by Bright's newly launched "BEACON" user community.


The Countdown to 2020 and What Lies Ahead

By Bill Wagner | September 30, 2019

With the start of 2020 only 3 months away, we’re now in the process of building out plans for the year ahead in order to hit the ground running on January 1st. 


The Convergence of HPC and A.I.

By Bill Wagner | July 31, 2019

The convergence of HPC & A.I. is a hot topic right now, and for good reason. In practice, there is a symbiotic relationship between (for example) HPC simulations and machine learning. HPC simulations generate tons of data, and as luck would have it, machine learning requires tons of data to train models. If we turn the value proposition around, the quality and effectiveness of HPC simulations can be improved if we use machine learning to identify the parameters in simulation data that have a significant effect on the outcome, then focus subsequent iterations of simulation on those parameters.


Managing Linux Clusters: Everything old is new again

By Bill Wagner | June 11, 2019

It’s becoming clearer every day that the world is racing towards a state of constant, insatiable need for more and more computing power to make sense of the growing sea of data all around us – and we’ve barely scratched the surface.  Just wait until IoT and machine learning kick into high gear.  Not surprisingly, Linux clusters – in various evolving forms – continue to be the preferred approach for providing the computing horsepower to tackle these jobs, yet Linux clusters are notoriously complex and difficult to manage.  For this reason, Bright Computing commissioned a market survey through Hyperion Research to understand how organizations are grappling with the challenges of Linux clusters in the face of growing demand. 


The Top 5 Benefits That Our Customers Value Most In Our Product

By Bill Wagner | May 10, 2019

I recently completed my third year as the CEO of Bright Computing, and it seems that a day doesn’t go by without me discovering some new (to me) capability within our product, Bright Cluster Manager.  With more than a decade of development and customer implementations under its belt, coupled with a product team of 40 people working on the product every day, it’s obviously hard to keep up with everything this product can do.  But while the product has grown in capability, it has remained true to its reputation for being easy to use. 


Bright for DGX Clusters

By Bill Wagner | April 30, 2019

At the 2019 GPU Technology Conference in March, NVIDIA Founder & CEO Jensen Huang positioned NVIDIA DGX servers at the intersection of scale-up and scale-out architectures, squarely in the sweet spot of data science, where the increasing concurrency of data science workloads meets their massive compute requirements. As a standalone server, the DGX delivers a solid scale-up architecture for data/compute-intensive workloads, and NVIDIA's announced acquisition of Mellanox, with its high-speed interconnects, will only enhance that position and help enable a new realm of scale-out architectures as well.


Homegrown Cluster Management…Just because you can, doesn't mean you should Pt. 2

By Bill Wagner | March 28, 2019

Picking up from last week, where I illustrated a common dialogue between IT administrators and executive leadership over the decision to build a homegrown HPC cluster management solution. Now that the green light has been given to build, all of that money you "saved" by not using commercial cluster management software allowed you to buy more hardware to support jobs from end users, right? You can see the extra servers on the floor, so you must be providing your users with more capacity for work, right? Maybe not. Does the do-it-yourself approach you developed tell you precisely which system resources are actually being used by end users, and for which jobs? Are users requesting more resources for their jobs than they really need and sitting on them (unused), preventing other users from gaining access to do real work? And one more thing: how much server/system resource is being inefficiently consumed by the processes of your do-it-yourself cluster management solution at the expense of real work for users? The point is, that cluster that appears to be 95% utilized is very likely far less productive than you think.


Homegrown Cluster Management…Just because you can, doesn’t mean you should Pt. 1

By Bill Wagner | March 19, 2019

The historical thinking behind building and maintaining your own HPC cluster management solution using open source tools goes something like this: "We have smart people who can build this. We have a limited capital budget. Commercial cluster management software costs money, but open source tools like xCAT, Rocks, OpenHPC, and others are free. If we build a solution ourselves using open source tools, we can use the savings to buy more hardware."

