How to easily switch from one workload manager to another with Bright


By Robert Stober | August 06, 2012 | Slurm, HPC job schedulers, PBS Pro, Moab, Maui, LSF, Grid Engine, Torque, Workload managers



When you install Bright Cluster Manager on your system, Bright automatically installs the workload manager of your choice, from among the following:

* Slurm

* PBS Professional

* openlava (the open source version of LSF)

* Torque with Moab

* Torque with Maui

* Grid Engine

These workload managers are pre-configured in Bright, some with temporary licenses where necessary. There's no need to waste time integrating and tuning these products; Bright does it for you. That's good news.

More good news: no need to stick with the choice you initially made; you can easily switch from one to another.

This article shows you how to switch workload managers, using an example of moving from SLURM to Torque with the Moab scheduler.

Initial Conditions

The Roles tab of the head node shows that this cluster is running the SLURM workload manager: the SLURM Server Role is assigned and the backfill scheduler is selected.

[Screenshot: SLURM Server Role and backfill scheduler on the head node's Roles tab]
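For reference, the "backfill scheduler" selection corresponds to a single setting in Slurm's configuration file. Bright manages this file for you; the fragment below is only a sketch of what that selection maps to:

```
# Fragment of a slurm.conf (Bright-managed; shown for illustration only).
# sched/backfill schedules lower-priority jobs into gaps left by
# higher-priority ones, as long as they don't delay their start times.
SchedulerType=sched/backfill
```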

Now let's set up Torque.

Use the wlm-setup command to set up Torque with the Moab scheduler:

# wlm-setup -w torquemoab -s

Disabling torque services ..... [ OK ]

Creating default torque config ..... [ OK ]

Initializing torque setup ..... [ OK ]

Setting permissions ..... [ OK ]

Updating images ..... [ OK ]

Enabling torque services ..... [ OK ]

The result is that the Torque Server role has now been assigned to the head node, and the Moab scheduler has been selected. Note that the Moab scheduler hasn't yet been installed. We'll do that next.

[Screenshot: Torque Server role and Moab scheduler assigned to the head node]
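Behind the scenes, a Torque server's queues are defined with the qmgr utility. wlm-setup creates a default configuration for you; purely to illustrate what that involves, a minimal hand-written queue definition would look something like this (the queue name "batch" is an example, not necessarily what Bright generates):

```
# Example qmgr input; an administrator would feed it to the
# Torque server with:  qmgr < queue.cfg
create queue batch queue_type=execution
set queue batch enabled = true
set queue batch started = true
set server default_queue = batch
set server scheduling = true
```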

The Torque Client Role has also been added to all the nodes in the default node category. You can choose to assign nodes to queues by node category as shown here, and you can override this on individual nodes.

You can also specify the number of job slots (administrators typically set this to the number of cores per node) and the number of GPUs attached to these nodes. All of these settings can be customized on a per-node basis.
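These per-node settings correspond to Torque's nodes file, which Bright generates from the role settings. As a sketch, a hand-maintained equivalent would look like this (hostnames and counts are examples):

```
# $TORQUE_HOME/server_priv/nodes -- one line per compute node.
# np = job slots (typically the core count), gpus = attached GPUs.
node001 np=16 gpus=2
node002 np=16 gpus=2
```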

[Screenshot: Torque Client Role settings for the default node category]
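Once the switch is complete, users submit work with Torque's qsub instead of Slurm's sbatch. A minimal job script sketch follows; the queue name and resource requests are illustrative, not values produced by wlm-setup:

```shell
#!/bin/bash
# Minimal Torque job script (illustrative values throughout).
#PBS -N hello-torque
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:10:00
#PBS -q batch

# The #PBS lines are shell comments, so this script also runs standalone.
msg="hello from torque"
echo "$msg"
```

Submit it with `qsub job.sh`, then check its state with `qstat`.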
