How to quickly configure the number of Torque job slots per server


By Robert Stober | July 17, 2012 | workload manager, Slurm, Job Scheduler, PBS Pro, Moab, Maui, LSF, Grid Engine, openlava, Torque



This article shows how to configure the number of Torque job slots per server, but before I begin, I'd like to mention the alternatives you have when you use Bright.

Bright Cluster Manager is pre-configured with a number of workload managers:

  • Torque/Moab
  • Torque/Maui
  • PBS Professional
  • LSF
  • openlava (the open source version of LSF)
  • Grid Engine
For the commercial products, a license from the vendor is required, but the hard work of integrating the workload manager is done for you. Life is good.

So, back to the main topic. I'll use the Bright Cluster Management Shell (cmsh) to set the number of job slots on each server by configuring the torqueclient role at the category level. The same change can also be made at the node level, which overrides the category setting for that node.
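For comparison, a node-level change follows the same pattern in the device submode. This is a sketch: node001 is a placeholder node name, and it assumes the torqueclient role is already assigned to that node (exact prompts may vary by Bright version):

```
[headnode]% device
[headnode->device]% use node001
[headnode->device[node001]]% roles
[headnode->device[node001]->roles]% use torqueclient
[headnode->device[node001]->roles[torqueclient]]% set slots 16
[headnode->device*[node001*]->roles*[torqueclient*]]% commit
```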

[headnode]% category
[headnode->category]% use default
[headnode->category[default]]% roles
[headnode->category[default]->roles]% list
Name (key)
------------------------
torqueclient

Let's see how many job slots are currently configured.

[headnode->category[default]->roles]% use torqueclient
[headnode->category[default]->roles[torqueclient]]% show
Parameter                      Value
------------------------------ ------------------------------------------------
All Queues                     no
GPUs                           0
Name                           torqueclient
Queues                         shortq longq
Slots                          8
Type                           TorqueClientRole

Now we'll assign 12 job slots to each server.

[headnode->category[default]->roles[torqueclient]]% set slots 12
[headnode->category*[default*]->roles*[torqueclient*]]% commit

The asterisks in the prompt mark uncommitted changes; committing saves the new value, and Bright propagates it to the Torque configuration on the nodes. You're done.
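To sanity-check the result outside of cmsh, you can look at the "np" attribute that Torque's pbsnodes command reports for each node. The short Python sketch below parses that output; the sample text is an illustrative stand-in for real `pbsnodes -a` output, and the node names are hypothetical:

```python
# Illustrative: extract the job-slot count (the "np" attribute) for each
# node from `pbsnodes -a`-style output. The sample below is a stand-in
# for real output captured from a Torque server.
import re

sample_output = """\
node001
     state = free
     np = 12
     ntype = cluster

node002
     state = free
     np = 12
     ntype = cluster
"""

def slots_per_node(pbsnodes_text):
    """Return a dict mapping node name -> np (job slots)."""
    slots = {}
    current = None
    for line in pbsnodes_text.splitlines():
        if line and not line.startswith((" ", "\t")):
            # Unindented lines are node names; attributes are indented.
            current = line.strip()
        else:
            m = re.match(r"\s*np = (\d+)", line)
            if m and current:
                slots[current] = int(m.group(1))
    return slots

print(slots_per_node(sample_output))
# {'node001': 12, 'node002': 12}
```

In practice you would feed the function the captured output of `pbsnodes -a` and confirm every node reports np = 12.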
