Because the impact of unmanaged interactive sessions can be significant, the concept of login nodes in Bright Cluster Manager was introduced in Part 1 of this series. Although login nodes address many considerations relating to interactive use, they are designed to do so in a limited way. For example, in Part 1, the following consideration was outlined (emphasis added here):
This article shows how you can easily manage Slurm jobs using the Bright Cluster Management Shell (CMSH). In job mode, CMSH lets you perform the same job-management operations as the CMGUI through a convenient shell interface. For an example of managing jobs with the Bright CMGUI, see my previous article on that topic.
The Bright Cluster Manager CMGUI makes these tasks intuitively easy. This article shows how you can view and control workload manager jobs using the Bright CMGUI. I am using an OGS (SGE) job for the examples, but Bright works the same way with all of its supported workload managers: PBS Professional, Slurm, Univa Grid Engine, LSF, openlava, TORQUE/Moab, and TORQUE/Maui.
This article describes basic Slurm usage for Linux clusters. Brief "how-to" topics include, in this order:
- A simple Slurm job script
- Submit the job
- List jobs
- Get job details
- Suspend a job (root only)
- Resume a job (root only)
- Kill a job
- Hold a job
- Release a job
- List partitions
- Submit a job that's dependent on a prerequisite job being completed
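The topics above map directly onto a handful of standard Slurm commands. The sketch below shows a minimal job script plus the matching command for each operation; the job name, file names, and the example job ID `42` are illustrative placeholders, not values from a real cluster.

```shell
# Write a minimal Slurm job script (names and paths are illustrative).
cat > hello.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello        # job name shown in squeue
#SBATCH --output=hello-%j.out   # %j expands to the job ID
#SBATCH --ntasks=1              # a single task
#SBATCH --time=00:05:00         # wall-clock limit
srun echo "Hello from $(hostname)"
EOF

# Submit the job; sbatch prints "Submitted batch job <jobid>"
# sbatch hello.slurm

# List jobs (add -u $USER to show only your own)
# squeue

# Get job details (replace 42 with a real job ID)
# scontrol show job 42

# Suspend and resume a running job (root only)
# scontrol suspend 42
# scontrol resume 42

# Kill a job
# scancel 42

# Hold a pending job, then release it back to the queue
# scontrol hold 42
# scontrol release 42

# List partitions and their node states
# sinfo

# Submit a job that runs only after job 42 completes successfully
# sbatch --dependency=afterok:42 next-step.slurm
```

The job-control commands are commented out because they require a running Slurm controller; on a live cluster, uncomment them and substitute real job IDs.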