Introduction to SLURM
All work on Cheaha must be submitted to the queueing system, Slurm. This document gives a basic overview of Slurm and how to use it.
Slurm is software that allocates the cluster's resources fairly among researchers. It schedules jobs based on resource requests such as the number of CPUs, the maximum memory (RAM) required per CPU, the maximum run time, and more.
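In practice, these resource requests are written as #SBATCH directives at the top of a job script. A minimal sketch is below; the job name, partition name, and resource values are illustrative assumptions, not Cheaha-specific recommendations:

```shell
#!/bin/bash
#SBATCH --job-name=my_analysis    # illustrative job name
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --cpus-per-task=4         # CPUs requested for the task
#SBATCH --mem-per-cpu=4G          # maximum memory (RAM) per CPU
#SBATCH --time=02:00:00           # maximum run time (HH:MM:SS)
#SBATCH --partition=short         # assumed partition name; check your cluster's partitions

# Commands below run on the allocated compute node,
# with no GUI and no interactive input.
echo "Running on $(hostname)"
```

Slurm reads the #SBATCH lines when the script is submitted; to the shell itself they are ordinary comments.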
Batch Job Workflow
- Stage data to $USER_SCRATCH or a project directory.
- Research how to run your software in 'batch' mode. In other words, how to run your analysis pipeline from the command line, with no GUIs or researcher input.
- Identify the resources necessary to run the job (CPUs, time, memory, etc.).
- Write a job script specifying these parameters using Slurm directives.
- Submit the job (sbatch).
- Monitor the job (squeue).
- Review the results, and modify and rerun if necessary (sacct).
- Remove data from $USER_SCRATCH.
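Taken together, the steps above might look like the following terminal session. The script name, job ID, and scratch path are illustrative assumptions; only the Slurm commands themselves (sbatch, squeue, sacct) are as documented:

```shell
# Submit the job script; sbatch prints the assigned job ID
sbatch my_analysis.sh

# Monitor your pending and running jobs
squeue -u $USER

# After the job finishes, review its accounting record
# (replace 12345 with the job ID printed by sbatch)
sacct -j 12345 --format=JobID,JobName,State,Elapsed,MaxRSS

# Remove staged data once results are saved elsewhere
rm -r $USER_SCRATCH/my_analysis_data   # illustrative path
```

If a job needs to be stopped before it completes, scancel with the job ID will end it.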