Table of Contents

...

Notice 

This guide assumes you have read the Using Slurm and Spack documentation and set up your environment accordingly.


...

Resource Requests

Slurm provides several different options for allocating CPUs/threads. Below are some scenarios and their appropriate allocations.

CPU request scenarios

  1. Single-threaded process requires 1 thread on any node: --ntasks=1 --cpus-per-task=1
  2. Multi-threaded process requires 16 threads on any node: --ntasks=1 --cpus-per-task=16
  3. Multi-threaded process requires 16 threads on a node with exclusivity (use only if undoubtedly needed): --ntasks=1 --cpus-per-task=16 --exclusive
  4. 16 concurrent but isolated tasks, each task with 1 thread (no multithreading): --ntasks=16 --nodes=1
  5. 16 concurrent but isolated tasks, each task with 2 threads (multithreading only within each single task): --ntasks=16 --nodes=1 --cpus-per-task=2
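Scenario 2 is the most common case. As a minimal sketch of how it might look in an sbatch script (the job name and program name are placeholders, not SCU-specific values):

```shell
#!/bin/bash
#SBATCH --job-name=threads_expl    # placeholder job name
#SBATCH --ntasks=1                 # a single task...
#SBATCH --cpus-per-task=16         # ...with 16 threads allocated to it

# Many multithreaded programs read OMP_NUM_THREADS; tying it to the
# allocation keeps the program's thread count in sync with the request.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
./my_threaded_prog                 # placeholder for your program
```

Submitting this with sbatch reserves 16 threads on one node for the single task.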


The options used in these scenarios have the following implications:

  • cpus-per-task: Number of threads for each task. Supports multithreading and intra-task communication.
  • ntasks: Number of isolated tasks. A task cannot communicate with any other task, including tasks in the same job submission.
  • nodes: Number of nodes across which to split tasks. Should only be > 1 if utilizing MPI or Job Steps.


Warning

Unless using MPI, Slurm "job steps", or isolated concurrently running tasks, --ntasks should remain 1 and --nodes should either be omitted or set to 1. The --cpus-per-task option should be used for thread allocations. For resource allocations with MPI, see: MPI with Slurm.

Memory request scenarios

  • If you have run the program before on SCU resources, you can view the job's memory statistics, find its Maximum used memory, and request a small amount over it for the next iteration of the job. See: Requesting appropriate amounts of resources
  • If you have never run the program before, refer to the program's documentation for recommended memory requirements. If none exist, start with 8GB and increase if necessary.

...


Requesting appropriate amounts of resources

Slurm allocates resources based on the listed hardware requirements in your srun command or sbatch script. It is imperative that users neither exaggerate nor underestimate their resource requirements. Slurm provides tools and resources to help users understand their jobs' resource requirements. See Consequences of requesting inappropriate resources.

Memory 

RAM is the SCU's most consumed resource. Therefore, the SCU recommends that your jobs request no more than 4GB or 20% above your previous job's maximum used memory (whichever is higher). The following command queries your completed jobs' memory allocation versus their maximum memory use over the past 7 days (modify the date field as desired): 

...

e.g. If the maximum memory used by the previous iteration of your job was 12GB, you should request a maximum of 16GB of RAM (12GB + 4GB > 12GB x 1.2). To query currently running jobs, use sstat.
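The "whichever is higher" rule can be sketched as a small shell calculation (the 12GB figure is the example value from the text; the arithmetic rounds the 20% rule down to whole GB):

```shell
#!/bin/sh
# Recommend a memory request: previous max + 4GB, or previous max * 1.2,
# whichever is higher (per the SCU guideline above).
prev_max_gb=12

plus_four=$((prev_max_gb + 4))              # 12 + 4 = 16
plus_twenty_pct=$((prev_max_gb * 12 / 10))  # 12 * 1.2 = 14 (integer GB)

if [ "$plus_four" -ge "$plus_twenty_pct" ]; then
    recommend=$plus_four
else
    recommend=$plus_twenty_pct
fi
echo "Request up to ${recommend}GB"
```

For a 12GB job this recommends 16GB; once the previous maximum exceeds 20GB, the 20% rule takes over.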

Compute

For non-parallelizable or non-multithreading programs, it is best to keep the thread count (--cpus-per-task) to 1. See: Amdahl's Law.

...


Consequences of requesting inappropriate resources 


Slurm allocates and isolates your requested resources. Therefore, if resources are either overestimated or underestimated, the timeliness of your workflows and others' can suffer.

  1. Unused resources are resources that could have been used to run other jobs, including yours. If your job requests 100GB of RAM but only uses 12GB, the unused 88GB is unavailable to all other jobs in the queue, slowing the cluster's churn rate and causing your jobs and others' to lag behind.
  2. Unused resources are counted towards your FairShare allocation, lowering the priority of your future jobs. See FairShare.
  3. Underestimated resources may cause Slurm to kill your jobs, so it is important to neither underestimate nor greatly overestimate the resources needed. Slurm will only kill jobs that exceed their memory requests; threads are individually allocated and isolated, so your jobs cannot exceed their allocated thread count.


...


MPI with Slurm 

When utilizing MPI, resource requests in Slurm should be allocated differently. Slurm works well with MPI and other parallel environments; as such, srun can be called directly to run MPI programs (see the sample script below).

Note: Code and programs must be compiled with explicit support for MPI to utilize MPI capabilities. Also, this does not apply to Relion MPI calculations. For SCU documentation on Relion, see: Relion

CPU request scenarios - MPI

  1. 8 MPI processes that are unconcerned with node placement: --ntasks=8
  2. 8 MPI processes to be split amongst 4 nodes: --ntasks=8 --ntasks-per-node=2 OR --ntasks=8 --nodes=4 (Both options are functionally identical)
  3. 8 MPI processes to be split amongst 8 nodes: --ntasks=8 --ntasks-per-node=1 OR --ntasks=8 --nodes=8
  4. 8 MPI processes to be split amongst 4 nodes with 2 allocated threads per process: --ntasks=8 --ntasks-per-node=2 --cpus-per-task=2. (Total allocated threads would equal 16, with each node reserving 4 threads split amongst 2 tasks)

Sample MPI script

In the script below, we request 8 tasks to be split amongst 4 nodes. The script assumes you have set up both your Spack and Slurm environments.

Code Block
titleSample_MPI_sbatch.sh
#!/bin/bash
#SBATCH --job-name=mpi_expl
#SBATCH --ntasks=8
#SBATCH --nodes=4
#SBATCH --mem=64G
#SBATCH --output=mpi1.out
#SBATCH --partition=panda


InputData=/athena/mpi/data
OutputData=/athena/mpi/outputData


spack load openmpi@3.0.0 schedulers=slurm ~cuda

#Copy data to TMPDIR if the program is read- or write-intensive
rsync -a "$InputData" "$TMPDIR"

#Slurm integrates well with MPI, so srun handles the allocation of tasks, threads, and nodes, and the calling of mpirun.
srun mpi_prog

#Copy results back to Athena (the trailing slash copies TMPDIR's contents, not the directory itself)
rsync -a "$TMPDIR/" "$OutputData"
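The script is then submitted and monitored as usual (these commands require a Slurm cluster; the job ID shown is illustrative):

```shell
sbatch Sample_MPI_sbatch.sh   # prints e.g. "Submitted batch job 123456"
squeue -u "$USER"             # confirm the job is pending or running
```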


...

Job monitoring 

In addition to the methods found in the Using Slurm documentation, there are other commands that can be used to query job status, history, accounting data, etc.

Status

In the Using Slurm documentation, the commands squeue -u <user> and squeue_long -u <user> are shared and explained. In addition to these, Slurm supports several other commands.

...

The above command displays several useful entries, including the assigned node, the resource configuration Slurm parsed from the sbatch script, and the paths to the sbatch script, stdout, etc.

History

To view statistics for previously run jobs, use sacct. Modify the number of days as desired. The following command displays all jobs run over the past week:

...

For CPU and memory statistics, see Requesting appropriate amounts of resources.


...


FairShare

Slurm's "FairShare" algorithm regulates cluster scheduling and prioritization to ensure each lab is able to utilize cluster resources fairly and equitably. The two main factors in Slurm's calculation of a lab's fairshare are the lab's cluster usage and its number of priority shares.

  • Priority shares

A large factor in Slurm's FairShare calculation is a lab's number of priority shares. Slurm uses priority shares to identify a lab's expected share of compute on the cluster. A lab's number of priority shares equals its total leased Athena storage x 100 (10T of scratch + 10T of store = 2000 priority shares). This number is divided by the total priority shares across all labs, the result being the lab's expected share of the cluster. Slurm then prioritizes that lab's jobs until its share of the cluster has been utilized.

e.g. A lab has procured 50T of storage on Athena, which translates to 5000 priority shares. As of this writing, Slurm would grant this lab priority for up to its allocated ~3% of the cluster's use. The lab's jobs would be prioritized at first, but would be deprioritized as jobs run, until the lab has consumed its allocated ~3% effective usage.
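The share arithmetic in this example can be sketched as follows. The total-share figure here is an assumed illustrative value (chosen so that 5000 shares come out near the ~3% quoted above), not a real SCU number:

```shell
#!/bin/sh
# A lab's priority shares: leased Athena storage (in TB) x 100.
storage_tb=50
lab_shares=$((storage_tb * 100))   # 50 * 100 = 5000

# Expected cluster share = lab's shares / total shares across all labs.
# NOTE: total_shares is a made-up illustrative figure.
total_shares=170000
awk -v l="$lab_shares" -v t="$total_shares" \
    'BEGIN { printf "Expected cluster share: %.1f%%\n", 100 * l / t }'
```

With these numbers the lab's expected share works out to roughly 2.9% of the cluster.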

  • Cluster usage

Cluster usage inversely impacts your fairshare and thus your job priority: as your usage increases, your fairshare and priority decrease. Slurm tracks cluster usage based on requested Trackable Resources (TRES). TRES currently records requested CPU and RAM allocations, multiplied by the runtime of the job. The product is the job's resource charge, which is applied toward the lab's allocated percentage of cluster use.
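As a rough sketch of this bookkeeping. Real TRES billing also weights RAM and uses site-configured multipliers, so treat the CPU-only arithmetic here as an assumption, not the SCU's actual formula:

```shell
#!/bin/sh
# Sketch: a job's usage charge grows with requested resources x runtime.
# (CPU-only; real TRES accounting also charges for requested RAM.)
cpus=16
runtime_hours=10
cpu_hours=$((cpus * runtime_hours))   # 160 CPU-hours charged toward usage
echo "Charged: ${cpu_hours} CPU-hours"
```

Note that the charge is based on *requested* resources, which is why overestimating a request (see Consequences of requesting inappropriate resources) erodes fairshare even if the job never uses the allocation.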


FairShare score

Fairshare is the resulting score after Slurm weighs the aforementioned factors. Fairshare scores are the deciding factor in job prioritization and range from 0 to 1, with 1 being the highest priority and 0 being the lowest. Below are fairshare scores and their implications.

  • A fairshare score of 1: This lab has not run any jobs. Its jobs will be prioritized over all others.
  • A fairshare score 0.5 < n < 1: This lab has not utilized its expected share of cluster usage. Its jobs will be prioritized over all jobs with a lower fairshare.
  • A fairshare score 0.0 < n < 0.5: This lab has overutilized the cluster according to its expected share of cluster usage. Its jobs will be deprioritized.
  • A fairshare score of 0: This lab has greatly overutilized its share of the cluster. Its jobs will be deprioritized and will run only when all other prioritized jobs have begun to run.

FairShare query

The sshare command can be run from curie or any slurm node to display fairshare statistics for all labs.

...

In the output, we see the two factors considered in the fairshare calculation: cluster usage and priority shares.

Reviewing the output column by column, we can analyze a lab's shares and usage, as well as how Slurm takes them into account when calculating fairshare.

  1. Account: Name of the lab in Slurm's database.
  2. User: With sshare -a, Slurm displays the fairshare calculation for every user in every lab. This can be used for more thorough accounting.
  3. RawShares: The number of shares a lab has been allotted, calculated as total Athena storage (in TB) * 100 = RawShares.
  4. NormShares: The ratio of a lab's RawShares to the total shares for the entire cluster. Slurm uses this number to allocate that percentage of cluster usage to the lab.
  5. RawUsage: The amount of compute usage the lab's jobs have consumed or requested. sshare -l will display a column listing exact CPU and RAM usage.
  6. EffectvUsage: The percentage of actual cluster use by the lab. This is compared with the lab's NormShares to determine the lab's FairShare score in the next column. If EffectvUsage is greater than the lab's NormShares, the fairshare will always be < 0.5; the reverse is also true. See FairShare score.
  7. FairShare: The lab's resulting FairShare score. See FairShare score.

Halflife

Slurm FairShare calculations have a half-life of 7 days. So if a lab exhausts its allocated usage and its resulting FairShare is 0, its FairShare should reset to 0.5 after 7 days with no usage.
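The 7-day half-life means recorded usage decays geometrically: after each 7-day period with no new jobs, the remembered usage halves. A quick sketch (the initial usage figure is arbitrary):

```shell
#!/bin/sh
# Usage decay under a 7-day half-life: usage * 0.5^(days / 7).
initial_usage=1000   # arbitrary illustrative usage figure
days=14
remaining=$(awk -v u="$initial_usage" -v d="$days" \
    'BEGIN { printf "%.0f", u * 0.5 ^ (d / 7) }')
echo "Usage remembered after ${days} days: ${remaining}"
```

After 14 idle days (two half-lives), only a quarter of the original usage still counts against the lab's fairshare.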

...