This guide assumes you have read and set up your environment as per the Using Slurm and Spack documentation.
Slurm provides several options for allocating CPUs/threads. Below are some scenarios and the appropriate allocations.
Unless using MPI or Slurm "job steps", --ntasks should remain 1 and --nodes should either be omitted or set to 1. The --cpus-per-task option should be used for thread allocation, as sketched below. For resource allocations with MPI, see: MPI with Slurm.
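For example, a minimal sbatch header for a single multithreaded (non-MPI) job might look like the following sketch (the job name, memory, and thread count are placeholders to adjust for your own job):

#SBATCH --job-name=threaded_expl
#SBATCH --partition=panda
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G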
Slurm allocates resources based on the listed hardware requirements in your srun command or sbatch script. It is imperative that users neither exaggerate nor underestimate their resource requirements. Slurm provides tools and resources to help users understand their jobs' resource requirements. See Consequences of requesting inappropriate resources.
RAM is the SCU's most consumed resource. Therefore, the SCU recommends that your jobs request no more than 4GB or 20% above your previous jobs' maximum memory usage (whichever is higher). The following command queries your completed jobs' memory allocation versus their maximum memory use over the past 7 days (modify the date field as desired):
sacct -S $(date -d "-7 days" +%D) --state=CD -o "user,JobID,JobName,ReqMem,MaxRSS,state,exit"
e.g. If the maximum memory used by the previous iteration of your job was 12GB, you should request at most 16GB of RAM (12GB + 4GB > 12GB x 1.2). To query currently running jobs, use sstat.
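For example, a running job's memory and CPU statistics can be checked with something along the lines of the following, using a format string similar to the sacct one above (replace <job_id> with the ID of a job that is still running; the batch step typically appears as <job_id>.batch):

sstat -a -j <job_id> --format=JobID,MaxRSS,AveCPU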
For non-parallelizable or non-multithreaded programs, it is best to keep the thread count (--cpus-per-task) at 1. See: Amdahl's Law.
Many parallelizable or multithreaded programs provide an option for specifying the number of threads to use. If such an option exists, specify the same number of threads in your srun command or sbatch script as you list in your program's command, as shown in the sketch below.
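A minimal sketch, assuming a hypothetical program my_prog with a --threads option; Slurm exports the allocated thread count in the SLURM_CPUS_PER_TASK environment variable, which can be used to keep the two values in sync:

#SBATCH --cpus-per-task=8

# --threads is a stand-in for whatever thread option your program provides
my_prog --threads $SLURM_CPUS_PER_TASK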
The following command queries your completed jobs' total CPU time (CPUTime) versus their average CPU utilization (AveCPU) over the past 7 days:
sacct -S $(date -d "-7 days" +%D) --state=CD -o "user,JobID,JobName,cputime,avecpu,ncpus,state,exit"
In the output of the above command, if CPUTime (column 4) is orders of magnitude greater than AveCPU (column 5), then allocated CPUs sat idle and you should decrease your CPU allocation accordingly. Column 6 is the number of threads that were requested.
e.g. This single-threaded program was run with 16 Slurm-allocated threads. The CPUTime was ~11 minutes while the AveCPU was 28 seconds. This is an example of an overallocation of CPU threads.
rahmed   511232         sbatch   00:10:56              16   COMPLETED   0:0
         511232.batch   batch    00:10:56   00:00:28   16   COMPLETED   0:0
When allocated 1 thread rather than 16, the CPUTime decreased to 41 seconds, while the AveCPU remained the same at 28 seconds.
rahmed   511211         sbatch   00:00:41               1   COMPLETED   0:0
         511211.batch   batch    00:00:41   00:00:28    1   COMPLETED   0:0
Slurm allocates and isolates your requested resources. Therefore, resources that are either overestimated or underestimated can delay your workflows and others'.
When utilizing MPI, resource requests in Slurm should be allocated differently. Slurm works well with MPI and other parallel environments; as such, srun can be called directly to run MPI programs (see the sample script below).
Note: Code and programs must be compiled with explicit support for MPI to utilize MPI capabilities. Also, this does not apply to Relion MPI calculations. For SCU documentation on Relion, see: Relion.
In the below script, we request 8 tasks split amongst 4 nodes. The script assumes you have set up both your Spack and Slurm environments.
#!/bin/bash
#SBATCH --job-name=mpi_expl
#SBATCH --ntasks=8
#SBATCH --nodes=4
#SBATCH --mem=64G
#SBATCH --output=mpi1.out
#SBATCH --partition=panda

InputData=/athena/mpi/data
OutputData=/athena/mpi/outputData

spack load openmpi@3.0.0 schedulers=slurm ~cuda

# Copy data to TMPDIR if the program is read or write intensive
rsync -a $InputData $TMPDIR

# Slurm integrates well with MPI, so srun can handle the allocation of tasks,
# threads, and nodes, and the calling of mpirun.
srun mpi_prog

# Copy data back to athena
rsync -a $TMPDIR $OutputData
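To submit the script above (assuming it has been saved under a placeholder name such as mpi_job.sh) and check on it while it runs:

sbatch mpi_job.sh
squeue -u <user>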
In addition to the methods found in the Using Slurm documentation, there are other commands that can be used to query job status, history, accounting data, etc.
In the Using Slurm documentation, the commands squeue -u <user> and squeue_long -u <user> are shared and explained. In addition to these, Slurm supports several other commands.
To view the details of a job submission, the following command can be used (replace <job_id> with the job's numeric ID): scontrol show jobid <job_id>
scontrol show jobid 595278
JobId=595278 JobName=sbatch
   UserId=rahmed(8992) GroupId=pbtech(1053) MCS_label=N/A
   Priority=4140804828 Nice=0 Account=scu QOS=normal
   JobState=RUNNING Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
   RunTime=00:00:09 TimeLimit=7-00:00:00 TimeMin=N/A
   SubmitTime=2019-05-16T17:05:07 EligibleTime=2019-05-16T17:05:07
   StartTime=2019-05-16T17:05:08 EndTime=2019-05-23T17:05:08 Deadline=N/A
   PreemptTime=None SuspendTime=None SecsPreSuspend=0
   Partition=panda AllocNode:Sid=curie:11574
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=node140
   BatchHost=node140
   NumNodes=1 NumCPUs=1 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=1,mem=1000M,node=1
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=1000M MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   Gres=(null) Reservation=(null)
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=/pbtech_mounts/homes024/rahmed/sbatch
   WorkDir=/pbtech_mounts/homes024/rahmed
   StdErr=/pbtech_mounts/homes024/rahmed/slurm-595278.out
   StdIn=/dev/null
   StdOut=/pbtech_mounts/homes024/rahmed/slurm-595278.out
   Power=
The above command displays several useful entries, including the allocated node, the resource configuration Slurm derived from the sbatch script, the path to the sbatch script, stdout, etc.
To view statistics for your previously run jobs, sacct should be used. Modify the number of days as desired. The following command displays all jobs run over the past week:
sacct -S $(date -d "-7 days" +%D) -o "user,JobID,JobName,ReqMem,MaxRSS,NCPUS,start,end,node,state,exit"
To view jobs between certain dates (more options are available for sacct; run "man sacct"):
sacct -S 04/01/19 -E 04/30/19 -o "user,JobID,JobName,ReqMem,MaxRSS,NCPUS,start,end,node,state,exit"
For CPU and memory statistics, see Requesting appropriate amounts of resources.
Slurm's "FairShare" algorithm regulates cluster scheduling and prioritization to ensure each lab is able to utilize cluster resources fairly and equitably. The two main factors considered in slurm's calculation of a lab's fairshare is the lab's cluster usage and its amount of priority shares.
A large factor in Slurm's FairShare calculation is a lab's number of priority shares. Slurm uses priority shares to identify a lab's expected share of compute on the cluster. A lab's number of priority shares is equal to its total leased Athena storage (in TB) multiplied by 100 (10T of scratch + 10T of store = 2000 priority shares). That number divided by the total number of priority shares across all labs gives the lab's expected share of the cluster. Slurm then prioritizes the lab's jobs until its share of the cluster has been used.
e.g. A lab has procured 50T of storage on Athena, which translates to 5000 priority shares. At the time of writing, that entitles the lab to roughly 3% of the cluster, so Slurm would grant its jobs priority for up to that ~3% of cluster use. The lab's jobs would be prioritized at first, but would be progressively deprioritized as they run, until the lab has consumed its allocated ~3% of effective usage.
Cluster usage inversely impacts your fairshare and thus your job priority: as your usage increases, your fairshare and priority decrease. Slurm tracks cluster usage based on the requested Trackable Resources (TRES). TRES currently records requested CPU and RAM allocations and multiplies them by the runtime of the job; the product is the job's resource usage, which is applied toward the lab's allocated percentage of cluster use.
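As a simplified illustration of the accounting described above (the actual TRES billing weights are configured by the SCU, so treat the numbers as a sketch): a job that requests 4 CPUs and 16G of RAM and runs for 10 hours accrues 4 x 10 = 40 CPU-hours and 16 x 10 = 160 GB-hours of tracked usage, all of which counts against the lab's expected share of the cluster.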
Fairshare is the resulting score after Slurm weighs the aforementioned factors. Fairshare scores are the deciding factor in job prioritization and range from 0 to 1, with 1 being the highest priority and 0 the lowest. Below are fairshare scores and their implications.
The sshare command can be run from curie or any Slurm node to display fairshare statistics for all labs.
sshare
             Account       User  RawShares  NormShares    RawUsage  EffectvUsage  FairShare
root                                          1.000000   694362950      1.000000   0.500000
abc                               2000        0.011481          28      0.000000   0.999998
accardi_lab                       3000        0.017221    45166247      0.065072   0.072870
aksaylab                           100        0.000574           0      0.000000   1.000000
angsd_class                        300        0.001722       19770      0.000028   0.988597
...
In the output, we see the two factors considered in the fairshare calculation: cluster usage (RawUsage, EffectvUsage) and priority shares (RawShares, NormShares).
Going column by column, we can see the values of a lab's usage and shares and how Slurm takes them into account when calculating fairshare.
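To narrow the output to a single lab, sshare can be restricted to an account; for example (replace <lab_account> with your lab's Slurm account name; the -a flag additionally lists the per-user rows under that account):

sshare -A <lab_account> -a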
Slurm FairShare calculations have a half-life of 7 days. So if a lab exhausts its allocated usage and its resulting FairShare is 0, its FairShare should reset to 0.5 if 7 days pass with no usage.