...
Notice
This guide assumes you have read and set up your environment as per the Using Slurm and Spack documentation.
...
- Single-threaded process requires 1 thread on any node: --ntasks=1 --cpus-per-task=1
- Multi-threaded process requires 16 threads on any node: --ntasks=1 --cpus-per-task=16
- Multi-threaded process requires 16 threads on a node with exclusivity (use only if undoubtedly needed): --ntasks=1 --cpus-per-task=16 --exclusive
- 16 concurrent but isolated tasks, each task with 1 thread (no multithreading): --ntasks=16 --nodes=1
- 16 concurrent but isolated tasks, each task with 2 threads (multithreading only within each single task): --ntasks=16 --nodes=1 --cpus-per-task=2
The scenarios above show that these options have the following implications:
- cpus-per-task: Number of threads for each task. Supports multithreading and intra-task communication.
- ntasks: Number of isolated tasks. A task cannot communicate with any other task, including tasks in the same job submission.
- nodes: Number of nodes across which to split tasks. Should only be > 1 if utilizing MPI or job steps.
Warning
Unless you are using MPI, Slurm job steps, or isolated concurrently running tasks, --ntasks should remain 1 and --nodes should either be omitted or set to 1. The --cpus-per-task option should be used for thread allocation. For resource allocations with MPI, see: MPI with Slurm.
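As a rough sketch consistent with the scenarios and warning above, the 16-thread multi-threaded case could be submitted with an sbatch script along these lines (the job name, time limit, and program invocation are placeholders, not SCU-specific values):

```bash
#!/bin/bash
#SBATCH --job-name=threads_demo    # placeholder job name
#SBATCH --ntasks=1                 # a single task; leave --nodes unset unless using MPI or job steps
#SBATCH --cpus-per-task=16         # 16 threads allocated to that one task
#SBATCH --time=01:00:00            # placeholder walltime

# Placeholder command; replace with your actual multi-threaded program.
./my_threaded_program --threads 16
```

An interactive equivalent would be something like srun --ntasks=1 --cpus-per-task=16 --pty bash, with the per-scenario options from the list above swapped in as needed.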
Memory request scenarios
- If you have run the program before on SCU resources, you can view the job's memory statistics. From there, note the job's maximum used memory and request slightly more than that for the next iteration of the job. See: Requesting appropriate amounts of resources
- If you have never run the program before, refer to the program's documentation for recommended memory requirements. If none exist, start with 8GB and increase if necessary (see the sketch below).
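For example, a minimal sketch of an initial request for a previously unrun program, assuming the 8GB starting point above (the job name and program are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=first_run       # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=8G                   # starting point when no prior memory statistics exist

# Placeholder command; replace with your actual program.
./my_program
```

If the job is killed for exceeding this request, or its recorded maximum used memory comes close to 8GB, increase --mem for the next submission.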
...
Slurm allocates resources based on the hardware requirements listed in your srun command or sbatch script. It is imperative that users neither exaggerate nor underestimate their resource requirements. Slurm provides tools and resources to help users understand what their jobs actually need. See Consequences of requesting inappropriate resources.
Memory
RAM is the SCU's most consumed resource. Therefore, the SCU recommends that your jobs request no more than 4GB or 20% above your previous jobs' maximum used memory, whichever is higher. The following command can be used to query your completed jobs' memory allocation versus their maximum memory use over the past 7 days (modify the date field as desired):
...
Consequences of requesting inappropriate resources
Slurm allocates and isolates your requested resources. Therefore, if resources are either overestimated or underestimated, the timeliness of your workflows and others' can suffer.
- Unused resources are resources that could have been used to run other jobs, including yours. If your job requests 100GB of RAM but only uses 12GB, the unused 88GB is unavailable to all other jobs in the queue, slowing down the cluster's churn rate and causing your jobs and others' to lag behind.
- Unused resources count toward your FairShare usage, lowering the priority of your future jobs. See FairShare.
- Underestimated resources may cause Slurm to kill your jobs, so it is important to neither underestimate nor greatly overestimate the resources needed. Slurm will only kill jobs that exceed their memory request; threads are individually allocated and isolated, so your jobs cannot exceed their allocated thread count.
...
For CPU and memory statistics, see Requesting appropriate amounts of resources.
...
FairShare
...
- Account: Name of the lab in Slurm's database.
- User: When using sshare -a, Slurm displays the FairShare calculation for all users in all labs. This can be used for more thorough accounting.
- RawShares: The number of shares a lab has been allotted, calculated as total athena storage * 100 = RawShares.
- NormShares: The ratio of a lab's RawShares to the total shares for the entire cluster. Slurm uses this number to allocate that percentage of cluster usage to the lab.
- RawUsage: The amount of compute usage a lab has run or requested in its jobs. sshare -l displays additional columns listing exact CPU and RAM usage (see the example invocations after this list).
- EffectvUsage: The percentage of the cluster actually used by the lab. This is compared with the lab's NormShares to determine the lab's FairShare score in the next column: if EffectvUsage is greater than NormShares, the FairShare score will always be < 0.5, and the reverse is also true. See FairShare Score.
- FairShare: The lab's resulting FairShare score. See FairShare Score.
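The sshare invocations referenced in the column descriptions above can be run directly; a brief sketch (output will vary by lab and cluster):

```bash
# FairShare summary for your own lab/account associations
sshare

# Include every user in every lab, for more thorough per-user accounting
sshare -a

# Long format: additional columns, including exact CPU and RAM usage
sshare -l
```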
Half-life
Slurm FairShare calculations have a half-life of 7 days. So if a lab exhausts its allocated usage and its resulting FairShare score is 0, its FairShare should reset to 0.5 once 7 days pass with no usage.
...