Table of Contents
...
Please see About SCU for more information about our HPC infrastructure.
...
Slurm partitions
Slurm partitions are cluster-specific. All users will have access to the general partitions
...
BRB: scu-cpu and scu-gpu
BRB CLUSTER | CAYUGA CLUSTER
---|---
SCU cluster partitions: ... | SCU cluster partitions: ...
CryoEM partitions: ... | ...
PI-specific cluster partitions: ... | ...
Other specific cluster partitions: ... | ...
Of course, the above will be updated as needed; regardless, to see an up-to-date description of all available partitions, use the sinfo command on the login node (scu-login02). For a description of each node's number of CPU cores, memory (in MB), runtime limits, and partition, use this command:
...
Code Block
srun -n1 --pty --partition=scu-cpu --mem=8G bash -i
To request a specific number of GPUs, add the request to your srun/sbatch command. Below is an example of requesting 1 GPU; you can request up to 4 GPUs on a single node:
Code Block
--gres=gpu:1
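For example, combined with the interactive-session command shown earlier, an interactive GPU session on the scu-gpu partition could look like the following (the memory request here is illustrative; adjust it for your job):

Code Block
srun -n1 --pty --partition=scu-gpu --gres=gpu:1 --mem=8G bash -i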
...
A simple job submission example
...
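If a starting point helps, below is a minimal sketch of a batch script, assuming the general scu-cpu partition from the table above; the job name, resource requests, and final command are placeholders to replace with your own:

Code Block
#!/bin/bash
#SBATCH --job-name=my_job        # placeholder job name
#SBATCH --partition=scu-cpu      # general CPU partition (see table above)
#SBATCH --ntasks=1               # a single task
#SBATCH --mem=8G                 # memory request; adjust for your job
#SBATCH --time=01:00:00          # runtime limit; adjust for your job

# Replace with your actual command
echo "Running on $(hostname)"

Submit the script with sbatch and check its status with squeue -u $USER.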