
...

Please see About SCU for more information about our HPC infrastructure.

...

Slurm partitions

...

Partitions are cluster specific. All users will have access to the general partitions: scu-cpu and scu-gpu.

BRB CLUSTER

SCU cluster partitions:

  • scu-cpu: 22 CPU nodes, 7-day runtime limit

  • scu-gpu: 4 GPU nodes, 2-day runtime limit

CAYUGA CLUSTER

SCU cluster partitions:

  • scu-cpu: 19 CPU nodes, 7-day runtime limit

  • scu-gpu: 3 GPU nodes, 7-day runtime limit

CryoEM partitions:

  • cryo-cpu: 14 CPU-only nodes, 7-day runtime limit

  • cryo-gpu: 6 GPU nodes, 2-day runtime limit

  • cryo-gpu-v100: 2 GPU nodes, 2-day runtime limit

  • cryo-gpu-p100: 3 GPU nodes, 2-day runtime limit

PI-specific cluster partitions:

  • accardi_gpu: 4 GPU nodes, 2-day runtime limit

  • accardi_cpu: 1 CPU node, 7-day runtime limit

  • boudker_gpu: 2 GPU nodes, infinite runtime limit

  • boudker_gpu-p100: 3 GPU nodes, infinite runtime limit

  • boudker_cpu: 2 CPU nodes, infinite runtime limit

  • sackler_cpu: 1 CPU node, 7-day runtime limit

  • sackler_gpu: 1 GPU node, 7-day runtime limit

  • hwlab-rocky_gpu: 12 GPU nodes, 7-day runtime limit

  • eliezer-gpu: 1 GPU node, 7-day runtime limit

Other specific cluster partitions:

  • scu-res: 1 GPU node, 7-day runtime limit
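To run a job in one of these partitions, name the partition in your submission. For example (a minimal sketch, where myjob.sh is a hypothetical job script):

Code Block
languagebash
# submit a hypothetical script to the cryo-gpu partition, requesting one GPU
sbatch --partition=cryo-gpu --gres=gpu:1 myjob.sh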

Of course, the above will be updated as needed; regardless, to see an up-to-date description of all available partitions, run the command sinfo on the login node (scu-login02). For a description of each node's number of CPU cores, memory (in MB), runtime limit, and partition, use this command:
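A command along the following lines (a sketch built from standard sinfo output-format options; the exact invocation may differ) prints those fields one node per line:

Code Block
languagebash
# node name, CPU count, memory (MB), time limit, partition
sinfo -N -o "%N %c %m %l %P"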

...

Code Block
srun -n1 --pty --partition=scu-cpu --mem=8G bash -i

To request a specific number of GPUs, add the request to your srun/sbatch command.

Below is an example of requesting 1 GPU (you can request up to 4 GPUs on a single node):

Code Block
--gres=gpu:1
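For example, combined with a GPU partition, an interactive one-GPU session could be requested like this (a sketch; adjust the partition, memory, and GPU count to your needs):

Code Block
languagebash
# interactive shell on a GPU node with one GPU and 8 GB of memory
srun -n1 --pty --partition=scu-gpu --gres=gpu:1 --mem=8G bash -i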

...

A simple job submission example
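As an illustration, a minimal batch script might look like the following (a sketch only, assuming the scu-cpu partition; replace the final command with your actual workload):

Code Block
languagebash
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=scu-cpu
#SBATCH --ntasks=1
#SBATCH --mem=4G
#SBATCH --time=01:00:00

# replace with the real command(s) you want to run
echo "Running on $(hostname)"

Saved as, say, example.sh, it would be submitted with sbatch example.sh; job status can then be checked with squeue -u $USER.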

...


...

Code Block
languagebash
srun -n1 --pty --partition=scu-cpu --mem=8G bash -i
 
> CWID@scu-nodeXXX ~ $

Depending on the resource request, your interactive session might start right away, or it may take a very long time to start (for the above command, the interactive session should almost always start right away).

...

Runs task zero in pseudo terminal mode (this is what grants you a terminal)

--partition=scu-cpu:

Cluster resources (such as specific nodes, CPUs, memory, GPUs, etc.) can be assigned to groups, called partitions. Additionally, the same resources (e.g. a specific node) may belong to multiple cluster partitions. Finally, partitions may be assigned different job priority weights, so that jobs in one partition move through the job queue more quickly than jobs in another partition.

Every job submission script should request a specific partition; otherwise, the default is used. To see what partitions are available on your cluster, see the partition list above or execute the command: sinfo
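For instance, sinfo with no arguments lists every partition visible to you, and the -p flag limits the output to a single partition (a sketch using the scu-cpu partition named above):

Code Block
languagebash
# list all partitions, then show only the nodes in scu-cpu
sinfo
sinfo -p scu-cpu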

...