
...

Please see About SCU for more information about our HPC infrastructure.

...

Slurm partitions 

Slurm partitions are cluster specific. All users have access to the general partitions:

...

BRB: scu-cpu and scu-gpu

BRB CLUSTER

SCU cluster partitions:

  • scu-cpu: 22 CPU nodes, 7-day runtime limit

  • scu-gpu: 4 GPU nodes, 2-day runtime limit

CAYUGA CLUSTER

SCU cluster partitions:

  • scu-cpu: 19 CPU nodes, 7-day runtime limit

  • scu-gpu: 3 GPU nodes, 7-day runtime limit

CryoEM partitions:

  • cryo-cpu: 14 CPU-only nodes, 7-day runtime limit

  • cryo-gpu: 6 GPU nodes, 2-day runtime limit

  • cryo-gpu-v100: 2 GPU nodes, 2-day runtime limit

  • cryo-gpu-p100: 3 GPU nodes, 2-day runtime limit

PI-specific cluster partitions:

  • accardi_gpu: 4 GPU nodes, 2-day runtime limit

  • accardi_cpu: 1 CPU node, 7-day runtime limit

  • boudker_gpu: 2 GPU nodes, infinite runtime limit

  • boudker_gpu-p100: 3 GPU nodes, infinite runtime limit

  • boudker_cpu: 2 CPU nodes, infinite runtime limit

  • sackler_cpu: 1 CPU node, 7-day runtime limit

  • sackler_gpu: 1 GPU node, 7-day runtime limit

  • hwlab-rocky_gpu: 12 GPU nodes, 7-day runtime limit

  • eliezer-gpu: 1 GPU node, 7-day runtime limit

Other specific cluster partitions:

  • scu-res: 1 GPU, 7-day runtime limit

Of course, the above will be updated as needed; regardless, to see an up-to-date description of all available partitions, run the sinfo command on the login node (scu-login02). For a description of each node's number of CPU cores, memory (in MB), runtime limit, and partition, use this command:

...
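One possible invocation is sketched below; the exact options are an illustration using standard sinfo format fields, not a prescribed site command. It prints one line per node with its name, CPU count, memory, time limit, and partition:

Code Block
# One line per node: node name, CPU count, memory (MB), time limit, partition
sinfo -N -o "%N %c %m %l %P"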

Code Block
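# Start an interactive bash session: 1 task on the scu-cpu partition with 8 GB of memory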
srun -n1 --pty --partition=scu-cpu --mem=8G bash -i

To request a specific number of GPUs, add the request to your srun/sbatch command.

Below is an example of requesting 1 GPU; you can request up to 4 GPUs on a single node:

Code Block
--gres=gpu:1
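For example, the flag can be combined with the interactive srun pattern shown above; this is a sketch, and the partition, memory, and GPU count are placeholders to adjust for your job:

Code Block
# Interactive session on the scu-gpu partition with one GPU
srun -n1 --pty --partition=scu-gpu --gres=gpu:1 --mem=8G bash -i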

...

A simple job submission example

...
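As a minimal sketch of such a submission, the batch script below could be submitted with sbatch; the script name, job name, and resource values are illustrative assumptions, not site defaults:

Code Block
#!/bin/bash
#SBATCH --job-name=example_job        # illustrative job name
#SBATCH --partition=scu-cpu           # general CPU partition (see list above)
#SBATCH --ntasks=1
#SBATCH --mem=8G
#SBATCH --time=01:00:00               # well under the partition's 7-day limit

# Replace with your actual workload
echo "Running on $(hostname)"

Submit the script with sbatch (for example, sbatch example_job.sh) and check its status with squeue -u $USER.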