...

Code Block
sinfo -N -o "%25N %5c %10m %15l %25R" -p panda # for the partition panda



...

Slurm partitions - BRB Cluster


SCU cluster partitions:

  • scu-cpu: 28 CPU nodes, 7-day runtime limit
  • scu-gpu: 5 GPU nodes, 2-day runtime limit

CryoEM partitions:

  • cryo-cpu: 14 CPU-only nodes, 7-day runtime limit
  • cryo-gpu: 6 GPU nodes, 2-day runtime limit
  • cryo-gpu-v100: 3 GPU nodes, 2-day runtime limit
  • cryo-gpu-p100: 3 GPU nodes, 2-day runtime limit

PI-specific cluster partitions:

  • accardi_gpu: 4 GPU nodes, 2-day runtime limit
  • accardi_cpu: 1 CPU node, 7-day runtime limit
  • boudker_gpu: 2 GPU nodes, 7-day runtime limit
  • boudker_gpu-p100: 3 GPU nodes, 7-day runtime limit
  • boudker_cpu: 2 CPU nodes, 7-day runtime limit
  • sackler_cpu: 1 CPU node, 7-day runtime limit
  • sackler_gpu: 2 GPU nodes, 7-day runtime limit

Other specific cluster partitions:

  • covid19: 1 CPU node, 7-day runtime limit


The above will be updated as needed; regardless, to see an up-to-date description of all available partitions, run the sinfo command on scu-login02. For a description of each node's CPU core count, memory (in MB), runtime limit, and partition, use this command:

Code Block
sinfo -N -o "%25N %5c %10m %15l %25R" # node name, CPU count, memory (MB), time limit, partition

Interactive Session

Code Block
srun --pty -n1 --mem=8G -p scu-cpu /bin/bash -i
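
The command above requests a single task with 8 GB of memory on the scu-cpu partition. For an interactive session on a GPU node, a similar srun invocation can be used; the line below is a minimal sketch that assumes one GPU is requested via --gres=gpu:1 on the scu-gpu partition (adjust the partition and resources as needed).

Code Block
srun --pty -n1 --mem=8G --gres=gpu:1 -p scu-gpu /bin/bash -i # interactive shell on a GPU node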


...

A simple job submission example
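
The sketch below illustrates what such a submission script could look like; the job name, output file, and resource values are illustrative placeholders, and the partition and time limit should be adjusted to match the tables above.

Code Block
#!/bin/bash
#SBATCH --job-name=test_job          # illustrative job name
#SBATCH --partition=scu-cpu          # partition to submit to
#SBATCH --ntasks=1                   # single task
#SBATCH --mem=8G                     # memory request
#SBATCH --time=01:00:00              # walltime, must stay within the partition's runtime limit
#SBATCH --output=test_job_%j.out     # output file; %j expands to the job ID

echo "Running on $(hostname)"

Saved as, for example, test_job.sh, the script is submitted with sbatch test_job.sh, and its status can be checked with squeue -u $USER.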

...