
...

  • panda: 70 CPU-only nodes, 7-day runtime limit

Edison cluster:

  • edison: 9 GPU nodes, 2-day runtime limit
  • edison_k40m: 5 GPU (k40m) nodes, 2-day runtime limit
  • edison_k80: 4 GPU (k80) nodes, 2-day runtime limit

...

SCU cluster partitions:

  • scu-cpu: 22 CPU nodes, 7-day runtime limit
  • scu-gpu: 6 GPU nodes, 2-day runtime limit

...

  • cryo-cpu: 14 CPU-only nodes, 7-day runtime limit
  • cryo-gpu: 6 GPU nodes, 2-day runtime limit
  • cryo-gpu-v100: 2 GPU nodes, 2-day runtime limit
  • cryo-gpu-p100: 3 GPU nodes, 2-day runtime limit

PI-specific cluster partitions:

  • accardi_gpu: 4 GPU nodes, 2-day runtime limit
  • accardi_cpu: 1 CPU node, 7-day runtime limit
  • boudker_gpu: 2 GPU nodes, 7-day runtime limit
  • boudker_gpu-p100: 3 GPU nodes, 7-day runtime limit
  • boudker_cpu: 2 CPU nodes, 7-day runtime limit
  • sackler_cpu: 1 CPU node, 7-day runtime limit
  • sackler_gpu: 1 GPU node, 7-day runtime limit
  • hwlab-rocky_gpu: 12 GPU nodes, 7-day runtime limit
  • eliezer-gpu: 1 GPU node, 7-day runtime limit

Other specific cluster partitions:

  • scu-res: 1 GPU node, 7-day runtime limit


Of course, the above will be updated as needed; regardless, to see an up-to-date description of all available partitions, use the sinfo command on scu-login02. For a description of each node's CPU core count, memory (in MB), runtime limit, and partition, use this command:
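
For example, sinfo's node-oriented format options can report those fields. The following is a minimal sketch; the exact format string used on the cluster may differ:

Code Block
# List each node with its CPU count, memory (MB), time limit, and partition
sinfo -N -o "%N %c %m %l %P"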

...

Code Block
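# Request an interactive shell on the scu-cpu partition with 8 GB of memory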
srun -n1 --pty --partition=scu-cpu --mem=8G bash -i


To request a specific number of GPUs, add the request to your srun or sbatch command.

Below is an example of requesting 1 GPU; you can request up to 4 GPUs on a single node:

Code Block
--gres=gpu:1
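
In a batch script, the same request goes on an #SBATCH line. Below is a minimal sketch, assuming the scu-gpu partition described above; the GPU count, time limit, and workload are illustrative:

Code Block
#!/bin/bash
#SBATCH --partition=scu-gpu
#SBATCH --gres=gpu:2          # request 2 GPUs on a single node (up to 4)
#SBATCH --time=1-00:00:00     # 1 day, within the partition's 2-day limit

# Placeholder workload: report the GPUs allocated to this job
nvidia-smi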


...

A simple job submission example

...