...
cryo-cpu
: 15 CPU-only nodes, 2-day runtime limit

cryo-gpu
: 3 GPU (P100) nodes, 2-day runtime limit
Edison cluster:

edison
: 9 GPU nodes, 2-day runtime limit

edison_k40m
: 5 GPU (K40m) nodes, 2-day runtime limit

edison_k80
: 4 GPU (K80) nodes, 2-day runtime limit
PI-specific cluster partitions:
accardi_huang_reserve
: node178, GPU node, 7-day runtime limit

boudker_reserve
: node179, GPU (P100) node, 7-day runtime limit

blanchard_reserve
: node180, GPU (P100) node, 7-day runtime limit

cantley-gpu
: 2 GPU (V100) nodes, 7-day runtime limit
Of course, the above will be updated as needed; regardless, for an up-to-date description of all available partitions, use the command sinfo
on curie. For a description of each node's CPU core count, memory (in MB), runtime limit, and partition, use this command:
```
sinfo -N -o "%25N %5c %10m %15l %25R"
```
Or, if you just want to see a description of the nodes in a given partition:
```
sinfo -N -o "%25N %5c %10m %15l %25R" -p panda   # for the partition panda
```
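Beyond listing nodes per partition, standard SLURM tooling also lets you inspect a single node or filter by node state. The sketch below assumes the standard `sinfo`/`scontrol` commands available on curie; `node178` is one of the reserve nodes listed above, and the flags shown (`-t` for state filtering, `scontrol show node`) are stock SLURM options:

```
# Show full details (CPUs, memory, GRES/GPUs, current state) for one node
scontrol show node node178

# List only idle nodes in a partition (handy before submitting a job)
sinfo -N -p cryo-gpu -t idle

# Summarize all partitions with their runtime limits (%P = partition, %l = time limit)
sinfo -o "%15P %10l"
```

These are read-only queries, so they are safe to run at any time from a login node.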
...
A simple job submission example
...