...
General SciComp Partition for all users on BRB Cluster
scu-cpu: 22 CPU nodes, 7-day runtime limit
scu-gpu: 6 GPU nodes, 2-day runtime limit
...
SRUN: Interactive Session
Example:
```
srun --gres=gpu:1 --partition=partition_name --time=01:00:00 --mem=8G --cpus-per-task=4 --pty bash
```
...
--gres=gpu:1: Allocates 1 GPU to your job.
--partition=partition_name: Specifies the partition to run the job in. Replace partition_name with the appropriate partition, like scu-gpu.
--time=01:00:00: Requests 1 hour of runtime. Adjust the time as needed.
--mem=8G: Requests 8 GB of memory.
--cpus-per-task=4: Requests 4 CPU cores.
--pty bash: Launches an interactive bash shell after resources are allocated.
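As a sketch, the flags above can be wrapped in a small helper that prints the full srun command for a given GPU count and walltime. The `gpu_session` name is illustrative (not part of the cluster tooling); it defaults to the scu-gpu partition listed earlier:

```shell
# Hypothetical helper: print the srun command for an interactive GPU session.
# gpu_session is an illustrative name; scu-gpu is the GPU partition named above.
gpu_session() {
  local gpus="${1:-1}" walltime="${2:-01:00:00}"
  echo "srun --gres=gpu:${gpus} --partition=scu-gpu --time=${walltime} --mem=8G --cpus-per-task=4 --pty bash"
}

gpu_session 2 04:00:00
# → srun --gres=gpu:2 --partition=scu-gpu --time=04:00:00 --mem=8G --cpus-per-task=4 --pty bash
```

Printing the command first lets you sanity-check the request before spending time in the queue; run the printed line on a login node to start the session.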
SBATCH: Submission Script
```
#!/bin/bash
#SBATCH --job-name=gpu_job          # Job name
#SBATCH --output=output_file.txt    # Output file
#SBATCH --partition=gpu_partition   # Partition to run the job in (e.g., scu-gpu)
#SBATCH --gres=gpu:1                # Request 1 GPU
#SBATCH --time=01:00:00             # Max runtime (1 hour)
#SBATCH --mem=8G                    # Memory requested
#SBATCH --cpus-per-task=4           # Number of CPU cores per task

# Your commands here
srun python my_script.py
```
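One way to use a submission script like the one above: save it to a file and hand that file to sbatch. The filename gpu_job.sh is illustrative, and sbatch/squeue only work on the cluster itself:

```shell
# Write the submission script to a file (gpu_job.sh is an illustrative name).
# scu-gpu is the GPU partition named earlier in this page.
cat > gpu_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --output=output_file.txt
#SBATCH --partition=scu-gpu
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH --mem=8G
#SBATCH --cpus-per-task=4

srun python my_script.py
EOF

# On a cluster login node, submit and monitor with:
#   sbatch gpu_job.sh
#   squeue -u $USER
```

Unlike srun, sbatch returns immediately after queueing the job; output lands in output_file.txt once the job runs.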