Table of Contents
...
This cluster features high-memory nodes, Nvidia GPU servers (A100, A40 and L40), InfiniBand interconnect, and specialized storage designed for AI workloads.
Info
The AI cluster allows special time-limited projects dealing with clinical data. Resources for such projects are granted upon special request; contact scu@med.cornell.edu for more information.
Login to the AI cluster
The AI cluster is accessible via terminal SSH sessions. You need to connect from the WCM network, or have the VPN installed and enabled. Replace <cwid> with your CWID credentials.
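As an example, assuming the login node ai-login01.med.cornell.edu (the hostname referenced in the Jupyter tunnel instructions further down this page), an SSH session can be started with:
Code Block language bash
# connect to the AI cluster login node (hostname assumed from the tunnel example later on this page)
ssh <cwid>@ai-login01.med.cornell.edu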
...
Name | Mount point | Size | Use | Is backed up? | Comment |
---|---|---|---|---|---|
Home | | 2 TB | Home filesystem. Used to keep small files: configs, code, scripts, etc. | no | Has limited space; it is only used for small files |
Midtier | | varies per lab | Each lab has an allocation intended for data that is actively being used or processed (research datasets) | no | |
AI GPFS | | 700 TB | tbd | no | Parallel file system for data-intensive workloads. Limited access, granted on special request. |
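As a quick usage check, you can see how much space is left on your lab's Midtier allocation and how much a given directory consumes; the /midtier/<labname> path below is an assumption based on the scratch paths used later on this page:
Code Block language bash
# report free space on the lab allocation (path is an assumed placeholder)
df -h /midtier/<labname>
# report how much space a particular directory is using
du -sh /midtier/<labname>/scratch/<cwid>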
...
Software applications
Access to applications is managed with environment modules. Refer to <placeholder> for a detailed tutorial on modules, but here is a quick list of commands that can be used on the AI cluster:
Code Block
# list all files and directories in the scratch directory
ls /midtier/labname/scratch/
# navigate to a specific subdirectory
cd /midtier/labname/scratch/cwid
# copy a file from the current directory to another directory
cp data.txt /midtier/labname/scratch/cwid/
# move the copied file to a different directory
mv /midtier/labname/scratch/cwid/data.txt /midtier/labname/scratch/backup/
# create a new directory
mkdir /midtier/labname/scratch/cwid/new_project/
# list the available modules
module avail
# list currently loaded modules
module list
# load a module
module load <module_name>
# unload a module
module unload <module_name>
# swap versions of an application
module swap <module_name>/<version1> <module_name>/<version2>
# unload all modules
module purge
# get help
module help
# get more info for a particular module
module help <module_name>
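For example, to find and load the Miniconda module that is used later in the Jupyter instructions (module name taken from that section; the exact versions available on the cluster may differ):
Code Block language bash
# check which conda modules are available (module avail prints to stderr)
module avail 2>&1 | grep -i conda
# load one of them
module load miniconda3-4.10.3-gcc-12.2.0-hgiin2a
# confirm it is loaded
module list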
Info
If you can’t find an application that you need listed in the module avail output, contact scu@med.cornell.edu to request it.
Running jobs
Computational jobs on the AI cluster are managed with the SLURM workload manager. We provide an in-depth tutorial on how to use SLURM <placeholder>, but this section discusses some basic examples that are immediately applicable on the AI cluster.
Important notice
...
Warning
Do not run computations on login nodes |
Running your application code directly without submitting it through the scheduler is prohibited. Login nodes are shared resources and they are reserved for light tasks like file management and job submission. Running heavy computations on login nodes can degrade performance for all users. Instead, please submit your compute jobs to the appropriate SLURM queue, which is designed to handle such workloads efficiently.
Batch vs interactive jobs
There are two mechanisms to run SLURM jobs: “batch” and “interactive”. Interactive jobs are an inefficient way to utilize the cluster. By their nature, these jobs require the system to wait for user input, leaving the allocated resources idle during those periods. Since HPC clusters are designed to maximize resource utilization and efficiency, having nodes sit idle while still consuming CPU, memory, or GPU resources is counterproductive.
...
Code Block
gcc -fopenmp code.c
It will generate an executable a.out that we will use to illustrate how to submit jobs. This code accepts two arguments, the number of Monte Carlo samples and the number of parallel threads to be used, and can be executed as ./a.out 1000 8 (1000 samples, run with 8 parallel threads).
SLURM Batch job example
Here is an example of a batch script that can be run on the cluster. In this script we are requesting a single node with 4 CPU cores, used in SMP mode.
Code Block
#!/bin/bash
#SBATCH --job-name=<jobname>       # give your job a name
#SBATCH --nodes=1                  # asking for 1 compute node
#SBATCH --ntasks=1                 # 1 task
#SBATCH --cpus-per-task=4          # 4 CPU cores per task, so 4 cores in total
#SBATCH --time=00:30:00            # set this time according to your need, 30 minutes here
#SBATCH --mem=8GB                  # request an appropriate amount of RAM
##SBATCH --gres=gpu:1              # if you need to use a GPU; note that this line is commented out
#SBATCH -p <partition_name>        # specify your partition

# run in the directory that contains the executable
cd <code_directory>

# OMP_NUM_THREADS should be equal to the number of cores you are requesting
export OMP_NUM_THREADS=4

# running 1000000000 samples on 4 cores
./a.out 1000000000 4
Submit this job to the queue and inspect the output once it’s finished:
Code Block
sbatch script_name
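By default, sbatch writes the job's standard output and error to a file named slurm-<jobid>.out in the directory the job was submitted from (unless you redirect it with #SBATCH --output), so the results can be inspected with, for example:
Code Block language bash
# view the output of a running or finished job; replace <jobid> with the actual job ID
cat slurm-<jobid>.out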
You can view your jobs in the queue with:
Code Block
squeue -u <cwid>
Try running a few jobs with different numbers of cores and see how the runtime scales almost linearly with the number of cores.
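One way to run such a scaling test is sketched below. It is not part of the original script: it assumes you first edit the batch script to use ${SLURM_CPUS_PER_TASK} instead of the hard-coded 4, and then submit the same script several times while overriding the core count on the command line (command-line options take precedence over #SBATCH directives):
Code Block language bash
# inside the batch script, replace the hard-coded thread count with:
#   export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
#   ./a.out 1000000000 ${SLURM_CPUS_PER_TASK}
# then submit the same script with different core counts:
for n in 1 2 4 8; do
    sbatch --cpus-per-task=${n} --job-name=scaling_${n} script_name
done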
Note
The more resources you request from SLURM, the harder it will be for SLURM to allocate space for your job. For parallel jobs, more CPUs mean a faster run, but the job may be stuck in the queue for longer. Be aware of this trade-off: there is no universal answer on what the best strategy is; it usually depends on what kind of resources your particular job needs and how busy the cluster is at the moment.
SLURM interactive job example
Even though interactive jobs are inefficient and not recommended, sometimes there is no other way to do certain things. If you need to run an interactive job, here is how it can be done:
Code Block
srun --nodes 1 \
     --tasks-per-node 1 \
     --cpus-per-task 4 \
     --partition=<partition_name> \
     --gres=gpu:1 \
     --pty /bin/bash -i
Once the job is successfully started, you will be dropped into an interactive bash session on one of the compute nodes. The same scheduling considerations apply here: the more resources you request, the longer the potential wait time.
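For example, once the interactive session has started, you can verify where you are and what was allocated (nvidia-smi will only show a GPU if one was requested with --gres):
Code Block language bash
# which compute node am I on?
hostname
# which SLURM job does this session belong to?
echo ${SLURM_JOB_ID}
# how many CPUs were allocated to the job on this node?
echo ${SLURM_CPUS_ON_NODE}
# is the requested GPU visible?
nvidia-smi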
Once you are done with your interactive work, simply run the exit command. It will terminate your bash process, and therefore the whole SLURM job will be cancelled.
Jupyter job
These instructions will allow users to launch a Jupyter notebook server on the AI cluster’s compute nodes and connect to it from a local browser. A similar approach can be used to connect to other services.
Note
Jupyter jobs are interactive by nature, so all the considerations about interactive jobs and their inefficiencies apply here.
Prepare a Python environment (using conda or pip) on the cluster:
working with conda
load python module
Code Block language bash
# list existing modules with
module avail
# load suitable python module (miniconda in this example)
module load miniconda3-4.10.3-gcc-12.2.0-hgiin2a
create new environment
Code Block language bash
conda create -n myvenv python=3.10
# and follow prompts...
once this new environment is created, activate it with
Code Block language bash
# initialize conda
conda init bash  # or "conda init csh" if csh is used
# list existing environments
conda env list
# activate newly installed env from previous step
conda activate myvenv
# if you need to deactivate
conda deactivate
After all this is done, conda will automatically load the base environment upon every login to the cluster. To prevent this, run
Code Block language bash
conda config --set auto_activate_base false
Use this env to install jupyter (and other required packages)
Code Block language bash
conda install jupyter
working with pip
load python module
Code Block language bash
# list existing modules with
module avail
# load suitable python module (miniconda in this example)
module load miniconda3-4.10.3-gcc-12.2.0-hgiin2a
Create new environment and activate it
Code Block language bash
# create venv
python -m venv ~/myvenv
# and activate it
source ~/myvenv/bin/activate
Install jupyter into this new virtual environment:
Code Block language bash
pip install jupyter
Submit a SLURM job
Once the environment is prepared, submit a SLURM job. Prepare a SLURM batch script similar to this one (use your own SBATCH arguments, like the job name and the amount of resources you need; this is only an example):
Code Block
#!/bin/bash
#SBATCH --job-name=<myJobName>     # give your job a name
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00            # set this time according to your need
#SBATCH --mem=3GB                  # how much RAM will your notebook consume?
#SBATCH --gres=gpu:1               # if you need to use a GPU
#SBATCH -p ai-gpu                  # specify partition

module purge
module load miniconda3-4.10.3-gcc-12.2.0-hgiin2a

# if using conda
conda activate myvenv

# if using pip
# source ~/myvenv/bin/activate

# set log file
LOG="/home/${USER}/jupyterjob_${SLURM_JOB_ID}.txt"

# generate random port
PORT=`shuf -i 10000-50000 -n 1`

# print useful info
cat << EOF > ${LOG}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~ Slurm Job $SLURM_JOB_ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hello from the Jupyter job!

In order to connect to this jupyter session, set up a tunnel on your local workstation with:

 ---> ssh -t ${USER}@ai-login01.med.cornell.edu -L ${PORT}:localhost:${PORT} ssh ${HOSTNAME} -L ${PORT}:localhost:${PORT}

(copy the above command and paste it to your terminal). Depending on your ssh configuration,
you may be prompted for your password. Once you are logged in, leave the terminal running and
don't close it until you are finished with your Jupyter session.

Further down look for a line similar to

 ---> http://127.0.0.1:10439/?token=xxxxyyyxxxyyy

Copy this line and paste it in your browser.
EOF

# start jupyter
jupyter-notebook --no-browser --ip=0.0.0.0 --port=${PORT} 2>&1 | tee -a ${LOG}
Save this file somewhere in your file space on the login node and submit a batch SLURM job with
Code Block
sbatch script_name.txt
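Once the job has started (its state in squeue changes to R), the connection instructions are written to the log file defined in the script, for example:
Code Block language bash
# check that the Jupyter job is running
squeue -u <cwid>
# read the connection instructions generated by the job; replace <JOBID> with the actual job ID
cat ~/jupyterjob_<JOBID>.txt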
Set up a connection to the Jupyter server
The SLURM job will generate a text file ~/jupyterjob_<JOBID>.txt. Follow the instructions in this file to connect to the Jupyter session. The two steps that need to be taken are:
Set up an SSH tunnel (an example is shown below)
Point your browser to 127.0.0.1:<port>/?token=...
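The tunnel command below mirrors the one printed into the job's log file; <port> and <compute_node> are placeholders for the values generated by your specific job:
Code Block language bash
# run on your local workstation; keep this terminal open while using Jupyter
ssh -t <cwid>@ai-login01.med.cornell.edu -L <port>:localhost:<port> ssh <compute_node> -L <port>:localhost:<port>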
Note
Once you are done with your Jupyter work, save your progress if needed, close the browser tabs, and make sure to stop the SLURM job
...
with scancel, as described in the next section.
Stopping and monitoring SLURM jobs
To stop (cancel) a SLURM job use
Code Block
scancel <job_id>
Once the job is running, there are a few tools that can help monitor its status. Again, refer to <placeholder> for a detailed SLURM tutorial, but here is a list of some useful commands:
Code Block
# show status of the queue
squeue -l
# only list jobs by a specific user
squeue -l -u <cwid>
# print partitions info
sinfo
# print detailed info about a job
scontrol show job <job id>
# print detailed info about a node
scontrol show node <node_name>
# get a list of all the jobs executed within last 7 days:
sacct -u <cwid> -S $(date -d "-7 days" +%D) -o "user,JobID,JobName,state,exit"
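As a convenience, these commands can be combined with the standard watch utility to refresh the queue view periodically (a usage sketch, not specific to this cluster):
Code Block language bash
# refresh your view of the queue every 30 seconds; press Ctrl+C to stop
watch -n 30 squeue -u <cwid>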