...
The AI cluster has the following storage systems configured:
Name | Mount point | Size | Use | Is backed up? | Comment |
---|---|---|---|---|---|
Home | | 2 TB | Home filesystem. Used to keep small files: configs, code, scripts, etc. | No | Limited space; use it only for small files. |
Midtier | /midtier | varies per lab | Each lab has an allocation under /midtier intended for data that is actively being used or processed, such as research datasets. | No | |
AI GPFS | | 700 TB | TBD | No | Parallel file system for data-intensive workloads. Limited access, granted on special request. |
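Before staging large datasets, it can help to check how full an allocation is. A minimal sketch using standard tools, assuming the /midtier/labname layout used in the examples below (labname and cwid are placeholders for a lab's directory and a user's CWID):

```bash
# Show total, used, and available space on the midtier filesystem
df -h /midtier/labname

# Summarize how much space a user's scratch directory consumes
du -sh /midtier/labname/scratch/cwid
```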
...
```bash
# List all files and directories in the scratch directory
ls /midtier/labname/scratch/

# Navigate to a specific subdirectory
cd /midtier/labname/scratch/cwid

# Copy a file from the current directory to another directory
cp data.txt /midtier/labname/scratch/cwid/

# Move the copied file to a different directory
mv /midtier/labname/scratch/cwid/data.txt /midtier/labname/scratch/backup/

# Create a new directory
mkdir /midtier/labname/scratch/cwid/new_project/
```
Running jobs
Important notice:
!!! Do not run computations on login nodes !!!
Running your application code directly, without submitting it through the scheduler, is prohibited. Login nodes are shared resources reserved for light tasks such as file management and job submission; running heavy computations on them degrades performance for all users. Instead, submit your compute jobs to the appropriate SLURM queue, which is designed to handle such workloads efficiently.
Computational jobs on the AI cluster are managed with the SLURM workload manager. We provide an in-depth tutorial on how to use SLURM:
SLURM Batch job example
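For orientation, a minimal batch script might look like the sketch below. The partition, module, and script names are assumptions, not cluster-specific values; run sinfo and module avail on the cluster to find the real ones.

```bash
#!/bin/bash
#SBATCH --job-name=example_job     # name shown in squeue output
#SBATCH --partition=gpu            # assumed partition name; run sinfo to list real ones
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --cpus-per-task=4          # CPU cores for the task
#SBATCH --mem=16G                  # memory for the job
#SBATCH --time=01:00:00            # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x_%j.out         # log file named <job-name>_<job-id>.out

# Load required software (module name is an assumption)
module load python

# Run the application
python train.py
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.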
SLURM interactive job example
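An interactive session is typically requested with srun; again, the partition name and resource amounts below are illustrative.

```bash
# Request an interactive shell on a compute node for one hour
srun --partition=gpu --gres=gpu:1 --cpus-per-task=4 \
     --mem=16G --time=01:00:00 --pty bash
```

The shell runs on the allocated compute node, so heavy commands there do not burden the login nodes; type exit to release the allocation.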
...