...
This cluster features high-memory nodes, NVIDIA GPU servers (A100, A40, and L40), an InfiniBand interconnect, and specialized storage designed for AI workloads.
> **Info:** The AI cluster supports special time-limited projects dealing with clinical data. Resources for such projects are granted upon special request; contact scu@med.cornell.edu for more information.
Login to the AI cluster
The AI cluster is accessible via terminal SSH sessions. You must connect from the WCM network, or have the VPN installed and enabled. Replace `<cwid>` with your own CWID.
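As a minimal sketch, a connection looks like the following (the login node address here is a placeholder; substitute the actual host name):

```bash
# Connect to the AI cluster over SSH from the WCM network or VPN.
# <ai-login-node> is a placeholder for the actual login node address;
# <cwid> is your own CWID.
ssh <cwid>@<ai-login-node>
```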
...
| Name | Mount point | Size | Use | Is backed up? | Comment |
|---|---|---|---|---|---|
| Home | | 2 TB | Home filesystem. Used to keep small files: configs, code, scripts, etc. | no | Limited space; intended for small files only. |
| Midtier | | varies per lab | Each lab has an allocation intended for data that is actively being used or processed, such as research datasets. | no | |
| AI GPFS | | 700 TB | TBD | no | Parallel file system for data-intensive workloads. Limited access, granted on special request. |
...
Software applications
Access to applications is managed with `modules`. Refer to <placeholder> for a detailed tutorial on modules; here is a quick list of commands that can be used on the AI cluster:
```bash
# list all the available modules:
module avail

# list currently loaded modules:
module list

# load a module:
module load <module_name>

# unload the module:
module unload <module_name>

# swap versions of the application:
module swap <module_name>/<version1> <module_name>/<version2>

# unload all modules:
module purge

# get help:
module help

# get more info for a particular module:
module help <module_name>
```
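For example, a typical session might look like this (the module name and version are illustrative, not a guarantee of what is installed; check `module avail` first):

```bash
# Load a hypothetical Python module, verify it, then swap versions.
module load python/3.10               # illustrative name/version
module list                           # confirm it is loaded
python --version
module swap python/3.10 python/3.11   # illustrative version swap
module purge                          # clean up when done
```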
> **Info:** If you can't find an application that you need in the `module avail` listing, contact scu@med.cornell.edu to request it.
Running jobs
Computational jobs on the AI cluster are managed with the SLURM job scheduler. We provide an in-depth tutorial on how to use SLURM at <placeholder>, but this section covers some basic examples that are immediately applicable on the AI cluster.
Important notice
> **Warning:** Do not run computations on login nodes.
Running your application code directly, without submitting it through the scheduler, is prohibited. Login nodes are shared resources reserved for light tasks such as file management and job submission; running heavy computations on them degrades performance for all users. Instead, submit your compute jobs to the appropriate SLURM queue, which is designed to handle such workloads efficiently.
Batch vs interactive jobs
There are two mechanisms to run SLURM jobs: “batch” and “interactive”. Interactive jobs are an inefficient way to utilize the cluster. By their nature, these jobs require the system to wait for user input, leaving the allocated resources idle during those periods. Since HPC clusters are designed to maximize resource utilization and efficiency, having nodes sit idle while still consuming CPU, memory, or GPU resources is counterproductive.
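As a minimal sketch, a batch submission might look like the following (the partition name, resource requests, and script names are placeholders, not cluster defaults; adjust them to the actual AI cluster configuration):

```bash
#!/bin/bash
# example-job.sh -- a minimal SLURM batch script sketch.
# All names and values below are placeholders.
#SBATCH --job-name=example
#SBATCH --partition=<partition>   # the appropriate queue for your work
#SBATCH --gres=gpu:1              # request one GPU, if needed
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=01:00:00

module load <module_name>         # load the software your job needs
python my_script.py               # replace with your application
```

Submit it with `sbatch example-job.sh` and check its status with `squeue -u <cwid>`. The interactive equivalent, for the rare cases where you genuinely need a live shell on a compute node, is `srun --partition=<partition> --gres=gpu:1 --pty bash`.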
...