The purpose of this documentation is to describe how to run Relion calculations on SCU resources, including some general suggestions regarding how to achieve better performance. It is NOT intended to teach you how to run Relion; there is already an excellent Relion tutorial that serves this purpose.
General Environment Setup to Work with SCU Resources
Setup Slurm
In order to use SCU resources, you'll first need to setup your environment to work properly with our implementation of the job scheduler, Slurm. This is described here: Using Slurm
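Once Slurm is set up, a quick sanity check (assuming the standard Slurm client tools are on your PATH) is to list the partitions you can submit to; the cryo-* partitions used later on this page should appear:
sinfo -o "%P %a %l %D"   # partition name, availability, time limit, and node count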
Setup Spack
Once this is done, you'll need to setup your environment to use Spack, our cluster-wide package manager. This is described here: Spack
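As a quick check that Spack is working in your shell (a sketch; the setup-env.sh path below is taken from the example submission script later on this page and may differ on your system):
# make the spack command available in this shell
source /software/spack/centos7/share/spack/setup-env.sh
# confirm spack can see the Relion installations
spack find relion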
Relion-specific Environment Setup
In order to use the Relion graphical user interface (GUI), you'll need to log on to curie, the Slurm submission node (see Using Slurm for more details), and add this to your ~/.bashrc:
export RELION_QSUB_EXTRA_COUNT=2
export RELION_QSUB_EXTRA1="Number of nodes:"
export RELION_QSUB_EXTRA1_DEFAULT="1"
export RELION_QSUB_EXTRA2="Number of GPUs:"
export RELION_QSUB_EXTRA2_DEFAULT="0"
export RELION_CTFFIND_EXECUTABLE=/softlib/apps/EL7/ctffind/ctffind-4.1.8/bin/ctffind
After editing your ~/.bashrc, log out of curie.
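The next time you log in to curie, you can verify that the variables were picked up, for example:
# should print the RELION_* settings added above
env | grep '^RELION_'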
Relion Versions Installed
Log on to curie with X11, and request an interactive node:
ssh -Y pascal   # if off campus, use ssh -Y pascal.med.cornell.edu
ssh -Y curie
The -Y
enables the use of the GUI.
In addition to pascal,
you can also use the other gateway nodes, aphrodite
and aristotle.
If you plan on running calculations on a desktop/workstation (i.e. a system not allocated by Slurm), then you'll need to log in there (make sure you use -Y in each of your ssh commands).
For cluster/reserved nodes only: to request an interactive session, use this command (for more information, see Using Slurm):
srun -n1 --pty --x11 --partition=cryo-cpu --mem=8G bash -l
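Once the interactive session starts, you can confirm which node you were allocated, for example:
hostname             # the compute node running your interactive session
echo $SLURM_JOB_ID   # the Slurm job ID of the session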
Seeing which Relion versions are available
To see what's available, use this command (for more information on the spack command, see Spack):
spack find -l -v relion
Here is output from the above command that is current as of 11/11/19:
[root@node175 ~]# spack find -l -v relion cuda_arch=60
==> 3 installed packages
-- linux-centos7-broadwell / gcc@8.2.0 --------------------------
citbw3p relion@3.0.7    build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
qgb6abl relion@3.1_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
iyusobn relion@3.0.8    build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster

[root@node175 ~]# spack find -l -v relion cuda_arch=70
==> 3 installed packages
-- linux-centos7-skylake_avx512 / gcc@8.2.0 ---------------------
gzhr4k3 relion@3.0.7    build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
5dknaqs relion@3.1_beta build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
ar4poio relion@3.0.8    build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster

[root@node175 ~]# spack find -l -v relion~cuda
==> 3 installed packages
-- linux-centos7-broadwell / gcc@8.2.0 --------------------------
xy5wtb3 relion@3.0.7    build_type=RelWithDebInfo ~cuda cuda_arch=none +double~double-gpu+gui purpose=cluster
7juireg relion@3.1_beta build_type=RelWithDebInfo ~cuda cuda_arch=none +double~double-gpu+gui purpose=cluster
bytbti4 relion@3.0.8    build_type=RelWithDebInfo ~cuda cuda_arch=none +double~double-gpu+gui purpose=cluster
There's a lot in the above output, so let's break it down!
First, in the above, we have several production-ready Relion versions:
relion@3.0.7
relion@3.0.8
relion@3.1_beta
Some of these Relion installations are intended for use on nodes/workstations with GPUs, whereas others are intended for CPU-only nodes/workstations:
+cuda   # Relion installation that supports GPU use
~cuda   # Relion installation that does not support GPU use
Do NOT try to run a +cuda installation of Relion on a CPU-only node–this will result in errors, as these versions of Relion expect CUDA to be present (which it is not on the CPU-only nodes).
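If you are unsure whether the node you are on has GPUs, a quick check (assuming the NVIDIA driver is installed on the GPU nodes, as it is not on CPU-only nodes) is:
# lists the GPUs on this node; on a CPU-only node this command is
# typically absent or reports that no devices were found
nvidia-smi -L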
Finally, we have versions of Relion that are specific to different GPU models:
cuda_arch=60   # For use on nodes with Nvidia P100s - cryo-gpu Slurm partition
cuda_arch=70   # For use on nodes with Nvidia V100s - cryo-gpu-v100 Slurm partition
What Relion version should I use?!?
Which version of Relion you use (3.0.7, 3.0.8, or 3.1_beta) is more of a scientific question than a technical one.
After the specific version of Relion is selected, choosing the other installation parameters is straightforward:
To load Relion 3.0.8 for the cluster (or reserved nodes) on CPU-only nodes (i.e. no GPUs):
spack load -r relion@3.0.8~cuda
This is the command you want if you wish to launch the Relion GUI–don't worry that this version doesn't use GPUs. The GUI is simply used for creating the job and either submitting it directly to Slurm, or writing out a submission script. This Relion installation is only for running the GUI or for running jobs on CPU-only nodes.
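After the spack load completes, you can confirm which Relion binaries are on your PATH, for example:
# both should point at the spack-installed build you just loaded
which relion
which relion_refine_mpi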
The following spack load commands are for loading Relion when you intend to submit jobs via a script, or to run the GUI on a desktop/workstation.
To run Relion 3.0.7 on the cluster (or reserved nodes) with GPUs (P100s):
spack load -r relion@3.0.7 cuda_arch=60
To run Relion 3.0.8 on the cluster (or reserved nodes) with GPUs (V100s):
spack load -r relion@3.0.8 cuda_arch=70
Note that you can also load Relion by its Spack hash. For example, spack load -r /5dknaqs loads the relion@3.1_beta build for V100s (cuda_arch=70).
Launching the Relion GUI
Note for cluster/reserved nodes only: this assumes you are already in an interactive session (as described in the previous section); if not, request an interactive session!
If you haven't already, load relion@3.1_beta
with this command:
spack load -r relion@3.1_beta~cuda
As noted above, don't worry that this version doesn't use GPUs; the GUI is only used to set up jobs and either submit them directly to Slurm or write out a submission script.
Next, change to the directory where you wish to keep your Relion files for a given project and execute this command:
relion
This should launch the Relion GUI.
Relion generates a lot of subdirectories and metadata; to keep everything organized, we recommend that each Relion project (i.e. analysis workflow for a given set of data) is given its own directory.
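For example (the directory names here are just an illustration; use whatever naming scheme suits your lab):
# one directory per Relion project, then launch the GUI from inside it
mkdir -pv ~/relion_projects/my_dataset
cd ~/relion_projects/my_dataset
relion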
Submitting Jobs from the GUI:
Should I run jobs locally or submit jobs to the Slurm queue?
If you are RUNNING jobs on a workstation (i.e. actually using your workstation to PERFORM the calculation, and not just submitting from the workstation) AND that workstation is not managed by Slurm, just run all jobs locally.
Some jobs should just be run from the interactive session, and NOT submitted to the Slurm queue. These jobs are generally lightweight in terms of computational demand and/or require the Relion GUI. Here are a few examples of jobs that should be run from the GUI (and not submitted to the Slurm queue):
- Import
- Manual picking
- Subset selection
- Join star files
Here are some jobs that should never be run directly (i.e. these jobs should always be submitted to the Slurm queue, unless running on a local workstation):
- Motion correction
- CTF estimation
- Auto-picking
- Particle sorting
- 2D/3D classification
- 3D initial model
- 3D auto-refine
- 3D multi-body
In general, if the job takes a long time to run or requires a lot of compute resources, it should be submitted to the Slurm queue (or run on a local workstation).
How to run jobs locally?
This is very straightforward. Set up your calculation in the Relion GUI (see the Relion tutorial for calculation-specific settings). Under the "Running" tab for a given calculation, just make sure the option "Submit to queue?" is set to "No". Otherwise, follow the procedure as described in the Relion tutorial.
How to submit jobs to the Slurm queue?
Set up your calculation in the Relion GUI (see the Relion Tutorial for calculation-specific settings).
- Under the "Running" tab for a given calculation, set the option "Submit to queue?" to "Yes".
- You'll need to set "Queue name:" to the Slurm partition you wish to use:
- If you want to only run on CPUs, then set this to "cryo-cpu"
- If you want to run on the shared cryoEM GPUs, then set this to "cryo-gpu"
- If you want to run on a lab-reserved node, then set this to the appropriate partition (e.g. "blanchard_reserve")
- Set "Queue submit command" to "sbatch"
- Set the number of nodes/GPUs to the desired values
Relion requires a set of template Slurm submission scripts that enable it to submit jobs. You'll need to set "Standard submission script" to the path of the appropriate template script. A set of template submission scripts, which are no longer maintained, can be found here (a sketch of the general shape of such a template follows this list):
/softlib/apps/EL7/slurm_relion_submit_templates # select the template script that is appropriate for your job
- Give the "Current job" an alias if desired and click "Run!" # assumes everything else is set. Checking job status (and other Slurm functionality) is described here and here.
Single node jobs:
Users commonly submit a single job to one node. An example 3D Refine script is shown below:
#!/bin/bash
#SBATCH --job-name=n1_bench
#SBATCH -p cryo-gpu-v100
#SBATCH --mem=170g
#SBATCH --nodes=1
#SBATCH --ntasks=5
#SBATCH --ntasks-per-node=5
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:4

cd /athena/scu/scratch/dod2014/relion_2

# make spack available and load the desired Relion build (by hash)
source /software/spack/centos7/share/spack/setup-env.sh
# 3.1_beta skylake openmpi 4.0.1
spack load -r /ii7uzb5
# 3.0.8 w openmpi 4 + slurm
#spack load -r /sfp6sf5
# 3.1_beta w openmpi 4 + slurm
#spack load -r /ii7uzb5

# define user up front so it can be used in the output path below
user=$SLURM_JOB_USER

mkdir -pv Refine3D/quackmaster/run_single_node/${user}_${SLURM_JOB_ID}

mpirun -display-allocation -display-map -v -np 5 relion_refine_mpi --o Refine3D/quackmaster/run_single_node/${user}_${SLURM_JOB_ID} --split_random_halves --i Select/job088/particles.star --ref 012218_CS_256.mrc --firstiter_cc --ini_high 30 --dont_combine_weights_via_disc --scratch_dir /scratchLocal --pad 2 --ctf --particle_diameter 175 --flatten_solvent --zero_mask --oversampling 1 --healpix_order 2 --auto_local_healpix_order 4 --offset_range 5 --offset_step 2 --sym C1 --low_resol_join_halves 40 --norm --scale --j 4 --gpu --pool 100 --auto_refine

# report whether relion crashes
# remove particles from scratch if it crashes
if [ $? -eq 0 ]
then
    echo -e "$SLURM_JOB_ID exited successfully"
else
    MY_TMP_DIR=/scratchLocal/${user}_${SLURM_JOB_ID}
    echo -e "$SLURM_JOB_ID failed so cleaning up $MY_TMP_DIR"
    rm -rf $MY_TMP_DIR
fi
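Assuming the script above is saved as refine3d_single.sh (the file name here is just an example), it is submitted and monitored with the usual Slurm commands:
sbatch refine3d_single.sh   # submit the job to the queue
squeue -u $USER             # check that it is pending or running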
Multi node jobs:
If you want your job to finish sooner, and there are idle nodes, then you can run a single job on multiple nodes at once. This will allow each iteration, and the entire job, to finish sooner than it would on a single node.
To achieve this, change the Slurm submission options and the Relion command-line arguments. For example, the single-node script above can be submitted to 3 nodes with the following changes:
#!/bin/bash
#SBATCH --job-name=n3_bench
#SBATCH -p cryo-gpu-v100
#SBATCH --mem=170g
#SBATCH --nodes=3
#SBATCH --ntasks=13
#SBATCH --ntasks-per-node=5
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:4

mpirun -display-allocation -display-map -v -n 13 relion_refine_mpi --o Refine3D/quackmaster/run_multi_node/${user}_${SLURM_JOB_ID} --split_random_halves --i Select/job088/particles.star --ref 012218_CS_256.mrc --firstiter_cc --ini_high 30 --dont_combine_weights_via_disc --scratch_dir /scratchLocal --pad 2 --ctf --particle_diameter 175 --flatten_solvent --zero_mask --oversampling 1 --healpix_order 2 --auto_local_healpix_order 4 --offset_range 5 --offset_step 2 --sym C1 --low_resol_join_halves 40 --norm --scale --j 6 --gpu --pool 100 --auto_refine
In the above, notice --ntasks=13 and mpirun -n 13. These ensure that 13 MPI processes in total are launched across the 3 nodes in the cryo-gpu-v100 partition, with 5 processes on the first node and 4 on each of the other two. Note that you ideally want one MPI worker process per GPU (each node has 4 GPUs; the extra process on the first node is the master rank, which does not use a GPU). Running more than one worker process per GPU will result in poor performance.
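The task count follows directly from the GPU layout, so a quick way to work it out for a different node count is (a sketch, assuming 4 GPUs per node and one extra master rank, as in the example above):
NODES=3
GPUS_PER_NODE=4
# 1 master rank + (nodes x GPUs per node) worker ranks
NTASKS=$((1 + NODES * GPUS_PER_NODE))
echo $NTASKS   # 13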
In the Slurm log output we can see the allocation, with three nodes and the correct number of MPI processes:
mkdir: created directory ‘Refine3D/quackmaster/run_multi_node/_1349014’

======================   ALLOCATED NODES   ======================
    cantley-node01: flags=0x11 slots=5 max_slots=0 slots_inuse=0 state=UP
    cantley-node02: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
    node183: flags=0x11 slots=4 max_slots=0 slots_inuse=0 state=UP
=================================================================
 Data for JOB [8540,1] offset 0 Total slots allocated 13

 ========================   JOB MAP   ========================

 Data for node: cantley-node01  Num slots: 5  Max slots: 0  Num procs: 5
        Process OMPI jobid: [8540,1] App: 0 Process rank: 0 Bound: UNBOUND
        Process OMPI jobid: [8540,1] App: 0 Process rank: 1 Bound: UNBOUND
        Process OMPI jobid: [8540,1] App: 0 Process rank: 2 Bound: UNBOUND
        Process OMPI jobid: [8540,1] App: 0 Process rank: 3 Bound: UNBOUND
        Process OMPI jobid: [8540,1] App: 0 Process rank: 4 Bound: UNBOUND

 Data for node: cantley-node02  Num slots: 4  Max slots: 0  Num procs: 4
        Process OMPI jobid: [8540,1] App: 0 Process rank: 5 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 6 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 7 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 8 Bound: N/A

 Data for node: node183  Num slots: 4  Max slots: 0  Num procs: 4
        Process OMPI jobid: [8540,1] App: 0 Process rank: 9 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 10 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 11 Bound: N/A
        Process OMPI jobid: [8540,1] App: 0 Process rank: 12 Bound: N/A

 =============================================================
Cleaning up /scratch:
Relion can either load particles into RAM before processing (--preread_images) or copy them to local scratch (--scratch_dir); please see the Relion documentation. If you copy particles to local scratch and Relion crashes, these files will not be deleted automatically. Slurm will clear out old data after a period of several days, but you should do this yourself by adding the following bash code at the end of your Slurm submission script:
# report whether relion crashes
# remove particles from scratch if it crashes
if [ $? -eq 0 ]
then
    echo -e "$SLURM_JOB_ID exited successfully"
else
    user=$SLURM_JOB_USER
    MY_TMP_DIR=/scratchLocal/${user}_${SLURM_JOB_ID}
    echo -e "$SLURM_JOB_ID failed so cleaning up $MY_TMP_DIR"
    rm -rf $MY_TMP_DIR
fi