General Environment Setup to Work with SCU Resources
Setup Slurm
In order to use SCU resources, you'll first need to set up your environment to work properly with our implementation of the job scheduler, Slurm. This is described here: Using Slurm
Setup Spack
Once this is done, you'll need to set up your environment to use Spack, our cluster-wide package manager. This is described here: Spack
Relion-specific Environment Setup
In order to use the Relion graphical user interface (GUI), you'll need to log on to curie, the Slurm submission node (see Using Slurm for more details), and add this to your ~/.bashrc:
export RELION_QSUB_EXTRA_COUNT=2
export RELION_QSUB_EXTRA1="Number of nodes:"
export RELION_QSUB_EXTRA1_DEFAULT="1"
export RELION_QSUB_EXTRA2="Number of GPUs:"
export RELION_QSUB_EXTRA2_DEFAULT="0"
After editing your ~/.bashrc, log out of curie.
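If you want to confirm that the new variables are picked up, a quick optional check (assuming a bash login shell) is to re-read your ~/.bashrc in the current session and print one of them:
# re-read ~/.bashrc in the current shell (new logins pick it up automatically)
source ~/.bashrc
# print one of the new variables to confirm it is set
echo $RELION_QSUB_EXTRA1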
Relion Versions Installed
Log on to curie with X11 forwarding, and request an interactive node:
ssh -Y pascal   # if off campus, use ssh -Y pascal.med.cornell.edu
ssh -Y curie
The -Y flag enables X11 forwarding, which is needed to display the GUI.
In addition to pascal, you can also use the other gateway nodes, aphrodite and aristotle.
If you plan on running calculations on a desktop/workstation (e.g. a system not allocated by Slurm), then you'll need to log in to that system instead (make sure you use -Y in each of your ssh commands).
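For example, a workstation login might look like the following sketch (the workstation hostname here is hypothetical; substitute your own system):
ssh -Y pascal.med.cornell.edu           # gateway, with X11 forwarding
ssh -Y my-workstation.med.cornell.edu   # hypothetical workstation hostname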
For cluster/reserved nodes only: to request an interactive session, use this command (for more information, see Using Slurm):
srun -n1 --pty --x11 --partition=cryo-cpu --mem=8G bash -l
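If your interactive work needs more resources, the same srun flags can be adjusted; for example (still on the cryo-cpu partition, the values here are just illustrative):
# request 4 CPUs and 16 GB of memory for the interactive session
srun -n1 -c4 --pty --x11 --partition=cryo-cpu --mem=16G bash -l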
Seeing which Relion versions are available
To see what's available, use this command (for more information on the spack command, see Spack):
spack find -l -v relion
Here is output from the above command that is current as of 5/3/19:
==> 14 installed packages.
-- linux-centos7-x86_64 / gcc@4.8.5 -----------------------------
pbsqju2 relion@2.0.3    build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui purpose=cluster
ua3zs52 relion@2.0.3    build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
djy46i6 relion@2.1      build_type=RelWithDebInfo +cluster+cuda cuda_arch=60 ~desktop+double~double-gpu+gui
cchnbyc relion@2.1      build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui
xxisr7j relion@2.1      build_type=RelWithDebInfo +cuda cuda_arch=60 +desktop+double~double-gpu+gui
lzd4ktq relion@2.1      build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui
v6jckz3 relion@2.1      build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
6cpdlsc relion@2.1      build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=desktop
ltaf3x6 relion@2.1      build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
u6dzm4v relion@3.0_beta build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui purpose=cluster
olcttts relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
f2fjevh relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=desktop
opdhbcs relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
2ilkf4r relion@develop  build_type=RelWithDebInfo +cuda+double~double-gpu+gui
There's a lot in the above output, so let's break it down!
First, in the above, we have several production-ready Relion versions:
relion@2.0.3
relion@2.1
relion@3.0_beta
In addition, these versions of Relion can run on the following platforms:
purpose=desktop    # used on workstations
purpose=cluster    # used on our Slurm-allocated cluster
Some of these Relion installations are intended for use on nodes/workstations with GPUs, whereas others are intended for CPU-only nodes/workstations:
+cuda    # Relion installation that supports GPU use
~cuda    # Relion installation that does not support GPU use
Do NOT try to run a +cuda installation of Relion on a CPU-only node–this will result in errors, as these versions of Relion expect CUDA to be present (which it is not on the CPU-only nodes).
Finally, we have versions of Relion that are specific to different GPU models:
cuda_arch=60    # for use on nodes with Nvidia P100s
cuda_arch=70    # for use on nodes with Nvidia V100s
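You can combine these variants to narrow the spack find output to a single installation; for example (the exact spec below is just an illustration based on the output above):
# list only the 3.0_beta cluster builds that support V100 GPUs
spack find -l -v relion@3.0_beta +cuda cuda_arch=70 purpose=cluster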
Which Relion version should I use?!?
Which version of Relion you use (2.0.3, 2.1, or 3.0_beta) is more of a scientific question than a technical one; however, in general, most users seem to be using 3.0_beta, unless they have legacy projects that were started with an older version (consult with your PI if you have questions about this).
Once you've selected a specific version of Relion, choosing the other installation parameters is straightforward:
To load Relion 3.0_beta for CPU-only nodes (i.e. no GPUs) on the cluster (or reserved nodes):
spack load -r relion@3.0_beta~cuda purpose=cluster
This is the command you want if you wish to launch the Relion GUI–don't worry that this version doesn't use GPUs. The GUI is simply used for creating the job and either submitting it directly to Slurm, or writing out a submission script. This Relion installation is only for running the GUI or for running jobs on CPU-only nodes.
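A quick way to confirm the load worked (an optional sanity check) is to verify that the relion binary is now on your PATH:
# should print the path to the relion binary provided by the loaded spack package
which relion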
The following spack load commands are for loading Relion with the intent of submitting jobs via a script (see the example submission-script sketch after this list), or running the GUI on a desktop/workstation.
To run relion@3.0_beta on the cluster (or reserved nodes) with GPUs (P100s):
spack load -r relion@3.0_beta+cuda purpose=cluster cuda_arch=60
To run relion@3.0_beta on the cluster (or reserved nodes) with GPUs (V100s):
spack load -r relion@3.0_beta+cuda purpose=cluster cuda_arch=70
To run relion@3.0_beta on a desktop/workstation with GPUs (P100s):
spack load -r relion@3.0_beta +cuda cuda_arch=60 purpose=desktop
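As an illustration of the script-submission route, here is a minimal, hypothetical Slurm batch-script sketch. The partition name, resource requests, and refinement arguments are placeholders only (they are not taken from this page); adapt them to your project, and see Using Slurm for the site-specific details:
#!/bin/bash -l
#SBATCH --job-name=relion_refine   # placeholder job name
#SBATCH --partition=cryo-gpu       # hypothetical partition; use your site's GPU partition
#SBATCH --nodes=1
#SBATCH --ntasks=5                 # one leader + four worker MPI ranks (illustrative)
#SBATCH --gres=gpu:4               # hypothetical GPU request
#SBATCH --mem=64G

# load the GPU-enabled cluster build (P100 example from above)
spack load -r relion@3.0_beta+cuda purpose=cluster cuda_arch=60

# placeholder Relion command; in practice the GUI writes this line for you
srun relion_refine_mpi --o Refine3D/job001/run --i particles.star --gpu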
Launching the Relion GUI
Note for cluster/reserved nodes only: this assumes you are already in an interactive session (as described in the previous section); if not, request an interactive session!
If you haven't already, load relion@3.0_beta with this command:
spack load -r relion@3.0_beta~cuda purpose=cluster
As noted above, this CPU-only installation is all you need to launch the GUI; the GUI just creates jobs and either submits them directly to Slurm or writes out a submission script.
If you are running the GUI on a desktop/workstation, load the appropriate version of Relion, as described in the previous section.
Next, change to the directory where you wish to keep your Relion files for a given project and execute this command:
relion
This should launch the Relion GUI.
Relion generates a lot of subdirectories and metadata; to keep everything organized, we recommend that each Relion project (i.e. analysis workflow for a given set of data) is given its own directory.
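For example (the project path below is purely hypothetical), a typical sequence might look like:
# create a dedicated directory for this project and launch the GUI from it
mkdir -p ~/relion_projects/my_dataset
cd ~/relion_projects/my_dataset
relion &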
Submitting Jobs from the GUI: