General Environment Setup to Work with SCU Resources

Set Up Slurm

In order to use SCU resources, you'll first need to set up your environment to work properly with our implementation of the job scheduler, Slurm. This is described here: Using Slurm

Set Up Spack

Once this is done, you'll need to set up your environment to use Spack, our cluster-wide package manager. This is described here: Spack



Relion-specific Environment Setup

In order to use the Relion graphical user interface (GUI), you'll need to log on to curie, the Slurm submission node (see Using Slurm for more details), and add this to your ~/.bashrc:

Code Block
languagebash
export RELION_QSUB_EXTRA_COUNT=2
export RELION_QSUB_EXTRA1="Number of nodes:"
export RELION_QSUB_EXTRA1_DEFAULT="1"
export RELION_QSUB_EXTRA2="Number of GPUs:"
export RELION_QSUB_EXTRA2_DEFAULT="0"
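
To confirm the variables are picked up, you can run the same exports in your current shell and list them with env (a quick sanity check; the variable names and values are exactly those above):

```shell
# Self-contained check: export the variables (as in ~/.bashrc), then list them
export RELION_QSUB_EXTRA_COUNT=2
export RELION_QSUB_EXTRA1="Number of nodes:"
export RELION_QSUB_EXTRA1_DEFAULT="1"
export RELION_QSUB_EXTRA2="Number of GPUs:"
export RELION_QSUB_EXTRA2_DEFAULT="0"
env | grep '^RELION_QSUB_' | sort
```

On curie, `source ~/.bashrc` after editing has the same effect for your current session.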


After editing your ~/.bashrc, log out of curie.



Launching the Relion GUI

Log on to curie with X11 forwarding enabled, and request an interactive session:

Code Block
ssh -Y pascal # if off campus, use ssh -Y pascal.med.cornell.edu
ssh -Y curie

The -Y flag enables X11 forwarding, which is required to display the GUI.

Info

In addition to pascal, you can also use the other gateway nodes, aphrodite and aristotle.


Info

If you plan on running calculations on a desktop/workstation (i.e. a system not allocated by Slurm), then you'll need to log in there instead (make sure you use -Y in each of your ssh commands).


Cluster-only: to request an interactive session, use this command (for more information, see Using Slurm):

Code Block
srun -n1 --pty --x11 --partition=cryo-cpu --mem=8G bash -l
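
The srun flags are easy to mistype, so it can help to build the command from variables before running it; a small sketch, using the partition and memory values from the command above:

```shell
# Build the interactive-session command from adjustable parameters
PARTITION="cryo-cpu"   # partition from the example above
MEM="8G"               # memory request; adjust to your job's needs
CMD="srun -n1 --pty --x11 --partition=${PARTITION} --mem=${MEM} bash -l"
echo "$CMD"
```

Swap in a different partition or memory request as needed, then run the printed command.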

Relion versions available

To see what's available, use this command (for more information on the spack command, see Spack):

Code Block
languagebash
spack find -l -v relion


Here is the output of the above command, current as of 5/3/2019:

Code Block
==> 14 installed packages.
-- linux-centos7-x86_64 / gcc@4.8.5 -----------------------------
pbsqju2 relion@2.0.3 build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui purpose=cluster
ua3zs52 relion@2.0.3 build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
djy46i6 relion@2.1 build_type=RelWithDebInfo +cluster+cuda cuda_arch=60 ~desktop+double~double-gpu+gui
cchnbyc relion@2.1 build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui
xxisr7j relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=60 +desktop+double~double-gpu+gui
lzd4ktq relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui
v6jckz3 relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
6cpdlsc relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=desktop
ltaf3x6 relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
u6dzm4v relion@3.0_beta build_type=RelWithDebInfo ~cuda cuda_arch= +double~double-gpu+gui purpose=cluster
olcttts relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
f2fjevh relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=desktop
opdhbcs relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
2ilkf4r relion@develop build_type=RelWithDebInfo +cuda+double~double-gpu+gui
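
Since the list is long, standard text tools can narrow it down. A sketch using a few sample lines from the output above (on the cluster you would pipe the live output instead, e.g. spack find -l -v relion | grep 'cuda_arch=70'):

```shell
# Filter sample 'spack find' output for builds targeting V100s (cuda_arch=70)
specs='ltaf3x6 relion@2.1 build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster
olcttts relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=60 +double~double-gpu+gui purpose=cluster
opdhbcs relion@3.0_beta build_type=RelWithDebInfo +cuda cuda_arch=70 +double~double-gpu+gui purpose=cluster'
printf '%s\n' "$specs" | grep 'cuda_arch=70'
```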


There's a lot in the above output, so let's break it down!

First, in the above, we have several production-ready Relion versions:

  • relion@2.0.3
  • relion@2.1
  • relion@3.0_beta


In addition, these versions of Relion can run on the following platforms:

  • purpose=desktop # used on workstations
  • purpose=cluster   # used on our Slurm-allocated cluster


Some of these Relion installations are intended for use on nodes/workstations with GPUs, whereas others are intended for CPU-only nodes/workstations:

  • +cuda # Relion installation that supports GPU use
  • ~cuda # Relion installation that does not support GPU use

Warning

Do NOT try to run a +cuda installation of Relion on a CPU-only node; this will result in errors, as these versions of Relion expect CUDA to be present (which it is not on the CPU-only nodes).


Finally, we have versions of Relion that are specific to different GPU models:

  • cuda_arch=60  # For use on nodes with Nvidia P100s
  • cuda_arch=70  # For use on nodes with Nvidia V100s
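
If you're unsure which GPU model a node has, you can query it and map the name to the matching cuda_arch value; a sketch following the list above (on a GPU node, the model name comes from nvidia-smi --query-gpu=name --format=csv,noheader):

```shell
# Map the GPU model reported by a node to the matching cuda_arch value
gpu="Tesla V100-SXM2-16GB"   # example value; replace with the real nvidia-smi query on the node
case "$gpu" in
  *P100*) arch=60 ;;   # Pascal-generation cards
  *V100*) arch=70 ;;   # Volta-generation cards
  *)      arch=""; echo "unknown GPU model: $gpu" ;;
esac
echo "cuda_arch=${arch}"
```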


Which Relion version should I use?

Which version of Relion you use (2.0.3, 2.1, or 3.0_beta) is more of a scientific question than a technical one. In general, most users run 3.0_beta, unless they have legacy projects that were started with an older version (consult with your PI if you have questions about this).


After the specific version of Relion is selected, selecting the other installation parameters is straightforward:

To run relion@3.0_beta on the cluster (or reserved nodes) with GPUs (P100s):

Code Block
spack load -r relion@3.0_beta+cuda purpose=cluster cuda_arch=60


To run relion@3.0_beta on the cluster (or reserved nodes) with GPUs (V100s):

Code Block
spack load -r relion@3.0_beta+cuda purpose=cluster cuda_arch=70


To run relion@3.0_beta on the cluster (or reserved nodes) on CPU-only nodes (i.e. no GPUs):

Code Block
spack load -r relion@3.0_beta~cuda purpose=cluster


To run relion@3.0_beta on a desktop/workstation with GPUs (P100s):

Code Block
spack load -r relion@3.0_beta +cuda cuda_arch=60 purpose=desktop


Warning

Launch the 


Launching the GUI

After loading the desired Relion installation with spack load, launch the GUI by running the relion command from within your Relion project directory.
