...
Note: The purpose of this documentation is to describe how to run Relion calculations on SCU resources, including some general suggestions for achieving better performance. It is NOT intended to teach you how to run Relion; there is already an excellent Relion tutorial that serves this purpose.
...
General Environment Setup to Work with SCU Resources
...
Info: This is the command you want if you wish to launch the Relion GUI; don't worry that this version doesn't use GPUs. The GUI is simply used for creating the job and either submitting it directly to Slurm or writing out a submission script. This Relion installation is only for running the GUI or for running jobs on CPU-only nodes.
...
Info: Relion generates a lot of subdirectories and metadata; to keep everything organized, we recommend giving each Relion project (i.e. the analysis workflow for a given set of data) its own directory.
...
Submitting Jobs from the GUI:
Should I run jobs locally or submit jobs to the Slurm queue?
Info: If you are RUNNING jobs on a workstation (i.e. actually using your workstation to PERFORM the calculation, not just submitting from it) AND that workstation is not managed by Slurm, just run all jobs locally.
Some jobs should just be run from the interactive session, and NOT submitted to the Slurm queue. These jobs are generally lightweight in terms of computational demand and/or require the Relion GUI. Here are a few examples of jobs that should be run from the GUI (and not submitted to the Slurm queue):
- Import
- Manual picking
- Subset selection
- Join star files
Here are some jobs that should never be run directly (i.e. these jobs should always be submitted to the Slurm queue, unless running on a local workstation):
- Motion correction
- CTF estimation
- Auto-picking
- Particle Sorting
- 2D/3D classification
- 3D initial model
- 3D auto-refine
- 3D multi-body
In general, if the job takes a long time to run or requires a lot of compute resources, it should be submitted to the Slurm queue (or run on a local workstation).
How to run jobs locally?
This is very straightforward. Set up your calculation in the Relion GUI (see the Relion tutorial for calculation-specific settings). Under the "Running" tab for a given calculation, just make sure the option "Submit to queue?" is set to "No". Otherwise, follow the procedure as described in the Relion tutorial.
How to submit jobs to the Slurm queue?
Set up your calculation in the Relion GUI (see the Relion Tutorial for calculation-specific settings).
- Under the "Running" tab for a given calculation, set the option "Submit to queue?" to "Yes".
- You'll need to set "Queue name:" to the Slurm partition you wish to use:
- If you want to only run on CPUs, then set this to "cryo-cpu"
- If you want to run on the shared cryoEM GPUs, then set this to "cryo-gpu"
- If you want to run on a lab-reserved node, then set this to the appropriate partition (e.g. "blanchard_reserve")
- Set "Queue submit command" to "sbatch"
- Set the number of nodes/GPUs to the desired values
Relion requires a set of template Slurm submission scripts that enable it to submit jobs. You'll need to set "Standard submission script" to the path of the appropriate template script. A set of template submission scripts can be found here:
```
/softlib/apps/EL7/slurm_relion_submit_templates   # select the template script that is appropriate for your job
```
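For orientation, a Relion submission template is an ordinary sbatch script in which Relion substitutes placeholder variables at submit time. The sketch below is illustrative only; the resource directives shown are assumptions, and you should use the actual templates from the directory above rather than writing your own.

```shell
#!/bin/bash
# Hypothetical sketch of a Relion Slurm template -- NOT a real SCU template.
# Relion replaces the XXX...XXX placeholders when you click "Run!".
#SBATCH --partition=XXXqueueXXX        # "Queue name:" field (e.g. cryo-gpu)
#SBATCH --ntasks=XXXmpinodesXXX        # number of MPI processes
#SBATCH --cpus-per-task=XXXthreadsXXX  # threads per MPI process
#SBATCH --output=XXXoutfileXXX         # Relion job stdout
#SBATCH --error=XXXerrfileXXX          # Relion job stderr

srun XXXcommandXXX                     # the Relion command assembled by the GUI
```

Because the GUI fills in the placeholders, one template can serve many job types; templates differ mainly in their partition and GPU/CPU resource directives.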
- Give the "Current job" an alias if desired and click "Run!" (this assumes everything else is set). Checking job status (and other Slurm functionality) is described here and here.
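As a quick reference (assuming standard Slurm command-line tools are available on the submission host), job status can also be checked directly from a shell:

```shell
squeue -u $USER    # list your pending and running jobs
sacct -j <jobid>   # accounting/status info for a specific job, including finished ones
scancel <jobid>    # cancel a job if needed
```

Replace `<jobid>` with the job ID that Slurm prints when the job is submitted; Relion also records it in the job's output files.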
...
...