Notice
This guide assumes you have a valid SCU account and CWID.
Access SCU Cluster - BRB
The following gateway nodes can be used to access the SCU Cluster.
scu-login01.med.cornell.edu
scu-login02.med.cornell.edu
These gateway nodes can be reached from any of the SCU gateways or nodes. SCU credentials must be used to access the SCU gateways, while WCMC (CWID) credentials must be used to access the BRB Cluster.
[user@laptop ~]$ ssh scu_user@aphrodite.med.cornell.edu
scu_user@aphrodite's password: Enter SCU password
[scu_user@aphrodite ~]$ ssh CWID@scu-login01.med.cornell.edu
CWID@scu-login01's password: Enter CWID password
Configure SSH Keys
SSH key configuration allows users to traverse the BRB infrastructure without entering passwords at each step. The following will create an SSH keypair and authorize its use within the infrastructure.
ssh-keygen
Accept the default options at each prompt, and an SSH private and public key will be created in your ~/.ssh/ directory. Enter the following commands to authorize its use within the cluster.
cd ~/.ssh/
cat id_rsa.pub >> authorized_keys
While your public key in id_rsa.pub can be shared freely, your private key in id_rsa must be kept secret. If another person gains access to this file, they will be able to impersonate you, thereby gaining access to all of your files.
Never copy or move your private key from your ~/.ssh folder, and never set permissions so that other users can access this file. Permissions on this file should always be "chmod 600," set by ssh-keygen, which means the file is accessible only by the owner. Do not change permissions on the private key.
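If you want to confirm that the permissions are still correct, a quick check such as the following can be used (paths assume the default key location):

# Verify that only the owner can read the private key (should show -rw-------)
ls -l ~/.ssh/id_rsa
# Restore the expected permissions if they have drifted
chmod 600 ~/.ssh/id_rsa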
Additional key configurations can be used to allow quicker access to the infrastructure. View SSH key setup for more information, or contact SCU@med.cornell.edu for assistance.
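As one illustration, an entry in the ~/.ssh/config file on your laptop can chain the SCU gateway and the BRB login node so that a single command reaches the cluster. This is only a sketch; the Host aliases are hypothetical, and the usernames must be replaced with your own SCU account and CWID:

# ~/.ssh/config on your laptop (aliases "aphrodite" and "brb" are examples)
Host aphrodite
    HostName aphrodite.med.cornell.edu
    User scu_user

Host brb
    HostName scu-login01.med.cornell.edu
    User CWID
    ProxyJump aphrodite

With an entry like the above in place, ssh brb connects through the SCU gateway in a single step.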
Access Cluster Nodes
Cluster nodes are managed through Slurm. Running jobs on the login nodes, or by SSHing directly to a compute node, is strictly forbidden.
In-depth SCU documentation on Slurm can be found here: Using Slurm.
Slurm commands, and access to the nodes, are available from scu-login02.med.cornell.edu.
Interactive Session
srun --pty -n1 --mem=8G -p cryo-cpu /bin/bash
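Before or after launching an interactive session, standard Slurm commands can be run from the login node to inspect the partitions and your own jobs, for example:

# List available partitions and their node states
sinfo
# Show your currently queued and running jobs
squeue -u $USER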
For instructions on how to craft a batch submission file or how to access a node with X forwarding support, please view our full documentation here: Using Slurm
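As a rough illustration only (the job name, resources, and commands below are placeholders; consult the Using Slurm documentation for the authoritative format), a batch submission file generally looks like the following:

#!/bin/bash
#SBATCH --job-name=example_job      # placeholder job name
#SBATCH --partition=cryo-cpu        # partition shown in the interactive example above
#SBATCH --ntasks=1
#SBATCH --mem=8G
#SBATCH --time=01:00:00             # adjust to your expected run time

# Commands to run on the allocated node
echo "Running on $(hostname)"

A script like this is submitted with sbatch and can be monitored with squeue -u $USER.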
Athena Notice
The scu-login gateways, as well as the scu-node* nodes, have access to a filesystem named athena; however, this filesystem is separate and distinct from the athena that is accessible from Aphrodite, Pascal, Curie, etc. This was done to allow easier migrations and to preserve paths and filesystem structure from one cluster to the other. Please keep this in mind when managing data between the two clusters.
Software and Programs
Spack
Spack is the SCU package manager and is accessible from all nodes within the SCU infrastructure.
For more information on Spack, view our documentation here: Spack. Note: the “Spack Introduction and Environment Setup” portion is not required on the BRB network; no setup is needed there. Once logged onto a node, all Spack commands and options are available.
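For example, once on a node, common Spack commands such as the following work without any prior setup (the package name here is only illustrative):

# List packages already installed through Spack
spack find
# Load an installed package into your environment (example package name)
spack load gcc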
Modules
Packages that are installed on the SCU network drive, rather than through Spack, will be distributed as modules. Once loaded, a module performs all of the environment setup needed to properly run the program in question.
The following will display all available modules:
[rma3001@scu-node001 ~]$ module avail
--------------------------- /software/apps/modulefiles ---------------------------
pyem  relion/3.1.0/cpu  relion/3.1.0/gpu
Modules within paths other than /software/apps/modulefiles must be loaded through Spack modules for proper setup.
To load a package into your environment for use:
[rma3001@scu-node001 ~]$ module load relion/3.1.0/cpu
# The program is now ready for use
[rma3001@scu-node001 ~]$ relion
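Other standard module commands can be used to inspect or clean up your environment, for example:

# Show the modules currently loaded in your session
module list
# Unload a module when you no longer need it
module unload relion/3.1.0/cpu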
Python Packages
Nodes within the SCU environment support conda and pip installations, which allow users to install packages within their own user space. Certain packages include documentation describing their pip or conda installation steps. Example installations are shown below.
Install Jupyter-notebook within Conda Environment
Conda will create a virtual Python environment for easy, repeatable installations.
[rma3001@scu-node001 ~]$ conda create -n jupyter-notebook
[rma3001@scu-node001 ~]$ conda activate jupyter-notebook
[rma3001@scu-node001 ~]$ conda install -c anaconda jupyter
Once the program is installed, the environment can be re-entered at any time with conda activate jupyter-notebook. View the Conda User Guide for more in-depth tutorials, or contact SCU@med.cornell.edu for support.
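As an illustration, the notebook server can then be started on the node you were allocated and reached from your laptop through an SSH tunnel. The node name and port below are placeholders, and depending on how you reach the login node an additional hop may be required:

# On the compute node: start the server without opening a browser
[rma3001@scu-node001 ~]$ jupyter notebook --no-browser --port=8888
# On your laptop: forward the port through the login node (node name and port are examples)
[user@laptop ~]$ ssh -L 8888:scu-node001:8888 CWID@scu-login01.med.cornell.edu

You can then open http://localhost:8888 in a local browser.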
Install Python Packages Using Pip
Pip will install packages within a user’s local Python directories. The following will install several commonly used Python packages within a user’s home directory:
[rma3001@scu-node001 ~]$ pip3 install --user cython numpy scipy
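A quick way to confirm that the user-level installs are importable is something along these lines:

# Import the packages and report their versions
[rma3001@scu-node001 ~]$ python3 -c "import numpy, scipy; print(numpy.__version__, scipy.__version__)"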
View the Pip Documentation for more information, or contact SCU@med.cornell.edu for support.
Request a Program
If a required program is not installed in Spack or available as a module, contact SCU@med.cornell.edu. The SCU will determine whether the package can be installed in Spack or as a module, or will provide instructions on how to install the program within your user space.
Cryo-EM
Relion
Relion is now managed through modules, with separate installations for the CPU and GPU versions.
To load Relion (version 3.1.0, GPU-optimized), enter module load relion/3.1.0/gpu. View the module instructions above for more information about loading and querying modules.
CryoSparc
CryoSparc will be accessible in the same way as it was previously, except that the hostnames and credentials must be updated. Please contact SCU@med.cornell.edu for more information.