Connection Instructions

SCU - BRB Cluster

Prerequisites

An active WCMC CWID with access to the scu-login nodes is required to access the cryoSPARC BRB instance. More information can be found here.

cryoSPARC access must be configured by SCU admins prior to connection. To request access, email scu@med.cornell.edu; email confirmation from the lab P.I. is required.

Setting up SSH keys is highly recommended for seamless access to the environment.
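
A minimal key-setup sketch, assuming OpenSSH on your local workstation and a placeholder CWID of jdoe123:

    # Generate a key pair on your local workstation (accept the defaults or add a passphrase)
    ssh-keygen -t ed25519

    # Copy the public key to the login node (replace jdoe123 with your CWID)
    ssh-copy-id jdoe123@scu-login02.med.cornell.edu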

Connection - Mac or Linux

  1. Modify the ~/.ssh/config on your local workstation to include the following entries:

    Host *
        StrictHostKeyChecking no
    
    Host scu-login02
        HostName scu-login02.med.cornell.edu
        ServerAliveInterval 60
        TCPKeepAlive yes
  2. Connect to the WCM VPN.

  3. Run the following from your terminal, replacing port with your lab’s assigned port number (see the example after this list):

    ssh scu-login02 -L port:localhost:port
  4. Open http://localhost:port in your local browser and authenticate using the credentials found in the ~/cryo_creds file in your cluster home directory.
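
For example, if your lab’s assigned port were 39000 (a hypothetical value), the tunnel and URL would be:

    # Forward local port 39000 to port 39000 on the login node configured in step 1
    ssh scu-login02 -L 39000:localhost:39000

    # Then open http://localhost:39000 in your browser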

SCU - Weill Greenberg Cluster

Prerequisites

An active SCU account with access to the gateway nodes (pascal, aphrodite) is required to access the cryoSPARC Greenberg instance.

cryoSPARC access must be configured by SCU admins prior to connection. To request access, email scu@med.cornell.edu; email confirmation from the lab P.I. is required.

Setting up SSH keys is highly recommended for seamless access to the environment (see the key-setup sketch under the BRB prerequisites above).

Connection - Mac or Linux

  1. Modify the ~/.ssh/config on your local workstation to include the following entries:

    Host *
        StrictHostKeyChecking no
    
    Host pascal
        HostName pascal.med.cornell.edu
        ServerAliveInterval 60
        TCPKeepAlive yes
    
    Host node164
        HostName node164.panda.pbtech
        ServerAliveInterval 60
        TCPKeepAlive yes
        ProxyCommand ssh -A -W %h:%p pascal.med.cornell.edu
  2. Run the following from your terminal, replacing port with your lab’s assigned port number (see the sketch after this list):

    ssh node164 -L port:localhost:port
  3. Open http://localhost:port in your local browser and authenticate using the credentials found in the ~/cryo_creds file in your cluster home directory.
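
Before opening the tunnel, you can confirm the two-hop ProxyCommand path works; this sketch assumes the config entries from step 1 and a hypothetical port of 39000:

    # Should print node164's hostname if the proxy hop through pascal works
    ssh node164 hostname

    # Open the tunnel (replace 39000 with your lab's assigned port)
    ssh node164 -L 39000:localhost:39000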


cryoSPARC Notices - Things to Know

Process Ownership

All cryoSPARC calculations are mediated by the user account “cryosparc_lab”. This account runs cryoSPARC, manages the cryoSPARC database, submits jobs to Slurm, and performs other tasks. It belongs to the lab’s Linux group, so all files and directories that cryoSPARC interacts with or writes to MUST have Linux group read/write permissions.
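
A sketch of checking and granting the required group permissions, using the example path from the next section and a placeholder lab group xyzlab:

    # Inspect the current group and permissions
    ls -ld /athena/xyzlab/scratch/userA/cryosparc_stuff

    # Grant group read/write recursively ('X' adds execute only where needed, i.e. directories)
    chgrp -R xyzlab /athena/xyzlab/scratch/userA/cryosparc_stuff
    chmod -R g+rwX /athena/xyzlab/scratch/userA/cryosparc_stuff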

Permissions Required

cryoSPARC requires the group read/write permissions described above. As a result, each member of the lab will be able to read/write any file that is readable/writable by the user ‘cryosparc_lab’. This is not necessarily bad (assuming a certain level of trust amongst all lab members), but it has consequences: for example, if UserA is running cryoSPARC calculations in /athena/xyzlab/scratch/userA/cryosparc_stuff, then UserB could delete this entire directory if they wished. One way to limit this exposure is to create a dedicated directory in which to run cryoSPARC calculations (thereby isolating group access). Once the calculations are complete, you could conceivably move the completed results into a more restricted directory (however, read the documentation carefully about migrating projects, as I don’t know how cryoSPARC will respond if additional work is needed). Assuming everyone in the lab is nice and behaves ethically, you don’t have too much to worry about.
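
One way to set up such a dedicated directory (again with the placeholder group xyzlab); the setgid bit keeps new files in the lab group so cryosparc_lab retains access, while mode 2770 blocks everyone outside the group:

    # Create a dedicated cryoSPARC work area under your scratch space
    mkdir -p /athena/xyzlab/scratch/userA/cryosparc_work
    chgrp xyzlab /athena/xyzlab/scratch/userA/cryosparc_work
    chmod 2770 /athena/xyzlab/scratch/userA/cryosparc_work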

CPU vs GPU

For most jobs it’s clear whether they require GPUs (obviously, if the calculation has an input for the number of GPUs available, then it needs GPUs!). Some, however, are ambiguous. If it’s not clear, we recommend assuming the calculation does not require GPUs and submitting to `cryo-cpu`. If the calculation does require GPUs, it will almost immediately error out complaining about `pycuda`; if you see this, just clear the job and resubmit to a GPU queue. If trial-and-error is not your thing, the cryoSPARC documentation describes the requirements for each calculation in detail.


Contact

Please contact scu@med.cornell.edu with any questions or concerns.
