Resource information

Lab workstations: npl.in.hwlab, np-cpu-1, np-gpu-1, np-gpu-2

  • For medium-sized tasks and development
  • VPN access is required to log in to the workstations from outside networks.
  • Your home directory is at /nfs/polizzi/$USER and your username is (first initial) + (last name).
  • You have read access to others' home directories.
  • You have read + write access to the shared directory at /nfs/polizzi/shared/.
  • Tasks can be run directly from the command line. Please be mindful that these are shared machines, so make sure to leave computational resources for others to use!
    • If you are running a task on multiple cores, you can specify how many cores to use; e.g., GNU parallel takes the `-P` flag: `parallel -P 32 'python pdb2fasta.py {} > fastas_protassign/{/}' ::: /nfs/polizzi/npolizzi/Combs2/database/pdb_protassign_2p5_0p3_c/*`
  • The workstations access a shared network filesystem (NFS) mounted under /nfs. These files are backed up and shared across computers, but are comparatively slow to access. If you have I/O-intensive jobs, it is therefore advisable to use a local scratch disk, mounted under /scratch; these disks are not backed up or shared, but they are faster to access (see the sketch after this list).
  • The workstations are managed by the SBGrid folks.
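For I/O-intensive work, a typical workflow is to stage data onto the local scratch disk, run there, and copy results back to NFS. A minimal sketch, with hypothetical usernames, directories, and script names:

```bash
# Log in to a workstation (VPN required from outside networks);
# "jdoe" is a hypothetical (first initial) + (last name) username.
ssh jdoe@np-cpu-1

# Stage inputs from the backed-up (but slower) NFS home onto local scratch.
mkdir -p /scratch/$USER/myjob
cp -r /nfs/polizzi/$USER/myjob/inputs /scratch/$USER/myjob/

# Run the job against the fast local disk.
cd /scratch/$USER/myjob
python run_analysis.py inputs/ --out results/

# Copy results back to NFS; /scratch is not backed up or shared.
cp -r results /nfs/polizzi/$USER/myjob/
```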

o2 cluster: o2.hms.harvard.edu

  • For larger-scale or repetitive tasks
  • Your home directory is at /home/$USER and your username is (first two letters of first name) + (first letter of last name) + (4 numbers).
  • You do not have read access to your labmates' home directories by default!
    • If you want to give your labmates access to your home directory, change its group to polizzi with `chgrp -R polizzi ~`, then grant the group read access with `chmod -R g+rX ~` (the capital X adds execute permission only on directories, which is needed to traverse them).
  • You should have read and write access to the shared lab resources at /n/data1/hms/bcmp/polizzi/lab; if not, email IT and ask to be added to the polizzi user group.
  • When you log in, you are taken to a login node; you are not allowed to run compute-intensive jobs on it. Instead, jobs are submitted through the SLURM scheduler, which schedules and manages jobs for the hundreds or thousands of users of the shared resource (see the sketch below).
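A minimal job script might look like the following sketch. The partition name, time limit, and resource requests here are assumptions; adapt them from the o2 documentation:

```bash
#!/bin/bash
#SBATCH -p short                # partition (queue); assumed name, check the o2 docs
#SBATCH -t 0-02:00              # wall-clock time limit (D-HH:MM)
#SBATCH -c 4                    # number of CPU cores
#SBATCH --mem=8G                # total memory
#SBATCH -o myjob_%j.out         # output log; %j expands to the job ID

# Illustrative payload; replace with your actual command.
python my_analysis.py
```

Save this as e.g. myjob.sh, submit it from a login node with `sbatch myjob.sh`, and monitor it with `squeue -u $USER`.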