You can also use the virtual environment from a Jupyter notebook on the fry cluster. This way, you can request more computational resources (such as multiple CPUs/GPUs or more memory).

To do this, first install JupyterLab in your virtual environment:

{{{
conda activate RNAseq_2024a
conda install jupyterlab
conda deactivate
}}}
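
Before moving on, you can optionally confirm that JupyterLab is now available inside the environment:

{{{
conda activate RNAseq_2024a
jupyter-lab --version
conda deactivate
}}}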

Next, create a SLURM script like the one below -- here we save it as 'jupyterOnCluster.sbatch':

{{{
#!/bin/bash
# Configuration values for a SLURM batch job.
# A single leading hash (#) before the word SBATCH marks a directive, not a comment; two hashes (##) make it a comment.
#SBATCH --job-name=jobName
#SBATCH --nodes=1                # Ensure that all cores are on one machine
#SBATCH --ntasks=1               # Run a single task
#SBATCH --cpus-per-task=8        # Number of cores/threads you wish to request
#SBATCH --time=02:00:00          # Max time the job/jupyter instance needs to run (hh:mm:ss)
#SBATCH --mem=16gb               # Amount of memory you wish to request
#SBATCH --partition=20           # Partition (queue) to use
#SBATCH --output=%x-%j.out       # Name of output file; %x is the job name, %j is the job ID
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your.name@wi.mit.edu

# If using a conda environment
# (if not, delete or comment out the next 3 lines):
# Activate conda itself
eval "$(/nfs/BaRC/USER/conda/bin/conda shell.bash hook)"
# Activate your specific conda environment
conda activate RNAseq_2024a

# If using a python virtual environment
# (if not, delete or comment out the next line):
#source /path/to/your/python/virtualenv/directory/bin/activate

# Workaround for a jupyter bug
unset XDG_RUNTIME_DIR

# Start JupyterLab without opening a browser, listening on all interfaces
# on a random port between 8900 and 10000, with the file browser rooted
# at / and opening in your home directory
jupyter-lab \
    --no-browser \
    --port-retries=0 \
    --ip=0.0.0.0 \
    --port=`shuf -i 8900-10000 -n 1` \
    --notebook-dir=/ \
    --LabApp.default_url="/lab/tree/home/$(whoami)"

# Uncomment (remove the hash at the beginning of the next line) if you want your job output emailed to you:
#/usr/bin/mail -s "$SLURM_JOB_NAME $SLURM_JOB_ID" yourwhiteheadusername@wi.mit.edu < "$SLURM_JOB_NAME-$SLURM_JOB_ID.out"

}}}

Then submit the SLURM job:

{{{
sbatch jupyterOnCluster.sbatch
}}}
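
To check whether the job has started running, you can list your own jobs in the queue (the first column of the squeue output is the job ID):

{{{
squeue -u $(whoami)
}}}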

Once the SLURM job starts running on the cluster, an output file named 'jobName-<jobID>.out' will be created (from the %x-%j pattern in the script).

Open the output file and you will find a URL pointing to the Jupyter notebook. Copy and paste the URL into your web browser, and you will get access to the Jupyter notebook with your virtual environment.
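
If the output file is long, one quick way to pull out the connection URL is to search the file for 'http' (here 12345 stands in for your actual job ID):

{{{
grep http jobName-12345.out
}}}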

Make sure to set a reasonable time limit for your SLURM job, or cancel it after you finish working with it, to avoid occupying computational resources unnecessarily.
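
For example, to cancel a job once you are done with it (again using 12345 as a placeholder job ID):

{{{
scancel 12345
}}}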

For IT's instructions on this topic, see: https://docs.google.com/document/d/1eYGVn5M402n2b9pueWdHoeLx84Ue-IGQG8t6e2QK7To