Nextflow manages each process as a separate job that is submitted to the cluster with the sbatch command. Jobs can be distributed across multiple nodes, depending on the computing resources you request.

The pipeline must be launched from a node where the sbatch command is available, which is typically the cluster login node.

To enable the SLURM executor, set process.executor = 'slurm' in the nextflow.config file.

SLURM partitions can be specified with the queue directive.

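For instance, a minimal nextflow.config that routes every process to SLURM might look like the sketch below (the partition name 'normal' is a placeholder; substitute a partition that actually exists on your cluster):

{{{
// Minimal nextflow.config sketch for the SLURM executor.
// 'normal' is a placeholder partition name, not one from this cluster.
process {
    executor = 'slurm'
    queue    = 'normal'
}
}}}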
To submit an nf-core pipeline to the SLURM cluster, you can provide a configuration file like the one below (using the nf-core/cutandrun pipeline as an example; save it as 'cutandrun.config'):

{{{
executor {
    // Limit how many jobs Nextflow keeps queued in SLURM at once.
    // (queueSize belongs in the executor scope, not the process scope.)
    queueSize = 10
}

process {
    executor = 'slurm'
    queue    = '20'
    memory   = '200 GB'
    cpus     = 36

    withName: 'NFCORE_CUTANDRUN:CUTANDRUN:DEDUPLICATE_PICARD:BAM_SORT_STATS_SAMTOOLS:SAMTOOLS_SORT' {
        cpus   = { 6 * task.attempt }
        memory = { 15.GB * task.attempt }
    }
    withName: 'NFCORE_CUTANDRUN:CUTANDRUN:PREPARE_PEAKCALLING:BEDTOOLS_SORT' {
        cpus   = { 1 * task.attempt }
        memory = { 12.GB * task.attempt }
    }
    withName: 'NFCORE_CUTANDRUN:CUTANDRUN:SAMTOOLS_CUSTOMVIEW' {
        cpus   = { 2 * task.attempt }
        memory = { 8.GB * task.attempt }
    }
    withName: 'NFCORE_CUTANDRUN:CUTANDRUN:FRAG_LEN_HIST' {
        cpus   = { 4 * task.attempt }
        memory = { 12.GB * task.attempt }
    }
    withName: 'NFCORE_CUTANDRUN:CUTANDRUN:DEEPTOOLS_PLOTHEATMAP_GENE_ALL' {
        cpus   = { 4 * task.attempt }
        memory = { 32.GB * task.attempt }
    }
}
}}}

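Note that the task.attempt multipliers above only grow when a failed task is retried; nf-core pipelines already enable retries for common out-of-memory exit codes in their base configuration. If you adapt this pattern for a pipeline that does not, you would need to enable retries yourself, along these lines (a sketch with a hypothetical process name, not part of this setup):

{{{
process {
    // Retry failed tasks (so task.attempt becomes 2, 3, ...) up to maxRetries times
    errorStrategy = 'retry'
    maxRetries    = 2

    withName: 'SOME_PROCESS' {            // hypothetical process name
        memory = { 8.GB * task.attempt }  // 8 GB on attempt 1, 16 GB on attempt 2, ...
    }
}
}}}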
Then submit the pipeline as a SLURM job with a command like the one below:

{{{
sbatch --partition=20 --job-name=NextF --output=NextF-%j.out --mem=200gb --nodes=1 --ntasks=1 --cpus-per-task=36 --wrap \
"/nfs/BaRC_Public/apps/nextflow/nextflow run nf-core/cutandrun -profile singularity --normalisation_binsize 1 --input samplesheet.csv -c cutandrun.config --normalisation_mode CPM \
--peakcaller 'MACS2' --replicate_threshold 2 --end_to_end FALSE --multiqc_title 'multiQCReport' --skip_removeduplicates true \
--skip_preseq false --skip_dt_qc false --skip_multiqc false --skip_reporting false --dump_scale_factors true --email 'username@wi.mit.edu' --genome GRCh38 \
--extend_fragments false --macs2_qvalue 0.01 --outdir ./nextFlow_CUTTAG"
}}}

Reference link:
https://www.nextflow.io/docs/latest/executor.html