Single cell RNA-Seq to quantify gene levels and assay for differential expression
Create a matrix of gene counts by cells
- For 10x Genomics experiments, we use Cell Ranger to generate this counts matrix.
- The main command is cellranger count, which requires a reference transcriptome indexed specifically for cellranger.
- Pre-built reference transcriptomes are available from 10x Genomics. Several of them are available at Whitehead on tak under /nfs/genomes/[ASSEMBLY]/10x where ASSEMBLY is specific to our nomenclature. Note that only certain gene types are included in these pre-built references.
- Custom reference transcriptomes can be created with cellranger commands:
- Filter the gtf to include only a subset of the annotated gene biotypes, for example,
bsub cellranger mkgtf Homo_sapiens.GRCh38.93.gtf Homo_sapiens.GRCh38.93.filtered.gtf --attribute=gene_biotype:protein_coding
- Create the cellranger index using a command such as
bsub cellranger mkref --genome=MyGenome --fasta=genome.fa --genes=Genes.filtered.gtf --ref-version=1.0
- Optional: How to create a STAR index with parameters different from the defaults
- Run the "cellranger mkref" as specified above.
- Look inside the "mkref" output folder for the "star" folder and the "genomeParameters.txt" file with the STAR command
- Rerun the "STAR --runMode genomeGenerate" command with the new parameters and use the output from this step to replace the original "star" folder inside the cellranger mkref output.
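The rebuild described above could look like the sketch below. The paths follow the standard "cellranger mkref" output layout, but the exact command and parameter values should be copied from your own genomeParameters.txt; sjdbOverhang is used here only as an example of a changed parameter.

```shell
# Hypothetical sketch: regenerate the STAR index inside a cellranger mkref
# output directory (here "MyGenome") with one changed parameter.
cd MyGenome
mv star star.orig        # keep the original index
mkdir star
# Re-run the command recorded in star.orig/genomeParameters.txt, editing only
# the parameters you need to change (sjdbOverhang is just an example).
bsub STAR --runMode genomeGenerate \
    --genomeDir star \
    --genomeFastaFiles fasta/genome.fa \
    --sjdbGTFfile genes/genes.gtf \
    --sjdbOverhang 100
```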
- Run the actual cellranger count command using syntax like
bsub cellranger count --id=ID --fastqs=PATH --transcriptome=DIR --sample=SAMPLE_LIST --project=PROJECT
- The output of 'cellranger count' includes
- An indexed BAM file of all mapped reads (possorted_genome_bam.bam)
- A Loupe Browser visualization and analysis file (cloupe.cloupe)
- The quality control summary is "web_summary.html" in the 'outs' folder; it reports important quality metrics and graphs such as Estimated Number of Cells, Mean Reads per Cell, Sequencing Saturation, etc.
- The "matrix" output files are not in the usual matrix structure. To create a standard 2-dimensional matrix, one can use R commands such as
library(monocle)
library(cellrangerRkit)
cellranger_data_path = "/path/to/dir/with/outs/dir"
crm = load_cellranger_matrix(cellranger_data_path)
crm.matrix = as.matrix(exprs(crm))
write.table(crm.matrix, "My.cellranger.matrix.txt", sep="\t", quote=F)
Run quality control and filter cells
We typically use the Seurat R package for these steps. The commands below are for Seurat 3.
- Start out by loading the counts matrix from cellranger:
library("Seurat")
message("Loaded Seurat version", packageDescription("Seurat")$Version)
# Load the barcodes*, features*, and matrix* files in your 10x Genomics directory
counts.data <- Read10X(data.dir = input.counts.filename)
- This is how you would load the counts from a file with gene counts:
library("Seurat")
message("Loaded Seurat version", packageDescription("Seurat")$Version)
counts.data <- read.table(file = paste0("./exp1_forSeurat.txt"))
- Example of input data if starting with gene counts (only a few samples):
                         E25-35_A4 E25-35_A5 E25-35_A6 E25-35_B4 E25-35_B5 E25-35_B6 E25-35_C4 E25-35_C5 E25-35_C6
ENSMUSG00000064372_mt-Tp 7 10 7 10 3 17 13 4
ENSMUSG00000064371_mt-Tt 0 0 0 1 0 5 0 0
- Make the Seurat object and calculate the percentage of mitochondrial reads
seuratObject <- CreateSeuratObject(counts = counts.data, project = "ProjectName")
seuratObject[["percent.mt"]] <- PercentageFeatureSet(object = seuratObject, pattern = "^mt-")
Filter cells with high % reads mapping to mitochondrial transcripts and with low number of genes detected
These cutoffs are specific for each experiment
MIN.NUM.GENES = 200
MAX.NUM.GENES = 8000
MAX.PERCENT.MITO = 20
all_Filt <- subset(x = seuratObject, subset = nFeature_RNA > MIN.NUM.GENES & nFeature_RNA < MAX.NUM.GENES & percent.mt < MAX.PERCENT.MITO)
Normalize data
all_Filt <- NormalizeData(object = all_Filt, normalization.method = "LogNormalize", scale.factor = 10000)
Identify highly variable features
num.variable.features.to.find = 2000
all_Filt <- FindVariableFeatures(object = all_Filt, selection.method = "vst", nfeatures = num.variable.features.to.find)
Scale data
all.genes <- rownames(x = all_Filt)
all_Filt <- ScaleData(object = all_Filt, features = all.genes)
Perform and visualize dimensional analysis
Perform principal components analysis
all_Filt <- RunPCA(object = all_Filt, features = VariableFeatures(object = all_Filt))
pdf("./PCAPlot.pdf", w=11, h=8.5)
DimPlot(object = all_Filt, reduction = "pca")
DimPlot(object = all_Filt, dims = c(3, 4), reduction = "pca")
dev.off()
pdf("ElbowPlot.pdf", w=11, h=8.5)
ElbowPlot(object = all_Filt)
dev.off()
Based on the elbow plot, decide how many components to use for UMAP, tSNE, and Louvain clustering. Run non-linear dimensional reduction (UMAP/tSNE)
all_Filt <- RunUMAP(object = all_Filt, dims = 1:20)
all_Filt <- RunTSNE(object = all_Filt, dims = 1:20)
pdf("./UMAP_colorByExp.pdf", w=11, h=8.5)
DimPlot(object = all_Filt, reduction = "umap")
dev.off()
pdf("./TSNE_colorByExp.pdf", w=11, h=8.5)
TSNEPlot(object = all_Filt)
dev.off()
Partition cells into clusters
Run Louvain clustering at several resolutions, then decide which one to follow up on for further analysis. The resolution chosen depends on the granularity we want to work with and the cell heterogeneity.
all_Filt <- FindNeighbors(object = all_Filt, dims = 1:20)
all_Filt <- FindClusters(object = all_Filt, resolution = 0.5)
pdf("./UMAP_colorByCluster_Res0.5.pdf", w=11, h=8.5)
UMAPPlot(object = all_Filt, label = TRUE)
dev.off()
all_Filt <- FindClusters(object = all_Filt, resolution = 0.4)
pdf("./UMAP_colorByCluster_Res0.4.pdf", w=11, h=8.5)
UMAPPlot(object = all_Filt, label = TRUE)
dev.off()
all_Filt <- FindClusters(object = all_Filt, resolution = 0.3)
pdf("./UMAP_colorByCluster_Res0.3.pdf", w=11, h=8.5)
UMAPPlot(object = all_Filt, label = TRUE)
dev.off()
The clusterings at different resolutions are stored in all_Filt$RNA_snn_res.0.3, all_Filt$RNA_snn_res.0.4, and all_Filt$RNA_snn_res.0.5
Identify genes that are differentially expressed between samples or clusters
Years of research have led to effective algorithms to quantify differential expression between RNA-seq samples that have been assayed genome-wide. Single-cell expression profiles, however, typically assay only a small fraction of all genes, and this property greatly complicates differential expression analysis. Two general approaches exist for differential expression:
- consider each cell as a sample
- aggregate counts across all cells in a group/cluster, and treat them as one sample
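The second (pseudobulk) approach can be sketched as follows, assuming a filtered Seurat v3 object named all_Filt whose active identities are the cluster labels; the object and file names here are assumptions.

```r
# Aggregate raw counts across all cells in each cluster ("pseudobulk").
library(Seurat)
library(Matrix)
counts <- GetAssayData(all_Filt, assay = "RNA", slot = "counts")
pseudobulk <- sapply(levels(Idents(all_Filt)), function(cl) {
    Matrix::rowSums(counts[, WhichCells(all_Filt, idents = cl), drop = FALSE])
})
# The resulting genes-by-clusters matrix can then be analyzed with bulk
# RNA-seq tools such as DESeq2 or edgeR.
write.table(pseudobulk, "pseudobulk.counts.txt", sep = "\t", quote = FALSE)
```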
Seurat has two functions, "FindAllMarkers" and "FindMarkers", that work well as long as the fold-change and percentage-of-cells-expressing thresholds are not too relaxed. We recommend logfc.threshold = 0.7 and min.pct = 0.25. By default these functions add one pseudocount ("pseudocount.use = 1") to avoid dividing by zero. The value of pseudocount.use has a strong effect on the logFC output, especially when there are few cells in the groups compared.
- example command to compare each cluster to all other clusters:
MIN_LOGFOLD_CHANGE = 0.7
MIN_PCT_CELLS_EXPR_GENE = .25
all.markers.pos.wilcox = FindAllMarkers(all_Filt, min.pct = MIN_PCT_CELLS_EXPR_GENE, logfc.threshold = MIN_LOGFOLD_CHANGE, only.pos = TRUE, test.use = "wilcox")
- example command to compare 2 or more clusters or cell identities (i.e. control and treatment)
markers1_versus_18.pos.wilcox = FindMarkers(all_Filt, min.pct = MIN_PCT_CELLS_EXPR_GENE, logfc.threshold = MIN_LOGFOLD_CHANGE, only.pos = TRUE, test.use = "wilcox", ident.1 = "1", ident.2 = "18")
Add biological annotations to cells or cell clusters
Drawing biological conclusions from a single-cell experiment usually requires that one classify cells (or at least cell clusters) by type. Traditionally this is a time-consuming process of exploring marker genes and manually assigning cell type to each numbered cluster. Given that a number of public scRNA-seq experiments already have these annotations, one can leverage automated software with these ideally "gold standard" datasets to classify current experiments, either via expression profiles or marker genes.
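One example of such automated classification is SingleR, which transfers labels from an annotated reference by correlating expression profiles. This is a minimal sketch; the reference dataset and the matrix name "expr" are assumptions, not recommendations.

```r
# Sketch: classify cells by correlating their profiles with a labeled reference.
library(SingleR)
library(celldex)
ref <- celldex::HumanPrimaryCellAtlasData()
# expr: a log-normalized genes-by-cells matrix, e.g.
# GetAssayData(all_Filt, slot = "data")
pred <- SingleR(test = expr, ref = ref, labels = ref$label.main)
table(pred$labels)   # cell counts per predicted type
```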
Perform trajectory analysis
This step is relevant for projects that include cells at different stages of a developmental process or another change associated with a time course. Specific methods/algorithms for dimensional reduction are available to do this. Most of the methods have some concept of pseudotime, a metric that one expects to be correlated with actual time; since the two are not identical, interpretation needs to be performed with caution. Diffusion maps work well for this step and have been implemented in R (https://bioconductor.org/packages/release/bioc/html/destiny.html) and Python (https://scanpy.readthedocs.io/en/stable/api/scanpy.tl.diffmap.html).
Example R code to make diffusion maps with destiny
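A minimal sketch with destiny, assuming expr is a cells-by-genes matrix of normalized expression values (the matrix name and file names are assumptions):

```r
library(destiny)
# expr: cells-by-genes matrix of normalized expression, e.g.
# t(as.matrix(GetAssayData(all_Filt, slot = "data")))
dm <- DiffusionMap(expr)
# Plot the first two diffusion components.
pdf("DiffusionMap.pdf", w = 11, h = 8.5)
plot(eigenvectors(dm)[, 1], eigenvectors(dm)[, 2], xlab = "DC1", ylab = "DC2")
dev.off()
# Diffusion pseudotime along the map.
dpt <- DPT(dm)
```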
A comprehensive comparison among trajectory methods has been performed using dynverse. You can select a method based on the expected topology on their website.
Here is sample R code for Slingshot, which is the top-ranked method for bifurcating topologies.
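A minimal Slingshot sketch, assuming a SingleCellExperiment sce with cluster labels in colData(sce)$cluster and a UMAP embedding in reducedDims(sce); these names are assumptions.

```r
library(slingshot)
# Fit lineages and principal curves through the clusters in UMAP space.
sce <- slingshot(sce, clusterLabels = "cluster", reducedDim = "UMAP")
# One pseudotime column per inferred lineage (two for a bifurcation).
pt <- slingPseudotime(sce)
head(pt)
```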
Perform trajectory-based differential expression analysis
Example R code to run trajectory-based differential expression analysis with tradeSeq
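A minimal tradeSeq sketch, assuming a SingleCellExperiment sce that already contains Slingshot results (object names and nknots are assumptions):

```r
library(SingleCellExperiment)
library(slingshot)
library(tradeSeq)
# Fit a negative binomial GAM per gene along the inferred lineages.
sce <- fitGAM(counts = counts(sce), sds = SlingshotDataSet(sce), nknots = 6)
# Test association of each gene's expression with pseudotime.
assoRes <- associationTest(sce)
# Compare expression between the start and the end of each lineage.
sveRes <- startVsEndTest(sce)
```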
Combine multiple scRNA-seq datasets
Many experiments are especially informative when compared to other experiments, either performed by the same or different laboratories. This is challenging, however, especially when the different experiments profile different types of cells. In these cases, biological and technical differences are confounded, and one needs to make thoughtful assumptions about how to perform batch correction and achieve "success" during dataset integration.
As of 2020 there are more than a dozen algorithms available for integrating single cell RNA-seq data-sets. Three such methods are canonical correlation analysis (implemented in Seurat), iterative linear correction based on soft clustering (implemented in Harmony) and integrative nonnegative matrix factorization (implemented in LIGER). Commands for using each of these methods from within a Seurat workflow are given below.
- Using CCA in Seurat (please see T. Stuart et al. “Comprehensive Integration of Single-Cell Data”, Cell 177, 1888-1902 (2019), the associated Seurat v.3 vignette and the documentation for the FindIntegrationAnchors function):
library(Seurat)
# Merge two or more Seurat objects, objA and objB, from different batches.
all <- merge(x = objA, y = objB, add.cell.ids = c("A","B"))
# Split and re-integrate the merged object according to the batch slot.
s3.list <- SplitObject(all, split.by = "batch")
# This loop normalizes each experiment separately first.
for (i in 1:length(s3.list)) {
    s3.list[[i]] <- NormalizeData(s3.list[[i]], verbose = FALSE)
    s3.list[[i]] <- FindVariableFeatures(s3.list[[i]], selection.method = "vst", nfeatures = 2000, verbose = FALSE)
}
# Find so-called anchors and carry out the integration.
s3.anchors <- FindIntegrationAnchors(object.list = s3.list)
s3.integrated <- IntegrateData(anchorset = s3.anchors)
DefaultAssay(s3.integrated) <- "integrated"
- Using Harmony from within Seurat (please see I. Korsunsky et al. “Fast, sensitive and accurate integration of single-cell data with Harmony”, Nature Methods 16, 1289-1296 (2019), the details of the Harmony algorithm linked below and the documentation for the RunHarmony function):
library(Seurat)
library(harmony)
# Merge two or more Seurat objects, objA and objB, from different batches.
all <- merge(x = objA, y = objB, add.cell.ids = c("A","B"))
# In anticipation of using Harmony to integrate data-sets below, first use Seurat
# to run PCA on the un-corrected data.
all <- NormalizeData(all, normalization.method = "LogNormalize", scale.factor = 10000)
all <- FindVariableFeatures(all, selection.method = "vst", nfeatures = 2000)
all <- ScaleData(all, features = rownames(all))
all <- RunPCA(all, features = VariableFeatures(object = all))
# Do the integration using Harmony, indexing samples by the batch slot:
all <- RunHarmony(all, "batch")
# When generating UMAP or another embedding, be sure to use the integrated "harmony"
# reduction (choose the number of dimensions from your elbow plot; 1:20 is only an example).
all <- RunUMAP(all, reduction = "harmony", dims = 1:20)
- Using LIGER (v 0.4.2.9000) from within Seurat (please see J.D. Welch et al. “Single-Cell Multi-omic Integration Compares and Contrasts Features of Brain Cell Identity”, Cell 177, 1873–1887 (2019), the LIGER documentation/vignettes, and the documentation for the RunOptimizeALS and RunQuantileAlignSNF functions):
library(Seurat)
library(SeuratWrappers)
library(liger)
# Merge two or more Seurat objects, objA and objB, from different batches.
all <- merge(x = objA, y = objB, add.cell.ids = c("A","B"))
# In anticipation of using LIGER to integrate data-sets below, first use Seurat
# to scale the data without centering.
all <- NormalizeData(all, normalization.method = "LogNormalize", scale.factor = 10000)
all <- FindVariableFeatures(all, selection.method = "vst", nfeatures = 2000)
all <- ScaleData(all, do.center = FALSE, split.by = "batch")
# Do the integration using LIGER, indexing samples by the batch slot:
all <- RunOptimizeALS(all, split.by = "batch")
all <- RunQuantileAlignSNF(all, split.by = "batch")
# When generating UMAP or another embedding, be sure to use the reduction from the
# integrated nonnegative factorization ("iNMF").
all <- RunUMAP(all, dims = 1:ncol(all[["iNMF"]]), reduction = "iNMF")
# The commands above use default values for the rank, k, of the NMF and the homogeneity
# parameter, lambda. Generally, the rank should be chosen to be large enough to capture
# structure in the data matrix, yet small enough that the factorization gives reproducible
# interpretations under multiple non-unique solutions. Increasing the homogeneity parameter
# places greater emphasis on the common factors in the integration. The heuristic functions
# suggestK and suggestLambda can be used to guide the data-set-specific adjustment of these
# two parameters.
Export expression and dimensional analysis data for interactive viewing
We prefer using UCSC's Cell Browser environment for this task.
- Prerequisites. To make the most of this interactive viewing tool,
- Run dimensional reduction (such as PCA, tSNE, UMAP).
- Cluster/partition the cells (such as with Seurat's FindClusters()).
- Identify cluster-specific marker genes (such as with Seurat's FindAllMarkers()) and assemble/print information about them with commands such as
all.markers.forCB = cbind(as.numeric(all.markers$cluster), all.markers$gene, all.markers$p_val_adj, all.markers$avg_logFC, all.markers$pct.1, all.markers$pct.2)
write.table(all.markers.forCB, file="all.markers.exported.txt", quote = FALSE, sep = "\t", row.names=F)
- Add info/links about the marker genes with the CellBrowser command
cbMarkerAnnotate all.markers.exported.txt markers.txt
- Export the key data from the Seurat object:
ExportToCellbrowser(seurat, dir=export.dir, dataset.name=dataset.name, markers.file=markers.file, reductions=c("pca", "tsne", "umap"))
- Run Cell Browser's cbBuild to create the web-viewable directory of files.
- Move the cbBuild output to a web server, which creates a page that looks something like https://cells.ucsc.edu/
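The build step above might look like the following sketch; the output path and port number are hypothetical.

```shell
# Run from the directory written by ExportToCellbrowser (it contains the
# cellbrowser.conf that cbBuild reads). -o sets the output directory; -p also
# serves the result on a local port for testing before moving it to a web server.
cbBuild -o ~/public_html/cellbrowser -p 8899
```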
Links to recommended scRNA-seq analysis tutorials and resources
- Seurat vignettes and guided analysis
- Analysis of single cell RNA-seq data course, Hemberg Group.
- Analysis of single cell RNA-seq data workshop, Broad Institute
- 2017/2018 Single Cell RNA Sequencing Analysis Workshop at UCD, UCB, UCSF
- Single cell RNA sequencing, NYU.
- Awesome-single-cell, Sean Davis
- Detailed Harmony walkthrough
- LIGER instructions and vignettes