diff --git a/course_files/seurat.Rmd b/course_files/seurat.Rmd index c8eff66a5..34ce803a1 100644 --- a/course_files/seurat.Rmd +++ b/course_files/seurat.Rmd @@ -18,10 +18,13 @@ set.seed(1234567) __Note__ We recommend using `Seurat` for datasets with more than $5000$ cells. For smaller datasets a good alternative is `SC3`. __Note__ In this chapter we use an exact copy of [this tutorial](https://satijalab.org/seurat/pbmc3k_tutorial.html). + + + ## Setup the Seurat Object -We will be analyzing the a dataset of Peripheral Blood Mononuclear Cells (PBMC) freely available from 10X Genomics. There are 2,700 single cells that were sequenced on the Illumina NextSeq 500. The raw data can be found [here](https://s3-us-west-2.amazonaws.com/10x.files/samples/cell/pbmc3k/pbmc3k_filtered_gene_bc_matrices.tar.gz). +We will be analyzing the dataset of Peripheral Blood Mononuclear Cells (PBMC) freely available from 10X Genomics. There are 2,700 single cells that were sequenced on the Illumina NextSeq 500. The raw data can be found [here](https://s3-us-west-2.amazonaws.com/10x.files/samples/cell/pbmc3k/pbmc3k_filtered_gene_bc_matrices.tar.gz). We start by reading in the data. All features in Seurat have been configured to work with sparse matrices, which results in significant memory and speed savings for Drop-seq/inDrop/10x data. @@ -52,7 +55,6 @@ pbmc <- CreateSeuratObject(counts = pbmc.data, min.cells = 3, min.features = 20 The steps below encompass the standard pre-processing workflow for scRNA-seq data in Seurat. These represent the creation of a Seurat object, the selection and filtration of cells based on QC metrics, data normalization and scaling, and the detection of highly variable genes. - ## QC and selecting cells for further analysis While `CreateSeuratObject` imposes a basic minimum gene-cutoff, you may want to filter out cells at this stage based on technical or biological parameters.
Seurat allows you to easily explore QC metrics and filter cells based on any user-defined criteria. In the example below, we visualize gene and molecule counts, plot their relationship, and exclude cells with a clear outlier number of genes detected as potential multiplets. Of course, this is not a guaranteed method to exclude cell doublets, but we include this as an example of filtering user-defined outlier cells. We also filter cells based on the percentage of mitochondrial genes present. @@ -72,11 +74,15 @@ percent.mito <- Matrix::colSums(pbmc@assays[["RNA"]][mito.genes, ])/Matrix::colS # AddMetaData adds columns to object@meta.data, and is a great place to # stash QC stats -#Seurat v2 function, but shows compatibility in Seurat v3 +# Seurat v2 function, but shows compatibility in Seurat v3 pbmc <- AddMetaData(object = pbmc, metadata = percent.mito, col.name = "percent.mito") -#in case the above function does not work simply do: +# in case the above function does not work simply do: pbmc$percent.mito <- percent.mito +# With v3, the [[ operator can add columns to object metadata. This is a great place to stash QC stats +pbmc[["percent.mt"]] <- PercentageFeatureSet(pbmc, pattern = "^MT-") + +# Visualize QC metrics as a violin plot VlnPlot(object = pbmc, features = c("nFeature_RNA", "nCount_RNA", "percent.mito"), ncol = 3) ``` @@ -93,8 +99,8 @@ FeatureScatter(object = pbmc, feature1 = "nCount_RNA", feature2 = "nFeature_RNA" ```{r} # We filter out cells that have unique gene counts (nFeature_RNA) over 2,500 or less than -# 200 Note that > and < are used to define a'gate'. -#-Inf and Inf should be used if you don't want a lower or upper threshold. +# 200. Note that > and < are used to define a 'gate'. +# -Inf and Inf should be used if you don't want a lower or upper threshold.
pbmc <- subset(x = pbmc, subset = nFeature_RNA > 200 & nFeature_RNA < 2500 & percent.mito > -Inf & percent.mito < 0.05 ) ``` @@ -108,15 +114,25 @@ pbmc <- NormalizeData(object = pbmc, normalization.method = "LogNormalize", scal ## Detection of variable genes across the single cells -Seurat calculates highly variable genes and focuses on these for downstream analysis. `FindVariableGenes` calculates the average expression and dispersion for each gene, places these genes into bins, and then calculates a z-score for dispersion within each bin. This helps control for the relationship between variability and average expression. This function is unchanged from (Macosko et al.), but new methods for variable gene expression identification are coming soon. We suggest that users set these parameters to mark visual outliers on the dispersion plot, but the exact parameter settings may vary based on the data type, heterogeneity in the sample, and normalization strategy. The parameters here identify ~2,000 variable genes, and represent typical parameter settings for UMI data that is normalized to a total of 1e4 molecules. +Seurat detects highly variable genes and focuses on these for downstream analysis. `FindVariableGenes` calculates the average expression and dispersion for each gene, places these genes into bins, and then calculates a z-score for dispersion within each bin. This helps control for the relationship between variability and average expression. This function is unchanged from (Macosko et al.), but new methods for variable gene expression identification are coming soon. We suggest that users set these parameters to mark visual outliers on the dispersion plot, but the exact parameter settings may vary based on the data type, heterogeneity in the sample, and normalization strategy. The parameters here identify ~2,000 variable genes, and represent typical parameter settings for UMI data that is normalized to a total of 1e4 molecules. 
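The bin-and-z-score idea described above can be sketched on a toy matrix. This is only an illustration of the principle, not Seurat's actual implementation; the simulated `counts` matrix and the number of bins are invented for the example:

```{r}
# Toy illustration of dispersion-based variable-gene scoring (not Seurat's code)
set.seed(1)
counts <- matrix(rpois(2000, lambda = 2), nrow = 100)  # 100 "genes" x 20 "cells"
gene.mean <- rowMeans(counts)
gene.disp <- apply(counts, 1, var) / gene.mean         # variance-to-mean dispersion
bins <- cut(gene.mean, breaks = 5)                     # bin genes by average expression
# z-score the dispersion within each mean-expression bin
disp.z <- ave(gene.disp, bins, FUN = function(x) (x - mean(x)) / sd(x))
head(order(disp.z, decreasing = TRUE), 10)             # indices of top-scoring "genes"
```

Genes with the highest within-bin z-scores are unusually variable for their expression level, which is the property the variable-gene selection is after.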
```{r} pbmc <- FindVariableFeatures(object = pbmc, mean.function = ExpMean, dispersion.function = LogVMR, x.low.cutoff = 0.0125, x.high.cutoff = 3, y.cutoff = 0.5, nfeatures = 2000) ``` -To view the output of the FindVariableFeatures output we use this function. The genes appear not to be stored in the object, but can be accessed this way. +To view the output of `FindVariableFeatures` we use the `HVFInfo()` function. The genes appear not to be stored in the object, but can be accessed this way. + ```{r} head(x = HVFInfo(object = pbmc)) + + # or with VariableFeatures() + # Identify the 10 most highly variable genes +top10 <- head(VariableFeatures(pbmc), 10) + + # plot variable features with and without labels +plot1 <- VariableFeaturePlot(pbmc) +plot2 <- LabelPoints(plot = plot1, points = top10, repel = TRUE) +CombinePlots(plots = list(plot1, plot2), ncol = 1) ``` ## Scaling the data and removing unwanted sources of variation @@ -125,7 +141,7 @@ Your single cell dataset likely contains ‘uninteresting’ sources of variatio We can regress out cell-cell variation in gene expression driven by batch (if applicable), cell alignment rate (as provided by Drop-seq tools for Drop-seq data), the number of detected molecules, and mitochondrial gene expression. For cycling cells, we can also learn a ‘cell-cycle’ score (see example [here](http://satijalab.org/seurat/cell_cycle_vignette.html)) and regress this out as well. In this simple example here for post-mitotic blood cells, we regress on the number of detected molecules per cell as well as the percentage mitochondrial gene content. -Seurat v2.0 implements this regression as part of the data scaling process. Therefore, the `RegressOut` function has been deprecated, and replaced with the vars.to.regress argument in `ScaleData`. +Seurat v2.0 and v3.0 implement this regression as part of the data scaling process. Therefore, the `RegressOut` function has been deprecated, and replaced with the vars.to.regress argument in `ScaleData`.
```{r} pbmc <- ScaleData(object = pbmc, vars.to.regress = c("nCount_RNA", "percent.mito")) @@ -133,18 +149,25 @@ pbmc <- ScaleData(object = pbmc, vars.to.regress = c("nCount_RNA", "percent.mit ## Perform linear dimensional reduction ---> refered to Seurat v2: Next we perform PCA on the scaled data. By default, the genes in `object@var.genes` are used as input, but can be defined using pc.genes. We have typically found that running dimensionality reduction on highly variable genes can improve performance. However, with UMI data - particularly after regressing out technical variables, we often see that PCA returns similar (albeit slower) results when run on much larger subsets of genes, including the whole transcriptome. +Next we perform PCA on the scaled data. By default, the genes in `object@var.genes` are used as input, but can be defined using pc.genes. We have typically found that running dimensionality reduction on highly variable genes can improve performance. However, with UMI data - particularly after regressing out technical variables, we often see that PCA returns similar (albeit slower) results when run on much larger subsets of genes, including the whole transcriptome. With Seurat v3 (latest), variable features can be accessed with `VariableFeatures()` and fed to `RunPCA()`. ---> refered to Seurat v3 (latest): high variable features are accessed through the function HVFInfo(object). Despite RunPCA has a features argument where to specify the features to compute PCA on, I've been modifying its values and the output PCA graph has always the same dimensions, indicating that the provided genes in the features argument are not exactly the ones used to compute PCA. Wether the function gets the HVG directly or does not take them into account, I don't know.
+ ```{r} pbmc <- RunPCA(object = pbmc, npcs = 30, verbose = FALSE) +# pbmc2 <- RunPCA(object = pbmc, npcs = 30, verbose = FALSE, features = VariableFeatures(object = pbmc)[1:100]) ``` ---> refered to Seurat v2: Seurat provides several useful ways of visualizing both cells and genes that define the PCA, including `PrintPCA`, `VizPCA`, `PCAPlot`, and `PCHeatmap` +Seurat v2 provides several useful ways of visualizing both cells and genes that define the PCA, including `PrintPCA`, `VizPCA`, `PCAPlot`, and `PCHeatmap`. Seurat v3 also provides functions for visualizing: ---> refered to Seurat v3 (latest): -Seurat v3 provides functions for visualizing: - PCA - PCA plot coloured by a quantitative feature - Scatter plot across single cells @@ -157,12 +180,12 @@ Seurat v3 provides functions for visualizing: # Examine and visualize PCA results a few different ways DimPlot(object = pbmc, reduction = "pca") ``` + ```{r} # Dimensional reduction plot, with cells colored by a quantitative feature FeaturePlot(object = pbmc, features = "MS4A1") ``` - ```{r} # Scatter plot across single cells, replaces GenePlot FeatureScatter(object = pbmc, feature1 = "MS4A1", feature2 = "PC_1") @@ -170,13 +193,14 @@ FeatureScatter(object = pbmc, feature1 = "MS4A1", feature2 = "CD3D") ``` ```{r} -# Scatter plot across individual features, repleaces CellPlot +# Scatter plot across individual features, replaces CellPlot CellScatter(object = pbmc, cell1 = "AGTCTACTAGGGTG", cell2 = "CACAGATGGTTTCT") ``` ```{r} VariableFeaturePlot(object = pbmc) ``` + ```{r} # Violin and Ridge plots VlnPlot(object = pbmc, features = c("LYZ", "CCL5", "IL32")) @@ -184,12 +208,13 @@ RidgePlot(object = pbmc, feature = c("LYZ", "CCL5", "IL32")) ``` In particular `DimHeatmap` allows for easy exploration of the primary sources of heterogeneity in a dataset, and can be useful when trying to decide which PCs to include for further downstream analyses. Both cells and genes are ordered according to their PCA scores. 
Setting cells.use to a number plots the ‘extreme’ cells on both ends of the spectrum, which dramatically speeds plotting for large datasets. Though clearly a supervised analysis, we find this to be a valuable tool for exploring correlated gene sets. + ```{r} # Heatmaps DimHeatmap(object = pbmc, reduction = "pca", cells = 200, balanced = TRUE) ``` -ProjectPCA function is no loger available in Seurat 3.0. + ## Determine statistically significant principal components @@ -214,26 +239,28 @@ pbmc <- ScoreJackStraw(object = pbmc, dims = 1:20, reduction = "pca") JackStrawPlot(object = pbmc, dims = 1:20, reduction = "pca") ``` -A more ad hoc method for determining which PCs to use is to look at a plot of the standard deviations of the principle components and draw your cutoff where there is a clear elbow in the graph. This can be done with `ElbowPlot`. In this example, it looks like the elbow would fall around PC 5. +A more ad hoc method for determining which PCs to use is to look at a plot of the standard deviations of the principal components and draw your cutoff where there is a clear elbow in the graph. This can be done with `ElbowPlot`. In this example, it looks like the elbow would fall around PC 5. ```{r} ElbowPlot(object = pbmc) ``` -PC selection – identifying the true dimensionality of a dataset – is an important step for Seurat, but can be challenging/uncertain for the user. We therefore suggest these three approaches to consider. The first is more supervised, exploring PCs to determine relevant sources of heterogeneity, and could be used in conjunction with GSEA for example. The second implements a statistical test based on a random null model, but is time-consuming for large datasets, and may not return a clear PC cutoff. The third is a heuristic that is commonly used, and can be calculated instantly. In this example, all three approaches yielded similar results, but we might have been justified in choosing anything between PC 7-10 as a cutoff. 
We followed the jackStraw here, admittedly buoyed by seeing the PCHeatmap returning interpretable signals (including canonical dendritic cell markers) throughout these PCs. Though the results are only subtly affected by small shifts in this cutoff (you can test below), we strongly suggest always explore the PCs they choose to include downstream. +PC selection – identifying the true dimensionality of a dataset – is an important step for Seurat, but can be challenging/uncertain for the user. We therefore suggest these three approaches to consider. The first is more supervised, exploring PCs to determine relevant sources of heterogeneity, and could be used in conjunction with GSEA for example. The second implements a statistical test based on a random null model, but is time-consuming for large datasets, and may not return a clear PC cutoff. The third is a heuristic that is commonly used, and can be calculated instantly. In this example, all three approaches yielded similar results, but we might have been justified in choosing anything between PC 7-10 as a cutoff. We followed the jackStraw here, admittedly buoyed by seeing the PCHeatmap returning interpretable signals (including canonical dendritic cell markers) throughout these PCs. Though the results are only subtly affected by small shifts in this cutoff (you can test below), we strongly suggest that users always explore the PCs they choose to include downstream. ## Cluster the cells -Seurat now includes an graph-based clustering approach compared to (Macosko et al.). Importantly, the distance metric which drives the clustering analysis (based on previously identified PCs) remains the same. However, our approach to partioning the cellular distance matrix into clusters has dramatically improved.
Our approach was heavily inspired by recent manuscripts which applied graph-based clustering approaches to scRNA-seq data [SNN-Cliq, Xu and Su, Bioinformatics, 2015](http://bioinformatics.oxfordjournals.org/content/early/2015/02/10/bioinformatics.btv088.abstract) and CyTOF data [PhenoGraph, Levine et al., Cell, 2015](http://www.ncbi.nlm.nih.gov/pubmed/26095251). Briefly, these methods embed cells in a graph structure - for example a K-nearest neighbor (KNN) graph, with edges drawn between cells with similar gene expression patterns, and then attempt to partition this graph into highly interconnected ‘quasi-cliques’ or ‘communities’. As in PhenoGraph, we first construct a KNN graph based on the euclidean distance in PCA space, and refine the edge weights between any two cells based on the shared overlap in their local neighborhoods (Jaccard similarity). To cluster the cells, we apply modularity optimization techniques such as the Louvain algorithm (default) or SLM [SLM, Blondel et al., Journal of Statistical Mechanics](http://dx.doi.org/10.1088/1742-5468/2008/10/P10008), to iteratively group cells together, with the goal of optimizing the standard modularity function. +Seurat now includes a graph-based clustering approach compared to (Macosko et al.). Importantly, the distance metric which drives the clustering analysis (based on previously identified PCs) remains the same. However, our approach to partitioning the cellular distance matrix into clusters has dramatically improved. Our approach was heavily inspired by recent manuscripts which applied graph-based clustering approaches to scRNA-seq data [SNN-Cliq, Xu and Su, Bioinformatics, 2015](http://bioinformatics.oxfordjournals.org/content/early/2015/02/10/bioinformatics.btv088.abstract) and CyTOF data [PhenoGraph, Levine et al., Cell, 2015](http://www.ncbi.nlm.nih.gov/pubmed/26095251).
Briefly, these methods embed cells in a graph structure - for example a K-nearest neighbor (KNN) graph, with edges drawn between cells with similar gene expression patterns, and then attempt to partition this graph into highly interconnected ‘quasi-cliques’ or ‘communities’. As in PhenoGraph, we first construct a KNN graph based on the euclidean distance in PCA space, and refine the edge weights between any two cells based on the shared overlap in their local neighborhoods (Jaccard similarity). This step is performed using the `FindNeighbors` function, and takes as input the previously defined dimensionality of the dataset (here, the first 20 PCs). -The `FindClusters` function implements the procedure, and contains a resolution parameter that sets the ‘granularity’ of the downstream clustering, with increased values leading to a greater number of clusters. We find that setting this parameter between 0.6-1.2 typically returns good results for single cell datasets of around 3K cells. Optimal resolution often increases for larger datasets. +To cluster the cells, we apply modularity optimization techniques such as the Louvain algorithm (default) or SLM [SLM, Blondel et al., Journal of Statistical Mechanics](http://dx.doi.org/10.1088/1742-5468/2008/10/P10008), to iteratively group cells together, with the goal of optimizing the standard modularity function. The `FindClusters` function implements the procedure, and contains a resolution parameter that sets the ‘granularity’ of the downstream clustering, with increased values leading to a greater number of clusters. We find that setting this parameter between 0.6-1.2 typically returns good results for single cell datasets of around 3K cells. Optimal resolution often increases for larger datasets. The latest clustering results are stored in object metadata under `seurat_clusters`. +With v3, the resulting clusters can be accessed using the `Idents` function.
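As a minimal sketch of these v3 accessors (this assumes the `FindNeighbors`/`FindClusters` calls in the next chunk have already been run):

```{r}
# Cluster assignments live both in Idents() and in the seurat_clusters
# metadata column; after FindClusters the two should agree
head(Idents(pbmc))
table(Idents(pbmc))               # number of cells per cluster
head(pbmc[["seurat_clusters"]])   # same labels via the metadata column
```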
First calculate k-nearest neighbors and construct the SNN graph (`FindNeighbors`), then run `FindClusters`. ```{r} pbmc <- FindNeighbors(pbmc, reduction = "pca", dims = 1:20) pbmc <- FindClusters(pbmc, resolution = 0.5, algorithm = 1) +head(pbmc@meta.data) ``` ## Run Non-linear dimensional reduction (tSNE) @@ -249,15 +276,16 @@ DimPlot(object = pbmc, reduction = "tsne") ## Run UMAP To visualize groups of cells side-by-side, we can use the split.by argument; here we split by `seurat_clusters` so each cluster is shown in its own panel. -```{r} + +```{r, eval=FALSE} pbmc <- RunUMAP(pbmc, reduction = "pca", dims = 1:20) DimPlot(pbmc, reduction = "umap", split.by = "seurat_clusters") ``` - You can save the object at this point so that it can easily be loaded back in without having to rerun the computationally intensive steps performed above, or easily shared with collaborators. ```{r} +dir.create("data", recursive = TRUE, showWarnings = FALSE) saveRDS(pbmc, file = "data/pbmc_tutorial.rds") ``` @@ -290,6 +318,7 @@ Seurat has several tests for differential expression which can be set with the t ```{r} cluster1.markers <- FindMarkers(object = pbmc, ident.1 = 0, thresh.use = 0.25, test.use = "roc", only.pos = TRUE) +head(cluster1.markers, n = 5) ``` We include several tools for visualizing marker expression. @@ -309,7 +338,7 @@ VlnPlot(object = pbmc, features = c("NKG7", "PF4")) FeaturePlot(object = pbmc, features = c("MS4A1", "GNLY", "CD3E", "CD14", "FCER1A", "FCGR3A", "LYZ", "PPBP", "CD8A"), cols = c("grey", "blue"), reduction = "tsne") ``` -`DoHeatmap` generates an expression heatmap for given cells and genes. In this case, we are plotting the top 20 markers (or all markers if less than 20) for each cluster. +`DoHeatmap` generates an expression heatmap for given cells and genes. In this case, we are plotting the top 10 markers (or all markers if less than 10) for each cluster.
```{r} top10 <- pbmc.markers %>% group_by(cluster) %>% top_n(10, avg_logFC) @@ -347,7 +376,9 @@ pbmc <- FindClusters(object = pbmc, reduction.type = "pca", dims.use = 1:10, res # points based on different criteria plot1 <- DimPlot(object = pbmc, reduction = "tsne", do.return = TRUE, no.legend = TRUE, do.label = TRUE) plot2 <- DimPlot(object = pbmc, reduction = "tsne", do.return = TRUE, group.by = "ClusterNames_0.6", no.legend = TRUE, do.label = TRUE) -plot_grid(plot1, plot2) +plot1 <- plot1 + coord_fixed(ratio = 1) +plot2 <- plot2 + coord_fixed(ratio = 1) +plot_grid(plot1, plot2, ncol = 1) ``` ```{r} @@ -358,7 +389,7 @@ tcell.markers <- FindMarkers(object = pbmc, ident.1 = 0, ident.2 = 1) # can see that CCR7 is upregulated in C0, strongly indicating that we can # differentiate memory from naive CD4 cells. cols.use demarcates the color # palette from low to high expression -FeaturePlot(object = pbmc, features = c("S100A4", "CCR7"), cols = c("green", "blue")) +FeaturePlot(object = pbmc, features = c("S100A4", "CCR7"), cols = c("green", "blue"), ncol = 1) ``` The memory/naive split is a bit weak, and we would probably benefit from looking at more cells to see if this becomes more convincing. In the meantime, we can restore our old cluster identities for downstream processing. @@ -373,3 +404,5 @@ saveRDS(pbmc, file = "data/pbmc3k_final.rds") ```{r echo=FALSE} sessionInfo() ``` + +