Author: Hui Ma, Yiming Yang
Date: 2021-06-24
Notebook Source: batch_correction.ipynb
import pegasus as pg
In this tutorial, we'll use a gene-count matrix dataset of human bone marrow from 8 donors, and show how to use the batch correction methods in Pegasus to tackle batch effects in the data.
The dataset is stored at https://storage.googleapis.com/terra-featured-workspaces/Cumulus/MantonBM_nonmix_subset.zarr.zip. You can also use gsutil to download it via its Google bucket URL (gs://terra-featured-workspaces/Cumulus/MantonBM_nonmix_subset.zarr.zip).
Now load the count matrix:
data = pg.read_input("MantonBM_nonmix_subset.zarr.zip")
data
2021-06-24 16:46:19,297 - pegasusio.readwrite - INFO - zarr file 'MantonBM_nonmix_subset.zarr.zip' is loaded.
2021-06-24 16:46:19,298 - pegasusio.readwrite - INFO - Function 'read_input' finished in 0.19s.
MultimodalData object with 1 UnimodalData: 'GRCh38-rna'
It currently binds to UnimodalData object GRCh38-rna

UnimodalData object with n_obs x n_vars = 48219 x 36601
Genome: GRCh38; Modality: rna
It contains 1 matrix: 'X'
It currently binds to matrix 'X' as X
obs: 'n_genes', 'Channel'
var: 'featureid'
obsm:
varm:
uns: 'genome', 'modality'
'Channel' is the batch key. Each batch is associated with one donor, so there are 8 batches in total.
First, preprocess the data. This includes filtration, selecting robust genes, and log-normalization:
pg.qc_metrics(data, min_genes=500, max_genes=6000, mito_prefix='MT-', percent_mito=10)
pg.filter_data(data)
pg.identify_robust_genes(data)
pg.log_norm(data)
2021-06-24 16:46:19,802 - pegasusio.qc_utils - INFO - After filtration, 35465 out of 48219 cell barcodes are kept in UnimodalData object GRCh38-rna.
2021-06-24 16:46:19,803 - pegasus.tools.preprocessing - INFO - Function 'filter_data' finished in 0.21s.
2021-06-24 16:46:20,332 - pegasus.tools.preprocessing - INFO - After filtration, 25653/36601 genes are kept. Among 25653 genes, 17516 genes are robust.
2021-06-24 16:46:20,333 - pegasus.tools.preprocessing - INFO - Function 'identify_robust_genes' finished in 0.53s.
2021-06-24 16:46:20,804 - pegasus.tools.preprocessing - INFO - Function 'log_norm' finished in 0.47s.
After quality control, the distribution of cells across the 8 batches is:
data.obs['Channel'].value_counts()
MantonBM2_HiSeq_1    4935
MantonBM6_HiSeq_1    4665
MantonBM8_HiSeq_1    4511
MantonBM7_HiSeq_1    4452
MantonBM1_HiSeq_1    4415
MantonBM3_HiSeq_1    4225
MantonBM4_HiSeq_1    4172
MantonBM5_HiSeq_1    4090
Name: Channel, dtype: int64
We first perform the downstream steps without considering batch effects. This shows where the batch effects exist, and we'll also use this result as the baseline when comparing the different batch correction methods.
data_baseline = data.copy()
pg.highly_variable_features(data_baseline, consider_batch=False)
2021-06-24 16:46:21,202 - pegasus.tools.hvf_selection - INFO - Function 'estimate_feature_statistics' finished in 0.10s.
2021-06-24 16:46:21,237 - pegasus.tools.hvf_selection - INFO - 2000 highly variable features have been selected.
2021-06-24 16:46:21,238 - pegasus.tools.hvf_selection - INFO - Function 'highly_variable_features' finished in 0.14s.
In this tutorial, the downstream steps consist of PCA, kNN graph construction, Louvain clustering, and UMAP visualization:
pg.pca(data_baseline)
pg.neighbors(data_baseline)
pg.louvain(data_baseline)
pg.umap(data_baseline)
pg.scatter(data_baseline, attrs=['louvain_labels', 'Channel'], basis='umap')
2021-06-24 16:46:24,080 - pegasus.tools.preprocessing - INFO - Function 'pca' finished in 2.84s.
2021-06-24 16:46:27,882 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 3.80s.
2021-06-24 16:46:28,884 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 1.00s.
2021-06-24 16:46:30,119 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.23s.
2021-06-24 16:46:47,350 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 19 clusters.
2021-06-24 16:46:47,483 - pegasus.tools.clustering - INFO - Function 'louvain' finished in 18.60s.
2021-06-24 16:46:47,483 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:46:47,484 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:46:47,497 - pegasus.tools.visualization - INFO - UMAP(dens_frac=0.0, dens_lambda=0.0, min_dist=0.5, random_state=0, verbose=True)
2021-06-24 16:46:47,499 - pegasus.tools.visualization - INFO - Construct fuzzy simplicial set
2021-06-24 16:46:49,478 - pegasus.tools.visualization - INFO - Construct embedding
completed 0 / 200 epochs
completed 20 / 200 epochs
completed 40 / 200 epochs
completed 60 / 200 epochs
completed 80 / 200 epochs
completed 100 / 200 epochs
completed 120 / 200 epochs
completed 140 / 200 epochs
completed 160 / 200 epochs
completed 180 / 200 epochs
2021-06-24 16:47:09,711 - pegasus.tools.visualization - INFO - Function 'umap' finished in 22.23s.
Let's have a quick look at the UMAP plots above. If you check the cells in Louvain clusters 11 and 14 in the right-hand plot, you can see that most of them come from sample MantonBM3_HiSeq_1. This indicates strong batch effects.
Batch effects occur when data samples are generated under different conditions, such as processing date, lab environment, or equipment. Unless you know that all samples were generated under similar conditions, you should suspect batch effects whenever a visualization shows cells grouped by sample rather than mixed together.
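To make this concrete, here is a minimal, hypothetical simulation (not part of the tutorial's dataset) of one cell population measured in two batches, where a purely technical additive offset separates the batches in expression space:

```python
import numpy as np

rng = np.random.default_rng(0)

# One biological population measured in two hypothetical batches:
# batch B carries a constant technical offset of 2 on every gene.
batch_a = rng.normal(loc=0.0, scale=1.0, size=(500, 20))
batch_b = rng.normal(loc=0.0, scale=1.0, size=(500, 20)) + 2.0

# Although both batches sample the same biology, their centroids
# differ, so clustering would split cells by batch, not by cell type.
offset = np.abs(batch_a.mean(axis=0) - batch_b.mean(axis=0)).mean()
print(round(offset, 1))  # ≈ 2.0, the technical offset we injected
```

In real data the offset is unknown and gene-dependent, which is why dedicated correction methods are needed.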
In this tutorial, you'll see how to apply the batch correction methods in Pegasus to this dataset.
As a common step ahead, we need to re-select HVGs considering batch effects:
pg.highly_variable_features(data, consider_batch=True)
2021-06-24 16:47:11,098 - pegasus.tools.hvf_selection - INFO - Function 'estimate_feature_statistics' finished in 0.16s. 2021-06-24 16:47:11,138 - pegasus.tools.hvf_selection - INFO - 2000 highly variable features have been selected. 2021-06-24 16:47:11,139 - pegasus.tools.hvf_selection - INFO - Function 'highly_variable_features' finished in 0.21s.
Harmony is a widely used method for data integration. Pegasus uses the harmony-pytorch package to perform Harmony batch correction.
Harmony operates on the PCA matrix, so we first need to calculate the original PCA matrix:
data_harmony = data.copy()
pg.pca(data_harmony)
2021-06-24 16:47:14,314 - pegasus.tools.preprocessing - INFO - Function 'pca' finished in 2.88s.
Now we are ready to run Harmony integration:
harmony_key = pg.run_harmony(data_harmony)
2021-06-24 16:47:14,792 - pegasus.tools.batch_correction - INFO - Start integration using Harmony.
Initialization is completed.
Completed 1 / 10 iteration(s).
Completed 2 / 10 iteration(s).
Completed 3 / 10 iteration(s).
Completed 4 / 10 iteration(s).
Completed 5 / 10 iteration(s).
Completed 6 / 10 iteration(s).
Completed 7 / 10 iteration(s).
Completed 8 / 10 iteration(s).
Reach convergence after 8 iteration(s).
2021-06-24 16:47:33,304 - pegasus.tools.batch_correction - INFO - Function 'run_harmony' finished in 18.98s.
When finished, the corrected PCA matrix is stored in data_harmony.obsm['X_pca_harmony'], and run_harmony returns the representation key 'pca_harmony' as variable harmony_key. In the downstream steps, you can set the rep parameter to either harmony_key or 'pca_harmony' in Pegasus functions whenever applicable.
For details on parameters of run_harmony beyond the default setting, please see here.
With the new corrected PCA matrix, we can perform kNN-graph-based clustering and calculate UMAP embeddings as follows:
pg.neighbors(data_harmony, rep=harmony_key)
pg.louvain(data_harmony, rep=harmony_key)
pg.umap(data_harmony, rep=harmony_key)
2021-06-24 16:47:37,603 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 4.29s.
2021-06-24 16:47:38,586 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 0.98s.
2021-06-24 16:47:39,850 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.26s.
2021-06-24 16:47:55,129 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 16 clusters.
2021-06-24 16:47:55,272 - pegasus.tools.clustering - INFO - Function 'louvain' finished in 16.69s.
2021-06-24 16:47:55,273 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:47:55,274 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:47:55,285 - pegasus.tools.visualization - INFO - UMAP(dens_frac=0.0, dens_lambda=0.0, min_dist=0.5, random_state=0, verbose=True)
2021-06-24 16:47:55,285 - pegasus.tools.visualization - INFO - Construct fuzzy simplicial set
2021-06-24 16:47:55,453 - pegasus.tools.visualization - INFO - Construct embedding
completed 0 / 200 epochs
completed 20 / 200 epochs
completed 40 / 200 epochs
completed 60 / 200 epochs
completed 80 / 200 epochs
completed 100 / 200 epochs
completed 120 / 200 epochs
completed 140 / 200 epochs
completed 160 / 200 epochs
completed 180 / 200 epochs
2021-06-24 16:48:16,079 - pegasus.tools.visualization - INFO - Function 'umap' finished in 20.81s.
Then show the UMAP plot:
pg.scatter(data_harmony, attrs=['louvain_labels', 'Channel'], basis='umap')
Pegasus also provides a canonical Location and Scale (L/S) adjustment method (see reference) for batch correction. Unlike Harmony, the L/S method modifies the log-normalized count matrix directly. It is faster, and works well on large-scale datasets. Pegasus uses this method for batch correction in the Cumulus paper.
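To illustrate the idea behind location/scale adjustment, here is a minimal sketch (illustrative only; not Pegasus's exact estimator, which uses a more careful per-batch model): for each gene, every batch is shifted and rescaled so its mean and standard deviation match the pooled values.

```python
import numpy as np

def ls_adjust(X, batches):
    """Toy location/scale adjustment: align each batch's per-gene
    mean and standard deviation to the overall (pooled) values."""
    X = np.asarray(X, dtype=float)
    batches = np.asarray(batches)
    out = X.copy()
    overall_mu = X.mean(axis=0)
    overall_sd = X.std(axis=0)
    for b in np.unique(batches):
        mask = batches == b
        mu = X[mask].mean(axis=0)   # batch-specific location
        sd = X[mask].std(axis=0)    # batch-specific scale
        sd[sd == 0] = 1.0           # guard against constant genes
        out[mask] = (X[mask] - mu) / sd * overall_sd + overall_mu
    return out
```

After this transform, each batch has exactly the pooled per-gene mean and standard deviation, so a location-or-scale-only batch effect is removed.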
After HVG selection, directly run the correct_batch function to perform L/S batch correction:
data_ls = data.copy()
pg.correct_batch(data_ls, features='highly_variable_features')
2021-06-24 16:48:17,523 - pegasus.tools.batch_correction - INFO - Adjustment parameters are estimated.
2021-06-24 16:48:17,903 - pegasus.tools.batch_correction - INFO - Features are selected.
2021-06-24 16:48:18,494 - pegasus.tools.batch_correction - INFO - Batch correction is finished. Time spent = 0.60s.
In the correct_batch function, the features parameter specifies which genes/features to consider in batch correction. By default, it considers all features. Here, since we've already selected an HVG set, we can assign its key in the data_ls.var field, which is 'highly_variable_features', to this parameter.
data_ls.uns['_tmp_fmat_highly_variable_features'].shape
(35465, 2000)
As shown above, the corrected count matrix is stored at data_ls.uns['_tmp_fmat_highly_variable_features'], with dimension cell-by-HVG. See its documentation for customization.
Now we can perform the downstream analysis:
pg.pca(data_ls)
pg.neighbors(data_ls)
pg.louvain(data_ls)
pg.umap(data_ls)
2021-06-24 16:48:21,332 - pegasus.tools.preprocessing - INFO - Function 'pca' finished in 2.83s.
2021-06-24 16:48:26,303 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 4.97s.
2021-06-24 16:48:27,321 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 1.02s.
2021-06-24 16:48:28,579 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.26s.
2021-06-24 16:48:39,289 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 17 clusters.
2021-06-24 16:48:39,432 - pegasus.tools.clustering - INFO - Function 'louvain' finished in 12.11s.
2021-06-24 16:48:39,433 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:48:39,434 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:48:39,445 - pegasus.tools.visualization - INFO - UMAP(dens_frac=0.0, dens_lambda=0.0, min_dist=0.5, random_state=0, verbose=True)
2021-06-24 16:48:39,446 - pegasus.tools.visualization - INFO - Construct fuzzy simplicial set
2021-06-24 16:48:39,619 - pegasus.tools.visualization - INFO - Construct embedding
completed 0 / 200 epochs
completed 20 / 200 epochs
completed 40 / 200 epochs
completed 60 / 200 epochs
completed 80 / 200 epochs
completed 100 / 200 epochs
completed 120 / 200 epochs
completed 140 / 200 epochs
completed 160 / 200 epochs
completed 180 / 200 epochs
2021-06-24 16:49:00,059 - pegasus.tools.visualization - INFO - Function 'umap' finished in 20.63s.
pg.scatter(data_ls, attrs=['louvain_labels', 'Channel'], basis='umap')
Scanorama is another popular method for data integration, and Pegasus wraps it via the run_scanorama function:

data_scan = data.copy()
scan_key = pg.run_scanorama(data_scan)
2021-06-24 16:49:01,409 - pegasus.tools.batch_correction - INFO - Start integration using Scanorama.
Found 2000 genes among all datasets
[[0.         0.69648924 0.29349112 0.49263873 0.51393643 0.46345123 0.57553794 0.40057637]
 [0.         0.         0.13964497 0.6662614  0.32518337 0.6006079  0.6674772  0.22611394]
 [0.         0.         0.         0.17761266 0.61775148 0.09046088 0.17017751 0.55100592]
 [0.         0.         0.         0.         0.45427873 0.63965702 0.55441035 0.43094658]
 [0.         0.         0.         0.         0.         0.40880196 0.49144254 0.67523831]
 [0.         0.         0.         0.         0.         0.         0.6829582  0.32604502]
 [0.         0.         0.         0.         0.         0.         0.         0.60053203]
 [0.         0.         0.         0.         0.         0.         0.         0.        ]]
Processing datasets (0, 1)
Processing datasets (5, 6)
Processing datasets (4, 7)
Processing datasets (1, 6)
Processing datasets (1, 3)
Processing datasets (3, 5)
Processing datasets (2, 4)
Processing datasets (1, 5)
Processing datasets (6, 7)
Processing datasets (0, 6)
Processing datasets (3, 6)
Processing datasets (2, 7)
Processing datasets (0, 4)
Processing datasets (0, 3)
Processing datasets (4, 6)
Processing datasets (0, 5)
Processing datasets (3, 4)
Processing datasets (3, 7)
Processing datasets (4, 5)
Processing datasets (0, 7)
Processing datasets (5, 7)
Processing datasets (1, 4)
Processing datasets (0, 2)
Processing datasets (1, 7)
Processing datasets (2, 3)
Processing datasets (2, 6)
Processing datasets (1, 2)
2021-06-24 16:50:13,768 - pegasus.tools.batch_correction - INFO - Function 'run_scanorama' finished in 72.38s.
You can check details on run_scanorama parameters here. By default, it considers the count matrix only regarding the selected HVGs, and calculates a corrected PCA matrix of 50 PCs. When finished, this new PCA matrix is stored in data_scan.obsm['X_scanorama'], and run_scanorama returns its representation key 'scanorama' as variable scan_key. In the downstream steps, you can set the rep parameter to either scan_key or 'scanorama' in Pegasus functions whenever applicable:
pg.neighbors(data_scan, rep=scan_key)
pg.louvain(data_scan, rep=scan_key)
pg.umap(data_scan, rep=scan_key)
2021-06-24 16:50:18,619 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 4.84s.
2021-06-24 16:50:19,587 - pegasus.tools.nearest_neighbors - INFO - Function 'calculate_affinity_matrix' finished in 0.97s.
2021-06-24 16:50:20,816 - pegasus.tools.graph_operations - INFO - Function 'construct_graph' finished in 1.23s.
2021-06-24 16:50:35,800 - pegasus.tools.clustering - INFO - Louvain clustering is done. Get 18 clusters.
2021-06-24 16:50:35,944 - pegasus.tools.clustering - INFO - Function 'louvain' finished in 16.36s.
2021-06-24 16:50:35,945 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:50:35,946 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:50:35,966 - pegasus.tools.visualization - INFO - UMAP(dens_frac=0.0, dens_lambda=0.0, min_dist=0.5, random_state=0, verbose=True)
2021-06-24 16:50:35,968 - pegasus.tools.visualization - INFO - Construct fuzzy simplicial set
2021-06-24 16:50:36,158 - pegasus.tools.visualization - INFO - Construct embedding
completed 0 / 200 epochs
completed 20 / 200 epochs
completed 40 / 200 epochs
completed 60 / 200 epochs
completed 80 / 200 epochs
completed 100 / 200 epochs
completed 120 / 200 epochs
completed 140 / 200 epochs
completed 160 / 200 epochs
completed 180 / 200 epochs
2021-06-24 16:50:57,261 - pegasus.tools.visualization - INFO - Function 'umap' finished in 21.32s.
Now check its UMAP plot:
pg.scatter(data_scan, attrs=['louvain_labels', 'Channel'], basis='umap')
To compare the performance of the three methods, one metric is runtime, which you can see from the logs in the sections above: the L/S adjustment method is the fastest, then Harmony, and Scanorama is the slowest.
In this section, we'll use 2 other metrics for comparison: the kBET and kSIM acceptance rates. We have 4 results: no batch correction (Baseline), Harmony, L/S, and Scanorama. For each result, the kBET and kSIM acceptance rates are calculated on its 2D UMAP coordinates, which is consistent with the Cumulus paper. Details on these 2 metrics can also be found in the Cumulus paper.
We can use the calc_kBET function to calculate kBET acceptance rates. Besides the data object, set the attr parameter to the batch key, which is 'Channel' in this tutorial, and the rep parameter to the corresponding UMAP coordinates:

_, _, kBET_baseline = pg.calc_kBET(data_baseline, attr='Channel', rep='umap')
_, _, kBET_harmony = pg.calc_kBET(data_harmony, attr='Channel', rep='umap')
_, _, kBET_ls = pg.calc_kBET(data_ls, attr='Channel', rep='umap')
_, _, kBET_scan = pg.calc_kBET(data_scan, attr='Channel', rep='umap')
2021-06-24 16:51:00,592 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 2.26s.
2021-06-24 16:51:28,033 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 2.30s.
2021-06-24 16:51:32,497 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 2.12s.
2021-06-24 16:51:37,017 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 2.13s.
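For intuition, the idea behind kBET can be sketched in a few lines (an illustrative simplification, not Pegasus's implementation): for each cell, a chi-squared test compares the batch composition of its k nearest neighbors against the global batch frequencies, and the acceptance rate is the fraction of cells where good mixing is not rejected.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import chisquare

def kbet_acceptance(coords, batches, k=25, alpha=0.05):
    """Toy kBET-style acceptance rate: fraction of cells whose local
    batch composition is statistically consistent with the global one."""
    batches = np.asarray(batches)
    labels, global_counts = np.unique(batches, return_counts=True)
    expected = global_counts / global_counts.sum() * k  # expected neighbor counts
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)  # first neighbor is the cell itself
    accepted = 0
    for nbrs in idx[:, 1:]:
        observed = np.array([(batches[nbrs] == l).sum() for l in labels])
        _, p = chisquare(observed, f_exp=expected)
        accepted += p >= alpha
    return accepted / len(coords)
```

On well-mixed data this rate is high; when batches occupy separate regions of the embedding, nearly every cell's neighborhood is single-batch and the rate drops toward zero.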
We need pre-annotated cell type information as ground truth to calculate the kSIM acceptance rate. This ground truth is stored at https://storage.googleapis.com/terra-featured-workspaces/Cumulus/cell_types.csv, and you can also use gsutil to download it via its Google bucket URL (gs://terra-featured-workspaces/Cumulus/cell_types.csv).
Now load this file, and attach its 'cell_types' column to the 4 resulting count matrices above:
import pandas as pd
import numpy as np

df_celltypes = pd.read_csv("cell_types.csv", index_col='barcodekey')

# Barcodes must match before attaching the annotation.
for d in [data_baseline, data_harmony, data_ls, data_scan]:
    assert np.sum(df_celltypes.index != d.obs_names) == 0
    d.obs['cell_types'] = df_celltypes['cell_types']
We can then use the calc_kSIM function to calculate kSIM acceptance rates. Besides the data object, set the attr parameter to the ground truth key 'cell_types', and, as in the kBET section, the rep parameter to the corresponding UMAP coordinates:

_, kSIM_baseline = pg.calc_kSIM(data_baseline, attr='cell_types', rep='umap')
_, kSIM_harmony = pg.calc_kSIM(data_harmony, attr='cell_types', rep='umap')
_, kSIM_ls = pg.calc_kSIM(data_ls, attr='cell_types', rep='umap')
_, kSIM_scan = pg.calc_kSIM(data_scan, attr='cell_types', rep='umap')
2021-06-24 16:51:39,471 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:51:39,471 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:51:39,482 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:51:39,482 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:51:39,488 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:51:39,489 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
2021-06-24 16:51:39,498 - pegasus.tools.nearest_neighbors - INFO - Found cached kNN results, no calculation is required.
2021-06-24 16:51:39,499 - pegasus.tools.nearest_neighbors - INFO - Function 'get_neighbors' finished in 0.00s.
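The idea behind kSIM can likewise be sketched as follows (again an illustrative simplification, not Pegasus's implementation): average, over all cells, the fraction of each cell's k nearest neighbors that share its ground-truth label. A high score means the embedding keeps biological groups together.

```python
import numpy as np
from scipy.spatial import cKDTree

def ksim_rate(coords, labels, k=25):
    """Toy kSIM-style score: mean fraction of a cell's k nearest
    neighbors that carry the same ground-truth label."""
    labels = np.asarray(labels)
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)          # first neighbor is the cell itself
    same = labels[idx[:, 1:]] == labels[:, None]  # neighbor label matches cell label
    return same.mean()
```

An over-aggressive correction that scrambles distinct cell types together lowers this score even while it raises kBET, which is exactly the trade-off examined below.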
Now draw a scatterplot of these two metrics for the 4 results:
import seaborn as sns
import matplotlib.pyplot as plt
df_plot = pd.DataFrame({'method': ['Baseline', 'Harmony', 'L/S', 'Scanorama'],
'kBET': [kBET_baseline, kBET_harmony, kBET_ls, kBET_scan],
'kSIM': [kSIM_baseline, kSIM_harmony, kSIM_ls, kSIM_scan]})
plt.figure(dpi=100)
ax = sns.scatterplot(x='kSIM', y='kBET', hue='method', data=df_plot, legend=False)
for line in range(df_plot.shape[0]):
    x_pos = df_plot.kSIM[line] + 0.003
    if df_plot.method[line] == 'Baseline':
        x_pos = df_plot.kSIM[line] - 0.003
    y_pos = df_plot.kBET[line]
    if df_plot.method[line] == 'L/S':
        y_pos -= 0.01
    alignment = 'right' if df_plot.method[line] == 'Baseline' else 'left'
    ax.text(x_pos, y_pos, df_plot.method[line], ha=alignment, size='medium', color='black')
plt.xlabel('kSIM acceptance rate')
plt.ylabel('kBET acceptance rate')
Text(0, 0.5, 'kBET acceptance rate')
As this plot shows, there is a trade-off between achieving a good mixture of cells (in terms of kBET acceptance rate) and maintaining the underlying biology (in terms of kSIM acceptance rate). Harmony achieves the best mixture of cells, but its consistency with the ground truth biology is the lowest. L/S and Scanorama both strike a better balance between the two measurements.
Therefore, in general, the choice of batch correction method depends on the dataset and your analysis goal.