A comprehensive pipeline for single-cell Perturb-Seq analysis that enables robust processing and analysis of CRISPR screening data at single-cell resolution.
Nextflow and Singularity must be installed before running the pipeline:

- Nextflow: the workflow manager used to execute the pipeline. Install with:

      conda install bioconda::nextflow

- Singularity: the container platform, which must be available in your execution environment.
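
To confirm that both prerequisites are available on your PATH before proceeding, check their versions:

    nextflow -version       # Prints the Nextflow version banner
    singularity --version   # Prints the Singularity (or Apptainer) version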
Nextflow Tower provides a web-based interface for monitoring and managing pipeline execution. Enabling Nextflow Tower requires a TOWER_ACCESS_TOKEN.
To obtain your token:
- Create/login to your account at cloud.tower.nf
- Navigate to Settings > Your tokens
- Click "Add token" and generate a new token
- Set it as an environment variable:

      export TOWER_ACCESS_TOKEN=your_token_here
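
Once the token is exported in your shell, monitoring can typically be enabled per run with Nextflow's built-in Tower support, for example:

    # Launch a run with Tower monitoring enabled (uses TOWER_ACCESS_TOKEN from the environment)
    nextflow run main.nf -profile local --input samplesheet.tsv --outdir ./outputs/ -with-tower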
To install the pipeline:
    git clone https://github.com/pinellolab/CRISPR_Pipeline.git
The pipeline expects the following input files:

- `{sample}_R1.fastq.gz`: Contains cell barcode and UMI sequences
- `{sample}_R2.fastq.gz`: Contains transcript sequences
- `rna_seqspec.yml`: Defines RNA sequencing structure and parameters
- `guide_seqspec.yml`: Specifies guide RNA detection parameters
- `hash_seqspec.yml`: Defines cell hashing structure (required if using cell hashing)
- `barcode_onlist.txt`: List of valid cell barcodes
- `guide_metadata.tsv`: Contains guide RNA information and annotations
- `hash_metadata.tsv`: Cell hashing sample information (required if using cell hashing)
- `pairs_to_test.csv`: Defines perturbation pairs for comparison analysis (required if testing predefined pairs)
For detailed specifications, see our documentation.
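To catch missing inputs early, a quick pre-flight check like the following can be run before launching; the directory and the exact set of files are assumptions here, so adjust them to your experiment:

    # Hypothetical pre-flight check for required input files
    for f in rna_seqspec.yml guide_seqspec.yml barcode_onlist.txt guide_metadata.tsv; do
        [ -f "example_data/$f" ] && echo "OK       $f" || echo "MISSING  $f"
    done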
Before running the pipeline, customize the configuration files for your environment:
Update the pipeline-specific parameters in the `params` section, for example:
    // Input data paths
    input = "/path/to/your/samplesheet.csv"

    // Analysis parameters (adjust for your experiment)
    QC_min_genes_per_cell = 500
    QC_min_cells_per_gene = 3
    QC_pct_mito = 20
    GUIDE_ASSIGNMENT_method = 'sceptre'  // or 'cleanser'
    INFERENCE_method = 'perturbo'        // or 'sceptre'
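
The same parameters can also be overridden at launch time using standard Nextflow `--param` command-line options, which is convenient for quick experiments; a sketch, assuming the parameter names shown in the config example above:

    # Hypothetical launch-time overrides of the params block
    nextflow run main.nf \
        -profile local \
        --input /path/to/your/samplesheet.csv \
        --QC_min_genes_per_cell 500 \
        --GUIDE_ASSIGNMENT_method sceptre \
        --INFERENCE_method perturbo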
Choose and configure your compute profile by updating the relevant sections:
    // Resource limits (adjust based on your machine)
    max_cpus = 8          // Number of CPU cores available
    max_memory = '32.GB'  // RAM available for the pipeline

    // Run with: nextflow run main.nf -profile local
    // Resource limits (adjust based on cluster specs)
    max_cpus = 128
    max_memory = '512.GB'

    // Update SLURM partitions in profiles section:
    slurm {
        process {
            queue = 'short,normal,long'  // Replace with your partition names
        }
    }

    // Run with: nextflow run main.nf -profile slurm
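
On many clusters the Nextflow head job is itself submitted as a batch job, with Nextflow then dispatching individual tasks to SLURM. A minimal submission-script sketch, assuming module names, partition name, and resource requests that will differ on your cluster:

    #!/bin/bash
    #SBATCH --job-name=crispr_pipeline
    #SBATCH --partition=normal        # Replace with a partition from your cluster
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=48:00:00

    # Load Nextflow and Singularity however your cluster provides them (assumed module names)
    module load nextflow singularity

    # The head job needs only modest resources; worker tasks are submitted to SLURM by Nextflow
    nextflow run main.nf -profile slurm --input samplesheet.tsv --outdir ./outputs/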
    // Update GCP settings
    google_bucket = 'gs://your-bucket-name'
    google_project = 'your-gcp-project-id'
    google_region = 'us-central1'  // Choose your preferred region

    // Resource limits
    max_cpus = 128
    max_memory = '512.GB'

    // Run with (see more in GCP_user_notebook.ipynb):
    // export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/pipeline-service-key.json"
    // nextflow run main.nf -profile google
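
Before launching the google profile, it can help to confirm that the service-account key works and the bucket is reachable. A sketch using standard Google Cloud SDK commands; the key path, project ID, and bucket name are placeholders matching the config above:

    # Authenticate with the service-account key used by the pipeline
    export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/pipeline-service-key.json"
    gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"

    # Confirm the project and bucket are reachable
    gcloud config set project your-gcp-project-id
    gsutil ls gs://your-bucket-name/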
The pipeline uses pre-built containers. Update if you have custom versions:
    containers {
        base     = 'sjiang9/conda-docker:0.2'     // Main analysis tools
        cleanser = 'sjiang9/cleanser:0.3'         // Guide assignment
        sceptre  = 'sjiang9/sceptre-igvf:0.1'     // Guide assignment / perturbation inference
        perturbo = 'loganblaine/perturbo:latest'  // Perturbation inference
    }
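
If your compute nodes have restricted network access, you can point Nextflow at a Singularity image cache and verify that the containers can be pulled before the first run. A sketch using the image tags listed above; the cache location is an assumption, so adjust it to your setup:

    # Tell Nextflow where to cache the Singularity images it pulls (optional)
    export NXF_SINGULARITY_CACHEDIR="$HOME/.singularity_cache"
    mkdir -p "$NXF_SINGULARITY_CACHEDIR"

    # Verify that the container images are reachable from this machine
    singularity pull docker://sjiang9/conda-docker:0.2
    singularity pull docker://sjiang9/cleanser:0.3
    singularity pull docker://sjiang9/sceptre-igvf:0.1
    singularity pull docker://loganblaine/perturbo:latest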
| Environment | max_cpus | max_memory | Notes |
|---|---|---|---|
| Local (development) | 4-8 | 16-32 GB | For testing small datasets |
| Local (full analysis) | 8-16 | 64-128 GB | For complete runs |
| SLURM cluster | 64-128 | 256-512 GB | Adjust based on node specs |
| Google Cloud | 128+ | 512 GB+ | Can scale dynamically |
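
If you are unsure which row applies, you can check what your machine or cluster actually provides before setting `max_cpus` and `max_memory`:

    # Local machine: CPU cores and memory
    nproc
    free -h

    # SLURM cluster: CPUs and memory per node for each partition
    sinfo -o "%P %c %m"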
- Validate syntax:

      nextflow config -profile local   # Test local profile
      nextflow config -profile slurm   # Test SLURM profile

- Test with a small dataset:

      # Start with a subset of your data

      # Make all scripts executable (required for pipeline execution)
      chmod +x bin/*

      # Run the pipeline
      nextflow run main.nf -profile local --input small_test.tsv --outdir ./Outputs
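
If a test run fails partway through, the same command can be relaunched with Nextflow's `-resume` flag to reuse cached results instead of recomputing every step:

    # Re-launch the previous command and reuse cached work
    nextflow run main.nf -profile local --input small_test.tsv --outdir ./Outputs -resume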
- Start conservative: Begin with lower resource limits and increase as needed
- Profile-specific limits: The pipeline automatically scales resources based on retry attempts
- Development workflow: Use local profile for code testing, cluster/cloud for production runs
- Memory errors: Increase `max_memory` if you see out-of-memory failures
- Queue timeouts: Adjust SLURM partition names to match your cluster
- Permission errors: Ensure your Google Cloud service account has proper permissions
- Container issues: Verify Singularity is available on your system
- Missing files: Double-check paths in `nextflow.config` and the actual files in `example_data`
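
When a task fails, the per-task logs under Nextflow's `work/` directory are usually the fastest way to find the cause. A sketch of a typical inspection; the work-directory hash shown is a placeholder copied from the error message of the failed task:

    # List recent runs and their status
    nextflow log

    # Inspect the failed task's logs (path is reported in the Nextflow error message)
    cat work/ab/cdef12.../.command.err   # stderr of the failed task
    cat work/ab/cdef12.../.command.log   # combined stdout/stderr log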
The output files will be generated in the `pipeline_outputs` and `pipeline_dashboard` directories.

Within the `pipeline_outputs` directory, you will find:
- `inference_mudata.h5mu`: MuData format output
- `per_element_output.tsv`: Per-element analysis
- `per_guide_output.tsv`: Per-guide analysis
Structure:
    📁 pipeline_outputs/
    ├── 📄 inference_mudata.h5mu
    ├── 📄 per_element_output.tsv
    └── 📄 per_guide_output.tsv
For details, see our documentation.
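For a quick sanity check of the results, the TSVs can be previewed directly and the MuData file opened with the `mudata` Python package (assumed to be installed in your environment):

    # Preview the per-guide results table
    head pipeline_outputs/per_guide_output.tsv

    # Print a summary of the MuData object
    python -c "import mudata as md; print(md.read_h5mu('pipeline_outputs/inference_mudata.h5mu'))"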
The pipeline produces several figures:
Within the `pipeline_dashboard` directory, you will find:
- Evaluation Output:
  - `network_plot.png`: Gene interaction network visualization.
  - `volcano_plot.png`: gRNA-gene pair analysis.
  - IGV files (`.bedgraph` and `.bedpe`): Genome browser visualization files.

- Analysis Figures:
  - `knee_plot_scRNA.png`: Knee plot of UMI counts vs. barcode index.
  - `scatterplot_scrna.png`: Scatterplot of total counts vs. genes detected, colored by mitochondrial content.
  - `violin_plot.png`: Distributions of gene counts, total counts, and mitochondrial content.
  - `scRNA_barcodes_UMI_thresholds.png`: Number of scRNA barcodes retained at different total UMI thresholds.
  - `guides_per_cell_histogram.png`: Histogram of guides per cell.
  - `cells_per_guide_histogram.png`: Histogram of cells per guide.
  - `guides_UMI_thresholds.png`: Simulated final number of cells with assigned guides at different minimum UMI thresholds (at least one guide above the threshold). Use this to check whether the number of cells with assigned guides matches your expected cell count.
  - `guides_UMI_thresholds.png`: Histogram of the number of sgRNAs represented per cell.
  - `cells_per_htp_barplot.png`: Number of cells across different HTOs.
  - `umap_hto.png`: UMAP clustering of cells based on HTOs (the dimensions represent the distribution of HTOs in each cell).
  - `umap_hto_singlets.png`: UMAP clustering of cells based on HTOs, with multiplets removed.

- seqSpec Plots:
  - `seqSpec_check_plots.png`: Frequency of each nucleotide along Read 1 and Read 2; use it to confirm that the expected read parts show their expected signatures.
Structure:
    📁 pipeline_dashboard/
    ├── 📄 dashboard.html
    │
    ├── 📁 evaluation_output/
    │   ├── 🖼️ network_plot.png
    │   ├── 🖼️ volcano_plot.png
    │   ├── 📄 igv.bedgraph
    │   └── 📄 igv.bedpe
    │
    ├── 📁 figures/
    │   ├── 🖼️ knee_plot_scRNA.png
    │   ├── 🖼️ scatterplot_scrna.png
    │   ├── 🖼️ violin_plot.png
    │   ├── 🖼️ scRNA_barcodes_UMI_thresholds.png
    │   ├── 🖼️ guides_per_cell_histogram.png
    │   ├── 🖼️ cells_per_guide_histogram.png
    │   ├── 🖼️ guides_UMI_thresholds.png
    │   ├── 🖼️ cells_per_htp_barplot.png
    │   ├── 🖼️ umap_hto.png
    │   └── 🖼️ umap_hto_singlets.png
    │
    ├── 📁 guide_seqSpec_plots/
    │   └── 🖼️ seqSpec_check_plots.png
    │
    └── 📁 hashing_seqSpec_plots/
        └── 🖼️ seqSpec_check_plots.png
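
The dashboard is a static HTML report, so it can be opened directly in a browser; if the results live on a remote server, one simple option is to serve the directory over HTTP (the port number here is arbitrary):

    # On the server hosting the results
    cd pipeline_dashboard
    python -m http.server 8000
    # Then browse to http://<server-address>:8000/dashboard.html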
To ensure proper pipeline functionality, we provide two extensively validated datasets for testing purposes.
The TF_Perturb_Seq_Pilot dataset was generated by the Gary-Hon Lab and is available through the IGVF Data Portal under Analysis Set ID: IGVFDS4389OUWU. To access the fastq files, you need to:
- First, register for an account on the IGVF Data Portal to obtain your access credentials.
- Once you have your credentials, you can use our provided Python script to download all necessary FASTQ files:

      cd example_data
      python download_fastq.py \
          --sample per-sample_file.tsv \
          --access-key YOUR_ACCESS_KEY \
          --secret-key YOUR_SECRET_KEY
💡 Note: You'll need to replace `YOUR_ACCESS_KEY` and `YOUR_SECRET_KEY` with the credentials from your IGVF portal account. These credentials can be found in your IGVF portal profile settings.
All other required input files for running the pipeline with this dataset are already included in the repository under the `example_data` directory.
This dataset comes from a large-scale CRISPR screen study published in Cell (Gasperini et al., 2019: "A Genome-wide Framework for Mapping Gene Regulation via Cellular Genetic Screens") and provides an excellent resource for testing the pipeline. The full dataset, including raw sequencing data and processed files, is publicly available through GEO under accession number GSE120861.
- Environment Setup:

      # Clone and enter the repository
      git clone https://github.com/pinellolab/CRISPR_Pipeline.git
      cd CRISPR_Pipeline
- Choose Your Dataset and Follow the Corresponding Instructions:

      # Run with LOCAL
      nextflow run main.nf \
          -profile local \
          --input samplesheet.tsv \
          --outdir ./outputs/

      # Run with SLURM
      nextflow run main.nf \
          -profile slurm \
          --input samplesheet.tsv \
          --outdir ./outputs/

      # Run with GCP
      nextflow run main.nf \
          -profile google \
          --input samplesheet.tsv \
          --outdir gs://igvf-pertub-seq-pipeline-data/scratch/   # Path to your GCP bucket
- Set up the configuration files:

      # Copy configuration files and example data
      cp example_gasperini/nextflow.config nextflow.config
      cp -r example_gasperini/example_data/* example_data/
- Obtain sequencing data:
  - Download a subset of the Gasperini dataset to your own server using the commands below.
  - Place the files in the `example_data/fastq_files` directory.

        NTHREADS=16

        # Download the bamtofastq utility from 10x Genomics
        wget https://github.com/10XGenomics/bamtofastq/releases/download/v1.4.1/bamtofastq_linux
        chmod +x bamtofastq_linux

        # Guide library BAM -> FASTQ
        wget https://sra-pub-src-1.s3.amazonaws.com/SRR7967488/pilot_highmoi_screen.1_CGTTACCG.grna.bam.1
        mv pilot_highmoi_screen.1_CGTTACCG.grna.bam.1 pilot_highmoi_screen.1_CGTTACCG.grna.bam
        ./bamtofastq_linux --nthreads="$NTHREADS" pilot_highmoi_screen.1_CGTTACCG.grna.bam bam_pilot_guide_1

        # scRNA-seq BAM -> FASTQ
        wget https://sra-pub-src-1.s3.amazonaws.com/SRR7967482/pilot_highmoi_screen.1_SI_GA_G1.bam.1
        mv pilot_highmoi_screen.1_SI_GA_G1.bam.1 pilot_highmoi_screen.1_SI_GA_G1.bam
        ./bamtofastq_linux --nthreads="$NTHREADS" pilot_highmoi_screen.1_SI_GA_G1.bam bam_pilot_scrna_1
  Now you should see the `bam_pilot_guide_1` and `bam_pilot_scrna_1` directories inside the `example_data/fastq_files` directory. Inside `bam_pilot_guide_1` and `bam_pilot_scrna_1`, there are multiple sets of FASTQ files.

- Prepare the whitelist:
      # Extract the compressed whitelist file
      unzip example_data/yaml_files/3M-february-2018.txt.zip
  Now you should see `3M-february-2018.txt` inside the `example_data/yaml_files/` directory.

- Launch the pipeline:
      # Run with LOCAL
      nextflow run main.nf \
          -profile local \
          --input samplesheet.tsv \
          --outdir ./outputs/

      # Run with SLURM
      nextflow run main.nf \
          -profile slurm \
          --input samplesheet.tsv \
          --outdir ./outputs/

      # Run with GCP
      nextflow run main.nf \
          -profile google \
          --input samplesheet.tsv \
          --outdir gs://igvf-pertub-seq-pipeline-data/scratch/   # Path to your GCP bucket
- The pipeline generates two directories upon completion:
  - `pipeline_outputs`: Contains all analysis results
  - `pipeline_dashboard`: Houses interactive visualization reports
If you encounter any issues during testing:
- Review log files and intermediate results in the `work/` directory
- Verify that all input files meet the required format specifications
For additional support or questions, please open an issue on our GitHub repository.
We thank the following people for their extensive assistance in the development of this pipeline:
If you would like to contribute to this pipeline, please see the contributing guidelines.
For further information or help, don't hesitate to get in touch on the Slack `#fg-crispr` channel.
An extensive list of references for the tools used by the pipeline can be found in the `CITATIONS.md` file.

You can cite the `nf-core` publication as follows:
The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.