Virtual Staining Dataset (vs_dataset)

Overview

This DeepTrackAI repository replicates part of the In Silico Labeling Dataset, available from the in-silico-labeling GitHub Repository and described in Christiansen et al., Cell, 2018.

These images were used for developing models that predict virtual staining of biological samples from brightfield images.

From the original dataset, this repository includes only the folders named Rubin/scott_1_0, corresponding to human motor neurons (Condition A).

Each field of view contains:

  • Brightfield images: a z-stack of 13 images acquired at different focal planes (RGB, identical content in all three channels).
  • Fluorescence images: spatially coregistered with the brightfield images, showing:
    • Hoechst stain — nuclei (blue)
    • Anti-TuJ1 stain — neurons (green)
  • Predicted fluorescence images: generated by virtual staining models.
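The 13-plane brightfield z-stack can be assembled into a single volume for model input. A minimal sketch, assuming the planes have already been loaded as `(H, W, 3)` uint8 arrays (the helper name `stack_brightfield` and the synthetic demo data are illustrative, not part of the dataset):

```python
import numpy as np

def stack_brightfield(planes):
    """Stack per-plane RGB images into a (Z, H, W) single-channel volume.

    Each plane is an (H, W, 3) uint8 array; since all three RGB channels
    hold identical content, only channel 0 is kept.
    """
    return np.stack([p[..., 0] for p in planes], axis=0)

# Demo with synthetic planes standing in for the 13 z-depth images.
planes = [np.full((4, 4, 3), z, dtype=np.uint8) for z in range(13)]
volume = stack_brightfield(planes)
print(volume.shape)  # (13, 4, 4)
```

Dropping the redundant RGB channels early keeps the training volume a third of the size with no information loss.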

Summary

  • Number of fields of view: 25 (22 for training, 3 for testing)
  • Number of images per field of view: 13 brightfield images + 1 fluorescence image + 1 predicted fluorescence image
  • Image size: variable, depending on acquisition
  • Image format: 8-bit per channel RGB PNG

The filenames contain metadata sufficient to identify the image contents.
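The counts above imply a fixed number of files per split, which is useful as a download sanity check. A small sketch, assuming each image listed in the summary is stored as one PNG file:

```python
# Fields of view per split, from the summary above.
fovs = {"train": 22, "test": 3}

# 13 brightfield z-planes + 1 fluorescence + 1 predicted fluorescence.
per_fov = 13 + 1 + 1

expected = {split: n * per_fov for split, n in fovs.items()}
print(expected)  # {'train': 330, 'test': 45}
```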


Original Source

If you use this dataset in your research, please follow the licensing requirements and properly attribute the original authors.


Dataset Structure

/vs_dataset  
  ├── train/          # Training set (fields of view split into modality/z-depth files)  
  │   ├── <filename_1>.png  
  │   ├── <filename_2>.png  
  │   └── ...  
  └── test/           # Test set (same structure as training set)  
      ├── <filename_1>.png  
      ├── <filename_2>.png  
      └── ...

Each file corresponds to one modality or z-plane from a field of view. Filenames encode metadata including laboratory, experimental condition, acquisition date, well position, z-depth, imaging channel, mask flag, and image type.
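The exact filename grammar is not documented here, so any parser must be checked against the actual files. As a hypothetical sketch only, assuming underscore-separated tokens in the field order listed above (the `FIELDS` order and the example filename are both invented for illustration):

```python
from pathlib import Path

# Hypothetical field order; verify against real filenames before relying on it.
FIELDS = ["lab", "condition", "date", "well", "z_depth", "channel",
          "is_mask", "image_type"]

def parse_name(path):
    """Split an underscore-delimited filename stem into named metadata fields."""
    tokens = Path(path).stem.split("_")
    return dict(zip(FIELDS, tokens))

meta = parse_name("rubin_A_2016-01-01_B2_z05_BRIGHTFIELD_false_raw.png")
print(meta["channel"])  # BRIGHTFIELD
```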


How to Access the Data

Clone the Repository

git clone https://github.com/DeepTrackAI/vs_dataset
cd vs_dataset
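Once cloned, the flat per-split layout makes enumeration straightforward. A minimal sketch of listing a split's files (`list_split` is an illustrative helper; the demo runs against a throwaway directory standing in for the cloned repository):

```python
from pathlib import Path
import tempfile

def list_split(root, split):
    """Return sorted PNG paths for a dataset split ('train' or 'test')."""
    return sorted((Path(root) / split).glob("*.png"))

# Demo against a temporary directory mimicking the repository layout.
demo = Path(tempfile.mkdtemp())
(demo / "train").mkdir()
for name in ["b.png", "a.png"]:
    (demo / "train" / name).touch()

paths = list_split(demo, "train")
print([p.name for p in paths])  # ['a.png', 'b.png']
```

Sorting the paths gives a deterministic ordering, which matters when pairing brightfield planes with their fluorescence targets.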

Attribution

If you use this dataset, please cite both the In Silico Labeling dataset and the reference article.

Cite the dataset:

Google Research. In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images. GitHub (2018). Retrieved from github.com/google/in-silico-labeling

@misc{google2018insilico,
  title        = {In silico labeling: predicting fluorescent labels in unlabeled images},
  author       = {Google Research},
  year         = {2018},
  howpublished = {\url{https://github.com/google/in-silico-labeling}}
}

Cite the reference article:

Christiansen E, Yang S, Ando D, Javaherian A, Skibinski G, Lipnick S, Mount E, O'Neil A, Shah K, Lee A, Goyal P, Fedus W, Poplin R, Esteva A, Berndl M, Rubin L, Nelson P, Finkbeiner S. In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173(3): 792–803 (2018). DOI: 10.1016/j.cell.2018.03.040

@article{christiansen2018isl,
  title={In silico labeling: predicting fluorescent labels in unlabeled images},
  author={Christiansen, Eric M and Yang, Samuel J and Ando, D Michael and Javaherian, Ashkan and Skibinski, Gaia and Lipnick, Scott and Mount, Elliot and O’Neil, Alison and Shah, Kevan and Lee, Alicia K and Goyal, Piyush and Fedus, William and Poplin, Ryan and Esteva, Andre and Berndl, Marc and Rubin, Lee L and Nelson, Philip and Finkbeiner, Steven},
  journal={Cell},
  volume={173},
  number={3},
  pages={792--803},
  year={2018},
  publisher={Elsevier},
  doi={10.1016/j.cell.2018.03.040}
}

License

This replication dataset is shared under the Apache 2.0 License, consistent with the original licensing terms.
