A Python package that evaluates the topological accuracy of a predicted neuron segmentation by comparing it to a set of ground truth skeletons (i.e., graphs). Topological errors (e.g., splits and merges) are detected by examining skeleton edges and checking whether the corresponding nodes belong to the same object in the segmentation. Once the accuracy of each edge has been determined, several skeleton-based metrics are computed to quantify the topological accuracy of the segmentation.
Note: This repository is an implementation of the skeleton-based metrics described in "High-Precision Automated Reconstruction of Neurons with Flood-filling Networks".
The pipeline for computing skeleton metrics consists of three main steps:
1. Label Graphs: Nodes in ground truth graphs are labeled with segmentation IDs.
2. Error Detection: Compare labels of neighboring nodes to detect mistakes.
3. Compute Metrics: Update graph structure by removing omit nodes and compute skeleton-based metrics.
Figure: Visualization of the skeleton metric computation pipeline; see the Method section for a description of each step.
The process starts with a collection of ground truth graphs, each stored as an individual SWC file, where the "xyz" attribute represents voxel coordinates in an image. Each ground truth graph is loaded and represented as a custom NetworkX graph with these coordinates as a node-level attribute. The nodes of each graph are then labeled with their corresponding segment IDs from the predicted segmentation.
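As a rough illustration of this labeling step (a minimal sketch, not the package's actual implementation), suppose the segmentation is a dense 3D label array and each node stores its voxel coordinate in an "xyz" attribute; the hypothetical label_nodes below simply reads the segment ID under each node:

```python
import networkx as nx
import numpy as np


def label_nodes(graph: nx.Graph, segmentation: np.ndarray) -> None:
    # Attach the segment ID under each node's voxel coordinate as a "label" attribute.
    # The indexing order (x, y, z vs. z, y, x) depends on how the image was read.
    for node, attrs in graph.nodes(data=True):
        x, y, z = attrs["xyz"]
        graph.nodes[node]["label"] = int(segmentation[x, y, z])
```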
Figure: On the left, ground truth graphs are superimposed on a segmentation where colors represent segment IDs. On the right, the nodes of the graphs have been labeled with the corresponding segment IDs.
Figure: From top to bottom: correct edge (nodes have same segment ID), omit edge (at least one node does not have a segment ID), split edge (nodes have different segment IDs), merged edge (segment intersects with multiple graphs).
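In code, the edge categories in the figure above can be sketched as follows. This is a hedged illustration rather than the package's implementation: it assumes node labels were assigned as in the previous sketch and that a label of 0 means the node has no segment ID (e.g., it falls on background). Merges are detected separately by checking whether a single segment ID intersects more than one ground truth graph:

```python
def classify_edge(graph, i, j):
    # Return "omit", "correct", or "split" for the edge (i, j).
    label_i = graph.nodes[i]["label"]
    label_j = graph.nodes[j]["label"]
    if label_i == 0 or label_j == 0:
        return "omit"      # at least one node has no segment ID
    if label_i == label_j:
        return "correct"   # both nodes fall in the same predicted segment
    return "split"         # nodes fall in different predicted segments


def find_merged_segments(labeled_graphs):
    # Segment IDs that intersect more than one ground truth graph (merge candidates).
    first_seen, merged = dict(), set()
    for graph_id, graph in labeled_graphs.items():
        for _, attrs in graph.nodes(data=True):
            label = attrs["label"]
            if label != 0 and first_seen.setdefault(label, graph_id) != graph_id:
                merged.add(label)
    return merged
```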
Lastly, we compute the following skeleton-based metrics (a rough sketch of how they follow from the labeled edges is given after the list):
- # Splits: Number of connected components (minus 1) in a ground truth graph after removing omit nodes.
- # Merges: Number of ground truth graphs that contain at least one merge.
- Omit Edge Ratio: Proportion of omitted edges.
- Split Edge Ratio: Proportion of split edges.
- Merged Edge Ratio: Proportion of merged edges.
- Edge Accuracy: Proportion of edges that are correct.
- Expected Run Length (ERL): Expected run length of ground truth graph after removing omit nodes.
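The per-graph sketch below shows how these quantities could be derived from the labeled edges. It is not the package's exact computation: the ERL shown uses the common path-length-weighted definition, assumes an optional "length" attribute on each edge, and omits the cross-graph bookkeeping needed for # Merges and the merged edge ratio:

```python
import networkx as nx


def compute_metrics(graph):
    # Count edge outcomes by comparing the "label" attribute of each edge's endpoints.
    counts = {"correct": 0, "omit": 0, "split": 0}
    for i, j in graph.edges:
        label_i, label_j = graph.nodes[i]["label"], graph.nodes[j]["label"]
        if label_i == 0 or label_j == 0:
            counts["omit"] += 1
        elif label_i == label_j:
            counts["correct"] += 1
        else:
            counts["split"] += 1
    n_edges = max(graph.number_of_edges(), 1)

    # Remove omit nodes (label 0), then count connected components to get # Splits.
    kept = [n for n, attrs in graph.nodes(data=True) if attrs["label"] != 0]
    subgraph = graph.subgraph(kept)
    n_splits = max(nx.number_connected_components(subgraph) - 1, 0)

    # ERL: path-length-weighted mean run length over the remaining components.
    runs = [
        sum(subgraph.edges[e].get("length", 1) for e in subgraph.subgraph(c).edges)
        for c in nx.connected_components(subgraph)
    ]
    total_run_length = sum(runs)
    erl = sum(r * r for r in runs) / total_run_length if total_run_length else 0.0

    return {
        "# splits": n_splits,
        "omit edge ratio": counts["omit"] / n_edges,
        "split edge ratio": counts["split"] / n_edges,
        "edge accuracy": counts["correct"] / n_edges,
        "expected run length": erl,
    }
```

In the actual package, these per-graph statistics are aggregated across all ground truth graphs by SkeletonMetric, as shown in the example below.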
To use the software, run the following command from the root directory:
pip install -e .
Here is a simple example of evaluating a predicted segmentation.
from segmentation_skeleton_metrics.skeleton_metric import SkeletonMetric
from segmentation_skeleton_metrics.utils.img_util import TiffReader


def evaluate():
    segmentation = TiffReader(segmentation_path)
    skeleton_metric = SkeletonMetric(
        groundtruth_pointer,
        segmentation,
        fragments_pointer=fragments_pointer,
        output_dir=output_dir,
    )
    full_results, avg_results = skeleton_metric.run(results_path)


if __name__ == "__main__":
    # Initializations
    output_dir = "./"
    segmentation_path = "./pred_labels.tif"
    fragments_pointer = "./pred_swcs.zip"
    groundtruth_pointer = "./target_swcs.zip"
    results_path = f"{output_dir}/results.xls"

    # Run
    evaluate()
Figure: Example of printouts generated after running evaluation.
Note: This Python package can also be used to evaluate a segmentation in which split mistakes have been corrected.
For any inquiries, feedback, or contributions, please do not hesitate to contact us. You can reach us via email at anna.grim@alleninstitute.org or connect on LinkedIn.
segmentation-skeleton-metrics is licensed under the MIT License.