Generative 3D Point Cloud Counterfactuals: Explaining 3D Semantic Segmentation through Generative AI-based Counterfactuals
This repository contains the implementation of Explaining 3D Semantic Segmentation through Generative AI-based Counterfactuals, a generative framework for counterfactual explanations in 3D semantic segmentation.
The framework introduces a latent-space navigation approach to generate counterfactuals for 3D point cloud models. By leveraging autoencoder-based latent representations, UMAP embeddings, and graph traversal (Delaunay triangulation + shortest-path search), the method produces counterfactual point clouds that remain both geometrically plausible and semantically meaningful.
Figure 1: Graphical Abstract - Schematic diagram of the proposed framework for generating counterfactuals in 3D PCD.
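As a rough illustration of the latent-space traversal described above (not the repository's code), the sketch below builds a Delaunay graph over a 2D embedding and extracts a geodesic shortest path between two shapes. The UMAP embedding is stubbed with random points for brevity (in practice `umap-learn` would produce it), and all variable names are hypothetical:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
emb = rng.random((50, 2))  # stand-in for a UMAP embedding of latent codes

# Build a graph whose edges are the Delaunay triangulation of the embedding,
# weighted by Euclidean distance between embedded points.
tri = Delaunay(emb)
n = len(emb)
adj = lil_matrix((n, n))
for simplex in tri.simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        w = np.linalg.norm(emb[a] - emb[b])
        adj[a, b] = w
        adj[b, a] = w

# Geodesic (shortest-path) traversal from a source shape to a target shape;
# the paper additionally weights edges by plausibility and classifier confidence.
dist, pred = shortest_path(adj.tocsr(), directed=False, return_predecessors=True)

def path(src, dst):
    """Reconstruct the node sequence src -> dst from the predecessor matrix."""
    nodes = [dst]
    while nodes[-1] != src:
        nodes.append(pred[src, nodes[-1]])
    return nodes[::-1]

p = path(0, 10)  # indices along which latent codes would be decoded
```

Each node on the returned path would then be decoded by the autoencoder into an intermediate point cloud, yielding a smooth morph from the original shape to the counterfactual.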
- PointNet++ Autoencoder for compact latent representations.
- Latent space counterfactual generation with:
- UMAP projection for semantic neighborhood preservation.
- Graph construction via Delaunay triangulation.
- Geodesic shortest-path search guided by plausibility + classifier confidence.
- Semantic segmentation classifier (PointNet++) to validate plausibility.
- Interpretability metrics:
- Similarity - closeness of the counterfactual to the original point cloud.
- Validity - strength of the change in the classifier's prediction.
- Sparsity - extent of local change, measured against an epsilon threshold.
- Supports ShapeNet classes: car, bus, boat, tower, motorcycle, airplane.
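A minimal sketch of how the three interpretability metrics could be computed (the exact definitions used in the repository may differ; function names, the Chamfer-distance choice for similarity, and the point-correspondence assumption in `sparsity` are this sketch's assumptions):

```python
import numpy as np

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    # a proxy for Similarity (lower = closer to the original cloud).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def validity(labels_orig, labels_cf):
    # Fraction of points whose predicted class changed.
    return float(np.mean(labels_orig != labels_cf))

def sparsity(orig, cf, eps=0.01):
    # Fraction of points displaced by more than eps
    # (assumes one-to-one point correspondence between the clouds).
    return float(np.mean(np.linalg.norm(orig - cf, axis=1) > eps))
```

For example, a counterfactual that flips the predicted label of a third of the points while moving only a quarter of them beyond `eps` would score `validity ≈ 0.33` and `sparsity = 0.25`.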
source/
├── main_syn.py                    # Main script for counterfactual experiments
│
├── generative/                    # Generative autoencoder source code
│   ├── dataset.py
│   ├── model.py                   # Autoencoder (PointNet++ encoder + decoder)
│   ├── pointnet_utils.py
│   ├── train.py
│   ├── train_utils.py
│   └── preds/                     # Example counterfactual outputs (.pcd)
│
├── semseg/                        # Semantic segmentation classifier
│   ├── dataset.py
│   ├── model.py                   # PointNet++ semantic segmentation model
│   ├── train.py
│   └── test.py
│
└── scripts/
    └── instance_preprocessing.py  # ShapeNet preprocessing
