This codebase provides a PyTorch implementation of:
Image-based Outlier Synthesis With Training Data
Sudarshan Regmi
Out-of-distribution (OOD) detection is critical to ensure the safe deployment of deep learning models in critical applications. Deep learning models can often misidentify OOD samples as in-distribution (ID) samples. This vulnerability worsens in the presence of spurious correlation in the training set. Likewise, in fine-grained classification settings, detection of fine-grained OOD samples becomes inherently challenging due to their high similarity to ID samples. However, current research on OOD detection has instead focused largely on relatively easier (conventional) cases. Even the few recent works addressing these challenging cases rely on carefully curated or synthesized outliers, ultimately requiring external data. This motivates our central research question: "Can we innovate an OOD detection training framework for fine-grained and spurious settings **without requiring any external data at all?**" In this work, we present a unified **A**pproach to **S**purious, fine-grained, and **C**onventional **OOD D**etection (**ASCOOD**) that eliminates the reliance on external data. First, we synthesize virtual outliers from ID data by approximating the destruction of invariant features. Specifically, we propose to add gradient attribution values to ID inputs to disrupt invariant features while amplifying the true-class logit, thereby synthesizing challenging near-manifold virtual outliers. Then, we simultaneously incentivize ID classification and predictive uncertainty towards virtual outliers. For this, we further propose to leverage standardized features with z-score normalization. ASCOOD effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes. Extensive experiments across **7** datasets and comparisons with **30+** methods demonstrate the merit of ASCOOD in spurious, fine-grained and conventional settings.
Check other works:
Follow the OpenOOD official instructions to complete the setup:
pip install git+https://github.com/Jingkang50/OpenOOD
- Spurious setting:
  - Waterbirds (spurious correlation ~ 0.9)
  - CelebA (spurious correlation ~ 0.8)
- Fine-grained setting:
  - Car
  - Aircraft
- Conventional setting:
  - CIFAR-10
  - CIFAR-100
  - ImageNet-100
Visit Google Drive for datasets and checkpoints.
You may download necessary datasets and checkpoints with the following command:
bash scripts/download/download_ascood.sh
These datasets are adapted from OpenOOD, Spurious_OOD and MixOE.
Use the following scripts for training and evaluating the ASCOOD model:
bash scripts/ood/ascood/waterbirds_train_ascood.sh
bash scripts/ood/ascood/waterbirds_test_ascood.sh
Other similar scripts are available in the scripts/ood/ascood folder.
Example of a training script (e.g. scripts/ood/ascood/waterbirds_train_ascood.sh):
python main.py \
--config configs/datasets/waterbird/waterbird.yml \
configs/networks/ascood_net.yml \
configs/pipelines/train/train_ascood.yml \
configs/preprocessors/ascood_preprocessor.yml \
--network.backbone.name resnet18_224x224 \
--network.backbone.pretrained True \
--network.backbone.checkpoint ./results/pretrained_weights/resnet18-f37072fd.pth \
--optimizer.lr 0.01 \
--optimizer.num_epochs 30 "$@"
The ASCOOD model uses these hyperparameters: p_inv, ood_type, alpha_min, alpha_max, w, sigma
- p_inv : fraction of pixels treated as invariant ones (e.g. 0.1)
- ood_type : type of synthesized outlier (e.g. gradient, shuffle, gaussian)
- w : weight of the uncertainty loss (e.g. 1.0)
- sigma : sigma of the feature representation (e.g. 0.5)
- alpha_min: final value of alpha (e.g. 30.0)
- alpha_max: initial value of alpha (e.g. 300.0) [alpha is linearly decreased from alpha_max to alpha_min over the course of the training epochs.]
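The linear alpha schedule described above can be sketched as follows (the function name is an assumption; only the linear decrease from alpha_max to alpha_min over the training epochs is stated in this README):

```python
def alpha_at_epoch(epoch, num_epochs, alpha_min=30.0, alpha_max=300.0):
    """Illustrative sketch: alpha decreases linearly from alpha_max at
    the first epoch (epoch 0) to alpha_min at the last epoch."""
    if num_epochs <= 1:
        return alpha_min
    frac = epoch / (num_epochs - 1)  # 0.0 at the first epoch, 1.0 at the last
    return alpha_max + frac * (alpha_min - alpha_max)
```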
Training arguments can be passed as:
bash scripts/ood/ascood/waterbirds_train_ascood.sh --trainer.trainer_args.ood_type gradient
[--optimizer.fc_lr_factor is set to 1.0 everywhere except in CIFAR-100, where it is adjusted to 0.05.]
Note: We use configs/preprocessors/ascood_preprocessor.yml across all experiments for all methods.
Example of an inference script (e.g. scripts/ood/ascood/waterbirds_test_ascood.sh):
python scripts/eval_ood.py \
--id-data waterbirds \
--wrapper-net ASCOODNet \
--root ./results/waterbird_ascood_net_ascood_e30_lr0.01_w0.1_p0.1_otype_gradient_nmg_30.0_300.0_default \
--postprocessor odin --save-score --save-csv
Use the following scripts for inference using the i-ODIN postprocessor:
bash scripts/ood/iodin/cifar10_test_ood_iodin.sh
bash scripts/ood/iodin/cifar100_test_ood_iodin.sh
bash scripts/ood/iodin/imagenet200_test_ood_iodin.sh
bash scripts/ood/iodin/imagenet_test_ood_iodin.sh
Please see the openoodv1.5_results folder for OpenOOD v1.5 benchmark results.
Pre-trained models are available at the links below:
- Waterbirds [Google Drive]: ResNet-18 backbone fine-tuned on the Waterbirds dataset with ASCOOD across 3 trials.
- CelebA [Google Drive]: ResNet-18 backbone fine-tuned on the CelebA dataset with ASCOOD across 3 trials.
- Car [Google Drive]: ResNet-50 backbone fine-tuned on the Car dataset with ASCOOD across 3 trials.
- Aircraft [Google Drive]: ResNet-50 backbone fine-tuned on the Aircraft dataset with ASCOOD across 3 trials.
- CIFAR-10 [Google Drive]: ResNet-18 backbone trained on CIFAR-10 with ASCOOD across 3 trials.
- CIFAR-100 [Google Drive]: ResNet-18 backbone trained on CIFAR-100 with ASCOOD across 3 trials.
- Fine-grained, Spurious and Conventional settings:
- ODIN vs. i-ODIN:
- Comparison with OE methods:
- Ablation studies
@misc{regmi2024goingconventionalooddetection,
title={Going Beyond Conventional OOD Detection},
author={Sudarshan Regmi},
year={2024},
eprint={2411.10794},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.10794},
}
This codebase builds upon OpenOOD.





