
Image-based Outlier Synthesis With Training Data

This codebase provides a PyTorch implementation of:

Image-based Outlier Synthesis With Training Data
ASCOOD
i-ODIN
Sudarshan Regmi

Abstract

Out-of-distribution (OOD) detection is critical for the safe deployment of deep learning models in high-stakes applications. Deep learning models can often misidentify OOD samples as in-distribution (ID) samples. This vulnerability worsens in the presence of spurious correlations in the training set. Likewise, in fine-grained classification settings, detecting fine-grained OOD samples is inherently challenging due to their high similarity to ID samples. However, current research on OOD detection has instead focused largely on relatively easier (conventional) cases. Even the few recent works addressing these challenging cases rely on carefully curated or synthesized outliers, ultimately requiring external data. This motivates our central research question: "Can we innovate an OOD detection training framework for fine-grained and spurious settings without requiring any external data at all?" In this work, we present a unified Approach to Spurious, fine-grained, and Conventional OOD Detection (ASCOOD) that eliminates the reliance on external data. First, we synthesize virtual outliers from ID data by approximating the destruction of invariant features. Specifically, we propose to add gradient attribution values to ID inputs to disrupt invariant features while amplifying the true-class logit, thereby synthesizing challenging near-manifold virtual outliers. Then, we simultaneously incentivize ID classification and predictive uncertainty towards virtual outliers. For this, we further propose to leverage standardized features with z-score normalization. ASCOOD effectively mitigates the impact of spurious correlations and encourages capturing fine-grained attributes. Extensive experiments across 7 datasets and comparisons with 30+ methods demonstrate the merit of ASCOOD in spurious, fine-grained and conventional settings.
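As a rough illustration of the outlier-synthesis step described above, a minimal PyTorch sketch might look like the following. This is an assumption-laden sketch, not the actual trainer: the function name, the gradient-times-input attribution choice, and everything beyond "add gradient attributions to ID inputs while amplifying the true-class logit" are hypothetical; see the paper and the ASCOOD trainer code for the real method.

```python
import torch


def synthesize_virtual_outliers(model, x, y, alpha):
    """Hypothetical sketch of gradient-attribution-based outlier synthesis.

    `alpha` mirrors the hyperparameter described later in this README;
    the attribution choice (gradient * input) is an assumption here.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Gradient of the true-class logit with respect to the input pixels.
    true_logit = logits.gather(1, y.unsqueeze(1)).sum()
    grad = torch.autograd.grad(true_logit, x)[0]
    # Attribution values; adding them scaled by alpha amplifies the
    # true-class logit while perturbing (invariant) input features.
    attribution = grad * x
    return (x + alpha * attribution).detach()
```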

Check out our other works:

t2fnorm
reweightood
adascale

Follow the official OpenOOD instructions to complete the setup.

pip install git+https://github.com/Jingkang50/OpenOOD

Datasets

  • Spurious setting:
    • Waterbirds (spurious correlation ~ 0.9)
    • CelebA (spurious correlation ~ 0.8)
  • Fine-grained setting:
    • Car
    • Aircraft
  • Conventional setting:
    • CIFAR-10
    • CIFAR-100
    • ImageNet-100

Visit Google Drive for datasets and checkpoints.

You may download the necessary datasets and checkpoints with the following command:

bash scripts/download/download_ascood.sh

These datasets are adapted from OpenOOD, Spurious_OOD and MixOE.

Example Scripts for ASCOOD Training and Inference

Use the following scripts to train and run inference with the ASCOOD model:

bash scripts/ood/ascood/waterbirds_train_ascood.sh
bash scripts/ood/ascood/waterbirds_test_ascood.sh

Other similar scripts are available in the scripts/ood/ascood folder.

Example training script (e.g. scripts/ood/ascood/waterbirds_train_ascood.sh):

python main.py \
    --config configs/datasets/waterbird/waterbird.yml \
    configs/networks/ascood_net.yml \
    configs/pipelines/train/train_ascood.yml \
    configs/preprocessors/ascood_preprocessor.yml \
    --network.backbone.name resnet18_224x224 \
    --network.backbone.pretrained True \
    --network.backbone.checkpoint ./results/pretrained_weights/resnet18-f37072fd.pth \
    --optimizer.lr 0.01 \
    --optimizer.num_epochs 30 "$@"

The ASCOOD model uses these hyperparameters: p_inv, ood_type, alpha_min, alpha_max, w, sigma

  • p_inv : fraction of pixels treated as invariant (e.g. 0.1)
  • ood_type : type of synthesized outlier (e.g. gradient, shuffle, gaussian)
  • w : weight of the uncertainty loss (e.g. 1.0)
  • sigma : sigma of the feature representation (e.g. 0.5)
  • alpha_min : final value of alpha (e.g. 30.0)
  • alpha_max : initial value of alpha (e.g. 300.0) [alpha is linearly decreased from alpha_max to alpha_min over the course of the training epochs.]
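The bracketed note above amounts to a simple linear schedule. A hypothetical helper sketching it (the trainer's actual implementation may step per-iteration rather than per-epoch):

```python
def alpha_at_epoch(epoch, num_epochs, alpha_min=30.0, alpha_max=300.0):
    """Linearly decrease alpha from alpha_max to alpha_min over training.

    A sketch of the schedule described in this README; `epoch` is
    zero-indexed, so epoch 0 returns alpha_max and the last epoch
    returns alpha_min.
    """
    if num_epochs <= 1:
        return alpha_min
    t = epoch / (num_epochs - 1)  # 0.0 at the first epoch, 1.0 at the last
    return alpha_max + t * (alpha_min - alpha_max)
```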

Note: Inside the ASCOOD trainer, you may obtain the saliency map in training mode (instead of eval mode).
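For intuition, the z-score standardization of features mentioned in the abstract could look like this minimal sketch. Per-batch statistics are an assumption here, and the exact role of the `sigma` hyperparameter above is not shown; consult the trainer code for the actual normalization.

```python
import torch


def zscore_features(feats, eps=1e-5):
    """Z-score-standardize penultimate-layer features (a sketch).

    `feats` has shape (batch, dim); statistics are computed per feature
    dimension over the batch, which is an assumption of this sketch.
    """
    mean = feats.mean(dim=0, keepdim=True)
    std = feats.std(dim=0, keepdim=True)
    return (feats - mean) / (std + eps)
```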

Training arguments can be passed as:

bash scripts/ood/ascood/waterbirds_train_ascood.sh --trainer.trainer_args.ood_type gradient

[--optimizer.fc_lr_factor is set to 1.0 everywhere except for CIFAR-100, where it is adjusted to 0.05.]

Note: We use configs/preprocessors/ascood_preprocessor.yml across all experiments for all methods.

Example inference script (e.g. scripts/ood/ascood/waterbirds_test_ascood.sh):

python scripts/eval_ood.py \
   --id-data waterbirds \
   --wrapper-net ASCOODNet \
   --root ./results/waterbird_ascood_net_ascood_e30_lr0.01_w0.1_p0.1_otype_gradient_nmg_30.0_300.0_default \
   --postprocessor odin --save-score --save-csv
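The odin postprocessor used above follows the standard ODIN recipe (Liang et al., 2018): temperature-scaled softmax plus a small input perturbation. A minimal sketch, with default values taken from the original ODIN paper rather than from this repo's configs:

```python
import torch
import torch.nn.functional as F


def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Sketch of the standard ODIN score (higher => more ID-like)."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    # Perturb the input in the direction that increases the max
    # softmax probability (decreases cross-entropy w.r.t. the
    # predicted label).
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_pert = (x - epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=1)
    return probs.max(dim=1).values
```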

Example Scripts for i-ODIN inference

Use the following scripts for inference with the i-ODIN postprocessor:

bash scripts/ood/iodin/cifar10_test_ood_iodin.sh
bash scripts/ood/iodin/cifar100_test_ood_iodin.sh
bash scripts/ood/iodin/imagenet200_test_ood_iodin.sh
bash scripts/ood/iodin/imagenet_test_ood_iodin.sh

Please see the openoodv1.5_results folder for OpenOOD v1.5 benchmark results.

Pre-trained checkpoints

Pre-trained models are available at the following links:

  • Waterbirds [Google Drive]: ResNet-18 backbone fine-tuned on the Waterbirds dataset with ASCOOD across 3 trials.
  • CelebA [Google Drive]: ResNet-18 backbone fine-tuned on the CelebA dataset with ASCOOD across 3 trials.
  • Car [Google Drive]: ResNet-50 backbone fine-tuned on the Car dataset with ASCOOD across 3 trials.
  • Aircraft [Google Drive]: ResNet-50 backbone fine-tuned on the Aircraft dataset with ASCOOD across 3 trials.
  • CIFAR-10 [Google Drive]: ResNet-18 backbone trained on the CIFAR-10 dataset with ASCOOD across 3 trials.
  • CIFAR-100 [Google Drive]: ResNet-18 backbone trained on the CIFAR-100 dataset with ASCOOD across 3 trials.

Results

  • Fine-grained, Spurious and Conventional settings:

  • ODIN vs. i-ODIN:

  • Comparison with OE methods:

  • Ablation studies

Consider citing this work if you find it useful.

@misc{regmi2024goingconventionalooddetection,
      title={Going Beyond Conventional OOD Detection},
      author={Sudarshan Regmi},
      year={2024},
      eprint={2411.10794},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.10794},
}

Acknowledgment

This codebase builds upon OpenOOD.

About

Official code for CVPR'26 "Image-based Outlier Synthesis With Training Data"
