# ml-robustness

Here are 2 public repositories matching this topic...

[ECCV 2022] We investigated a broad range of neural network elements and developed a robust perceptual similarity metric. Our shift-tolerant perceptual similarity metric (ST-LPIPS) is consistent with human perception and is less susceptible to imperceptible misalignments between two images than existing metrics.

  • Updated May 21, 2024
  • Python
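To see why shift tolerance matters, consider that a one-pixel translation is imperceptible to a human but can dominate a naive pixel-wise distance. The following is a minimal NumPy sketch of that failure mode (it is not the ST-LPIPS implementation; the synthetic image and the choice of MSE are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # synthetic grayscale "image" for illustration

# A one-pixel horizontal shift: perceptually identical content.
shifted = np.roll(img, shift=1, axis=1)

# Pixel-wise MSE treats the shifted copy as very different.
mse_same = np.mean((img - img) ** 2)       # exactly 0.0
mse_shift = np.mean((img - shifted) ** 2)  # large despite imperceptible change

print(mse_same, mse_shift)
```

A shift-tolerant metric aims to keep the second number close to the first for such misalignments, matching human judgments.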

Noise Injection Techniques provides a comprehensive exploration of methods for making machine learning models more robust to noisy real-world data. The repository explains and demonstrates Gaussian noise, dropout, mixup, masking, adversarial noise, and label smoothing, with intuitive explanations, theory, and practical code examples.

  • Updated Nov 15, 2025
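Three of the techniques listed above can be sketched in a few lines of NumPy. This is a hedged illustration of the general ideas, not code from the repository; the function names and default parameters (`sigma`, `alpha`, `eps`) are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_noise(x, sigma=0.1):
    """Additive Gaussian noise: returns x + N(0, sigma^2) per element."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup: a convex combination of two examples and their labels,
    with the mixing weight drawn from Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def label_smoothing(one_hot, eps=0.1):
    """Replace hard 0/1 targets with eps-smoothed probabilities."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k
```

For example, `label_smoothing(np.eye(3)[0])` turns the hard target `[1, 0, 0]` into a softened distribution that still sums to 1, which discourages the model from becoming overconfident.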
