# model-robustness

Here are 20 public repositories matching this topic...

Noise Injection Techniques provides a comprehensive exploration of methods for making machine learning models more robust to noisy real-world data. The repository explains and demonstrates Gaussian noise, dropout, mixup, masking, adversarial noise, and label smoothing, with intuitive explanations, theory, and practical code examples (a minimal sketch of two of these follows this entry).

  • Updated Nov 15, 2025
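
As a rough illustration of two of the techniques listed above, here is a minimal NumPy sketch of Gaussian-noise injection and mixup. The function names and defaults are illustrative, not taken from the repository:

```python
import numpy as np

def add_gaussian_noise(x, std=0.1, rng=None):
    """Inject zero-mean Gaussian noise into inputs (a classic robustness augmentation)."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, std, size=x.shape)

def mixup(x, y, alpha=0.2, rng=None):
    """Mixup: convex-combine random pairs of examples and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)    # mixing coefficient drawn from Beta(alpha, alpha)
    perm = rng.permutation(len(x))  # random pairing of examples within the batch
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]

# Toy usage: a batch of fake images with one-hot labels.
x = np.random.rand(32, 28, 28)
y = np.eye(10)[np.random.randint(0, 10, size=32)]
x_noisy = add_gaussian_noise(x, std=0.1)
x_mix, y_mix = mixup(x, y, alpha=0.2)
```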

This repository is an implementation of the article at https://link.springer.com/chapter/10.1007/978-3-030-72699-7_35. It uses an evolutionary strategy (specifically the NSGA-II algorithm) to tune image filter parameters so that the filtered images adversarially attack a neural network; a sketch of the optimization setup follows this entry.

  • Updated Feb 11, 2023
  • Jupyter Notebook
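
As a sketch of what such a setup can look like, here is a minimal NSGA-II run using pymoo. The two filter parameters and both objectives are toy stand-ins, not the repository's actual filters or victim model:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def victim_confidence(params):
    # Stand-in for the victim network's confidence on the true class
    # after applying image filters with these parameters.
    blur, contrast = params
    return np.exp(-0.3 * blur) / contrast

def distortion(params):
    # Stand-in for the perceptual distortion introduced by the filters.
    blur, contrast = params
    return blur ** 2 + (contrast - 1.0) ** 2

class FilterAttack(ElementwiseProblem):
    def __init__(self):
        # Two illustrative filter parameters: blur radius and contrast gain.
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([0.0, 0.5]), xu=np.array([5.0, 2.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        # Minimize both: confidence on the true class and visual distortion.
        out["F"] = [victim_confidence(x), distortion(x)]

res = minimize(FilterAttack(), NSGA2(pop_size=50), ("n_gen", 40), seed=1)
print(res.F[:5])  # Pareto front: confidence vs. distortion trade-offs
```

NSGA-II suits this problem because the attack has two competing goals: fooling the classifier while keeping the filtered image visually close to the original.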

Production-grade demonstration of AI safety mechanisms: Out-of-Distribution detection and adversarial robustness testing for NLP models using DistilBERT, PyTorch, and TextAttack (an OOD-scoring sketch follows this entry).

  • Updated Nov 20, 2025
  • Python
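
As an illustration of one common OOD baseline for such a setup, here is a maximum-softmax-probability score over a DistilBERT classifier. The checkpoint and threshold are assumptions for the sketch, not necessarily what the repository uses:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: a public DistilBERT sentiment model from the Hub.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def msp_score(text):
    """Maximum softmax probability: a standard OOD baseline (low score -> likely OOD)."""
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    return probs.max().item()

threshold = 0.7  # illustrative; in practice tuned on held-out in-distribution data
for text in ["The movie was wonderful.", "Solve x**2 + 3x + 2 = 0 for x."]:
    score = msp_score(text)
    print(f"{score:.3f}", "OOD" if score < threshold else "in-distribution")
```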

This repository contains the solution to Assignment 1 of the Deep Learning course at the University of Tehran, focusing on image classification, adversarial attacks, and defensive techniques (an FGSM sketch follows this entry).

  • Updated Nov 17, 2025
  • Jupyter Notebook
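
The assignment specifics aren't given here; as an illustration of the kind of attack such coursework typically covers, below is a minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch, with a toy model standing in for the trained classifier:

```python
import torch

def fgsm_attack(model, x, y, eps=0.03, loss_fn=torch.nn.functional.cross_entropy):
    """FGSM: perturb the input one step along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demo: a linear "classifier" on flattened 8x8 inputs (stand-in for a real model).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)        # batch of images in [0, 1]
y = torch.randint(0, 10, (4,))    # ground-truth labels
x_adv = fgsm_attack(model, x, y, eps=0.03)
```

Adversarial training, one common defensive technique, simply mixes such perturbed examples back into the training batches.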

Regime-based evaluation framework for financial NLP stability. Implements chronological cross-validation, semantic drift quantification via Jensen-Shannon divergence, and multi-faceted robustness profiling. Replicates the methodology of Sun et al. (2025) with a modular, auditable Python codebase; a toy drift-quantification sketch follows this entry.

  • Updated Oct 3, 2025
  • Jupyter Notebook
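
As a simplified illustration of drift quantification via Jensen-Shannon divergence, here is a sketch over token frequency distributions. The vocabulary and data are made up, and the repository's actual measure may operate over richer semantic representations:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def semantic_drift(p_tokens, q_tokens, vocab):
    """Quantify drift between two periods as the JS divergence of token distributions."""
    def dist(tokens):
        counts = np.array([tokens.count(w) for w in vocab], dtype=float)
        return counts / counts.sum()
    # scipy returns the JS *distance* (the square root of the divergence),
    # so square it; with base=2 the divergence lies in [0, 1].
    return jensenshannon(dist(p_tokens), dist(q_tokens), base=2) ** 2

vocab = ["rally", "selloff", "volatility", "earnings"]
pre  = ["rally", "earnings", "rally", "volatility"]
post = ["selloff", "volatility", "volatility", "selloff"]
print(semantic_drift(pre, post, vocab))  # 0 = identical regimes, 1 = fully disjoint
```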
