This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023).
Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
[ICLR24] AutoVP: An Automated Visual Prompting Framework and Benchmark
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
Official project website for the AAAI 2022 paper "Stereo Neural Vernier Caliper"
Official implementation of FedGAT: Generative Autoregressive Transformers for Model-Agnostic Federated MRI Reconstruction (https://arxiv.org/abs/2502.04521)
A model-agnostic library for generating explanations of machine learning predictions, supporting diverse XAI methods like CEM and LIME.
Document-first CLI for building agent workflows in Markdown and JSON.
Robust regression algorithm that can be used for explaining black box models (Python implementation)
Post-hoc prototype-based explanations with rules for time-series classifiers
NeurIPS 2025: Graph Your Own Prompt
Robust regression algorithm that can be used for explaining black box models (R implementation)
A simple generic (TensorFlow) function that implements the MAML algorithm for regression problems, as designed by Chelsea Finn et al. (2017)
Codebase for CIKM '24 paper -- PARs: Predicate-based Association Rules for Efficient and Accurate (Model-Agnostic) Anomaly Explanation
A lightweight, post-hoc, single-pass, model-agnostic uncertainty quantification method for pretrained deep neural networks, designed for efficiency, scalability, and compatibility.
Segmented Sampling for Boundary Approximation (SSBA) generates discrete decision-boundary points for producing counterfactual explanations or bounded counterfactuals (restricted feature changes).
AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases
Open source SDKs, benchmarks, and examples for QWED - The Enterprise AI Verification Platform. Model-agnostic verification infrastructure for production AI.
Pondera is a lightweight, YAML-first framework to evaluate AI models and agents with pluggable runners and an LLM-as-a-judge.
Flare is an open-source boundary engine for LLMs, enforcing minimal relational safety against synthetic intimacy, identity fusion, and role confusion.