OpenXAI : Towards a Transparent Evaluation of Model Explanations
Updated Aug 17, 2024 - JavaScript
Love2D LSP (VS Code / Neovim / Zed / etc.) extension for live coding and live variable tracking
Editing machine learning models to reflect human knowledge and values
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
A user interface to interpret machine learning models.
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Visually compare fill-in-the-blank LLM prompts to uncover learned biases and associations!
Online exploration of memory reduction strategies of a DRL agent trained to solve a navigation task on ViZDoom
ir_explain: a Python toolkit of explainable IR methods
Visual analytics approach presented in the paper "Visual Analytics Tool for the Interpretation of Hidden States in Recurrent Neural Networks" (VCIBA, 2021).
Web based interpretability tool for LLMs using Huggingface and Inseq
A web user interface for the OncoText Pathology System (https://github.com/yala/OncoText)
Neural network inspector web application source repository