Sensitivity Analysis for Understanding Complex Computational Models (R, updated Apr 18, 2016)
This repository contains code, information, and datasets for the interpretable-models project titled "Model Agnostic Methods for Interpretable Machine Learning". The abstract is available at https://docs.google.com/document/d/1k2-beHD4YQxXpH8ExUM2Gd-yE5VqdluhiCsUIO3czRM/edit?usp=sharing
Optimizing Mind static website v1
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions
Create a model that can return an explanation for its decision.
Part of the experiments in "A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations"
This notebook can be downloaded, tested, and modified with Google Colab, and aims to explain how a Decision Tree is built. It is accompanied by a Medium article.
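The notebook's topic can be illustrated with a minimal sketch (not taken from the notebook itself; all names are illustrative): a decision tree grows by choosing, at each node, the split that minimizes the weighted Gini impurity of the resulting children.

```python
# Minimal sketch of one decision-tree split: pick the threshold on a
# single numeric feature that minimizes weighted Gini impurity.

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Return (threshold, weighted_gini) of the best binary split x <= t."""
    n = len(xs)
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must put samples on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (t, score)
    return best

# Toy dataset that is perfectly separable at x = 3
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # -> (3, 0.0): both children are pure
```

A full tree builder would simply recurse on the left and right subsets until the nodes are pure or a depth limit is reached.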
Visualize BERT's self-attention layers on text classification tasks
PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs"
Repository for Tang et al., bioRxiv 454793 (2018)
🕵️‍♂️ Interpreting Convolutional Neural Network (CNN) results.
Predictive models in Python for Explainable AI
Assessing external car damage, i.e., severity and location, using Deep Learning, and deploying it with Flask and TensorFlow Serving.
Curated list of DL Resources [Updated 2019]
Explaining black-box predictions using Python libraries.
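One widely used model-agnostic technique behind such libraries is permutation importance: shuffle one feature column and measure how much the model's accuracy drops. A minimal stdlib-only sketch (all names here are illustrative, not a specific library's API):

```python
# Minimal sketch of model-agnostic permutation importance.
# `model` is any black-box function mapping a feature row to a prediction.
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Mean accuracy drop when the given feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# Toy black box: predicts from feature 0 only and ignores feature 1
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 9], [0.2, 1], [0.9, 5], [0.8, 3]]
y = [0, 0, 1, 1]
print(permutation_importance(model, X, y, feature=0))  # positive drop
print(permutation_importance(model, X, y, feature=1))  # -> 0.0 (ignored)
```

Because the technique only needs a predict function, it works unchanged for any model, which is what "model-agnostic" means in this context.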
A fast Tsetlin Machine implementation employing bit-wise operators, with MNIST demo.