SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
Create a model that can return an explanation for its decision.
Part of the experiments in "A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations"
Visualize BERT's self-attention layers on text classification tasks
PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs"
Tool developed as part of my master's thesis in AI, focused on giving medical users tools to explain existing data sets.
Code for NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Transfer Explainability via Layer-Wise Relevance Propagation Demo for AAAI
Explanations in Multi-Model Planning
Delineating Causality in Neural Networks
Explain variable influence in black-box model through pattern mining
📺 A Python library for pruning and visualizing the structure and weights of Keras neural networks
Visual Explanation using Uncertainty based Class Activation Maps
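The entry above builds on Class Activation Maps (CAM). As a minimal sketch of the vanilla CAM computation that uncertainty-based variants extend (not that repo's code), the snippet below weights each final-convolution channel's spatial map by the predicted class's classifier weight for that channel; feature values and weights are made-up placeholders.

```python
import numpy as np

# Pretend final conv features for one image: 8 channels of a 7x7 spatial map,
# with a linear classifier over global-average-pooled features (3 classes).
rng = np.random.default_rng(1)
features = np.abs(rng.normal(size=(8, 7, 7)))   # C x H x W activations
W_cls = rng.normal(size=(3, 8))                 # class weights over channels

pooled = features.mean(axis=(1, 2))             # global average pooling -> (8,)
logits = W_cls @ pooled
c = int(logits.argmax())                        # explain the predicted class

# CAM: weight each channel's spatial map by that channel's class weight.
cam = np.tensordot(W_cls[c], features, axes=1)  # (7, 7) relevance map
cam = np.maximum(cam, 0.0)                      # keep positive evidence only
cam /= cam.max() + 1e-9                         # normalize to [0, 1] for display
```

In a real network the map would then be upsampled to the input resolution and overlaid on the image as a heatmap.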
Experiments in explainable AI with exact optimization tools on the MNIST image dataset.
Analysis of 'Attention is not Explanation' performed for the University of Amsterdam's Fairness, Accountability, Confidentiality and Transparency in AI Course Assignment, January 2020
Source code for "LEMNA: Explaining Deep Learning Based Security Applications"
CAIPI: explanatory interactive learning that turns LIME explanations into user trust
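CAIPI is built on LIME-style local surrogates. As a hedged, self-contained sketch of that underlying idea (not CAIPI itself or the lime library), the snippet below perturbs an input by masking features, queries a hypothetical black box, and fits a proximity-weighted linear model whose coefficients serve as per-feature influence scores.

```python
import numpy as np

def lime_weights(predict, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (LIME-style).
    `predict` maps a batch of inputs to scalar scores."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, x.size))  # features to keep
    samples = masks * x                                   # zero out dropped ones
    y = predict(samples)
    dist = 1.0 - masks.mean(axis=1)                       # fraction dropped
    w = np.exp(-(dist ** 2) / kernel_width ** 2)          # proximity kernel
    # Weighted least squares: solve (M^T W M) beta = M^T W y
    A = masks * w[:, None]
    beta, *_ = np.linalg.lstsq(A.T @ masks, A.T @ y, rcond=None)
    return beta                                           # per-feature influence

# Hypothetical black box: fixed linear model with one interaction term.
def black_box(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]

x = np.array([1.0, 1.0, 1.0, 1.0])
print(lime_weights(black_box, x))  # feature 0 should get the largest weight
```

CAIPI then goes a step further: it shows such explanations to a user and folds the user's corrections back into training, which is what the "trust" in the tagline refers to.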
What factors influence the predictions of deep learning algorithms?
A python library for eXplainable Reinforcement Learning (XRL) based on the concept of interestingness elements.