Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions for handling real-world datasets and workflows.
An interpretable framework for inferring nonlinear multivariate Granger causality based on self-explaining neural networks.
An interpretable system that models the future of work as an equilibrium under AI-driven forces. Rather than predicting job loss, it decomposes workforce disruption into automation pressure, adaptability, skill transferability, demand, and AI augmentation to explain stability, tension, and transition paths through 2030.
An interpretable early-warning engine that detects academic instability before grades collapse. Instead of predicting performance, it models pressure accumulation, buffer strength, and transition risk using attendance, engagement, and study load to explain fragility and identify high-leverage interventions.
Code for Surgical Skill Assessment via Video Semantic Aggregation (MICCAI 2022)
Comprehensible Convolutional Neural Networks via Guided Concept Learning
📊 Detect academic fragility early with this analytics engine that identifies instability before grades reveal the problem.
🤖 Analyze the future of work with the Workforce Disruption Equilibrium Engine, a system for understanding job changes in an AI-driven world.