# interpretable-models

Here are 8 public repositories matching this topic...

An interpretable system that models the future of work as an equilibrium under AI-driven forces. Instead of predicting job loss, it decomposes workforce disruption into automation pressure, adaptability, skill transferability, demand, and AI augmentation to explain stability, tension, and transition paths by 2030.

  • Updated Dec 13, 2025
  • Python
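
To give a sense of the factor decomposition this entry describes, here is a minimal, hypothetical Python sketch. The class name, factor weights, thresholds, and the `disruption_balance` / `classify` functions are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of an equilibrium-style decomposition; names, weights,
# and thresholds are assumptions for illustration, not the repo's real API.
from dataclasses import dataclass


@dataclass
class OccupationFactors:
    """Per-occupation scores in [0, 1] for each disruption force."""
    automation_pressure: float
    adaptability: float
    skill_transferability: float
    demand: float
    ai_augmentation: float


def disruption_balance(f: OccupationFactors) -> float:
    """Net disruption: automation pressure minus the forces that absorb it."""
    absorbing = (f.adaptability + f.skill_transferability
                 + f.demand + f.ai_augmentation) / 4.0
    return f.automation_pressure - absorbing


def classify(f: OccupationFactors, band: float = 0.1) -> str:
    """Map the balance to a coarse state: stability, tension, or transition."""
    b = disruption_balance(f)
    if b > band:
        return "transition"
    if b < -band:
        return "stable"
    return "tension"


if __name__ == "__main__":
    clerk = OccupationFactors(
        automation_pressure=0.8,
        adaptability=0.4,
        skill_transferability=0.5,
        demand=0.3,
        ai_augmentation=0.4,
    )
    print(classify(clerk), round(disruption_balance(clerk), 3))
```

Because every factor stays visible in the output, the classification can be traced back to the individual forces rather than a single opaque prediction.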

An interpretable early-warning engine that detects academic instability before grades collapse. Instead of predicting performance, it models pressure accumulation, buffer strength, and transition risk using attendance, engagement, and study load to explain fragility and identify high-leverage interventions.

  • Updated Dec 14, 2025
  • Python
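
The accumulation-and-buffer idea in this entry can be sketched in a few lines of Python. The signal names, update rule, and decay constant below are assumptions chosen for illustration; they are not taken from the repository's implementation.

```python
# Hypothetical sketch of pressure accumulation vs. buffer strength; the update
# rule and constants are illustrative assumptions, not the repo's actual code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class WeeklySignals:
    """Observed signals for one week, each normalized to [0, 1]."""
    attendance: float
    engagement: float
    study_load: float


@dataclass
class StudentState:
    pressure: float = 0.0                      # accumulated, slowly decaying strain
    history: List[float] = field(default_factory=list)

    def update(self, s: WeeklySignals, decay: float = 0.8) -> None:
        # Strain grows when study load outpaces attendance and engagement.
        strain = max(0.0, s.study_load - 0.5 * (s.attendance + s.engagement))
        self.pressure = decay * self.pressure + strain
        self.history.append(self.pressure)

    def buffer_strength(self, s: WeeklySignals) -> float:
        # Attendance and engagement act as a buffer that absorbs pressure.
        return 0.5 * (s.attendance + s.engagement)

    def transition_risk(self, s: WeeklySignals) -> float:
        # Risk rises as accumulated pressure exceeds the current buffer.
        return max(0.0, self.pressure - self.buffer_strength(s))


if __name__ == "__main__":
    student = StudentState()
    weeks = [
        WeeklySignals(attendance=0.9, engagement=0.8, study_load=0.6),
        WeeklySignals(attendance=0.7, engagement=0.5, study_load=0.9),
        WeeklySignals(attendance=0.5, engagement=0.3, study_load=1.0),
    ]
    for w in weeks:
        student.update(w)
        print(round(student.transition_risk(w), 3))
```

In this toy run the risk stays at zero while the buffer covers the accumulated strain and only climbs in the final week, which is the kind of early-warning signal the entry refers to.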

🤖 Analyze the future of work with the Workforce Disruption Equilibrium Engine, a system for understanding job changes in an AI-driven world.

  • Updated Feb 10, 2026
  • Python
