Description
Track
Reasoning Agents (Azure AI Foundry)
Project Name
Kaizen AI
GitHub Username
Repository URL
https://github.com/JonEricEubanks/kaizen-ai
Project Description
Kaizen AI – Microsoft Certification Study Coach
Kaizen AI is an AI-powered certification study coach that helps learners prepare for any Microsoft certification exam. It features Donna AI, a context-aware conversational study companion powered by Azure OpenAI, and a 6-agent multi-agent reasoning pipeline that generates personalized study plans, learning paths, and exam content.
Problem
Certification prep is fragmented. Learners juggle disconnected resources with no personalized guidance or progress tracking. There's no single tool that adapts to weak areas and keeps learners motivated.
Key Features
| Feature | Description |
|---|---|
| 6-Agent Reasoning Pipeline | Learning Path Curator → Study Plan Generator → Engagement Agent → Code Interpreter → Critic/Verifier → Readiness Assessor, with a self-reflection loop (up to 3 iterations) and parallel fan-out |
| Human-in-the-Loop | React UI confirmation dialog before executing AI-generated plans |
| MCP Exam Discovery | Fetches real exam details from Microsoft Learn via Streamable HTTP transport |
| Full Exam Content Generation | One-click pipeline creating exam records, modules, lessons, quiz questions, and reference cards in Dataverse |
| Gamification | XP, leveling, 25+ achievements, daily challenges, streaks, and leaderboards |
| Practice & Flashcards | AI question generation, flashcards, listen mode, and interactive lessons |
| Observability | OpenTelemetry tracing, Azure Content Safety guardrails, Azure AI Evaluation scoring |
| Security Hardened | API key auth, CORS allow-list, sanitized errors, SVG XSS prevention |
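The API key auth mentioned above can be sketched as follows. This is a minimal illustration, not the repo's actual implementation; the header name and `is_authorized` helper are assumptions. The key comes from the environment (never hardcoded) and is compared in constant time:

```python
import hmac
import os

def is_authorized(headers: dict) -> bool:
    """Check the request's API key against the configured secret.

    Hypothetical helper: header name and env var are illustrative.
    """
    expected = os.environ.get("AI_COACH_API_KEY", "")
    supplied = headers.get("x-api-key", "")
    # hmac.compare_digest avoids leaking key length/content via timing
    return bool(expected) and hmac.compare_digest(supplied, expected)
```

Rejecting requests when the expected key is unset (rather than treating an empty key as a match) keeps a misconfigured deployment closed by default.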
Tech Stack
Azure OpenAI · Dataverse · React · MCP · OpenTelemetry · Azure Content Safety · Azure AI Evaluation
Repository
https://github.com/JonEricEubanks/kaizen-ai
Demo Video or Screenshots
Watch Demo on YouTube
View Screenshots
Primary Programming Language
Python
Key Technologies Used
| Layer | Technology |
|---|---|
| AI Models | Azure OpenAI (GPT-4.1-mini, GPT-5, DALL-E 3) |
| Agent Framework | Microsoft Agent Framework (agent-framework-core, agent-framework-azure-ai) |
| Frontend | React 19 + TypeScript + Vite + Tailwind CSS |
| Backend | Azure Functions (Python 3.11) |
| Data | Microsoft Dataverse (13 tables) via Power Apps Code Apps |
| MCP | MCP SDK with Streamable HTTP transport to Microsoft Learn |
| Observability | OpenTelemetry (distributed tracing) |
| Safety | Azure Content Safety (input/output guardrails) |
| Evaluation | Azure AI Evaluation (automated agent quality scoring) |
Submission Type
Individual
Team Members
n/a
Submission Requirements
- My project meets the track-specific challenge requirements
- My repository includes a comprehensive README.md with setup instructions
- My code does not contain hardcoded API keys or secrets
- I have included demo materials (video or screenshots)
- My project is my own work with proper attribution for any third-party code
- I agree to the Code of Conduct
- I have read and agree to the Disclaimer
- My submission does NOT contain any confidential, proprietary, or sensitive information
- I confirm I have the rights to submit this content and grant the necessary licenses
Quick Setup Summary
- Clone the repo: `git clone https://github.com/JonEricEubanks/kaizen-ai.git`
- Install frontend: `npm install`
- Install backend: `cd ai-coach && pip install -r requirements.txt`
- Configure frontend: copy `.env.example` to `.env` and set `VITE_AI_COACH_API_KEY`
- Configure backend: copy `local.settings.json.example` to `local.settings.json` and set `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, `AZURE_OPENAI_DEPLOYMENT`, `AI_COACH_API_KEY`
- Set up Dataverse: import tables and sample data (see README)
- Run:

```
# Frontend
npm run dev

# Backend
cd ai-coach && func host start
```

Technical Highlights
| Highlight | Detail |
|---|---|
| Self-reflection convergence loop | The Critic/Verifier agent rejects low-quality outputs and loops back to the Study Plan Generator, up to 3 iterations, until quality passes |
| Parallel fan-out | Engagement and Code Interpreter agents run concurrently via asyncio.gather, saving ~5–10s per workflow |
| MCP exam discovery | Streamable HTTP transport fetches any Microsoft certification from Microsoft Learn in real time, then generates full exam content (modules, lessons, questions, references) into Dataverse with one click |
| OpenTelemetry tracing | Distributed tracing across all 6 agents with graceful fallback; every agent span is instrumented for observability |
| Human-in-the-Loop UI | A React confirmation dialog surfaces before AI-generated study plans execute, giving the student full control |
| Gamification engine | XP streak multipliers, 25+ achievements, streak shields, daily challenges, and 20 levels, all persisted to Dataverse |
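The convergence loop and parallel fan-out above can be sketched roughly as follows. All agent functions here are stand-ins (the real agents run on Microsoft Agent Framework); only the control flow — retry-until-accepted, then concurrent fan-out via `asyncio.gather` — reflects the described design:

```python
import asyncio

MAX_ITERATIONS = 3  # cap on self-reflection retries, per the design above

async def generate_plan(feedback=None):
    # stand-in for the Study Plan Generator agent
    return {"plan": "study plan", "prior_feedback": feedback}

async def critic_accepts(plan):
    # stand-in Critic/Verifier: returns (accepted, feedback_for_retry)
    return True, None

async def engagement_agent(plan):
    return "engagement content"

async def code_interpreter(plan):
    return "code exercises"

async def run_pipeline():
    plan, feedback = None, None
    # self-reflection convergence loop: regenerate until the critic
    # accepts the plan or the iteration cap is reached
    for _ in range(MAX_ITERATIONS):
        plan = await generate_plan(feedback)
        accepted, feedback = await critic_accepts(plan)
        if accepted:
            break
    # parallel fan-out: independent agents run concurrently,
    # which is where the ~5-10s per-workflow saving comes from
    engagement, exercises = await asyncio.gather(
        engagement_agent(plan), code_interpreter(plan)
    )
    return plan, engagement, exercises
```

Feeding the critic's feedback back into the generator on each retry is what makes the loop converge rather than simply re-rolling the same output.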
Challenges & Learnings
| Challenge | What We Learned |
|---|---|
| Dataverse OData quirks | Lookup fields (@odata.bind) don't work with the Power SDK's createRecordAsync. Nine production bugs revealed that Dataverse's OData layer behaves differently from standard OData; solved by using direct string ID fields instead of lookup bindings |
| Agent quality control | Early agent outputs were inconsistent. Adding the Critic/Verifier agent with a convergence loop (retry up to 3x) dramatically improved output quality |
| Graceful degradation | Made OpenTelemetry, Content Safety, and AI Evaluation all optional with no-op fallbacks; the app runs locally without every Azure service configured |
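The graceful-degradation pattern in the last row can be sketched like this. The `agent_span` helper name is illustrative, not from the repo; the point is that the instrumented code path and the fallback expose the same interface, so callers never check whether telemetry is configured:

```python
from contextlib import contextmanager

try:
    # opentelemetry-api: get_tracer works even without an SDK configured
    from opentelemetry import trace
    _tracer = trace.get_tracer("kaizen-ai")

    @contextmanager
    def agent_span(name):
        # real path: wrap the agent call in an OpenTelemetry span
        with _tracer.start_as_current_span(name) as span:
            yield span
except ImportError:
    @contextmanager
    def agent_span(name):
        # no-op fallback: same interface, records nothing,
        # so the app runs locally without Azure telemetry set up
        yield None
```

Call sites stay identical either way, e.g. `with agent_span("study-plan-generator"): ...`.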
Contact Information
https://www.linkedin.com/in/joneric-eubanks-pmp-developer/
Country/Region
United States