This project demonstrates a line-of-business (LOB) chatbot implementation using a Support Ticket Management System as the sample application. It showcases both a functional workflow for managing support tickets and a methodology for evaluating an LOB agent's performance in business contexts.
The Support Ticket Management chatbot is built with the Microsoft Agent Framework and lets users:
- Create and update support tickets
- Manage action items within tickets
- Search historical tickets for reference
Refer to the architecture documentation for more details.
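For orientation, a function-calling tool in this style is typically just a Python function whose type hints and docstring describe its schema to the model. The sketch below is illustrative only; the names (`create_ticket`, `add_action_item`, the in-memory `_TICKETS` store) are assumptions, not the repository's actual API, which lives under `tools/support_ticket_system/`:

```python
from datetime import datetime, timezone
from uuid import uuid4

# Illustrative in-memory store; the real project persists tickets elsewhere.
_TICKETS: dict[str, dict] = {}

def create_ticket(title: str, description: str, priority: str = "medium") -> dict:
    """Create a support ticket and return it as a dictionary.

    In function-calling frameworks, the type hints and this docstring are
    typically what gets surfaced to the model as the tool's schema.
    """
    ticket = {
        "id": uuid4().hex,
        "title": title,
        "description": description,
        "priority": priority,
        "action_items": [],
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    _TICKETS[ticket["id"]] = ticket
    return ticket

def add_action_item(ticket_id: str, description: str) -> dict:
    """Add an action item to an existing ticket and return the updated ticket."""
    ticket = _TICKETS[ticket_id]
    ticket["action_items"].append({"description": description, "done": False})
    return ticket
```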
The project includes an evaluation framework designed to address the challenges of assessing non-deterministic, LLM-powered agents in business applications. Key features include:
- LLM-based user agent for simulating user-chatbot interactions
- Test case factory with scenario templating and injection of business data to run evaluations at scale
- Azure AI Evaluation SDK integration for calculating metrics and for tracking and comparing evaluation runs in Azure AI Foundry
- LLM-powered error analysis with actionable summaries
Refer to the evaluation documentation for more information.
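As a rough sketch of what an Azure AI Evaluation SDK run can look like (assuming the `azure-ai-evaluation` package, a JSONL ground-truth file, and placeholder environment variables; none of the file or variable names below come from this repository):

```python
import os

from azure.ai.evaluation import RelevanceEvaluator, evaluate

# Model used by the LLM-based evaluators (placeholder environment variables).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
}

results = evaluate(
    data="ground_truth.jsonl",  # one test case per line
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    # Optional: report the run to an Azure AI Foundry project so runs can be
    # tracked and compared in the portal.
    azure_ai_project={
        "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
        "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
        "project_name": os.environ["AZURE_AI_PROJECT_NAME"],
    },
    output_path="evaluation_results.json",
)

# Aggregate metrics, assuming the SDK's dict-style result.
print(results["metrics"])
```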
- Deploy an OpenAI chat model in Azure (preferably GPT-4o or better) - see documentation.
- Once your model is ready, create a `.env` file by copying `.env.template` and replacing the values with your configuration.
- Open this project in Visual Studio Code using the Dev Containers extension. This ensures all dependencies are installed correctly in an isolated environment. (Alternatively, to run the project on your local machine with a manually created Python virtual environment, change the following `.env` file variable to `PYTHONPATH=.`, then run `make install`.)
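For illustration only, a filled-in `.env` might resemble the snippet below; the variable names are hypothetical placeholders, and `.env.template` remains the authoritative list:

```env
# Hypothetical example - copy .env.template for the real variable names
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_DEPLOYMENT=gpt-4o

# Only for local runs outside the dev container (see the step above)
PYTHONPATH=.
```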
```
make chatbot       # Runs the chatbot application
make chatbot-eval  # Runs evaluation against ground truth datasets
```

- `app/chatbot/` - Support Ticket Management implementation
  - `tools/support_ticket_system/` - Agent Framework tools for function calling
  - `data_models/` - Data structures for tickets and action items
  - `workflow-definitions/` - Workflow definitions that guide conversations
- `evaluation/` - Evaluation framework components
  - `evaluation_service.py` - Core evaluation service
  - `chatbot/evaluate.py` - Chatbot evaluation entry point
  - `chatbot/evaluators/` - Specialized evaluators for different metrics (see the sketch after this list)
  - `chatbot/ground-truth/` - Ground truth datasets and related code used for evaluation
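The specialized evaluators typically follow the Azure AI Evaluation SDK's custom-evaluator pattern: a callable that scores a single test case and returns a dictionary of metrics. A minimal sketch with a hypothetical `TicketFieldAccuracyEvaluator` (not one of the repository's actual evaluators):

```python
class TicketFieldAccuracyEvaluator:
    """Hypothetical custom evaluator: fraction of expected ticket fields
    that appear verbatim in the chatbot's response."""

    def __call__(self, *, response: str, ground_truth: dict) -> dict:
        expected = [str(value) for value in ground_truth.values() if value]
        if not expected:
            return {"field_accuracy": 1.0}
        hits = sum(1 for value in expected if value in response)
        return {"field_accuracy": hits / len(expected)}


# Such a callable can be passed to evaluate() alongside built-in evaluators, e.g.:
#   evaluators={"field_accuracy": TicketFieldAccuracyEvaluator(), ...}
```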
This sample can be used as a template to create chatbots for other line-of-business applications. To migrate this sample to your specific use case:
- In Visual Studio Code, use the `Chat: Run Prompt` command from the Command Palette.
- Choose `migrate` to attach it to the Copilot chat.
- Clearly describe your target use case and business requirements.
- Review the generated migration plan and adapt as required.
- Implement the plan phase-by-phase, testing thoroughly at each stage.
- Architecture - Chatbot architecture overview
- Evaluation Guide - How the evaluation framework works
- User Guide - How to use the Support Ticket Chatbot