An opinionated set of best practices and patterns to bootstrap your Multi Agent application in minutes.
This architecture implements a Debate Pattern using Semantic Kernel's agent framework: a dynamic environment where multiple AI agents collaborate to refine ideas, test arguments, or reach a resolution.
The core architecture components, based on Semantic Kernel abstractions, are:
- Speaker Selection Strategy (Green Box):
  - Determines which agent (WRITER or CRITIC) "speaks" next.
  - Ensures productive collaboration by regulating the flow of interaction between the agents and preventing redundant actions.
- WRITER Agent: provides the initial proposal and the subsequent revisions following the CRITIC's direction.
- CRITIC Agent: evaluates the text and provides constructive feedback to improve the readability and popularity of the post. Scores the text across a number of categories and produces a final score.
- Chat Termination Strategy (Red Box):
  - Decides when the conversation has reached a satisfactory conclusion by comparing the critic's overall score against the acceptance threshold.
Semantic Kernel powers the agents with features like prompt engineering, memory recall, and logic orchestration.
This repository has been configured to support GitHub Codespaces and Dev Containers.
Warning
Do NOT `git clone` the application under Windows and then open a Dev Container.
This causes issues with file line endings. For a Dev Container, click the button
above and let Visual Studio Code download the repository for you. Alternatively, you
can `git clone` under Windows Subsystem for Linux (WSL) and ask Visual Studio Code to
Re-Open in Container.
- Azure CLI: `az`
- Azure Developer CLI: `azd`
- Python: `python`
- UV: `uv`
- Optionally, Docker: `docker`

See below for installation instructions.
See `infra/README.md` for instructions on how to customize the deployment.
To deploy Azure AI App Kickstarter, just run:

```shell
azd up
```

Warning
This deploys the application with authentication DISABLED.
For the Frontend:

```shell
cd src/frontend

# Sync Python dependencies
uv sync

# Start the Streamlit app
uv run streamlit run app.py
```

For the Backend:

```shell
# Sync Python dependencies
uv sync

# Start the backend server with live reloading
uv run uvicorn app:app --reload
```

You can find the AI traces in your AI Foundry project under "Tracing".
If you click on one of the traces, you will see a detailed history view with every agent,
prompt, and more.

If you need to troubleshoot and access the logs of the containers running in Azure Container
Apps, you can use this helper script (bash only). It will connect to Azure remotely and
stream the logs to your local terminal.
For the Frontend:

```shell
./scripts/aca_logs.sh frontend
```

For the Backend:

```shell
./scripts/aca_logs.sh backend
```

Logs will be streamed to your terminal.

This project has adopted the Microsoft Open Source Code of Conduct.
Resources:
- Microsoft Open Source Code of Conduct
- Microsoft Code of Conduct FAQ
- Contact opencode@microsoft.com with questions or concerns
For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project follows the responsible AI guidelines and best practices below; please review them before using this project:
- Microsoft Responsible AI Guidelines
- Responsible AI practices for Azure OpenAI models
- Safety evaluations transparency notes
- Kudos to Pamela Fox and James Casey for Azure-Samples/openai-chat-app-entra-auth-builtin, from which we borrowed most of the authentication & authorization setup.
- Special thanks to Michael Hofer for extensive testing and for resolving o1 compatibility issues.

