A2A Showcase

A showcase of Google's Agent2Agent (A2A) protocol — three LLM-powered agents collaborate to book a haircut appointment in Berlin.

You (terminal)          Customer Agent (:9000)         Schnipp Schnapp (:9100)    Haar Magie (:9200)
    │                        │                              │                          │
    │── "I need a haircut" ─▶│                              │                          │
    │                        │── GET /agent-card.json ─────▶│                          │
    │                        │── GET /agent-card.json ────────────────────────────────▶│
    │                        │                              │                          │
    │                        │── "Free slots?" ────────────▶│                          │
    │                        │── "Free slots?" ───────────────────────────────────────▶│
    │                        │◀── "Thu 14:00, Fri 10:00" ───│                          │
    │                        │◀── "Tue 18:00" ─────────────────────────────────────────│
    │                        │                              │                          │
    │◀── "Here are all       │                              │                          │
    │   available slots..." ─│                              │                          │
    │                        │                              │                          │
    │── "Haar Magie Tue" ───▶│                              │                          │
    │                        │── "Book Tue 18:00" ────────────────────────────────────▶│
    │                        │◀── "Confirmed! 35€" ────────────────────────────────────│
    │◀── "Booked! 35€" ──────│                              │                          │

What this demonstrates

A2A Feature              Where it shows
Agent Discovery          The Customer fetches Agent Cards from /.well-known/agent-card.json
Skill declaration        Each salon declares a conversation skill with knowledge from its .md file
Task lifecycle           submitted → working → completed / input-required
Multi-turn conversation  Booking requires multiple turns on the same taskId/contextId
Agent-as-orchestrator    The Customer Agent is both an A2A server (you talk to it) and an A2A client (it talks to the salons)
LLM-driven routing       The Customer Agent's LLM decides which agent to talk to and when to respond to the human
Concurrent broadcast     The first message is sent to all agents in parallel via asyncio.gather
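The discovery step in the first row can be sketched in a few lines. This uses only the standard library to stay self-contained (the demo's own client code is built on httpx); the well-known path is the one listed above:

```python
# Sketch of A2A agent discovery against a locally running salon agent
# (ports as in the Agents table below). The demo itself uses httpx; urllib
# keeps this example dependency-free.
import json
import urllib.request

WELL_KNOWN = "/.well-known/agent-card.json"

def agent_card_url(base_url: str) -> str:
    """Agent Cards are served at a well-known path under the agent's base URL."""
    return base_url.rstrip("/") + WELL_KNOWN

def discover(base_url: str) -> dict:
    """Fetch and decode the Agent Card an agent advertises."""
    with urllib.request.urlopen(agent_card_url(base_url), timeout=5) as resp:
        return json.load(resp)

# With an agent running:
#   card = discover("http://localhost:9100")
#   print(card["name"])
```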

Architecture

The Customer Agent uses an LLM-driven agent loop — no hardcoded flows. The LLM receives the conversation history and decides one action per turn:

  • SEND:<agent name> — route a message to a remote agent via A2A
  • HUMAN: — respond to the human user
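One way such an action line could be parsed (illustrative only; the actual handling lives in agents/customer/executor.py, and the exact format there may differ). The convention assumed here is that the first line of the LLM's reply names the action and the rest is the message body:

```python
# Hypothetical parser for the two action prefixes described above.
# parse_action and its return shape are illustrative names, not the
# real API of agents/customer/executor.py.

def parse_action(reply: str) -> tuple[str, str, str]:
    """Return (action, target_agent, body) for one LLM turn."""
    head, _, body = reply.partition("\n")
    if head.startswith("SEND:"):
        return ("send", head[len("SEND:"):].strip(), body.strip())
    if head.startswith("HUMAN:"):
        return ("human", "", (head[len("HUMAN:"):] + "\n" + body).strip())
    # Fall back to answering the human so the loop can never stall.
    return ("human", "", reply.strip())
```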

On the first message, the system broadcasts to all discovered agents concurrently, collects responses, and injects them into the LLM context. The LLM then has complete information before making its first decision. Follow-up messages are routed by the LLM one at a time.
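The broadcast step can be sketched with asyncio.gather. Here ask_agent is a stand-in for the real A2A client round-trip; the agent names and URLs match the Agents table:

```python
# Sketch of the first-message broadcast: query every discovered agent in
# parallel and collect the replies keyed by agent name.
import asyncio

async def ask_agent(name: str, url: str, text: str) -> tuple[str, str]:
    # Placeholder for the actual A2A message-send round-trip via the SDK.
    await asyncio.sleep(0)  # stands in for the network hop
    return (name, f"reply from {name}")

async def broadcast(agents: dict[str, str], text: str) -> dict[str, str]:
    """Send `text` to all agents concurrently; map agent name -> reply."""
    replies = await asyncio.gather(
        *(ask_agent(name, url, text) for name, url in agents.items())
    )
    return dict(replies)

agents = {
    "Schnipp Schnapp": "http://localhost:9100",
    "Haar Magie": "http://localhost:9200",
}
results = asyncio.run(broadcast(agents, "Any free slots this week?"))
```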

Each remote agent maintains a separate A2A contextId, so multi-turn conversations with different salons don't interfere.

Agents

Agent            Port  Role                                                      Brain
Customer Agent   9000  Personal assistant for Dorian, orchestrates other agents  Ollama (qwen3:32b)
Schnipp Schnapp  9100  Hair salon in Berlin-Mitte                                Ollama (qwen3:32b)
Haar Magie       9200  Hair salon in Berlin-Kreuzberg                            Ollama (qwen3:32b)

All agents use Ollama locally — no API keys needed. Agent knowledge comes from markdown files in data/.

Setup

# Prerequisites: Python 3.11+, Ollama running with a model pulled
ollama pull qwen3:32b   # or any model — change OLLAMA_MODEL in llm.py

python3 -m venv .venv
source .venv/bin/activate
pip install "a2a-sdk[http-server]" "uvicorn>=0.34.0" "httpx>=0.28.0" "rich>=13.0"

Usage

One command (recommended)

source .venv/bin/activate
python3 run_demo.py           # starts all agents + interactive chat with your assistant
python3 run_demo.py --auto    # starts all agents + runs automated booking flow

Manual (separate terminals)

# Terminal 1–3: start the agents
source .venv/bin/activate && python3 -m agents.salon.server --data-file data/schnipp_schnapp.md --port 9100
source .venv/bin/activate && python3 -m agents.salon.server --data-file data/haar_magie.md --port 9200
source .venv/bin/activate && python3 -m agents.customer.server

# Terminal 4: talk to your assistant
source .venv/bin/activate && python3 interactive_client.py http://localhost:9000

You can also talk to a salon directly: python3 interactive_client.py http://localhost:9100

Raw A2A requests

# Agent Card
curl -s http://localhost:9100/.well-known/agent-card.json | jq .

# Send a message
curl -s http://localhost:9100/v1/message:send \
  -H "Content-Type: application/json" \
  -d '{"message":{"messageId":"1","role":"user","parts":[{"kind":"text","text":"Any free slots this week?"}]}}' | jq .
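The same call from Python, mirroring the curl command's endpoint and payload. This sketch uses only the standard library (the demo itself installs httpx):

```python
# Send one user message to a salon agent via the REST-style
# message:send endpoint shown in the curl example above.
import json
import urllib.request

def build_payload(text: str, message_id: str = "1") -> dict:
    """JSON body matching the curl example: one user message with a text part."""
    return {
        "message": {
            "messageId": message_id,
            "role": "user",
            "parts": [{"kind": "text", "text": text}],
        }
    }

def send_message(base_url: str, text: str) -> dict:
    req = urllib.request.Request(
        f"{base_url}/v1/message:send",
        data=json.dumps(build_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# With a salon running:
#   print(send_message("http://localhost:9100", "Any free slots this week?"))
```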

Project structure

a2a/
├── run_demo.py                 # Starts all agents + interactive or automated demo
├── interactive_client.py       # Interactive terminal client for any A2A agent
├── llm.py                      # Shared Ollama wrapper (/api/chat)
├── agentlog.py                 # Colored per-agent logging (unified log stream)
├── data/
│   ├── schnipp_schnapp.md      # Salon info: services, prices, available slots
│   ├── haar_magie.md           # Salon info: services, prices, available slots
│   └── customer_profile.md     # Customer preferences (name, language, goals)
├── agents/
│   ├── salon/
│   │   ├── server.py           # A2A server — Agent Card + FastAPI app
│   │   └── executor.py         # AgentExecutor — LLM receptionist
│   └── customer/
│       ├── server.py           # A2A server (:9000)
│       ├── executor.py         # AgentExecutor — LLM-driven agent loop
│       └── client.py           # Automated demo client (used by --auto)
└── .gitignore

Adding a new agent

Create a .md file in data/ with the agent's knowledge, then start it:

python3 -m agents.salon.server --data-file data/my_new_agent.md --port 9300

The Customer Agent will discover it on startup once you add its URL to REMOTE_AGENT_URLS in agents/customer/executor.py.
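Assuming REMOTE_AGENT_URLS is a plain list of base URLs (its existing entries shown here as implied by the Agents table), the change would look like:

```python
# In agents/customer/executor.py: register the new agent's base URL so the
# Customer Agent fetches its Agent Card on startup. The third entry is the
# addition for the example agent started on port 9300 above.
REMOTE_AGENT_URLS = [
    "http://localhost:9100",  # Schnipp Schnapp
    "http://localhost:9200",  # Haar Magie
    "http://localhost:9300",  # my_new_agent (newly added)
]
```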
