Minimal agent-to-agent communication protocol for the Daisy AI-native OS.
```bash
pip install -r requirements.txt
export GROQ_API_KEY=your_key_here
```

Get a free key at: https://console.groq.com/keys
Option A: All-in-one script

```bash
chmod +x run_demo.sh
./run_demo.sh
```

Option B: Manual (5 terminals)
Terminal 1 - Registry:

```bash
python registry.py
```

Terminal 2 - Coordinator:

```bash
python example_agents.py coordinator
```

Terminal 3 - Researcher:

```bash
python example_agents.py researcher
```

Terminal 4 - Coder:

```bash
python example_agents.py coder
```

Terminal 5 - Test:

```bash
python test_client.py
```

```
┌─────────────────────────────────────────────────────┐
│                      REGISTRY                       │
│                  (discovery only)                   │
│                 ws://localhost:9000                 │
└─────────────────────────────────────────────────────┘
      ▲                    ▲                    ▲
      │                    │                    │
┌───────────┐      ┌────────────┐      ┌───────────┐
│Coordinator│◄────►│ Researcher │◄────►│   Coder   │
│   :8001   │      │   :8002    │      │   :8003   │
└───────────┘      └────────────┘      └───────────┘
```
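The registry handles discovery only: agents register the capabilities they offer, and peers ask it which agent can handle a given task. The lookup logic can be sketched in memory like this (a simplified, hypothetical sketch — the real `registry.py` speaks WebSocket on `ws://localhost:9000`, and the `Registry` class here is illustrative, not its actual API):

```python
# In-memory sketch of the registry's discovery role (hypothetical;
# the real registry in registry.py is a WebSocket service).
class Registry:
    def __init__(self):
        # agent_id -> {"capabilities": [...], "endpoint": "ws://..."}
        self.agents = {}

    def register(self, agent_id, capabilities, endpoint):
        """Record an agent and the capabilities it advertises."""
        self.agents[agent_id] = {
            "capabilities": capabilities,
            "endpoint": endpoint,
        }

    def find(self, capability):
        """Return (agent_id, endpoint) pairs for agents offering a capability."""
        return [
            (aid, info["endpoint"])
            for aid, info in self.agents.items()
            if capability in info["capabilities"]
        ]

registry = Registry()
registry.register("researcher", ["research", "summarize"], "ws://localhost:8002")
registry.register("coder", ["write_code"], "ws://localhost:8003")
print(registry.find("write_code"))  # → [('coder', 'ws://localhost:8003')]
```

The key design point matches the diagram above: the registry only answers "who can do X?" — once a peer has an endpoint, agents talk to each other directly.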
| File | Description |
|---|---|
| `registry.py` | Discovery service - agents register here |
| `agent.py` | Base class for creating agents |
| `example_agents.py` | Three example agents: Coordinator, Researcher, Coder |
| `test_client.py` | Send messages to test the network |
| `run_demo.sh` | Start everything at once |
Create your own agent by subclassing `DaisyAgent`:

```python
from agent import DaisyAgent

class MyAgent(DaisyAgent):
    def __init__(self):
        super().__init__(
            agent_id="my-agent",
            name="My Custom Agent",
            capabilities=["my_capability"],
            port=8010,
        )

    async def handle_message(self, message: dict) -> dict:
        # Use self.think() to call the LLM
        result = self.think(f"Process this: {message['intent']}")
        return {"result": result}

# Run it
import asyncio

agent = MyAgent()
asyncio.run(agent.start())
```

Message format:

```json
{
  "id": "uuid",
  "from": "agent-id",
  "to": "agent-id",
  "type": "request|response|error",
  "intent": "what you want done",
  "payload": {},
  "ts": 1706300000
}
```

Based on what we learn from running v0.1:
- Streaming responses
- Conversation memory
- Agent-to-agent negotiation
- Security/auth
- Message persistence
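For reference, a message matching the v0.1 envelope above can be built and sanity-checked in a few lines. Only the field names and types come from the schema; the `make_message` helper itself is a hypothetical sketch, not part of the DaisyChain API:

```python
import time
import uuid

# Field names taken from the v0.1 message format.
REQUIRED_FIELDS = {"id", "from", "to", "type", "intent", "payload", "ts"}

def make_message(sender, recipient, intent, payload=None, msg_type="request"):
    """Build a dict matching the v0.1 envelope (hypothetical helper)."""
    return {
        "id": str(uuid.uuid4()),
        "from": sender,
        "to": recipient,
        "type": msg_type,        # "request", "response", or "error"
        "intent": intent,
        "payload": payload or {},
        "ts": int(time.time()),  # Unix timestamp in seconds
    }

msg = make_message("test-client", "coordinator", "summarize this repo")
assert set(msg) == REQUIRED_FIELDS
print(msg["type"])  # → request
```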
DaisyChain v0.1 - Simple enough to understand, minimal enough to run today