Closed
Part of #694
Operations
| ID | Location | Operation | Est. Latency |
|---|---|---|---|
| SPIN-001 | zeph-llm/src/ollama.rs generate/chat | LLM inference (Ollama) | 1-30s |
| SPIN-002 | zeph-llm/src/claude.rs generate/chat | LLM inference (Claude API) | 1-15s |
| SPIN-003 | zeph-llm/src/openai.rs generate/chat | LLM inference (OpenAI API) | 1-15s |
| SPIN-016 | zeph-llm/src/orchestrator.rs warmup | Provider warmup/health check | 0.5-5s |
Priority: P0
These are the most visible operations: every user message triggers inference.
Approach
Status must be sent from the agent loop caller (not from inside `LlmProvider`), since providers don't have access to the status channel. Add `send_status("thinking...")` before the `provider.generate()` / `provider.chat()` calls in the agent loop, and clear it after the response arrives.
For provider warmup (SPIN-016), send a status update in the initialization sequence as well.