Context
Currently, agents have to explicitly call bm_search or memory_search to find relevant knowledge. The plugin should proactively inject relevant context from the knowledge graph before the agent starts its turn.
This is similar to how memory-lancedb works — automatic context injection based on the user's message.
What this enables
- Agent automatically "remembers" relevant context without being told to search
- Reduces tool calls (no explicit search step needed)
- Better continuity across sessions — agent wakes up with relevant context loaded
- Knowledge graph becomes invisible infrastructure rather than an explicit tool
Implementation
- Hook into agent_start or message preprocessing
- Extract key terms/intent from the user's message
- Search the knowledge graph for relevant notes
- Inject top N results as system context (similar to how OpenClaw injects workspace files)
- Configurable: max results, relevance threshold, max tokens
- Should be fast — add latency budget config
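The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the plugin's real API: `extractTerms` and `selectContext` are hypothetical names, and the keyword extraction and 4-chars-per-token estimate are placeholder heuristics.

```typescript
// Hypothetical auto-recall pipeline sketch. Names and heuristics here are
// assumptions for illustration, not the actual basic-memory plugin API.
type Note = { title: string; score: number; body: string };

// Naive keyword extraction: lowercase, split on non-alphanumerics,
// drop short words and a few common stop words.
function extractTerms(message: string): string[] {
  const stop = new Set(["the", "a", "an", "is", "to", "of", "and", "in"]);
  return message
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter((w) => w.length > 2 && !stop.has(w));
}

// Rank search results by score, apply the relevance threshold, and trim
// to maxResults / maxTokens (rough 4-chars-per-token estimate).
function selectContext(
  results: Note[],
  opts: { maxResults: number; minScore: number; maxTokens: number }
): Note[] {
  const picked: Note[] = [];
  let tokens = 0;
  for (const note of [...results].sort((a, b) => b.score - a.score)) {
    if (picked.length >= opts.maxResults) break;
    if (note.score < opts.minScore) continue;
    const cost = Math.ceil(note.body.length / 4);
    if (tokens + cost > opts.maxTokens) break;
    picked.push(note);
    tokens += cost;
  }
  return picked;
}
```

The actual search call against the knowledge graph would sit between these two steps; the latency budget would wrap it (e.g. racing the search against a timeout and skipping injection on a miss).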
Config example
```json
{
  "basic-memory": {
    "config": {
      "autoRecall": {
        "enabled": true,
        "maxResults": 3,
        "maxTokens": 2000,
        "minScore": 0.5
      }
    }
  }
}
```
Notes
- Listed as post-1.0 in TODO but this is a high-value differentiator
- Need to be careful about latency — search must be fast
- Consider caching recent searches to avoid redundant lookups
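The caching idea could be as simple as a TTL map keyed by the search query. A minimal sketch (a real implementation would likely also bound the cache size, e.g. with an LRU policy):

```typescript
// Minimal TTL cache for recent search results. Illustrative sketch only;
// the class name and shape are assumptions, not part of the plugin.
class SearchCache<T> {
  private entries = new Map<string, { value: T; expires: number }>();

  constructor(private ttlMs: number) {}

  // Returns the cached value, or undefined if absent or expired.
  get(key: string): T | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (Date.now() > e.expires) {
      this.entries.delete(key);
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```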