fix(openclaw): prevent memory wipe on every session#323

Merged
nicoloboschi merged 1 commit into vectorize-io:main from slayoffer:fix/openclaw-memory-wipe on Feb 9, 2026
Conversation

@slayoffer
Contributor

Summary

  • Fix memory loss bug: Use unique document_id per conversation (sessionKey + timestamp) instead of static sessionKey. The backend CASCADE-deletes old memories when the same document_id is reused, causing all prior facts to be lost on every new conversation.
  • Universal envelope stripping: Replace Telegram-only [Telegram ...] extraction with generic channel support (Slack, Discord, WhatsApp, Signal, etc.) for cleaner recall queries.
  • Prefer rawMessage: Use event.rawMessage over event.prompt when available to avoid envelope-formatted prompts.
  • Increase recall context: Bump max_tokens from 512 to 2048 for richer memory injection.
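The universal envelope stripping described above could look roughly like the sketch below. This is a minimal illustration, not the actual OpenClaw code: the `stripEnvelope` name and the exact `[Channel ...]` prefix shape are assumptions based on the `[Telegram ...]` format mentioned in the summary.

```typescript
// Hypothetical sketch: strip a leading channel envelope such as
// "[Telegram Alice] hi" or "[Slack #general Bob] hello" from a prompt,
// so the recall query sees only the bare message text.
// Works for any channel name, not just Telegram.
function stripEnvelope(prompt: string): string {
  // Match one leading "[...]" block plus trailing whitespace.
  const match = prompt.match(/^\[[^\]]*\]\s*/);
  return match ? prompt.slice(match[0].length) : prompt;
}
```

Because the pattern matches any bracketed prefix rather than a hard-coded channel name, the same code covers Slack, Discord, WhatsApp, Signal, and future channels without per-channel branches.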

Root Cause

The agent_end hook retained transcripts with document_id set to the static sessionKey (always agent:main:main). The backend's handle_document_tracking() does DELETE FROM documents WHERE id = $1 AND bank_id = $2 with CASCADE before inserting — wiping ALL previous memory_units and entity_links tied to that document. Every conversation replaced the previous one.
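The fix follows directly from this: give each conversation its own `document_id` so a new retain never collides with, and therefore never cascades away, an earlier document. A minimal sketch, assuming a `makeDocumentId` helper (the helper name is illustrative; the `agent:main:main` key and the `-<timestamp>` suffix match the examples in this PR):

```typescript
// Before the fix: every retain reused the static session key as the
// document_id, so the backend's CASCADE delete removed all memory_units
// and entity_links from the previous conversation before inserting.
const sessionKey = "agent:main:main";

// After the fix: append a per-conversation timestamp so each retain
// writes under a fresh document_id and earlier documents survive.
function makeDocumentId(key: string, now: number = Date.now()): string {
  return `${key}-${now}`;
}
```

With this scheme, the log line for each retain shows a distinct ID such as `agent:main:main-1739083800000`, which is exactly what the test plan below checks for.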

Test plan

  • Build: cd hindsight-integrations/openclaw && npm run build
  • Send 3 separate messages to the bot via Telegram
  • Ask "what do you remember about me?" — should recall facts from all 3 conversations
  • Check logs: each retain should show a different document ID (e.g., agent:main:main-1739083800000)

🤖 Generated with Claude Code

Use unique document_id per conversation (sessionKey + timestamp) instead
of static sessionKey. The backend CASCADE-deletes old memories when the
same document_id is reused, causing all prior facts to be lost.

Also:
- Universal envelope stripping for all channels (was Telegram-only)
- Prefer rawMessage over prompt for cleaner recall queries
- Increase recall max_tokens from 512 to 2048

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@nicoloboschi merged commit 981cf60 into vectorize-io:main on Feb 9, 2026
15 of 28 checks passed