This is a personal side project exploring a cognitive architecture idea for AI systems:
"What if forgetting wasn't deletion, but symbolic compression?"
Inspired by human memory systems, where unused data is compressed into abstract, long-term storage and later retrieved via pattern-matching or associative cues, this project attempts to:
- Compress low-usage data into symbolic or vector-based representations
- Store them in a modular "archive"
- Retrieve them on-demand via pattern similarity (a kind of cognitive "ping")
- Prototype a memory system that balances relevance, precision, and cognitive scalability (a toy sketch of this loop follows below)
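
To make the compress → archive → ping loop concrete, here is a minimal Python sketch. Everything in it is illustrative: the names (`SymbolicArchive`, `ping`) are invented for this README rather than taken from the codebase, and the "compression" is a stub (truncate and renormalize) standing in for whatever PCA, autoencoder, or symbolic scheme the experiments settle on.

```python
import numpy as np

class SymbolicArchive:
    """Toy archive: low-usage items are 'forgotten' into compressed
    vectors, then retrieved later by similarity to a cue (the 'ping')."""

    def __init__(self, dim: int = 64):
        self.dim = dim          # size of the compressed representation
        self.keys: list[str] = []
        self.vectors: list[np.ndarray] = []

    def compress(self, embedding: np.ndarray) -> np.ndarray:
        # Stub compression: truncate and renormalize. A real version
        # might use PCA, an autoencoder, or symbolic abstraction rules.
        v = np.asarray(embedding, dtype=float)[: self.dim]
        return v / (np.linalg.norm(v) + 1e-9)

    def archive(self, key: str, embedding: np.ndarray) -> None:
        # "Forgetting" here means compress and move to long-term storage,
        # not delete.
        self.keys.append(key)
        self.vectors.append(self.compress(embedding))

    def ping(self, cue: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
        # Associative retrieval: rank archived items by cosine similarity
        # to the compressed cue. Assumes cue and stored embeddings share
        # a common dimensionality of at least `dim`.
        c = self.compress(cue)
        scores = [float(c @ v) for v in self.vectors]
        return sorted(zip(self.keys, scores), key=lambda kv: -kv[1])[:top_k]
```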
Modern LLMs have no real memory, only token limits, context windows, and caching.
This project asks:
What would a cognitively plausible, AGI-scalable memory layer look like?
It is:
- A learning path (I'm re-learning Python through this)
- A conceptual sandbox (expect experiments and missteps)
- A potential early-stage contribution to AGI memory design
Current status:
- 🧱 Repo structure initialized
- 🧠 Concept and architecture sketched
- ⌨️ Early code and notebooks being prototyped
- 🧠 Memory strength scoring system next (rough sketch below)
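
For the memory strength item, the current thinking is a decay-based score along these lines. This is a rough sketch, loosely borrowed from ACT-R-style activation; the `decay` exponent and any forgetting threshold are made-up parameters to be tuned in /experiments/.

```python
import time

def memory_strength(access_times: list[float], now: float | None = None,
                    decay: float = 0.5) -> float:
    """Sum of power-law-decaying traces, one per past access.

    Recently or frequently accessed items score high; items whose
    score falls below a threshold become candidates for compression
    into the archive instead of deletion.
    """
    now = time.time() if now is None else now
    return sum((now - t + 1.0) ** -decay for t in access_times)
```

The idea is that an item is moved into the archive when its score drops below a tuned cutoff, rather than being deleted outright.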
Repo layout:
- /theory/ – Core concepts & architecture
- /experiments/ – Prototypes and test cases
- /src/ – Actual implementation code (modules)
- /notes/ – Loose thoughts, logs, brainstorms
- /data/ – Mock or sample data (small, cleaned)
If you find the idea interesting, feel free to open an issue or discussion, or just lurk.
This project is experimental, messy, and exploratory by design. I'm building it in public to think clearly and improve over time.
MIT: feel free to use or fork, just credit the original idea.