AI Agent Alpha is an experimental, fully local AI agent framework built on top of Ollama.
It focuses on learning, transparency, and control — no cloud APIs, no hidden costs, no black boxes.
⚠️ Work in Progress
This project is under active development. Expect frequent changes, experiments, and refactors.
AI Agent Alpha is a hands-on exploration of how to build stateful AI agents from the ground up using:
- Local large language models (LLMs)
- Explicit memory control
- Simple, understandable Python code
- A friendly local UI
The goal is to understand how agents actually work, not just call an API.
🏠 100% Local Execution
- Runs entirely on your machine using Ollama
- No external servers or paid APIs
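To show what "fully local" means in practice, here is a minimal sketch of talking to a running Ollama server using only the Python standard library. The endpoint and JSON shape follow Ollama's documented `/api/generate` interface; the model name `llama3` is just a placeholder for whatever model you have pulled locally.

```python
import json
import urllib.request

# Ollama's default local endpoint; no external server is involved.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3"):
    """Payload for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama3"):
    """POST the prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything stays on `localhost:11434`; nothing leaves your machine.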
💾 Explicit Memory System
- Store long-term facts intentionally
- Commands:
  - `remember: ...`
  - `forget: ...`
  - `show memory`
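A sketch of how these commands can be intercepted and dispatched against a plain JSON file, in the spirit of the project's inspectable memory storage. The function and file names here are illustrative, not the project's actual API.

```python
import json
from pathlib import Path

# A single, human-readable JSON file as the long-term store (illustrative name).
MEMORY_FILE = Path("memory.json")

def load_memory():
    """Read the list of stored facts, or start empty."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(facts):
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def handle_command(text):
    """Handle a memory command; return a reply, or None to pass to the model."""
    facts = load_memory()
    if text.startswith("remember:"):
        facts.append(text[len("remember:"):].strip())
        save_memory(facts)
        return "Stored."
    if text.startswith("forget:"):
        target = text[len("forget:"):].strip()
        save_memory([f for f in facts if target not in f])
        return "Forgotten."
    if text.strip() == "show memory":
        return "\n".join(facts) or "(memory is empty)"
    return None  # not a memory command
```

Because memory is just a JSON file, you can open it in any editor and see exactly what the agent knows.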
🧪 Agent Reasoning Loop
- Uses stored memory to inform responses
- Maintains interaction history locally
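One way the loop can fold stored facts and recent history into each model call is to assemble them into a single prompt string. This is a hedged sketch; the exact prompt format and function name are assumptions, not the project's real implementation.

```python
def build_prompt(facts, history, user_message):
    """Combine long-term memory and recent turns into one prompt string."""
    lines = []
    if facts:
        lines.append("Known facts about the user:")
        lines += [f"- {fact}" for fact in facts]
    # Keep only the last few (role, text) turns so the prompt stays small.
    for role, text in history[-6:]:
        lines.append(f"{role}: {text}")
    lines.append(f"user: {user_message}")
    lines.append("assistant:")
    return "\n".join(lines)
```

The assembled prompt is plain text, so the agent's "reasoning context" can be logged and inspected on every turn.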
🖥️ Friendly Local UI
- Browser-based chat interface (Gradio)
- Much nicer than typing into a terminal
🛠️ Tech Stack

- Python 3
- Ollama (local LLM runtime)
- Gradio (local web UI)
- JSON (simple, inspectable memory storage)
🧭 Roadmap

- 🧠 Memory summarization & pruning
- 🧰 Tool usage (filesystem, notes, automation)
- 🔁 Autonomous task loops
- 🔽 Model selector in UI
- 🗂️ Memory sidebar / manager
- 📦 Packaging as a standalone local app
💡 Philosophy

Most AI tools hide complexity behind abstractions.
This project does the opposite:
- Memory is visible
- Decisions are inspectable
- Behavior is explicit
- Everything is local
It’s meant for learning, experimenting, and extending, not for production use (yet).
Early alpha.
Breaking changes are expected.
Documentation will improve as the architecture stabilizes.