Build your own AI-powered automation tools in the terminal with this extensible agent framework
Features • Installation • Usage • Development • Contributing
vibecore is a Do-it-yourself Agent Framework that transforms your terminal into a powerful AI workspace. More than just a chat interface, it's a complete platform for building and orchestrating custom AI agents that can manipulate files, execute code, run shell commands, and manage complex workflows—all from the comfort of your terminal.
Built on Textual and the OpenAI Agents SDK, vibecore provides the foundation for creating your own AI-powered automation tools. Whether you're automating development workflows, building custom AI assistants, or experimenting with agent-based systems, vibecore gives you the building blocks to craft exactly what you need.
- Flow Mode (Experimental) - Build structured agent-based applications with programmatic conversation control
- AI-Powered Chat Interface - Interact with state-of-the-art language models through an intuitive terminal interface
- Rich Tool Integration - Built-in tools for file operations, shell commands, Python execution, and task management
- MCP Support - Connect to external tools and services via Model Context Protocol servers
- Beautiful Terminal UI - Modern, responsive interface with dark/light theme support
- Real-time Streaming - See AI responses as they're generated with smooth streaming updates
- Extensible Architecture - Easy to add new tools and capabilities
- High Performance - Async-first design for responsive interactions
- Context Management - Maintains state across tool executions for coherent workflows
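The context-management idea can be pictured with a plain-Python sketch. The `ToolContext` class and `run_tool` helper below are hypothetical illustrations of the pattern, not vibecore's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical shared context: one object threaded through every tool call
@dataclass
class ToolContext:
    cwd: str = "/tmp"
    history: list[str] = field(default_factory=list)

def run_tool(ctx: ToolContext, name: str, result: str) -> str:
    # Each tool records what it did, so later tools can build on earlier results
    ctx.history.append(f"{name}: {result}")
    return result

ctx = ToolContext()
run_tool(ctx, "read_file", "loaded config.yaml")
run_tool(ctx, "shell", "ran pytest")
print(ctx.history)  # both executions are visible to subsequent tools
```

Because every tool receives the same context object, a later step can react to what an earlier step discovered, which is what makes multi-step workflows coherent.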
- Python 3.11 or higher
- (Optional) uv for quick testing and better package management
Try vibecore instantly without installing it:
# Install uv if you don't have it (optional)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"
# Run vibecore directly with uvx
uvx vibecore
This will download and run vibecore in an isolated environment without affecting your system Python installation.
# Install vibecore
pip install vibecore
# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"
# Run vibecore
vibecore
# Clone the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore
# Install with pip
pip install -e .
# Or install with uv (recommended for development)
uv sync
# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"
# Run vibecore
vibecore
# or with uv
uv run vibecore
Once vibecore is running, you can:
- Chat naturally - Type messages and press Enter to send
- Toggle theme - Press `Ctrl+Shift+D` to toggle dark/light
- Cancel agent - Press `Esc` to cancel the current operation
- Navigate history - Use `Up`/`Down` arrows
- Exit - Press `Ctrl+D` twice to confirm
- `/help` - Show help and keyboard shortcuts
- `/clear` - Clear the current session and start a new one
Flow Mode is vibecore's key differentiator - it transforms the framework from a chat interface into a platform for building structured agent-based applications with programmatic conversation control.
Flow Mode allows you to:
- Define custom conversation logic that controls how agents process user input
- Build multi-step workflows with defined sequences and decision points
- Orchestrate multiple agents with handoffs and shared context
- Maintain conversation state across interactions
- Create agent-based applications rather than just chatbots
import asyncio

from agents import Agent, Runner
from vibecore.flow import flow, UserInputFunc
from vibecore.context import VibecoreContext

# Define your agent with tools
agent = Agent[VibecoreContext](
    name="Assistant",
    instructions="You are a helpful assistant",
    tools=[...],  # Your tools here
)

# Define your conversation logic
async def logic(app, ctx: VibecoreContext, user_input: UserInputFunc):
    # Get user input programmatically
    user_message = await user_input("What would you like to do?")

    # Process with agent
    result = Runner.run_streamed(
        agent,
        input=user_message,
        context=ctx,
        session=app.session,
    )

    # Handle the response
    app.current_worker = app.handle_streamed_response(result)
    await app.current_worker.wait()

# Run the flow
async def main():
    await flow(agent, logic)

if __name__ == "__main__":
    asyncio.run(main())
Flow Mode shines when building complex multi-agent systems. See `examples/customer_service.py` for a complete implementation featuring:
- Triage Agent: Routes requests to appropriate specialists
- FAQ Agent: Handles frequently asked questions
- Booking Agent: Manages seat reservations
- Agent Handoffs: Seamless transitions between agents with context preservation
- Shared State: Maintains customer information across the conversation
- `flow()`: Entry point that sets up the Vibecore app with your custom logic
- `logic()`: Your async function that controls the conversation flow
- `UserInputFunc`: Provides programmatic user input collection
- `VibecoreContext`: Shared state across tools and agents
- Agent Handoffs: Transfer control between specialized agents
Flow Mode enables building:
- Customer service systems with routing and escalation
- Guided workflows for complex tasks
- Interactive tutorials with step-by-step guidance
- Task automation with human-in-the-loop controls
- Multi-stage data processing pipelines
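A multi-step flow with a decision point can be exercised outside the TUI by stubbing out user input. The `scripted_input` helper below is a stand-in for vibecore's `UserInputFunc` (scripted answers replace the terminal prompt); the two-step `logic` mirrors the pattern shown earlier:

```python
import asyncio
from typing import Awaitable, Callable

# Stand-in for vibecore's UserInputFunc: scripted answers instead of a TUI prompt
def scripted_input(answers: list[str]) -> Callable[[str], Awaitable[str]]:
    queue = list(answers)

    async def user_input(prompt: str) -> str:
        return queue.pop(0)

    return user_input

# A two-step flow with a decision point: booking requires a follow-up question
async def logic(user_input) -> str:
    task = await user_input("What would you like to do?")
    if task == "book":
        seat = await user_input("Which seat?")
        return f"booked {seat}"
    return "routed to FAQ"

result = asyncio.run(logic(scripted_input(["book", "12A"])))
print(result)  # booked 12A
```

This kind of stub also makes flow logic unit-testable: each branch of the conversation can be driven by a different answer script.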
The examples in the `examples/` directory are adapted from the official OpenAI Agents SDK with minimal modifications, demonstrating how easily you can build sophisticated agent applications with vibecore.
vibecore comes with powerful built-in tools:
- Read files and directories
- Write and edit files
- Multi-edit for batch file modifications
- Pattern matching with glob
- Execute bash commands
- Search with grep
- List directory contents
- File system navigation
- Run Python code in isolated environments
- Persistent execution context
- Full standard library access
- Create and manage todo lists
- Track task progress
- Organize complex workflows
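The "persistent execution context" for Python code can be illustrated with a stdlib-only sketch: successive snippets share one namespace, so state survives across runs. This mimics the behavior, not vibecore's actual executor:

```python
# One shared namespace persists across executions, like a REPL session
namespace: dict = {}

def run_python(code: str) -> None:
    # exec() reads and writes the same dict every time, so definitions persist
    exec(code, namespace)

run_python("x = 21")
run_python("y = x * 2")  # sees x from the previous execution
print(namespace["y"])  # 42
```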
vibecore supports the Model Context Protocol, allowing you to connect to external tools and services through MCP servers.
Create a `config.yaml` file in your project directory or add MCP servers to your environment:
mcp_servers:
  # Filesystem server for enhanced file operations
  - name: filesystem
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]

  # GitHub integration
  - name: github
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "your-github-token"

  # Custom HTTP server
  - name: my-server
    type: http
    url: "http://localhost:8080/mcp"
    allowed_tools: ["specific_tool"]  # Optional: whitelist specific tools
- stdio: Spawns a local process (npm packages, executables)
- sse: Server-Sent Events connection
- http: HTTP-based MCP servers
Control which tools are available from each server:
mcp_servers:
  - name: restricted-server
    type: stdio
    command: some-command
    allowed_tools: ["safe_read", "safe_write"]  # Only these tools available
    blocked_tools: ["dangerous_delete"]  # These tools are blocked
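The allow/block semantics can be sketched as a simple filter. This is illustrative only, and assumes `allowed_tools` acts as a whitelist when set and `blocked_tools` always wins; vibecore's real implementation may differ:

```python
def filter_tools(tools, allowed=None, blocked=None):
    # allowed_tools, when set, acts as a whitelist
    if allowed is not None:
        tools = [t for t in tools if t in allowed]
    # blocked_tools is applied last, so a tool in both lists stays blocked
    if blocked:
        tools = [t for t in tools if t not in blocked]
    return tools

tools = ["safe_read", "safe_write", "dangerous_delete"]
print(filter_tools(tools, allowed=["safe_read", "safe_write"],
                   blocked=["dangerous_delete"]))
# ['safe_read', 'safe_write']
```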
# Clone and enter the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore
# Install dependencies
uv sync
# Run tests
uv run pytest
# Run tests by category
uv run pytest tests/ui/ # UI and widget tests
uv run pytest tests/tools/ # Tool functionality tests
uv run pytest tests/session/ # Session tests
# Run linting and formatting
uv run ruff check .
uv run ruff format .
# Type checking
uv run pyright
vibecore/
├── src/vibecore/
│ ├── main.py # Application entry point & TUI orchestration
│ ├── context.py # Central state management for agents
│ ├── settings.py # Configuration with Pydantic
│ ├── agents/ # Agent configurations & handoffs
│ │ └── default.py # Main agent with tool integrations
│ ├── models/ # LLM provider integrations
│ │ └── anthropic.py # Claude model support via LiteLLM
│ ├── mcp/ # Model Context Protocol integration
│ │ └── manager.py # MCP server lifecycle management
│ ├── handlers/ # Stream processing handlers
│ │ └── stream_handler.py # Handle streaming agent responses
│ ├── session/ # Session management
│ │ ├── jsonl_session.py # JSONL-based conversation storage
│ │ └── loader.py # Session loading logic
│ ├── widgets/ # Custom Textual UI components
│ │ ├── core.py # Base widgets & layouts
│ │ ├── messages.py # Message display components
│ │ ├── tool_message_factory.py # Factory for creating tool messages
│ │ ├── core.tcss # Core styling
│ │ └── messages.tcss # Message-specific styles
│ ├── tools/ # Extensible tool system
│ │ ├── base.py # Tool interfaces & protocols
│ │ ├── file/ # File manipulation tools
│ │ ├── shell/ # Shell command execution
│ │ ├── python/ # Python code interpreter
│ │ └── todo/ # Task management system
│ └── prompts/ # System prompts & instructions
├── tests/ # Comprehensive test suite
│ ├── ui/ # UI and widget tests
│ ├── tools/ # Tool functionality tests
│ ├── session/ # Session and storage tests
│ ├── cli/ # CLI and command tests
│ ├── models/ # Model integration tests
│ └── _harness/ # Test utilities
├── pyproject.toml # Project configuration & dependencies
├── uv.lock # Locked dependencies
└── CLAUDE.md # AI assistant instructions
We maintain high code quality standards:
- Linting: Ruff for fast, comprehensive linting
- Formatting: Ruff formatter for consistent code style
- Type Checking: Pyright for static type analysis
- Testing: Pytest for comprehensive test coverage
Run all checks:
uv run ruff check . && uv run ruff format --check . && uv run pyright . && uv run pytest
vibecore includes a path confinement system that restricts file and shell operations to specified directories for enhanced security. This prevents agents from accessing sensitive system files or directories outside your project.
# config.yaml
path_confinement:
  enabled: true                # Enable/disable path confinement (default: true)
  allowed_directories:         # List of allowed directories (default: [current working directory])
    - /home/user/projects
    - /tmp
  allow_home: false            # Allow access to user's home directory (default: false)
  allow_temp: true             # Allow access to system temp directory (default: true)
  strict_mode: false           # Strict validation mode (default: false)
Or via environment variables:
export VIBECORE_PATH_CONFINEMENT__ENABLED=true
export VIBECORE_PATH_CONFINEMENT__ALLOWED_DIRECTORIES='["/home/user/projects", "/tmp"]'
export VIBECORE_PATH_CONFINEMENT__ALLOW_HOME=false
export VIBECORE_PATH_CONFINEMENT__ALLOW_TEMP=true
When enabled, the path confinement system:
- Validates all file read/write/edit operations
- Checks paths in shell commands before execution
- Resolves symlinks to prevent escapes
- Blocks access to files outside allowed directories
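The core check, including symlink resolution, can be sketched with `pathlib`. The `is_confined` helper is a hypothetical illustration of the idea, not vibecore's implementation:

```python
from pathlib import Path

def is_confined(path: str, allowed_dirs: list[str]) -> bool:
    # Resolve symlinks and ".." segments before comparing, so a symlink
    # pointing outside an allowed directory cannot be used to escape
    resolved = Path(path).resolve()
    return any(resolved.is_relative_to(Path(d).resolve()) for d in allowed_dirs)

print(is_confined("/tmp/work/file.txt", ["/tmp"]))         # True
print(is_confined("/etc/passwd", ["/tmp", "/home/user"]))  # False
```

Resolving both the candidate path and the allowed directories is what makes the comparison robust: a naive string-prefix check would pass `/tmp/../etc/passwd`.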
- Set default via env var: `VIBECORE_REASONING_EFFORT` (minimal | low | medium | high)
- Keyword triggers: `think` → low, `think hard` → medium, `ultrathink` → high
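The documented triggers behave like a longest-match lookup. The `effort_for` helper below is a hypothetical sketch of that mapping, not vibecore's actual code:

```python
# Longer trigger phrases are checked first so "think hard" is not
# mistakenly matched by the shorter "think"
TRIGGERS = [("ultrathink", "high"), ("think hard", "medium"), ("think", "low")]

def effort_for(message: str, default: str = "minimal") -> str:
    lowered = message.lower()
    for phrase, effort in TRIGGERS:
        if phrase in lowered:
            return effort
    return default

print(effort_for("Please think hard about this bug"))  # medium
print(effort_for("just answer"))                       # minimal
```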
# Model configuration
ANTHROPIC_API_KEY=sk-... # For Claude models
OPENAI_API_KEY=sk-... # For GPT models
# OpenAI Models
VIBECORE_DEFAULT_MODEL=o3
VIBECORE_DEFAULT_MODEL=gpt-4.1
# Claude
VIBECORE_DEFAULT_MODEL=anthropic/claude-sonnet-4-20250514
# Use any LiteLLM supported models
VIBECORE_DEFAULT_MODEL=litellm/deepseek/deepseek-chat
# Local models. Use with OPENAI_BASE_URL
VIBECORE_DEFAULT_MODEL=qwen3-30b-a3b-mlx@8bit
We welcome contributions! Here's how to get started:
- Fork the repository and create your branch from `main`
- Make your changes and ensure all tests pass
- Add tests for any new functionality
- Update documentation as needed
- Submit a pull request with a clear description
- Follow the existing code style and patterns
- Write descriptive commit messages
- Add type hints to all functions
- Ensure your code passes all quality checks
- Update tests for any changes
Found a bug or have a feature request? Please open an issue with:
- Clear description of the problem or feature
- Steps to reproduce (for bugs)
- Expected vs actual behavior
- Environment details (OS, Python version)
vibecore is built with a modular, extensible architecture:
- Textual Framework: Provides the responsive TUI foundation
- OpenAI Agents SDK: Powers the AI agent capabilities
- Async Design: Ensures smooth, non-blocking interactions
- Tool System: Modular tools with consistent interfaces
- Context Management: Maintains state across operations
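The "consistent interfaces" point can be sketched with a minimal structural type: any object matching the shape is a tool, no inheritance required. The names here are hypothetical; see `src/vibecore/tools/base.py` for the real interfaces:

```python
from typing import Protocol

class Tool(Protocol):
    name: str
    def run(self, **kwargs) -> str: ...

# A concrete tool only needs to match the Protocol's shape
class EchoTool:
    name = "echo"

    def run(self, **kwargs) -> str:
        return kwargs.get("text", "")

def invoke(tool: Tool, **kwargs) -> str:
    # The caller works against the interface, not any concrete class
    return tool.run(**kwargs)

print(invoke(EchoTool(), text="hello"))  # hello
```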
- Path Confinement: New security feature to restrict file and shell operations to specified directories
- Reasoning View: New ReasoningMessage widget with live reasoning summaries during streaming
- Context Usage Bar & CWD: Footer shows token usage progress and current working directory
- Keyboard & Commands: `Ctrl+Shift+D` toggles theme, `Esc` cancels, `Ctrl+D` double-press to exit, `/help` and `/clear` commands
- MCP Tool Output: Improved rendering with Markdown and JSON prettification
- MCP Support: Full integration with Model Context Protocol for external tool connections
- Print Mode: `-p` flag to print response and exit for pipes/automation
- More custom tool views (Python, Read, Todo widgets)
- Automation (vibecore -p "prompt")
- MCP (Model Context Protocol) support
- Path confinement for security
- Multi-agent system (agent-as-tools)
- Plugin system for custom tools
- Automated workflows
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Textual - The amazing TUI framework
- Powered by OpenAI Agents SDK
- Inspired by the growing ecosystem of terminal-based AI tools
Made with love by the vibecore community