vibecore


Build your own AI-powered automation tools in the terminal with this extensible agent framework

Features " Installation " Usage " Development " Contributing


[vibecore terminal screenshot]

Overview

vibecore is a Do-it-yourself Agent Framework that transforms your terminal into a powerful AI workspace. More than just a chat interface, it's a complete platform for building and orchestrating custom AI agents that can manipulate files, execute code, run shell commands, and manage complex workflows—all from the comfort of your terminal.

Built on Textual and the OpenAI Agents SDK, vibecore provides the foundation for creating your own AI-powered automation tools. Whether you're automating development workflows, building custom AI assistants, or experimenting with agent-based systems, vibecore gives you the building blocks to craft exactly what you need.

Key Features

  • Flow Mode (Experimental) - Build structured agent-based applications with programmatic conversation control
  • AI-Powered Chat Interface - Interact with state-of-the-art language models through an intuitive terminal interface
  • Rich Tool Integration - Built-in tools for file operations, shell commands, Python execution, and task management
  • MCP Support - Connect to external tools and services via Model Context Protocol servers
  • Beautiful Terminal UI - Modern, responsive interface with dark/light theme support
  • Real-time Streaming - See AI responses as they're generated with smooth streaming updates
  • Extensible Architecture - Easy to add new tools and capabilities
  • High Performance - Async-first design for responsive interactions
  • Context Management - Maintains state across tool executions for coherent workflows

Installation

Prerequisites

  • Python 3.11 or higher
  • (Optional) uv for quick testing and better package management

Quick Test (No Installation)

Try vibecore instantly without installing it:

# Install uv if you don't have it (optional)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore directly with uvx
uvx vibecore

This will download and run vibecore in an isolated environment without affecting your system Python installation.

Install from PyPI

# Install vibecore
pip install vibecore

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
vibecore

Install from Source

# Clone the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install with pip
pip install -e .

# Or install with uv (recommended for development)
uv sync

# Configure your API key
export ANTHROPIC_API_KEY="your-api-key-here"
# or
export OPENAI_API_KEY="your-api-key-here"

# Run vibecore
vibecore
# or with uv
uv run vibecore

Usage

Basic Commands

Once vibecore is running, you can:

  • Chat naturally - Type messages and press Enter to send
  • Toggle theme - Press Ctrl+Shift+D to toggle dark/light
  • Cancel agent - Press Esc to cancel the current operation
  • Navigate history - Use Up/Down arrows
  • Exit - Press Ctrl+D twice to confirm

Commands

  • /help - Show help and keyboard shortcuts
  • /clear - Clear the current session and start a new one

Flow Mode (Experimental)

Flow Mode is vibecore's key differentiator - it transforms the framework from a chat interface into a platform for building structured agent-based applications with programmatic conversation control.

What is Flow Mode?

Flow Mode allows you to:

  • Define custom conversation logic that controls how agents process user input
  • Build multi-step workflows with defined sequences and decision points
  • Orchestrate multiple agents with handoffs and shared context
  • Maintain conversation state across interactions
  • Create agent-based applications rather than just chatbots

Example: Simple Flow

import asyncio
from agents import Agent, Runner
from vibecore.flow import flow, UserInputFunc
from vibecore.context import VibecoreContext

# Define your agent with tools
agent = Agent[VibecoreContext](
    name="Assistant",
    instructions="You are a helpful assistant",
    tools=[...],  # Your tools here
)

# Define your conversation logic
async def logic(app, ctx: VibecoreContext, user_input: UserInputFunc):
    # Get user input programmatically
    user_message = await user_input("What would you like to do?")
    
    # Process with agent
    result = Runner.run_streamed(
        agent,
        input=user_message,
        context=ctx,
        session=app.session,
    )
    
    # Handle the response
    app.current_worker = app.handle_streamed_response(result)
    await app.current_worker.wait()

# Run the flow
async def main():
    await flow(agent, logic)

if __name__ == "__main__":
    asyncio.run(main())

Example: Multi-Agent Customer Service

Flow Mode shines when building complex multi-agent systems. See examples/customer_service.py for a complete implementation (a condensed sketch of the handoff wiring follows this list) featuring:

  • Triage Agent: Routes requests to appropriate specialists
  • FAQ Agent: Handles frequently asked questions
  • Booking Agent: Manages seat reservations
  • Agent Handoffs: Seamless transitions between agents with context preservation
  • Shared State: Maintains customer information across the conversation
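
The following is a condensed, illustrative sketch of how such a system might be wired together, reusing the flow()/Runner pattern from the simple example above and the Agents SDK handoffs parameter. The agent names and instructions are placeholders, not the exact identifiers used in examples/customer_service.py:

import asyncio
from agents import Agent, Runner
from vibecore.context import VibecoreContext
from vibecore.flow import flow, UserInputFunc

# Specialist agents (names and instructions are illustrative placeholders)
faq_agent = Agent[VibecoreContext](
    name="FAQ Agent",
    instructions="Answer frequently asked questions.",
)
booking_agent = Agent[VibecoreContext](
    name="Booking Agent",
    instructions="Help the customer change or confirm their seat reservation.",
)

# The triage agent routes requests to specialists via handoffs
triage_agent = Agent[VibecoreContext](
    name="Triage Agent",
    instructions="Route the customer to the FAQ or Booking agent as appropriate.",
    handoffs=[faq_agent, booking_agent],
)

async def logic(app, ctx: VibecoreContext, user_input: UserInputFunc):
    # Same pattern as the simple flow: collect input, stream the response
    user_message = await user_input("How can we help you today?")
    result = Runner.run_streamed(
        triage_agent,
        input=user_message,
        context=ctx,
        session=app.session,
    )
    app.current_worker = app.handle_streamed_response(result)
    await app.current_worker.wait()

async def main():
    await flow(triage_agent, logic)

if __name__ == "__main__":
    asyncio.run(main())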

Key Components

  • flow(): Entry point that sets up the Vibecore app with your custom logic
  • logic(): Your async function that controls the conversation flow
  • UserInputFunc: Provides programmatic user input collection
  • VibecoreContext: Shared state across tools and agents
  • Agent Handoffs: Transfer control between specialized agents

Use Cases

Flow Mode enables building:

  • Customer service systems with routing and escalation
  • Guided workflows for complex tasks
  • Interactive tutorials with step-by-step guidance
  • Task automation with human-in-the-loop controls
  • Multi-stage data processing pipelines

The examples in the examples/ directory are adapted from the official OpenAI Agents SDK examples with minimal modifications, demonstrating how easily you can build sophisticated agent applications with vibecore.

Available Tools

vibecore comes with powerful built-in tools:

File Operations

- Read files and directories
- Write and edit files
- Multi-edit for batch file modifications
- Pattern matching with glob

Shell Commands

- Execute bash commands
- Search with grep
- List directory contents
- File system navigation

Python Execution

- Run Python code in isolated environments
- Persistent execution context
- Full standard library access

Task Management

- Create and manage todo lists
- Track task progress
- Organize complex workflows

MCP (Model Context Protocol) Support

vibecore supports the Model Context Protocol, allowing you to connect to external tools and services through MCP servers.

Configuring MCP Servers

Create a config.yaml file in your project directory or add MCP servers to your environment:

mcp_servers:
  # Filesystem server for enhanced file operations
  - name: filesystem
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    
  # GitHub integration
  - name: github
    type: stdio
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "your-github-token"
    
  # Custom HTTP server
  - name: my-server
    type: http
    url: "http://localhost:8080/mcp"
    allowed_tools: ["specific_tool"]  # Optional: whitelist specific tools

Available MCP Server Types

  • stdio: Spawns a local process (npm packages, executables)
  • sse: Server-Sent Events connection
  • http: HTTP-based MCP servers

Tool Filtering

Control which tools are available from each server:

mcp_servers:
  - name: restricted-server
    type: stdio
    command: some-command
    allowed_tools: ["safe_read", "safe_write"]  # Only these tools available
    blocked_tools: ["dangerous_delete"]         # These tools are blocked

Development

Setting Up Development Environment

# Clone and enter the repository
git clone https://github.com/serialx/vibecore.git
cd vibecore

# Install dependencies
uv sync

# Run tests
uv run pytest

# Run tests by category
uv run pytest tests/ui/        # UI and widget tests
uv run pytest tests/tools/     # Tool functionality tests
uv run pytest tests/session/   # Session tests

# Run linting and formatting
uv run ruff check .
uv run ruff format .

# Type checking
uv run pyright

Project Structure

vibecore/
├── src/vibecore/
│   ├── main.py              # Application entry point & TUI orchestration
│   ├── context.py           # Central state management for agents
│   ├── settings.py          # Configuration with Pydantic
│   ├── agents/              # Agent configurations & handoffs
│   │   └── default.py       # Main agent with tool integrations
│   ├── models/              # LLM provider integrations
│   │   └── anthropic.py     # Claude model support via LiteLLM
│   ├── mcp/                 # Model Context Protocol integration
│   │   └── manager.py       # MCP server lifecycle management
│   ├── handlers/            # Stream processing handlers
│   │   └── stream_handler.py # Handle streaming agent responses
│   ├── session/             # Session management
│   │   ├── jsonl_session.py # JSONL-based conversation storage
│   │   └── loader.py        # Session loading logic
│   ├── widgets/             # Custom Textual UI components
│   │   ├── core.py          # Base widgets & layouts
│   │   ├── messages.py      # Message display components
│   │   ├── tool_message_factory.py  # Factory for creating tool messages
│   │   ├── core.tcss        # Core styling
│   │   └── messages.tcss    # Message-specific styles
│   ├── tools/               # Extensible tool system
│   │   ├── base.py          # Tool interfaces & protocols
│   │   ├── file/            # File manipulation tools
│   │   ├── shell/           # Shell command execution
│   │   ├── python/          # Python code interpreter
│   │   └── todo/            # Task management system
│   └── prompts/             # System prompts & instructions
├── tests/                   # Comprehensive test suite
│   ├── ui/                  # UI and widget tests
│   ├── tools/               # Tool functionality tests
│   ├── session/             # Session and storage tests
│   ├── cli/                 # CLI and command tests
│   ├── models/              # Model integration tests
│   └── _harness/            # Test utilities
├── pyproject.toml           # Project configuration & dependencies
├── uv.lock                  # Locked dependencies
└── CLAUDE.md                # AI assistant instructions

Code Quality

We maintain high code quality standards:

  • Linting: Ruff for fast, comprehensive linting
  • Formatting: Ruff formatter for consistent code style
  • Type Checking: Pyright for static type analysis
  • Testing: Pytest for comprehensive test coverage

Run all checks:

uv run ruff check . && uv run ruff format --check . && uv run pyright . && uv run pytest

Configuration

Path Confinement (Security)

vibecore includes a path confinement system that restricts file and shell operations to specified directories for enhanced security. This prevents agents from accessing sensitive system files or directories outside your project.

Configuration Options

# config.yaml
path_confinement:
  enabled: true                    # Enable/disable path confinement (default: true)
  allowed_directories:              # List of allowed directories (default: [current working directory])
    - /home/user/projects
    - /tmp
  allow_home: false                # Allow access to user's home directory (default: false)
  allow_temp: true                 # Allow access to system temp directory (default: true)
  strict_mode: false               # Strict validation mode (default: false)

Or via environment variables:

export VIBECORE_PATH_CONFINEMENT__ENABLED=true
export VIBECORE_PATH_CONFINEMENT__ALLOWED_DIRECTORIES='["/home/user/projects", "/tmp"]'
export VIBECORE_PATH_CONFINEMENT__ALLOW_HOME=false
export VIBECORE_PATH_CONFINEMENT__ALLOW_TEMP=true

When enabled, the path confinement system (see the conceptual sketch after this list):

  • Validates all file read/write/edit operations
  • Checks paths in shell commands before execution
  • Resolves symlinks to prevent escapes
  • Blocks access to files outside allowed directories
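
Conceptually, the check behaves like the minimal sketch below. This is an illustration only, not vibecore's actual implementation; the is_path_allowed function and the hard-coded directory list are hypothetical:

from pathlib import Path

ALLOWED_DIRECTORIES = [Path("/home/user/projects"), Path("/tmp")]

def is_path_allowed(candidate: str) -> bool:
    """Illustrative check: resolve symlinks, then require the path to sit
    under one of the allowed directories."""
    resolved = Path(candidate).resolve()  # follows symlinks, normalizes ".."
    return any(
        resolved.is_relative_to(root.resolve()) for root in ALLOWED_DIRECTORIES
    )

# Example: with the configuration above, access to /etc/passwd is rejected
assert not is_path_allowed("/etc/passwd")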

Reasoning Effort

  • Set default via env var: VIBECORE_REASONING_EFFORT (minimal | low | medium | high)
  • Keyword triggers: think → low, think hard → medium, ultrathink → high

Environment Variables

# Model configuration
ANTHROPIC_API_KEY=sk-...        # For Claude models
OPENAI_API_KEY=sk-...          # For GPT models

# OpenAI Models
VIBECORE_DEFAULT_MODEL=o3
VIBECORE_DEFAULT_MODEL=gpt-4.1
# Claude
VIBECORE_DEFAULT_MODEL=anthropic/claude-sonnet-4-20250514
# Use any LiteLLM supported models
VIBECORE_DEFAULT_MODEL=litellm/deepseek/deepseek-chat
# Local models. Use with OPENAI_BASE_URL
VIBECORE_DEFAULT_MODEL=qwen3-30b-a3b-mlx@8bit

Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository and create your branch from main
  2. Make your changes and ensure all tests pass
  3. Add tests for any new functionality
  4. Update documentation as needed
  5. Submit a pull request with a clear description

Development Guidelines

  • Follow the existing code style and patterns
  • Write descriptive commit messages
  • Add type hints to all functions
  • Ensure your code passes all quality checks
  • Update tests for any changes

Reporting Issues

Found a bug or have a feature request? Please open an issue with:

  • Clear description of the problem or feature
  • Steps to reproduce (for bugs)
  • Expected vs actual behavior
  • Environment details (OS, Python version)

Architecture

vibecore is built with a modular, extensible architecture:

  • Textual Framework: Provides the responsive TUI foundation
  • OpenAI Agents SDK: Powers the AI agent capabilities
  • Async Design: Ensures smooth, non-blocking interactions
  • Tool System: Modular tools with consistent interfaces (see the sketch after this list)
  • Context Management: Maintains state across operations
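
To give a flavor of the tool system, here is a minimal sketch of what a custom tool could look like, assuming vibecore tools are ordinary Agents SDK function tools that receive the shared VibecoreContext. The word_count tool and its wiring are illustrative, not part of vibecore's built-in tool set:

from agents import Agent, RunContextWrapper, function_tool
from vibecore.context import VibecoreContext

@function_tool
async def word_count(ctx: RunContextWrapper[VibecoreContext], text: str) -> str:
    """Count the words in a piece of text (illustrative example tool)."""
    # ctx.context is the shared VibecoreContext, so tools can read or update
    # state that persists across tool executions.
    return f"{len(text.split())} words"

# Attach the tool to an agent exactly like the built-in tools
agent = Agent[VibecoreContext](
    name="Assistant",
    instructions="You are a helpful assistant",
    tools=[word_count],
)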

Recent Updates

  • Path Confinement: New security feature to restrict file and shell operations to specified directories
  • Reasoning View: New ReasoningMessage widget with live reasoning summaries during streaming
  • Context Usage Bar & CWD: Footer shows token usage progress and current working directory
  • Keyboard & Commands: Ctrl+Shift+D toggles theme, Esc cancels, Ctrl+D double-press to exit, /help and /clear commands
  • MCP Tool Output: Improved rendering with Markdown and JSON prettification
  • MCP Support: Full integration with Model Context Protocol for external tool connections
  • Print Mode: -p flag to print response and exit for pipes/automation

Roadmap

  • More custom tool views (Python, Read, Todo widgets)
  • Automation (vibecore -p "prompt")
  • MCP (Model Context Protocol) support
  • Path confinement for security
  • Multi-agent system (agent-as-tools)
  • Plugin system for custom tools
  • Automated workflow

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Built with Textual - The amazing TUI framework
  • Powered by OpenAI Agents SDK
  • Inspired by the growing ecosystem of terminal-based AI tools

Made with love by the vibecore community

Report Bug " Request Feature " Join Discussions
