🧬 ELLMa - Evolutionary Local LLM Agent

Evolutionary Local LLM Agent - Self-improving AI assistant that evolves with your needs

PyPI version Python Support License: MIT Code style: black Documentation Status

πŸš€ Features

ELLMa is a revolutionary self-evolving AI agent that runs locally on your machine. Unlike traditional AI tools, ELLMa learns and improves itself with these key features:

πŸ”„ Self-Improvement & Evolution

  • Automatic Code Generation: Generates new modules and capabilities on-the-fly
  • Continuous Learning: Improves from interactions and feedback
  • Evolution Engine: Self-modifying architecture that evolves over time
  • Performance Optimization: Identifies and implements performance improvements
  • Error Recovery: Automatically detects and recovers from errors

πŸ”’ Security & Dependency Management

  • Automatic Environment Setup: Ensures all dependencies are installed and configured correctly
  • Dependency Auto-Repair: Automatically detects and fixes missing or broken dependencies
  • Virtual Environment Management: Handles Python virtual environments automatically
  • Security Checks: Performs security validations before executing commands
  • Graceful Degradation: Works even when optional dependencies are missing
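
The graceful-degradation idea can be sketched as an optional import with a stub fallback. This is a minimal illustration of the pattern, not ELLMa's actual source; `load_optional` and `transcribe` are hypothetical names:

```python
import importlib.util

def load_optional(module_name):
    """Return the module if installed, otherwise None (graceful degradation)."""
    if importlib.util.find_spec(module_name) is None:
        return None
    return importlib.import_module(module_name)

# Audio support is optional: fall back to a helpful error when it is missing
speech = load_optional("speech_recognition")
AUDIO_AVAILABLE = speech is not None

def transcribe(path):
    """Hypothetical audio entry point that degrades gracefully."""
    if not AUDIO_AVAILABLE:
        return {"error": "audio extras not installed; run: pip install ellma[audio]"}
    # ... real transcription would go here ...
```

The rest of the system keeps working; only the audio entry points report that the extra is missing.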

πŸŽ™οΈ Audio Features (Optional)

ELLMa includes optional audio capabilities that can be enabled by installing the audio extras. These features require additional system dependencies.

Audio Features

  • Speech Recognition: Convert speech to text
  • Text-to-Speech: Convert text to speech (coming soon)
  • Audio Processing: Work with audio files and streams

Installation

To install with audio support:

poetry install --extras "audio"
# or with pip
pip install ellma[audio]

System Dependencies

On Ubuntu/Debian:

sudo apt-get update
sudo apt-get install -y python3-dev portaudio19-dev

On Fedora/RHEL:

sudo dnf install -y python3-devel alsa-lib-devel portaudio-devel

On macOS (using Homebrew):

brew install portaudio

Note: Audio features are optional. If you don't need them, you can use ELLMa without installing these dependencies.

πŸ“¦ Installation

Prerequisites

  • Python 3.10 or higher
  • Poetry (recommended) or pip
  • System dependencies (for audio features, see above)

πŸ›  Security Commands

# Check environment status
ellma security check

# Install dependencies
ellma security install [--group GROUP]

# Repair environment issues
ellma security repair


πŸš€ What is ELLMa?

ELLMa is a self-evolving AI agent that runs entirely on your machine. Unlike traditional AI tools, ELLMa learns and improves itself over time.

Core Capabilities

  • Performance Monitoring: Built-in metrics and monitoring
  • Cross-Platform: Works on Linux, macOS, and Windows (WSL2 recommended for Windows)
  • System Introspection: Built-in commands for system exploration and debugging

System Introspection

ELLMa includes powerful introspection capabilities to help you understand and debug the system:

# View configuration
sys config                # Show all configuration
sys config model         # Show model configuration

# Explore source code
sys source ellma.core.agent.ELLMa  # View class source
sys code ellma.commands.system     # View module source

# System information
sys info                 # Show detailed system info
sys status               # Show system status
sys health              # Run system health check

# Module exploration
sys modules             # List all available modules
sys module ellma.core   # Show info about a module

# Command help
sys commands           # List all available commands
sys help               # Show help for system commands

These commands support natural language queries, so you can type things like:

  • "show me the config" β†’ sys config
  • "what modules are available" β†’ sys modules
  • "display system information" β†’ sys info
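
A keyword-based mapping of this kind can be sketched in a few lines. This is purely illustrative; the `ALIASES` table and `match_query` function are assumptions, not ELLMa's real matcher:

```python
# Hypothetical keyword -> command table; not ELLMa's actual implementation
ALIASES = {
    "config": "sys config",
    "modules": "sys modules",
    "information": "sys info",
    "info": "sys info",
}

def match_query(query: str) -> str:
    """Map a natural-language query to the closest sys command."""
    for word in query.lower().split():
        if word in ALIASES:
            return ALIASES[word]
    return "sys help"  # fall back to help when nothing matches
```

For example, `match_query("show me the config")` resolves to `sys config`.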

πŸ›‘οΈ Security and Dependency Management

ELLMa includes a comprehensive security and dependency management system that ensures safe and reliable execution:

πŸ”’ Security Features

  • Secure Code Execution: All code runs in a sandboxed environment with restricted permissions
  • Dependency Validation: Automatic verification of required packages and versions
  • Environment Isolation: Each component runs in its own isolated environment
  • Audit Logging: Detailed logging of all security-relevant actions
  • Automatic Repair: Self-healing capabilities for common issues
  • Secure Defaults: Secure by default with sensible restrictions

πŸ“¦ Dependency Management

  • Automatic Dependency Resolution: Automatically installs missing dependencies
  • Version Conflict Resolution: Handles version conflicts gracefully
  • Dependency Isolation: Each module can specify its own dependencies
  • Security Scanning: Regular security scans for known vulnerabilities
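
Dependency validation along these lines can be sketched with the standard library's importlib.metadata. This is an illustration of the idea, not ELLMa's actual code, and the naive version comparison is a stated simplification:

```python
from importlib import metadata
from typing import Optional, Tuple

def check_dependency(name: str, min_version: Optional[str] = None) -> Tuple[bool, Optional[str]]:
    """Return (ok, installed_version); ok is False if missing or too old."""
    try:
        installed = metadata.version(name)
    except metadata.PackageNotFoundError:
        return False, None
    if min_version is not None:
        # Naive numeric comparison; a real implementation would use packaging.version
        def parts(v):
            return [int(p) for p in v.split(".") if p.isdigit()]
        if parts(installed) < parts(min_version):
            return False, installed
    return True, installed
```

A missing package yields `(False, None)`, which the auto-repair layer could then act on by installing it.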

πŸ› οΈ Using the Secure Executor

Run any Python script or module securely:

# Run a script with dependency checking
ellma-secure path/to/script.py

# Interactive secure Python shell
ellma-secure

# Install dependencies from requirements.txt
ellma-secure --requirements requirements.txt

πŸ›‘οΈ Security Context Manager

Use the security context manager in your code:

from ellma.core.security import SecurityContext, Dependency

# Define dependencies
dependencies = [
    Dependency(name="numpy", min_version="1.20.0"),
    Dependency(name="pandas", min_version="1.3.0")
]

# Run code in a secure context
with SecurityContext(dependencies):
    import numpy as np
    import pandas as pd
    # Your secure code here

πŸ”„ Automatic Dependency Checking

Add dependency checking to any function:

from ellma.core.decorators import secure
from ellma.core.security import Dependency

@secure(dependencies=[
    Dependency(name="requests", min_version="2.25.0"),
    Dependency(name="numpy", min_version="1.20.0")
])
def process_data(url: str) -> dict:
    import requests
    import numpy as np
    # Your function code here
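
Under the hood, a decorator like this can be implemented as a wrapper that validates dependencies before calling the function. The sketch below is hypothetical (`secure_sketch` is not ELLMa's actual source) and checks only that the modules are importable:

```python
import functools
import importlib.util

def secure_sketch(dependencies):
    """Hypothetical stand-in for @secure: verify deps are importable, then call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            missing = [d for d in dependencies
                       if importlib.util.find_spec(d) is None]
            if missing:
                raise ImportError(f"Missing dependencies: {missing}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@secure_sketch(dependencies=["json", "math"])
def compute(x: int) -> int:
    import math
    return math.factorial(x)
```

Calling the wrapped function with an unmet dependency raises before any of the function body runs.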

πŸš€ Setup and Configuration

  1. Install development dependencies:

    poetry install --with dev
  2. Run security checks:

    # Run bandit security scanner
    bandit -r ellma/
    
    # Check for vulnerable dependencies
    safety check
  3. Update dependencies:

    # Update all dependencies
    poetry update
    
    # Update a specific package
    poetry update package-name

⚑ Quick Start

Prerequisites

  • Python 3.10+ (matching the Prerequisites above)
  • pip (Python package manager)
  • Git (for development)
  • 8GB+ RAM recommended for local models
  • For GPU acceleration: CUDA-compatible GPU (optional)

Installation

Option 1: Install from source (recommended for development)

# Clone the repository
git clone https://github.com/wronai/ellma.git
cd ellma

# Install in development mode with all dependencies
pip install -e ".[dev]"

Option 2: Install via pip

pip install ellma

First Steps

  1. Initialize ELLMa (creates config in ~/.ellma)

    # Basic initialization
    ellma init
    
    # Force re-initialization
    # ellma init --force
  2. Download a model (or let it auto-download when needed)

    # Download default model
    ellma download-model
    
    # Specify a different model
    # ellma download-model --model mistral-7b-instruct
  3. Start the interactive shell

    # Start interactive shell
    ellma shell
    
    # Start shell with verbose output
    # ellma -v shell
  4. Or execute commands directly

    # System information
    ellma exec system scan
    
    # Web interaction (extract text and links)
    ellma exec web read https://example.com --extract-text --extract-links
    
    # File operations (search for Python files)
    ellma exec files search /path/to/directory --pattern "*.py"
    
    # Get agent status
    ellma status

πŸ›  Development

Setting Up Development Environment

  1. Clone the repository

    git clone https://github.com/wronai/ellma.git
    cd ellma
  2. Create and activate a virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install development dependencies

    pip install -e ".[dev]"
    pip install pytest pytest-cov pytest-mock
  4. Install runtime dependencies

    pip install SpeechRecognition pyttsx3

Running Tests

Run all tests:

pytest -v

Run tests with coverage report:

pytest --cov=ellma --cov-report=term-missing

Evolution Engine

The evolution engine is a core component that enables self-improvement. It works by:

  1. Analyzing system performance and capabilities
  2. Identifying improvement opportunities
  3. Generating and testing new code
  4. Integrating successful changes
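
The four steps above can be sketched as a single cycle. This is purely illustrative; the `metrics`, `generate`, and `test` names are assumptions, not ELLMa's API:

```python
def evolution_cycle(metrics, generate, test):
    """One illustrative evolve() pass: analyze -> propose -> test -> integrate."""
    # 1-2. Analyze performance: pick the capability with the lowest success rate
    weakest = min(metrics, key=metrics.get)
    # 3. Generate a candidate improvement for it
    candidate = generate(weakest)
    # 4. Integrate only if the candidate passes its tests
    return candidate if test(candidate) else None

# Toy run: system.scan has the weaker success rate, so it is targeted
metrics = {"web.read": 0.92, "system.scan": 0.61}
result = evolution_cycle(
    metrics,
    generate=lambda name: f"improved_{name}",
    test=lambda module: True,
)
```

A candidate that fails its tests is simply discarded, so a bad generation never reaches the integrated module set.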

To manually trigger an evolution cycle:

from ellma.core.agent import ELLMa

agent = ELLMa()
agent.evolve()

Code Style

We use black for code formatting and flake8 for linting. Before submitting a PR, please run:

black .
flake8
We also recommend setting up the pre-commit hooks:

    pre-commit install

Development Workflow

Running Tests

# Run all tests
make test

# Run specific test file
pytest tests/test_web_commands.py -v

# Run with coverage report
make test-coverage

Code Quality

# Run linters
make lint

# Auto-format code
make format

# Type checking
make typecheck

# Security checks
make security

Documentation

# Build documentation
make docs

# Serve docs locally
cd docs && python -m http.server 8000

Project Structure

ellma/
β”œβ”€β”€ ellma/                  # Main package
β”‚   β”œβ”€β”€ core/              # Core functionality
β”‚   β”œβ”€β”€ commands/          # Built-in commands
β”‚   β”œβ”€β”€ generators/        # Code generation
β”‚   β”œβ”€β”€ models/           # Model management
β”‚   └── utils/            # Utilities
β”œβ”€β”€ tests/                 # Test suite
β”œβ”€β”€ docs/                 # Documentation
└── scripts/              # Development scripts


πŸ”„ Evolution & Self-Improvement

ELLMa's evolution engine allows it to analyze its performance and automatically improve its capabilities.

Running Evolution

# Run a single evolution cycle
ellma evolve

# Run multiple evolution cycles (up to 3 recommended)
ellma evolve --cycles 3

# Force evolution even if not enough commands have been executed
ellma evolve --force

Evolution Requirements

  • At least 10 commands should be executed before evolution is recommended
  • Use --force to bypass this requirement
  • Evolution status is shown in the main status output

Monitoring Evolution

# View evolution history (if available)
cat ~/.ellma/evolution/evolution_history.json | jq .

# Monitor evolution logs
tail -f ~/.ellma/logs/evolution.log

# Check evolution status in the main status output
ellma status

🧬 Evolution Configuration

Customize the self-improvement process in ~/.ellma/config.yaml:

evolution:
  enabled: true               # Enable/disable evolution
  auto_improve: true         # Allow automatic improvements
  learning_rate: 0.1         # Learning rate for evolution (0.0-1.0)
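
These settings could be represented and validated in code roughly like this. The `EvolutionConfig` dataclass is a sketch whose field names simply mirror the YAML above, not ELLMa's actual config loader:

```python
from dataclasses import dataclass

@dataclass
class EvolutionConfig:
    enabled: bool = True
    auto_improve: bool = True
    learning_rate: float = 0.1

    def __post_init__(self):
        # learning_rate is documented as 0.0-1.0; reject anything outside
        if not 0.0 <= self.learning_rate <= 1.0:
            raise ValueError("learning_rate must be between 0.0 and 1.0")

cfg = EvolutionConfig(learning_rate=0.1)
```

Validating at load time means a typo like `learning_rate: 10` fails fast instead of silently skewing evolution.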

Status Information

The main status command shows key evolution metrics:

ellma status

Example output:

πŸ€– ELLMa Status                        
┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Property               ┃ Value                                   ┃
┑━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
β”‚ Version                β”‚ 0.1.6                                   β”‚
β”‚ Model Loaded           β”‚ βœ… Yes                                  β”‚
β”‚ Model Path             β”‚ /path/to/model.gguf                     β”‚
β”‚ Modules                β”‚ 0                                       β”‚
β”‚ Commands               β”‚ 3                                       β”‚
β”‚ Commands Executed      β”‚ 15                                      β”‚
β”‚ Success Rate           β”‚ 100.0%                                  β”‚
β”‚ Evolution Cycles       β”‚ 0                                       β”‚
β”‚ Modules Created        β”‚ 0                                       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Evolution Monitoring Commands

Track evolution progress and results:

# View evolution history with detailed metrics
ellma evolution history --limit 10

# Monitor evolution in real-time
ellma evolution monitor

# Get evolution statistics
ellma evolution stats

# Compare evolution cycles
ellma evolution compare cycle1 cycle2

Evolution Best Practices

  1. Start Conservative: Begin with lower learning rates and enable auto-improve
  2. Monitor Progress: Regularly check evolution logs and metrics
  3. Set Resource Limits: Prevent excessive resource usage
  4. Use Benchmarks: Enable benchmarking to measure improvements
  5. Review Changes: Periodically review and test evolved modules

Troubleshooting Evolution

Common issues and solutions:

# If evolution gets stuck
ellma evolution cancel

# Reset to last known good state
ellma evolution rollback

# Clear evolution cache
ellma evolution clean

# Force reset evolution state (use with caution)
ellma evolution reset --confirm

🧩 Extending ELLMa

Creating Custom Commands

  1. Create a new Python module in ellma/commands/:

from ellma.commands.base import BaseCommand

class MyCustomCommand(BaseCommand):
    """My custom command"""

    def __init__(self, agent):
        super().__init__(agent)
        self.name = "custom"
        self.description = "My custom command"

    def my_action(self, param1: str, param2: int = 42):
        """Example action"""
        return {"result": f"Got {param1} and {param2}"}

  2. Register your command in ellma/commands/__init__.py
  3. Restart ELLMa to load your new command

Creating Custom Modules

  1. Create a new module class:

from ellma.core.module import BaseModule

class MyCustomModule(BaseModule):
    def __init__(self, agent):
        super().__init__(agent)
        self.name = "my_module"
        self.version = "1.0.0"

    def setup(self):
        # Initialization code
        pass

    def execute(self, command: str, *args, **kwargs):
        # Handle commands
        if command == "greet":
            return f"Hello, {kwargs.get('name', 'World')}!"
        raise ValueError(f"Unknown command: {command}")

  2. Register your module in the agent's configuration

βš™οΈ Generated Utilities

ELLMa includes a powerful set of self-generated utilities for common programming tasks. These include:

  • πŸ›‘οΈ Enhanced Error Handling: Automatic retries with exponential backoff
  • ⚑ Performance Caching: In-memory cache with TTL support
  • πŸš€ Parallel Processing: Easy parallel execution of tasks

See the Generated Utilities Documentation for detailed usage and examples.
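
The retry-with-exponential-backoff utility mentioned among the generated utilities can be sketched generically. This is an illustration of the pattern; the shipped utility's name and signature may differ:

```python
import functools
import time

def retry(attempts: int = 3, base_delay: float = 0.01):
    """Retry a failing call, doubling the delay after each failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(delay)
                    delay *= 2  # exponential backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

Here `flaky()` fails twice, sleeps with a growing delay between tries, and succeeds on the third attempt.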

🀝 Contributing

We welcome contributions! Here's how you can help:

  1. Report Bugs: Open an issue with detailed steps to reproduce
  2. Suggest Features: Share your ideas for new features
  3. Submit Pull Requests: Follow these steps:
    • Fork the repository
    • Create a feature branch
    • Make your changes
    • Add tests
    • Update documentation
    • Submit a PR

Development Guidelines

  • Follow PEP 8 style guide
  • Write docstrings for all public functions and classes
  • Add type hints for better code clarity
  • Write tests for new features
  • Update documentation when making changes

πŸ“„ License

MIT License - see LICENSE for details.

πŸ“š Documentation

For complete documentation, visit ellma.readthedocs.io

πŸ™ Acknowledgments

  • Thanks to all contributors who have helped improve ELLMa
  • Built with ❀️ by the ELLMa team

🎯 Core Features

🧬 Self-Evolution Engine

ELLMa continuously improves by analyzing its performance and automatically generating new modules:

$ ellma evolve
🧬 Starting evolution process...
πŸ“Š Analyzing current capabilities...
🎯 Identified 3 improvement opportunities:
   βœ… Added: advanced_file_analyzer
   βœ… Added: network_monitoring
   βœ… Added: code_optimizer
πŸŽ‰ Evolution complete! 3 new capabilities added.

πŸ“Š Performance Monitoring

Track your agent's performance:

# Show agent status
ellma status

# View system health metrics
ellma exec system.health

πŸ” Advanced Command Usage

# Run system scan
ellma exec system.scan

# Read web page content
ellma exec web.read https://example.com

# Read web page with link extraction
ellma exec web.read https://example.com extract_links true extract_text true

# Quick system health check
ellma exec system.health

# Save command output to file
ellma exec system.scan > scan_results.json

🐚 Interactive Shell Interface

Start the interactive shell and use system commands:

# Start the interactive shell
ellma shell

# In the shell, you can run commands like:
ellma> system.health
ellma> system.scan
ellma> web.read https://example.com
ellma> web.read https://example.com extract_links true extract_text true

Example shell session:

πŸ€– ELLMa Interactive Shell (v0.1.6)
Type 'help' for available commands, 'exit' to quit

# Available commands:
# - system.health: Check system health
# - system.scan: Perform system scan
# - web.read [url]: Read web page content
# - web.read [url] extract_links true extract_text true: Read web page with link extraction
# - help: Show available commands

ellma> system.health
{'status': 'HEALTHY', 'cpu_usage': 12.5, 'memory_usage': 45.2, ...}

ellma> web.read example.com
{'status': 200, 'title': 'Example Domain', 'content_length': 1256, ...}

# For commands with parameters, use space-separated values
ellma> web.read example.com extract_links true extract_text true

πŸ› οΈ Multi-Language Code Generation

Generate production-ready code in multiple languages:

# Generate Bash scripts
ellma generate bash --task="Monitor system resources and alert on high usage"

# Generate Python code  
ellma generate python --task="Web scraper with rate limiting"

# Generate Docker configurations
ellma generate docker --task="Multi-service web application"

# Generate Groovy for Jenkins
ellma generate groovy --task="CI/CD pipeline with testing stages"

πŸ“Š Intelligent System Integration

ELLMa understands your system and can:

  • Scan and analyze system configurations
  • Monitor processes and resources
  • Automate repetitive tasks
  • Generate custom tools for your workflow

πŸ—οΈ Architecture

ellma/
β”œβ”€β”€ core/                   # Core agent and evolution engine
β”‚   β”œβ”€β”€ agent.py           # Main LLM Agent class
β”‚   β”œβ”€β”€ evolution.py       # Self-improvement system
β”‚   └── shell.py           # Interactive shell interface
β”œβ”€β”€ commands/               # Modular command system
β”‚   β”œβ”€β”€ system.py          # System operations
β”‚   β”œβ”€β”€ web.py             # Web interactions
β”‚   └── files.py           # File operations
β”œβ”€β”€ generators/             # Code generation engines
β”‚   β”œβ”€β”€ bash.py            # Bash script generator
β”‚   β”œβ”€β”€ python.py          # Python code generator
β”‚   └── docker.py          # Docker configuration generator
β”œβ”€β”€ modules/                # Dynamic module system
β”‚   β”œβ”€β”€ registry.py        # Module registry and loader
β”‚   └── [auto-generated]/  # Self-created modules
└── cli/                   # Command-line interface
    β”œβ”€β”€ main.py            # Main CLI entry point
    └── shell.py           # Interactive shell

πŸ“š Usage Examples

System Administration

# Run comprehensive system scan
ellma exec system.scan

# Monitor system resources (60 seconds with 5-second intervals)
ellma exec system.monitor --duration 60 --interval 5

# Check system health status
ellma exec system.health

# List top processes by CPU usage
ellma exec system.processes --sort-by cpu --limit 10

# Check open network ports
ellma exec system.ports

Development Workflow

# Generate a new Python project
ellma generate python --task "FastAPI project with SQLAlchemy and JWT auth"

# Create a Docker Compose setup
ellma generate docker --task "Python app with PostgreSQL and Redis"

# Generate test cases
ellma generate test --file app/main.py --framework pytest

# Document a Python function
ellma exec code document_function utils.py --function process_data

Generated Utilities Examples

Explore practical examples of using the generated utilities in the examples/generated_utils/ directory:

  1. Error Handling - Automatic retries with exponential backoff
  2. Performance Caching - Efficient data caching with TTL
  3. Parallel Processing - Concurrent task execution
  4. Combined Example - Using all utilities together
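
A minimal TTL cache along the lines of example 2 could look like this. It is illustrative only; the shipped utility's API is not shown here:

```python
import time

class TTLCache:
    """Tiny in-memory cache whose entries expire after ttl seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # expired: evict and report a miss
            return default
        return value
```

Expired entries are evicted lazily on read, so the cache needs no background thread.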

Run any example with:

# From the project root
python -m examples.generated_utils.example_name

# Or directly
cd examples/generated_utils/
python example_name.py

For more details, see the generated utilities documentation.

Web & API Interaction

# Read and extract content from a webpage
ellma exec web.read https://example.com --extract-text --extract-links

# Make HTTP GET request to an API endpoint
ellma exec web.get https://api.example.com/data

# Make HTTP POST request with JSON data
ellma exec web.post https://api.example.com/data --data '{"key": "value"}'

# Generate API client code
ellma generate python --task "API client for REST service with error handling"

πŸ”§ Configuration

ELLMa stores its configuration in ~/.ellma/:

# ~/.ellma/config.yaml
model:
  path: ~/.ellma/models/mistral-7b.gguf
  context_length: 4096
  temperature: 0.7

evolution:
  enabled: true
  auto_improve: true
  learning_rate: 0.1

modules:
  auto_load: true
  custom_path: ~/.ellma/modules

🧬 How Evolution Works

  1. Performance Analysis: ELLMa monitors execution times, success rates, and user feedback
  2. Gap Identification: Identifies missing functionality or optimization opportunities
  3. Code Generation: Uses its LLM to generate new modules and improvements
  4. Testing & Integration: Automatically tests and integrates new capabilities
  5. Continuous Learning: Learns from each interaction to become more useful

πŸš€ Advanced Features

Custom Module Development

# Create custom modules that ELLMa can use and improve
from ellma.core.module import BaseModule

class MyCustomModule(BaseModule):
    def execute(self, *args, **kwargs):
        result = {"status": "ok"}  # Your custom functionality goes here
        return result

API Integration

from ellma import ELLMa

# Use ELLMa programmatically
agent = ELLMa()
result = agent.execute("system.scan")
code = agent.generate("python", task="Data analysis script")

Web Interface (Optional)

# Install web dependencies
pip install ellma[web]

# Start web interface
ellma web --port 8000

πŸ›£οΈ Roadmap

Version 0.1.6 - MVP βœ…

  • Core agent with Mistral 7B
  • Basic command system
  • Shell interface
  • Evolution foundation

Version 0.2.0 - Enhanced Shell

  • Advanced command completion
  • Command history and favorites
  • Real-time performance monitoring
  • Module hot-reloading

Version 0.3.0 - Code Generation

  • Multi-language code generators
  • Template system
  • Code quality analysis
  • Integration testing

Version 0.4.0 - Advanced Evolution

  • Performance-based learning
  • User feedback integration
  • Predictive capability development
  • Module marketplace

Version 1.0.0 - Autonomous Agent

  • Full self-management
  • Advanced reasoning capabilities
  • Multi-agent coordination
  • Enterprise features

🀝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

# Clone repository
git clone https://github.com/wronai/ellma.git
cd ellma

# Install in development mode
pip install -e .[dev]

# Run tests
pytest

# Run linting
black ellma/
flake8 ellma/

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Built on top of llama-cpp-python
  • Inspired by the vision of autonomous AI agents
  • Powered by the amazing Mistral 7B model

πŸ“ž Support


ELLMa: The AI agent that grows with you πŸŒ±β†’πŸŒ³
