# ELLMa - Evolutionary Local LLM Agent

Self-improving AI assistant that evolves with your needs.
## Table of Contents

- Features
- Quick Start
- Development
- Usage Examples
- Extending ELLMa
- Generated Utilities
- Contributing
- License
- Documentation
ELLMa is a revolutionary self-evolving AI agent that runs locally on your machine. Unlike traditional AI tools, ELLMa learns and improves itself with these key features:
- Automatic Code Generation: Generates new modules and capabilities on-the-fly
- Continuous Learning: Improves from interactions and feedback
- Evolution Engine: Self-modifying architecture that evolves over time
- Performance Optimization: Identifies and implements performance improvements
- Error Recovery: Automatically detects and recovers from errors
- Automatic Environment Setup: Ensures all dependencies are installed and configured correctly
- Dependency Auto-Repair: Automatically detects and fixes missing or broken dependencies
- Virtual Environment Management: Handles Python virtual environments automatically
- Security Checks: Performs security validations before executing commands
- Graceful Degradation: Works even when optional dependencies are missing
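The graceful-degradation pattern can be sketched in a few lines: probe an optional dependency at import time and substitute a harmless fallback when it is missing. This is an illustrative sketch, not ELLMa's actual implementation; the `transcribe` helper and its placeholder message are hypothetical.

```python
# Sketch of graceful degradation: probe an optional dependency and fall
# back to a no-op implementation when it is missing. Names are illustrative.
try:
    import speech_recognition as sr  # provided by the optional audio extra
    HAS_AUDIO = True
except ImportError:
    sr = None
    HAS_AUDIO = False

def transcribe(audio_path: str) -> str:
    """Return a transcript, or a placeholder when audio support is absent."""
    if not HAS_AUDIO:
        return "[audio support not installed: pip install ellma[audio]]"
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        return recognizer.recognize_sphinx(recognizer.record(source))
```

The same pattern applies to any optional extra: core features keep working, and the user gets an actionable message instead of an `ImportError`.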
ELLMa includes optional audio capabilities that can be enabled by installing the audio extras. These features require additional system dependencies.
- Speech Recognition: Convert speech to text
- Text-to-Speech: Convert text to speech (coming soon)
To install with audio support:

```bash
# With Poetry
poetry install --extras "audio"

# Or with pip
pip install ellma[audio]
```

Audio features require system dependencies:

```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y python3-dev portaudio19-dev

# Fedora/RHEL
sudo dnf install -y python3-devel alsa-lib-devel portaudio-devel

# macOS (Homebrew)
brew install portaudio
```

Note: Audio features are optional. If you don't need them, you can use ELLMa without installing these dependencies.
- Audio Processing: Work with audio files and streams

Requirements:

- Python 3.10 or higher
- Poetry (recommended) or pip
- System dependencies (for audio features, see above)
```bash
# Check environment status
ellma security check

# Install dependencies
ellma security install [--group GROUP]

# Repair environment issues
ellma security repair
```

Additional capabilities:
- Performance Monitoring: Built-in metrics and monitoring
- Cross-Platform: Works on Linux, macOS, and Windows (WSL2 recommended for Windows)
- System Introspection: Built-in commands for system exploration and debugging
ELLMa includes powerful introspection capabilities to help you understand and debug the system:
```bash
# View configuration
sys config                          # Show all configuration
sys config model                    # Show model configuration

# Explore source code
sys source ellma.core.agent.ELLMa   # View class source
sys code ellma.commands.system      # View module source

# System information
sys info                            # Show detailed system info
sys status                          # Show system status
sys health                          # Run system health check

# Module exploration
sys modules                         # List all available modules
sys module ellma.core               # Show info about a module

# Command help
sys commands                        # List all available commands
sys help                            # Show help for system commands
```

These commands support natural language queries, so you can type things like:

- "show me the config" → `sys config`
- "what modules are available" → `sys modules`
- "display system information" → `sys info`
ELLMa includes a comprehensive security and dependency management system that ensures safe and reliable execution:
- Secure Code Execution: All code runs in a sandboxed environment with restricted permissions
- Dependency Validation: Automatic verification of required packages and versions
- Environment Isolation: Each component runs in its own isolated environment
- Audit Logging: Detailed logging of all security-relevant actions
- Automatic Repair: Self-healing capabilities for common issues
- Secure Defaults: Secure by default with sensible restrictions
- Automatic Dependency Resolution: Automatically installs missing dependencies
- Version Conflict Resolution: Handles version conflicts gracefully
- Dependency Isolation: Each module can specify its own dependencies
- Security Scanning: Regular security scans for known vulnerabilities
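To make the dependency-validation idea concrete, here is a minimal standard-library sketch. The `Dependency` dataclass and `check` function below are stand-ins, not ELLMa's actual classes, and the version comparison is deliberately naive.

```python
from dataclasses import dataclass
from importlib import metadata

@dataclass
class Dependency:
    name: str
    min_version: str = "0"

def check(dep: Dependency) -> bool:
    """Return True if the package is installed at or above min_version."""
    try:
        installed = metadata.version(dep.name)
    except metadata.PackageNotFoundError:
        return False
    # Naive numeric comparison; a real implementation would use
    # packaging.version.Version for full PEP 440 semantics.
    def key(v: str):
        return [int(p) for p in v.split(".") if p.isdigit()]
    return key(installed) >= key(dep.min_version)

# Collect packages that would need installing
missing = [d.name for d in [Dependency("pip"), Dependency("no-such-pkg")] if not check(d)]
print(missing)
```

A real resolver would then install the missing packages and re-validate, which is essentially what `ellma security install` automates.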
Run any Python script or module securely:
```bash
# Run a script with dependency checking
ellma-secure path/to/script.py

# Interactive secure Python shell
ellma-secure

# Install dependencies from requirements.txt
ellma-secure --requirements requirements.txt
```

Use the security context manager in your code:
```python
from ellma.core.security import SecurityContext, Dependency

# Define dependencies
dependencies = [
    Dependency(name="numpy", min_version="1.20.0"),
    Dependency(name="pandas", min_version="1.3.0")
]

# Run code in a secure context
with SecurityContext(dependencies):
    import numpy as np
    import pandas as pd
    # Your secure code here
```

Add dependency checking to any function:
```python
from ellma.core.decorators import secure
from ellma.core.security import Dependency

@secure(dependencies=[
    Dependency(name="requests", min_version="2.25.0"),
    Dependency(name="numpy", min_version="1.20.0")
])
def process_data(url: str) -> dict:
    import requests
    import numpy as np
    # Your function code here
```

1. Install development dependencies:

   ```bash
   poetry install --with dev
   ```

2. Run security checks:

   ```bash
   # Run bandit security scanner
   bandit -r ellma/

   # Check for vulnerable dependencies
   safety check
   ```

3. Update dependencies:

   ```bash
   # Update all dependencies
   poetry update

   # Update a specific package
   poetry update package-name
   ```
- Python 3.8+
- pip (Python package manager)
- Git (for development)
- 8GB+ RAM recommended for local models
- For GPU acceleration: CUDA-compatible GPU (optional)
```bash
# Clone the repository
git clone https://github.com/wronai/ellma.git
cd ellma

# Install in development mode with all dependencies
pip install -e ".[dev]"
```

Or install from PyPI:

```bash
pip install ellma
```

1. Initialize ELLMa (creates config in `~/.ellma`):

   ```bash
   # Basic initialization
   ellma init

   # Force re-initialization
   # ellma init --force
   ```

2. Download a model (or let it auto-download when needed):

   ```bash
   # Download default model
   ellma download-model

   # Specify a different model
   # ellma download-model --model mistral-7b-instruct
   ```

3. Start the interactive shell:

   ```bash
   # Start interactive shell
   ellma shell

   # Start shell with verbose output
   # ellma -v shell
   ```

4. Or execute commands directly:

   ```bash
   # System information
   ellma exec system scan

   # Web interaction (extract text and links)
   ellma exec web read https://example.com --extract-text --extract-links

   # File operations (search for Python files)
   ellma exec files search /path/to/directory --pattern "*.py"

   # Get agent status
   ellma status
   ```
1. Clone the repository:

   ```bash
   git clone https://github.com/wronai/ellma.git
   cd ellma
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install development dependencies:

   ```bash
   pip install -e ".[dev]"
   pip install pytest pytest-cov pytest-mock
   ```

4. Install runtime dependencies:

   ```bash
   pip install SpeechRecognition pyttsx3
   ```
Run all tests:

```bash
pytest -v
```

Run tests with coverage report:

```bash
pytest --cov=ellma --cov-report=term-missing
```

The evolution engine is a core component that enables self-improvement. It works by:
- Analyzing system performance and capabilities
- Identifying improvement opportunities
- Generating and testing new code
- Integrating successful changes
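The four steps above can be sketched as a single cycle function. This is an illustrative skeleton only (the stub logic, threshold, and metric names are invented for the example); the real engine lives in `ellma.core.evolution`.

```python
# Illustrative skeleton of one evolution cycle; not ELLMa's real implementation.
def evolution_cycle(metrics: dict) -> list[str]:
    added = []
    # 1. Analyze performance: find commands slower than an arbitrary threshold
    slow = [cmd for cmd, t in metrics.get("latency", {}).items() if t > 1.0]
    # 2. Identify improvement opportunities
    opportunities = [f"optimize_{cmd}" for cmd in slow]
    # 3. Generate and test new code (stubbed: assume generation succeeds)
    for module in opportunities:
        candidate = f"# generated module: {module}"
        if candidate:  # 4. Integrate only successful candidates
            added.append(module)
    return added

print(evolution_cycle({"latency": {"web.read": 2.3, "sys.info": 0.1}}))
# → ['optimize_web.read']
```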
To manually trigger an evolution cycle:

```python
from ellma.core.agent import ELLMa

agent = ELLMa()
agent.evolve()
```

We use black for code formatting and flake8 for linting. Before submitting a PR, please run:

```bash
black .
flake8
```

Set up pre-commit hooks (recommended):

```bash
pre-commit install
```
```bash
# Run all tests
make test

# Run specific test file
pytest tests/test_web_commands.py -v

# Run with coverage report
make test-coverage
```

```bash
# Run linters
make lint

# Auto-format code
make format

# Type checking
make typecheck

# Security checks
make security
```

```bash
# Build documentation
make docs

# Serve docs locally
cd docs && python -m http.server 8000
```

```
ellma/
├── ellma/              # Main package
│   ├── core/           # Core functionality
│   ├── commands/       # Built-in commands
│   ├── generators/     # Code generation
│   ├── models/         # Model management
│   └── utils/          # Utilities
├── tests/              # Test suite
├── docs/               # Documentation
└── scripts/            # Development scripts
```
ELLMa's evolution engine allows it to analyze its performance and automatically improve its capabilities.
```bash
# Run a single evolution cycle
ellma evolve

# Run multiple evolution cycles (up to 3 recommended)
ellma evolve --cycles 3

# Force evolution even if not enough commands have been executed
ellma evolve --force
```

- At least 10 commands should be executed before evolution is recommended
- Use `--force` to bypass this requirement
- Evolution status is shown in the main status output

```bash
# View evolution history (if available)
cat ~/.ellma/evolution/evolution_history.json | jq .

# Monitor evolution logs
tail -f ~/.ellma/logs/evolution.log

# Check evolution status in the main status output
ellma status
```

Customize the self-improvement process in `~/.ellma/config.yaml`:

```yaml
evolution:
  enabled: true        # Enable/disable evolution
  auto_improve: true   # Allow automatic improvements
  learning_rate: 0.1   # Learning rate for evolution (0.0-1.0)
```

The main status command shows key evolution metrics:

```bash
ellma status
```

Example output:
```
🤖 ELLMa Status
┌───────────────────┬─────────────────────┐
│ Property          │ Value               │
├───────────────────┼─────────────────────┤
│ Version           │ 0.1.6               │
│ Model Loaded      │ ✅ Yes              │
│ Model Path        │ /path/to/model.gguf │
│ Modules           │ 0                   │
│ Commands          │ 3                   │
│ Commands Executed │ 15                  │
│ Success Rate      │ 100.0%              │
│ Evolution Cycles  │ 0                   │
│ Modules Created   │ 0                   │
└───────────────────┴─────────────────────┘
```
### Monitoring Evolution
Track evolution progress and results:
```bash
# View evolution history with detailed metrics
ellma evolution history --limit 10

# Monitor evolution in real-time
ellma evolution monitor

# Get evolution statistics
ellma evolution stats

# Compare evolution cycles
ellma evolution compare cycle1 cycle2
```
- Start Conservative: Begin with a low learning rate before enabling auto-improve
- Monitor Progress: Regularly check evolution logs and metrics
- Set Resource Limits: Prevent excessive resource usage
- Use Benchmarks: Enable benchmarking to measure improvements
- Review Changes: Periodically review and test evolved modules
Common issues and solutions:
```bash
# If evolution gets stuck
ellma evolution cancel

# Reset to last known good state
ellma evolution rollback

# Clear evolution cache
ellma evolution clean

# Force reset evolution state (use with caution)
ellma evolution reset --confirm
```

1. Create a new Python module in `ellma/commands/`:

   ```python
   from ellma.commands.base import BaseCommand

   class MyCustomCommand(BaseCommand):
       """My custom command"""
   ```

2. Register your command in `ellma/commands/__init__.py`
3. Restart ELLMa to load your new command
ELLMa includes a powerful set of self-generated utilities for common programming tasks. These include:
- π‘οΈ Enhanced Error Handling: Automatic retries with exponential backoff
- β‘ Performance Caching: In-memory cache with TTL support
- π Parallel Processing: Easy parallel execution of tasks
See the Generated Utilities Documentation for detailed usage and examples.
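For instance, the performance-caching utility's core idea can be approximated with a small TTL decorator. This is a standard-library sketch; the generated utility's actual name and API may differ.

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    """Cache a function's results in memory, expiring entries after `ttl` seconds."""
    def decorator(fn):
        store = {}  # maps call args -> (timestamp, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            if args in store and now - store[args][0] < ttl:
                return store[args][1]  # fresh entry: serve from cache
            value = fn(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl=0.1)
def square(x):
    square.calls += 1
    return x * x
square.calls = 0

square(3); square(3)   # second call is served from cache
assert square.calls == 1
time.sleep(0.15)
square(3)              # entry expired, so the function runs again
assert square.calls == 2
```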
### Creating Custom Commands

1. Create a new command class:

   ```python
   from ellma.commands.base import BaseCommand

   class MyCustomCommand(BaseCommand):
       """My custom command"""

       def __init__(self, agent):
           super().__init__(agent)
           self.name = "custom"
           self.description = "My custom command"

       def my_action(self, param1: str, param2: int = 42):
           """Example action"""
           return {"result": f"Got {param1} and {param2}"}
   ```

2. Register your command in `ellma/commands/__init__.py`
### Creating Custom Modules
1. Create a new module class:
```python
from ellma.core.module import BaseModule

class MyCustomModule(BaseModule):
    def __init__(self, agent):
        super().__init__(agent)
        self.name = "my_module"
        self.version = "1.0.0"

    def setup(self):
        # Initialization code
        pass

    def execute(self, command: str, *args, **kwargs):
        # Handle commands
        if command == "greet":
            return f"Hello, {kwargs.get('name', 'World')}!"
        raise ValueError(f"Unknown command: {command}")
```

2. Register your module in the agent's configuration
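Assuming the `BaseModule` interface shown above, registering and invoking the module might look like the following sketch. The stub base class and the plain-dict registry are stand-ins for `ellma.core.module.BaseModule` and the agent's real module registry.

```python
# Stand-in for ellma.core.module.BaseModule, so the sketch is self-contained.
class BaseModule:
    def __init__(self, agent):
        self.agent = agent

class MyCustomModule(BaseModule):
    def __init__(self, agent):
        super().__init__(agent)
        self.name = "my_module"
        self.version = "1.0.0"

    def execute(self, command, *args, **kwargs):
        if command == "greet":
            return f"Hello, {kwargs.get('name', 'World')}!"
        raise ValueError(f"Unknown command: {command}")

# A registry maps module names to instances, mirroring step 2
# ("register your module in the agent's configuration").
registry = {}
module = MyCustomModule(agent=None)
registry[module.name] = module
print(registry["my_module"].execute("greet", name="ELLMa"))  # → Hello, ELLMa!
```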
We welcome contributions! Here's how you can help:
- Report Bugs: Open an issue with detailed steps to reproduce
- Suggest Features: Share your ideas for new features
- Submit Pull Requests: Follow these steps:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Update documentation
- Submit a PR
- Follow PEP 8 style guide
- Write docstrings for all public functions and classes
- Add type hints for better code clarity
- Write tests for new features
- Update documentation when making changes
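A short example of the expected style, showing a docstring, type hints, and PEP 8 naming (the function itself is just an illustration):

```python
def word_count(text: str) -> dict[str, int]:
    """Count occurrences of each whitespace-separated word in `text`.

    Args:
        text: Input string to tokenize.

    Returns:
        Mapping from word to number of occurrences.
    """
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```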
MIT License - see LICENSE for details.
For complete documentation, visit ellma.readthedocs.io
- Thanks to all contributors who have helped improve ELLMa
- Built with ❤️ by the ELLMa team
ELLMa continuously improves by analyzing its performance and automatically generating new modules:
```bash
$ ellma evolve
🧬 Starting evolution process...
📊 Analyzing current capabilities...
🎯 Identified 3 improvement opportunities:
✅ Added: advanced_file_analyzer
✅ Added: network_monitoring
✅ Added: code_optimizer
🎉 Evolution complete! 3 new capabilities added.
```

Track your agent's performance:
```bash
# Show agent status
ellma status

# View system health metrics
ellma exec system.health
```

```bash
# Run system scan
ellma exec system.scan

# Read web page content
ellma exec web.read https://example.com

# Read web page with link extraction
ellma exec web.read https://example.com extract_links true extract_text true

# Quick system health check
ellma exec system.health

# Save command output to file
ellma exec system.scan > scan_results.json
```

Start the interactive shell and use system commands:
```bash
# Start the interactive shell
ellma shell

# In the shell, you can run commands like:
ellma> system.health
ellma> system.scan
ellma> web.read https://example.com
ellma> web.read https://example.com extract_links true extract_text true
```

Example shell session:

```
🤖 ELLMa Interactive Shell (v0.1.6)
Type 'help' for available commands, 'exit' to quit

# Available commands:
# - system.health: Check system health
# - system.scan: Perform system scan
# - web.read [url]: Read web page content
# - web.read [url] extract_links true extract_text true: Read web page with link extraction
# - help: Show available commands

ellma> system.health
{'status': 'HEALTHY', 'cpu_usage': 12.5, 'memory_usage': 45.2, ...}

ellma> web.read example.com
{'status': 200, 'title': 'Example Domain', 'content_length': 1256, ...}

# For commands with parameters, use space-separated values
ellma> web.read example.com extract_links true extract_text true
```
Generate production-ready code in multiple languages:
```bash
# Generate Bash scripts
ellma generate bash --task="Monitor system resources and alert on high usage"

# Generate Python code
ellma generate python --task="Web scraper with rate limiting"

# Generate Docker configurations
ellma generate docker --task="Multi-service web application"

# Generate Groovy for Jenkins
ellma generate groovy --task="CI/CD pipeline with testing stages"
```

ELLMa understands your system and can:
- Scan and analyze system configurations
- Monitor processes and resources
- Automate repetitive tasks
- Generate custom tools for your workflow
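As a flavor of what a system scan collects, here is a stdlib-only sketch. It is illustrative: ELLMa's actual `system.scan` gathers far more and its output format will differ.

```python
import os
import platform
import shutil

def mini_scan() -> dict:
    """Collect a few basic facts about the host, using only the standard library."""
    total, used, free = shutil.disk_usage("/")
    return {
        "os": platform.system(),
        "release": platform.release(),
        "python": platform.python_version(),
        "cpu_count": os.cpu_count(),
        "disk_free_gb": round(free / 1024**3, 1),
    }

print(mini_scan())
```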
```
ellma/
├── core/                  # Core agent and evolution engine
│   ├── agent.py           # Main LLM Agent class
│   ├── evolution.py       # Self-improvement system
│   └── shell.py           # Interactive shell interface
├── commands/              # Modular command system
│   ├── system.py          # System operations
│   ├── web.py             # Web interactions
│   └── files.py           # File operations
├── generators/            # Code generation engines
│   ├── bash.py            # Bash script generator
│   ├── python.py          # Python code generator
│   └── docker.py          # Docker configuration generator
├── modules/               # Dynamic module system
│   ├── registry.py        # Module registry and loader
│   └── [auto-generated]/  # Self-created modules
└── cli/                   # Command-line interface
    ├── main.py            # Main CLI entry point
    └── shell.py           # Interactive shell
```
```bash
# Run comprehensive system scan
ellma exec system.scan

# Monitor system resources (60 seconds with 5-second intervals)
ellma exec system.monitor --duration 60 --interval 5

# Check system health status
ellma exec system.health

# List top processes by CPU usage
ellma exec system.processes --sort-by cpu --limit 10

# Check open network ports
ellma exec system.ports
```

```bash
# Generate a new Python project
ellma generate python --task "FastAPI project with SQLAlchemy and JWT auth"

# Create a Docker Compose setup
ellma generate docker --task "Python app with PostgreSQL and Redis"

# Generate test cases
ellma generate test --file app/main.py --framework pytest

# Document a Python function
ellma exec code document_function utils.py --function process_data
```

Explore practical examples of using the generated utilities in the `examples/generated_utils/` directory:
- Error Handling - Automatic retries with exponential backoff
- Performance Caching - Efficient data caching with TTL
- Parallel Processing - Concurrent task execution
- Combined Example - Using all utilities together
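The error-handling utility's core idea, retries with exponential backoff, can be sketched as a small decorator. This is an assumption-laden sketch, not the generated utility's exact API; the delays are kept tiny so the example runs instantly.

```python
import time
from functools import wraps

def retry(attempts: int = 3, base_delay: float = 0.01):
    """Retry a failing function, doubling the delay after each failed attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: propagate the last error
                    time.sleep(delay)
                    delay *= 2  # exponential backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky())  # succeeds on the third attempt, printing "ok"
```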
Run any example with:

```bash
# From the project root
python -m examples.generated_utils.example_name

# Or directly
cd examples/generated_utils/
python example_name.py
```

For more details, see the generated utilities documentation.
```bash
# Read and extract content from a webpage
ellma exec web.read https://example.com --extract-text --extract-links

# Make HTTP GET request to an API endpoint
ellma exec web.get https://api.example.com/data

# Make HTTP POST request with JSON data
ellma exec web.post https://api.example.com/data --data '{"key": "value"}'

# Generate API client code
ellma generate python --task "API client for REST service with error handling"
```
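Under the hood, a `web.post`-style call amounts to an HTTP request with a JSON body. A stdlib sketch of that step (no network is touched here; the helper name is illustrative):

```python
import json
import urllib.request

def build_json_request(url: str, payload: dict) -> urllib.request.Request:
    """Build a POST request with a JSON body, without sending it."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_json_request("https://api.example.com/data", {"key": "value"})
print(req.method, req.get_header("Content-type"))
```

Sending it would be a single `urllib.request.urlopen(req)` call (or the equivalent with `requests`).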
## Configuration
ELLMa stores its configuration in `~/.ellma/`:
```yaml
# ~/.ellma/config.yaml
model:
  path: ~/.ellma/models/mistral-7b.gguf
  context_length: 4096
  temperature: 0.7

evolution:
  enabled: true
  auto_improve: true
  learning_rate: 0.1

modules:
  auto_load: true
  custom_path: ~/.ellma/modules
```
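For programmatic access, the same structure can be represented as a plain dict (a sketch: a real loader would parse the YAML file, e.g. with PyYAML, and merge user overrides over these documented defaults):

```python
import os

# Defaults mirroring ~/.ellma/config.yaml
DEFAULT_CONFIG = {
    "model": {
        "path": "~/.ellma/models/mistral-7b.gguf",
        "context_length": 4096,
        "temperature": 0.7,
    },
    "evolution": {"enabled": True, "auto_improve": True, "learning_rate": 0.1},
    "modules": {"auto_load": True, "custom_path": "~/.ellma/modules"},
}

def model_path(config: dict) -> str:
    """Resolve the model path, expanding `~` to the user's home directory."""
    return os.path.expanduser(config["model"]["path"])

print(model_path(DEFAULT_CONFIG))
```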
- Performance Analysis: ELLMa monitors execution times, success rates, and user feedback
- Gap Identification: Identifies missing functionality or optimization opportunities
- Code Generation: Uses its LLM to generate new modules and improvements
- Testing & Integration: Automatically tests and integrates new capabilities
- Continuous Learning: Learns from each interaction to become more useful
```python
# Create custom modules that ELLMa can use and improve
from ellma.core.module import BaseModule

class MyCustomModule(BaseModule):
    def execute(self, *args, **kwargs):
        # Your custom functionality
        return result
```

```python
from ellma import ELLMa

# Use ELLMa programmatically
agent = ELLMa()
result = agent.execute("system.scan")
code = agent.generate("python", task="Data analysis script")
```

```bash
# Install web dependencies
pip install ellma[web]

# Start web interface
ellma web --port 8000
```

- Core agent with Mistral 7B
- Basic command system
- Shell interface
- Evolution foundation
- Advanced command completion
- Command history and favorites
- Real-time performance monitoring
- Module hot-reloading
- Multi-language code generators
- Template system
- Code quality analysis
- Integration testing
- Performance-based learning
- User feedback integration
- Predictive capability development
- Module marketplace
- Full self-management
- Advanced reasoning capabilities
- Multi-agent coordination
- Enterprise features
We welcome contributions! Please see our Contributing Guide for details.
```bash
# Clone repository
git clone https://github.com/wronai/ellma.git
cd ellma

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Run linting
black ellma/
flake8 ellma/
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Built on top of llama-cpp-python
- Inspired by the vision of autonomous AI agents
- Powered by the amazing Mistral 7B model
- Documentation
- Issue Tracker
- Discussions
- Email Support
ELLMa: The AI agent that grows with you 🌱→🌳