
mindX: Self-Improving Augmentic Intelligence

Version: 3.1.4 (Production Candidate - Core Loop)

Overview

mindX is an Augmentic Intelligence developed by PYTHAI under the conceptual "Augmentic Project" to facilitate knowledge delivery. mindX's central design philosophy is autonomous self-improvement: the system is designed to analyze its own Python codebase, identify areas for enhancement, generate solutions using Large Language Models (LLMs), and apply those improvements safely, evolving mindX's capabilities over time. The project draws inspiration from concepts such as the Darwin Gödel Machine and MIT's SEAL, emphasizing empirical validation of changes and maintaining a history of mindX's evolution.

mindX is architected around a suite of interacting Python agents and modules:

  • AGInt (Augmentic General Intelligence Agent): A higher-level cognitive agent operating on a Perceive-Orient-Decide-Act (PODA) cycle. It synthesizes a world model from the BeliefSystem, makes strategic decisions, and can delegate complex, plan-based tasks to a subordinate BDIAgent. It features a "Cognitive Metabolism" to intelligently select from a tiered hierarchy of LLMs, ensuring both high performance and operational resilience. (A minimal sketch of a PODA loop follows this list.)
  • CoordinatorAgent: The primary orchestrator. It manages high-level system operations, performs system-wide analysis (integrating data from code structure, resource monitors, and LLM performance monitors), maintains an "improvement backlog" of potential enhancements, and delegates tactical code modification tasks. It features an autonomous improvement loop with optional Human-in-the-Loop (HITL) for changes to critical components.
  • SelfImprovementAgent (SIA): The specialized "code surgeon." It is invoked via its Command Line Interface (CLI) by the CoordinatorAgent. Given a specific Python file and an improvement goal, the SIA uses an LLM to generate code modifications. Crucially, it evaluates these changes in isolated iteration directories (including running self-tests if modifying its own code) and employs safety mechanisms like backups and fallbacks before promoting successful changes, particularly for its own source code.
  • IDManagerAgent: Responsible for creating, managing, and deprecating cryptographic identities for agents and tools within the MindX system. It uses the eth-account library to generate Ethereum-style wallets and stores private keys in a secure, isolated .env file, paving the way for secure, verifiable inter-agent communication and further blockchain integrations for data integrity. (An illustrative identity-creation sketch follows this list.)
  • Monitoring Agents (ResourceMonitor, PerformanceMonitor): These agents continuously track system health (CPU, memory, disk usage across multiple configurable paths) and LLM interaction performance (latency, success rates, token counts, costs, error types per model/task). This data is vital for informing the CoordinatorAgent's analysis and strategic decision-making.
  • Strategic Evolution Agent (StrategicEvolutionAgent - formerly AGISelfImprovementAgent): A higher-level agent that can be tasked with managing broader self-improvement campaigns. It uses its own internal BDIAgent and a SystemAnalyzerTool to strategize, identify opportunities, and then makes specific component improvement requests to the CoordinatorAgent for tactical execution by the SIA.
  • BDIAgent (Belief-Desire-Intention Agent): A core component for goal-directed autonomous behavior. It uses an LLM to decompose high-level goals into concrete, executable plans (a sequence of tool calls or actions). It maintains beliefs in the BeliefSystem and executes its plans to achieve its desires (goals).
  • BeliefSystem: A shared, persistent, and namespaced knowledge base for agents to store, manage, and reason about information, observations, and statuses.
  • Utility Modules: A suite of supporting components for robust configuration management (Config), standardized logging (Logging), and a factory for LLM interaction handlers (LLMFactory).
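
The PODA cycle described for AGInt can be illustrated with a minimal, self-contained sketch. This is not the actual AGInt implementation; the class and method names below (SimplePODALoop, perceive, orient, decide, act) are hypothetical, and the real agent consults LLMs and the shared BeliefSystem at each stage.

    import asyncio

    class SimplePODALoop:
        """Illustrative Perceive-Orient-Decide-Act loop (hypothetical; not the real AGInt)."""

        def __init__(self, belief_store: dict):
            self.beliefs = belief_store  # stand-in for the shared BeliefSystem

        async def perceive(self) -> dict:
            # Gather raw observations (resource stats, monitor data, pending directives, ...).
            return {"cpu_load": 0.42, "pending_goals": ["improve logging"]}

        async def orient(self, observations: dict) -> None:
            # Fold observations into the world model / belief store.
            self.beliefs.update(observations)

        async def decide(self) -> str:
            # Choose the next action; the real agent would consult a tiered LLM hierarchy here.
            return "delegate_to_bdi" if self.beliefs.get("pending_goals") else "idle"

        async def act(self, decision: str) -> None:
            # Execute the chosen action (e.g., hand a plan-based task to a BDIAgent).
            print(f"Acting on decision: {decision}")

        async def run_once(self) -> None:
            observations = await self.perceive()
            await self.orient(observations)
            decision = await self.decide()
            await self.act(decision)

    asyncio.run(SimplePODALoop({}).run_once())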
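
The identity-creation pattern described for the IDManagerAgent can likewise be sketched with the eth-account library it relies on. This is a simplified sketch assuming a plain .env-style key store; the function name and variable layout are illustrative, not the agent's actual API.

    from pathlib import Path

    from eth_account import Account  # library named in the IDManagerAgent description

    def create_agent_identity(agent_name: str, env_path: Path = Path(".wallet_keys.env")) -> str:
        """Generate an Ethereum-style wallet and append its private key to an isolated .env file."""
        acct = Account.create()  # new random keypair
        with env_path.open("a", encoding="utf-8") as fh:
            # Store the private key under a per-agent variable name (illustrative naming scheme).
            fh.write(f"MINDX_WALLET_PK_{agent_name.upper()}={acct.key.hex()}\n")
        return acct.address  # the public address serves as the agent's verifiable identity

    if __name__ == "__main__":
        print("Created identity:", create_agent_identity("example_agent"))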

Project Vision & Goals

  • Explore AI Self-Improvement: To research, implement, and demonstrate mechanisms that enable an AI system to autonomously enhance its own functionality and performance.
  • Autonomous Code Evolution: To create a system capable of identifying areas for code improvement, generating solutions (via LLMs), and safely applying these modifications to its Python codebase.
  • Safe & Verifiable Changes: To build a framework where self-improvement cycles are managed with safety as a priority, incorporating verification steps (like syntax checks and automated self-tests) and fallback options.
  • LLM-Driven Cognitive Cycle: To leverage Large Language Models for various cognitive tasks within the self-improvement loop, including code analysis, solution generation, and critique of proposed changes.
  • Evolving Platform: To provide an extensible platform for experimenting with different strategies for autonomous AI development, learning, and strategic evolution.

Core Features

  • Hierarchical Improvement Process:
    • Strategic Layer (StrategicEvolutionAgent): Manages long-term improvement campaigns, identifies broad areas using SystemAnalyzerTool, and uses an internal BDIAgent to plan campaign steps.
    • Orchestration Layer (CoordinatorAgent): Receives strategic directives or direct user requests. Performs system-wide analysis, manages an improvement_backlog, handles HITL for critical changes, and delegates tactical code modifications.
    • Tactical Layer (SelfImprovementAgent): Executes specific file modification tasks via its robust CLI, ensuring safety and verification.
  • Data-Informed System Analysis: The CoordinatorAgent integrates data from codebase scans, resource monitors, and LLM performance monitors to make informed suggestions for improvements.
  • Autonomous Improvement Loop (Coordinator): Periodically analyzes the system, adds suggestions to a persistent backlog, and (if configured) attempts to implement high-priority items, respecting HITL.
  • Human-in-the-Loop (HITL): Changes to designated critical system components (e.g., SIA, Coordinator) can be configured to require manual approval via CLI before autonomous application.
  • Safe & Verified Code Modification (via SelfImprovementAgent):
    • CLI Interface: Decoupled execution via a standardized command-line interface.
    • Iteration Directories: Self-modifications are developed and tested in isolated temporary directories.
    • Automated Self-Tests: When modifying its own code, the SIA runs a suite of self-tests on the changed version before it can be promoted.
    • LLM-Critique: An LLM evaluates the quality and goal-adherence of generated code changes.
    • Backup & Fallback: The SIA automatically backs up its current script before promoting a self-update, allowing for reversion.
    • Structured JSON Output: The SIA CLI provides detailed, machine-parsable JSON reports of its operations (see the invocation sketch after this feature list).
  • Comprehensive Monitoring:
    • ResourceMonitor: Tracks CPU, memory, and multi-path disk usage with configurable alert thresholds and callbacks (a psutil-based sketch follows this feature list).
    • PerformanceMonitor: Logs detailed metrics for LLM calls (latency, tokens, cost, success/failure rates, error types) per model and optionally per task type/initiating agent. Metrics are persisted.
  • Centralized Configuration (Config): Robustly loads settings from code defaults, a JSON file (mindx_config.json), .env files, and MINDX_ prefixed environment variables, with clear precedence. PROJECT_ROOT is centrally defined.
  • Shared Belief System (BeliefSystem): A persistent, namespaced knowledge base for agents to share and record information, observations, and statuses.
  • Modular & Asynchronous Design: Built with Python's asyncio for concurrent operations and a modular structure for better maintainability and extensibility.
  • Comprehensive Documentation: A full overview is available in the docs directory and is mapped as index.md.
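
Because the SelfImprovementAgent is driven through its CLI and reports results as machine-parsable JSON, a caller such as the CoordinatorAgent can treat it as a subprocess. The sketch below illustrates only that pattern; the actual flags accepted by self_improve_agent.py are not documented here, so the arguments shown are assumptions.

    import json
    import subprocess
    import sys

    # Hypothetical invocation: the real CLI flags of self_improve_agent.py may differ.
    cmd = [
        sys.executable, "mindx/learning/self_improve_agent.py",
        "--target-file", "mindx/utils/config.py",
        "--goal", "Add validation for missing configuration keys",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)

    try:
        report = json.loads(result.stdout)  # SIA emits a structured JSON report of its operation
    except json.JSONDecodeError:
        report = {"status": "ERROR", "detail": result.stderr}

    print(report.get("status"), report.get("detail", ""))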
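
The checks performed by the ResourceMonitor can be sketched with psutil, one of the project's listed dependencies. The thresholds, monitored paths, and callback below are illustrative defaults, not the monitor's actual configuration (which is read from Config).

    import psutil

    # Illustrative thresholds; the real ResourceMonitor loads these from configuration.
    CPU_ALERT_PERCENT = 90.0
    MEM_ALERT_PERCENT = 85.0
    DISK_ALERT_PERCENT = 90.0
    MONITORED_PATHS = ["/", "/tmp"]  # multi-path disk monitoring

    def check_resources(alert_callback=print) -> None:
        cpu = psutil.cpu_percent(interval=1.0)
        mem = psutil.virtual_memory().percent
        if cpu > CPU_ALERT_PERCENT:
            alert_callback(f"CPU usage high: {cpu:.1f}%")
        if mem > MEM_ALERT_PERCENT:
            alert_callback(f"Memory usage high: {mem:.1f}%")
        for path in MONITORED_PATHS:
            usage = psutil.disk_usage(path).percent
            if usage > DISK_ALERT_PERCENT:
                alert_callback(f"Disk usage high on {path}: {usage:.1f}%")

    check_resources()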

Project File Structure

augmentic_mindx/
├── mindx/                                   # Main MindX Python package (installable)
│   ├── core/                                # Core agent concepts
│   │   ├── __init__.py
│   │   ├── agint.py                         # High-level strategic agent (PODA cycle)
│   │   ├── bdi_agent.py                     # Belief-Desire-Intention agent for plan execution
│   │   ├── belief_system.py                 # Shared knowledge base
│   │   └── id_manager_agent.py              # Manages cryptographic identities
│   ├── orchestration/                       # System-level coordination
│   │   ├── __init__.py
│   │   ├── coordinator_agent.py             # Main orchestrator
│   │   ├── multimodel_agent.py              # STUB: For managing multiple LLM tasks
│   │   └── model_selector.py                # STUB: For selecting LLMs
│   ├── learning/                            # Self-improvement and evolution logic
│   │   ├── __init__.py
│   │   ├── self_improve_agent.py            # Tactical code modification worker (CLI)
│   │   ├── strategic_evolution_agent.py     # Strategic improvement campaign manager
│   │   ├── goal_management.py               # Goal manager for SEA/BDI
│   │   └── plan_management.py               # Plan manager for SEA/BDI
│   ├── monitoring/                          # System and performance monitoring
│   │   ├── __init__.py
│   │   ├── resource_monitor.py              # Monitors CPU, memory, disk
│   │   └── performance_monitor.py           # Monitors LLM call performance
│   ├── llm/                                 # LLM interaction layer
│   │   ├── __init__.py
│   │   ├── llm_interface.py                 # Abstract interface for LLM handlers
│   │   ├── llm_factory.py                   # Creates specific LLM handlers
│   │   └── model_registry.py                # Manages available LLM handlers
│   ├── utils/                               # Common utilities
│   │   ├── __init__.py
│   │   ├── logging_config.py                # Centralized logging setup
│   │   └── config.py                        # Configuration management (defines PROJECT_ROOT)
│   ├── docs/                                # STUB PACKAGE: For documentation system
│   │   ├── __init__.py
│   │   └── documentation_agent.py           # STUB: Agent for managing documentation
│   └── __init__.py                          # Makes 'mindx' a package
├── scripts/                                 # Executable scripts
│   └── run_mindx_coordinator.py             # Main CLI entry point for MindX system
├── data/                                    # Data generated and used by MindX (persistent state)
│   ├── config/                              # Optional location for mindx_config.json
│   ├── logs/                                # Application logs (e.g., mindx_system.log)
│   ├── self_improvement_work_sia/           # Data specific to SelfImprovementAgent instances
│   │   └── self_improve_agent/              # Subdirectory named after SIA script stem
│   │       ├── archive/                     # SIA's detailed attempt history (improvement_history.jsonl)
│   │       └── fallback_versions/           # Backups of SIA script after successful self-updates
│   ├── temp_sia_contexts/                   # Temporary files for Coordinator to pass large contexts to SIA CLI
│   ├── improvement_backlog.json             # Coordinator's prioritized list of improvement tasks
│   ├── improvement_campaign_history.json    # Coordinator's log of dispatched SIA campaigns
│   ├── sea_campaign_history_*.json          # StrategicEvolutionAgent's campaign history files
│   ├── bdi_notes/                           # Example notes directory for BDI tools
│   └── performance_metrics.json             # Persisted data from PerformanceMonitor
├── tests/                                   # Placeholder for unit and integration tests
├── .env                                     # Local environment variables (API keys, overrides - GIT IGNORED)
├── mindx_config.json                        # Optional global JSON configuration file (example)
├── pyproject.toml                           # Project metadata, dependencies, tool configurations
└── README.md                                # This file

Getting Started

Prerequisites

  • Python 3.9 or higher.
  • pip (Python package installer).
  • Access to Large Language Models:
    • Ollama (Recommended for local development): Install Ollama and pull desired models (e.g., ollama pull deepseek-coder:6.7b-instruct, ollama pull nous-hermes2:latest).
    • Google Gemini: An API key from Google AI Studio.
    • Other providers can be integrated by extending mindx/llm/llm_factory.py.

Installation

  1. Clone Repository: Clone this repository (if you have not already) and change into the project root.
  2. Create Virtual Environment:
    python3 -m venv .venv
    source .venv/bin/activate  # Linux/macOS
    # .venv\Scripts\activate   # Windows
  3. Install Dependencies: The pyproject.toml lists dependencies.
    pip install -e .[dev] 
    # Installs MindX in editable mode with development dependencies (like pytest, ruff)
    # Or, for runtime only: pip install .
    This will install packages like psutil, python-dotenv, PyYAML, ollama, google-generativeai.

Configuration

  1. Create .env file: In the project root (augmentic_mindx/), create a .env file. You can copy .env.example if one is provided. This file holds secrets such as API keys and local overrides and should be listed in .gitignore. Example .env content (a sketch of how these MINDX_-prefixed variables could map onto nested configuration keys follows the example):
    MINDX_LOG_LEVEL="INFO" # Or DEBUG for more verbosity
    
    MINDX_LLM__DEFAULT_PROVIDER="ollama"
    MINDX_LLM__OLLAMA__DEFAULT_MODEL="nous-hermes2:latest"
    MINDX_LLM__OLLAMA__DEFAULT_MODEL_FOR_CODING="deepseek-coder:6.7b-instruct"
    # MINDX_LLM__OLLAMA__BASE_URL="http://localhost:11434" # Default
    
    # GEMINI_API_KEY="YOUR_GEMINI_API_KEY" # For direct SDK use if needed by a tool
    MINDX_LLM__GEMINI__API_KEY="YOUR_GEMINI_API_KEY_HERE"
    MINDX_LLM__GEMINI__DEFAULT_MODEL="gemini-1.5-flash-latest"
    
    MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__ENABLED="false" # Start with false
    MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__REQUIRE_HUMAN_APPROVAL_FOR_CRITICAL="true"
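
As a rough illustration of how MINDX_-prefixed variables with double-underscore separators could map onto nested configuration keys, consider the sketch below. It reflects the naming convention shown above, not the actual Config implementation.

    import os

    def collect_mindx_env_overrides(prefix: str = "MINDX_") -> dict:
        """Turn e.g. MINDX_LLM__OLLAMA__DEFAULT_MODEL into {'llm': {'ollama': {'default_model': ...}}}."""
        config: dict = {}
        for name, value in os.environ.items():
            if not name.startswith(prefix):
                continue
            keys = [part.lower() for part in name[len(prefix):].split("__")]
            node = config
            for key in keys[:-1]:
                node = node.setdefault(key, {})
            node[keys[-1]] = value
        return config

    # With the example .env loaded, this might yield something like:
    # {'log_level': 'INFO', 'llm': {'default_provider': 'ollama', 'ollama': {...}}, 'coordinator': {...}}
    print(collect_mindx_env_overrides())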
