Project Directed by: Steven Fisher
Designed by: ChatGPT
Implemented using: Cursor
Powered by: Claude
This is an advanced Artificial Mind system that represents a unique collaboration between human direction and AI capabilities. The project demonstrates AI designing and building AI, showcasing the potential for recursive intelligence development, where AI systems contribute to their own evolution and to the creation of more sophisticated AI architectures.
A short demo video of the project running: YouTube Demo Video
This project embodies the concept of "AI Building AI" - where artificial intelligence systems are not just tools, but active participants in the design and implementation of more advanced AI systems. It represents a step toward recursive self-improvement and collaborative intelligence development.
The Artificial Mind system consists of several interconnected components that work together to create a comprehensive artificial intelligence platform:
- 🧠 FSM Engine - Finite State Machine for cognitive state management
- 🎯 HDN (Hierarchical Decision Network) - AI planning and execution system with ethical safeguards
- ⚖️ Principles API - Ethical decision-making system for AI actions
- 💬 Conversational Layer - Natural language interface with thinking mode
- 🔧 Tool System - Extensible tool framework for AI capabilities
- 📊 Monitor UI - Real-time visualization and control interface
- 🧠 Thinking Mode - Real-time AI introspection and transparency
- Real-time Thought Expression - See inside the AI's reasoning process
- Ethical Safeguards - Built-in principles checking for all actions
- Hierarchical Planning - Multi-level task decomposition and execution
- Natural Language Interface - Conversational AI with full transparency
- Tool Integration - Extensible framework for AI capabilities
- Knowledge Growth - Continuous learning and adaptation
- System Overview - High-level system architecture
- Architecture Details - Detailed technical architecture
- Solution Architecture Diagram - Visual system design
- HDN Architecture - Hierarchical Decision Network design
- Thinking Mode - Real-time AI introspection and transparency
- Reasoning & Inference - AI reasoning capabilities
- Reasoning Implementation - Technical implementation details
- Knowledge Growth - Continuous learning system
- Domain Knowledge - Knowledge representation and management
- Conversational AI Summary - Natural language interface
- Natural Language Interface - Language processing capabilities
- API Reference - Complete API documentation
- Principles Integration - Ethical decision-making system
- Content Safety - Safety mechanisms and content filtering
- Dynamic Integration Guide - Dynamic system integration
- Setup Guide - Complete setup instructions for new users
- Configuration Guide - Docker, LLM, and deployment configuration
- Secure Packaging Guide - Binary encryption and security
- Implementation Summary - Development overview
- Integration Guide - System integration instructions
- Refactoring Plan - Code organization and refactoring
- Tool Metrics - Performance monitoring and metrics
- Docker Compose - Local development deployment
- Kubernetes (k3s) - Production Kubernetes deployment
- Docker Resource Config - Container configuration
- Docker Reuse Strategy - Container optimization
- Tool Metrics - Performance monitoring
- Intelligent Execution - Execution monitoring and analysis
```bash
# 1. Clone the repository
git clone https://github.com/yourusername/agi-project.git
cd agi-project

# 2. Start infrastructure (Redis, Neo4j, Weaviate, NATS)
docker compose up -d   # or: docker-compose up -d

# 3. Start app services without touching infra (safer on macOS)
./scripts/start_servers.sh --skip-infra

# 4. Open your browser to http://localhost:8082
```

- Docker & Docker Compose - Download here
- Git - Download here
- LLM Provider - OpenAI, Anthropic, or local LLM (see Setup Guide)
- Copy the environment template:

  ```bash
  cp env.example .env
  ```

- Edit the configuration (see Configuration Guide):

  ```bash
  nano .env
  ```
The `.env` file contains all configuration, including:

- LLM Provider Settings (OpenAI, Anthropic, Ollama, Mock)
- Service URLs (Redis, NATS, Neo4j, Weaviate)
- Database Configuration (Neo4j credentials, Qdrant URL)
- Docker Resource Limits (Memory, CPU, PIDs)
- Performance Tuning (Concurrent executions, timeouts)
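Purely as an illustration, a local-development `.env` covering those categories might look like the sketch below. The variable names here are assumptions, not taken from this repository; `env.example` is the authoritative list of keys:

```bash
# Hypothetical key names -- check env.example for the names this build actually reads
LLM_PROVIDER=ollama                 # openai | anthropic | ollama | mock
OLLAMA_URL=http://localhost:11434
REDIS_URL=redis://localhost:6379
NATS_URL=nats://localhost:4222
NEO4J_URI=bolt://localhost:7687
NEO4J_PASSWORD=change-me
DOCKER_MEMORY_LIMIT=512m            # container resource cap
MAX_CONCURRENT_EXECUTIONS=4         # performance tuning
```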
```bash
# Start all services with x86_64 optimized images
docker-compose -f docker-compose.x86.yml up -d

# Check status
docker-compose -f docker-compose.x86.yml ps

# View logs
docker-compose -f docker-compose.x86.yml logs -f
```

```bash
# Start all services with ARM64 images
docker compose up -d   # prefer v2 syntax if available; otherwise use docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f
```

If you already started infrastructure with Compose, you can start just the Go services without touching Docker ports using the new flag:

```bash
./scripts/start_servers.sh --skip-infra
```

This avoids killing Docker Desktop proxy processes on macOS and prevents daemon disruptions.
```bash
# Build for multiple architectures
./scripts/build-multi-arch.sh -r your-registry.com -t latest --push

# Or use Makefile for local builds
make build-x86        # Build for x86_64
make build-arm64      # Build for ARM64
make build-all-archs  # Build for both
```

```bash
# Deploy to k3s cluster on ARM Raspberry Pi
kubectl apply -f k3s/namespace.yaml
kubectl apply -f k3s/pvc-*.yaml
kubectl apply -f k3s/redis.yaml -f k3s/weaviate.yaml -f k3s/neo4j.yaml -f k3s/nats.yaml
kubectl apply -f k3s/principles-server.yaml -f k3s/hdn-server.yaml -f k3s/goal-manager.yaml -f k3s/fsm-server.yaml -f k3s/monitor-ui.yaml

# Check deployment
kubectl -n agi get pods,svc
```

Note: All Kubernetes configurations use `kubernetes.io/arch: arm64` node selectors and are optimized for ARM Raspberry Pi hardware with Drone CI execution methods.

See k3s/README.md for detailed Kubernetes deployment instructions.
```bash
# Build all components
make build

# Start services individually
./bin/principles-server &
./bin/hdn-server -mode=server &
./bin/goal-manager -agent=agent_1 &
./bin/fsm-server &
```

```bash
# Test basic functionality
curl http://localhost:8081/health

# Test chat with thinking mode
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! Think out loud about what you can do.", "show_thinking": true}'

# Test specific LLM provider
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What LLM provider are you using?", "show_thinking": true}'
```

Create security files for production:

```bash
# Create secure directory and keypairs
mkdir -p secure/
openssl genrsa -out secure/customer_private.pem 2048
openssl rsa -in secure/customer_private.pem -pubout -out secure/customer_public.pem
openssl genrsa -out secure/vendor_private.pem 2048
openssl rsa -in secure/vendor_private.pem -pubout -out secure/vendor_public.pem
echo "your-token-content-here" > secure/token.txt
```

See Secure Packaging Guide for details.
Experience real-time AI introspection with our revolutionary thinking mode:

```json
{
  "message": "Please learn about black holes and explain them to me",
  "show_thinking": true
}
```

Features:
- Real-time thought streaming via WebSockets/SSE
- Multiple thought styles (conversational, technical, streaming)
- Confidence visualization and decision tracking
- Tool usage monitoring and execution transparency
- Educational interface for understanding AI reasoning
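To illustrate the streaming side, here is a sketch of how a client might decode a Server-Sent Events thought stream. The `data: <json>` framing is the standard SSE wire format, and the payload fields (`thought`, `confidence`) are assumptions for illustration, not taken from this project's API:

```python
import json

def parse_sse_thoughts(raw: str):
    """Parse SSE framing into a list of thought payloads.

    Each event is one or more `data:` lines terminated by a blank line.
    The JSON payload shape below is hypothetical.
    """
    thoughts = []
    for block in raw.split("\n\n"):
        data_lines = [line[len("data:"):].strip()
                      for line in block.splitlines()
                      if line.startswith("data:")]
        if data_lines:
            thoughts.append(json.loads("\n".join(data_lines)))
    return thoughts

# Example stream as it might arrive over the wire
stream = (
    'data: {"thought": "Parsing the question", "confidence": 0.9}\n\n'
    'data: {"thought": "Selecting a tool", "confidence": 0.7}\n\n'
)
print(parse_sse_thoughts(stream))
```

A real client would read chunks from the `/thoughts/stream` endpoint and feed completed events into the same parser.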
- Pre-execution checking - All actions validated before execution
- Dynamic rule loading - Update ethical rules without restarting
- Fail-safe design - Continues operation with safety checks
- Transparent decision-making - Clear reasoning for all decisions
- Multi-level task decomposition - Break complex tasks into manageable steps
- Dynamic task analysis - Handles LLM-generated tasks intelligently
- Context-aware execution - Maintains context across task hierarchies
- Progress tracking - Real-time monitoring of task execution
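To make the decomposition idea concrete, here is a toy sketch of a task tree with roll-up progress tracking. This is an illustration only; the project's actual HDN data model is not shown in this README:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Toy task node -- leaves are done or not; inner nodes average children."""
    name: str
    done: bool = False
    subtasks: list = field(default_factory=list)

    def progress(self) -> float:
        if not self.subtasks:
            return 1.0 if self.done else 0.0
        return sum(t.progress() for t in self.subtasks) / len(self.subtasks)

# A complex request decomposed into a two-level plan
plan = Task("scrape and analyze", subtasks=[
    Task("fetch page", done=True),
    Task("analyze content", subtasks=[
        Task("extract text", done=True),
        Task("summarize"),                 # not yet executed
    ]),
])
print(plan.progress())  # 0.75
```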
- Conversational AI - Natural language interaction with full transparency
- Intent recognition - Understands user goals and context
- Multi-modal communication - Text, structured data, and visual interfaces
- Session management - Persistent conversation context
| Service | Port | Description |
|---|---|---|
| Principles API | 8080 | Ethical decision-making |
| HDN Server | 8081 | AI planning and execution |
| Monitor UI | 8082 | Real-time visualization |
| FSM Server | 8083 | Cognitive state management |
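A quick way to probe the services in the table above is to loop over their ports. Note the assumption here: only the HDN server's `/health` endpoint is confirmed in this README; whether the other three expose the same path is a guess:

```python
from urllib import request, error

SERVICES = {
    "Principles API": 8080,
    "HDN Server": 8081,
    "Monitor UI": 8082,
    "FSM Server": 8083,
}

def health_url(port: int) -> str:
    # Assumes each service exposes /health like the HDN server does
    return f"http://localhost:{port}/health"

def check_all(timeout: float = 2.0) -> dict:
    """Return {service_name: True/False} for each port's health endpoint."""
    status = {}
    for name, port in SERVICES.items():
        try:
            with request.urlopen(health_url(port), timeout=timeout) as resp:
                status[name] = resp.status == 200
        except (error.URLError, OSError):
            status[name] = False
    return status
```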
- `POST /api/v1/chat` - Chat with thinking mode enabled
- `GET /api/v1/chat/sessions/{id}/thoughts` - Get AI thoughts
- `GET /api/v1/chat/sessions/{id}/thoughts/stream` - Stream thoughts in real-time

- `POST /api/v1/interpret/execute` - Natural language task execution
- `POST /api/v1/hierarchical/execute` - Complex task planning
- `POST /api/v1/docker/execute` - Code execution in containers

- `GET /api/v1/tools` - List available tools
- `POST /api/v1/tools/execute` - Execute specific tools
- `GET /api/v1/intelligent/capabilities` - AI capabilities
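For programmatic use, the chat endpoint can be wrapped in a small stdlib-only helper. The request shape mirrors the curl examples in this README; the structure of the JSON response is an assumption, so adapt the return handling to what your server actually sends:

```python
import json
from urllib import request

def build_chat_request(message: str, session_id: str = "demo_session",
                       show_thinking: bool = True,
                       base_url: str = "http://localhost:8081") -> request.Request:
    """Build the POST for /api/v1/chat (payload shape from the curl examples)."""
    body = json.dumps({
        "message": message,
        "show_thinking": show_thinking,
        "session_id": session_id,
    }).encode("utf-8")
    return request.Request(
        f"{base_url}/api/v1/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(message: str, **kwargs) -> dict:
    """Send a chat message and decode the JSON reply (response shape assumed)."""
    with request.urlopen(build_chat_request(message, **kwargs)) as resp:
        return json.loads(resp.read())
```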
```bash
curl -X POST http://localhost:8081/api/v1/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Explain quantum computing in simple terms",
    "show_thinking": true,
    "session_id": "demo_session"
  }'

# Stream AI thoughts in real-time
curl http://localhost:8081/api/v1/chat/sessions/demo_session/thoughts/stream
```

```bash
curl -X POST http://localhost:8081/api/v1/interpret/execute \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Scrape https://example.com and analyze the content"
  }'
```

```bash
curl -X POST http://localhost:8081/api/v1/docker/execute \
  -H "Content-Type: application/json" \
  -d '{
    "code": "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    "language": "python"
  }'
```

```bash
make test-integration
```

```bash
make test-principles   # Test ethical decision-making
make test-hdn          # Test AI planning system
make test-thinking     # Test thinking mode features
```

```bash
make test-performance  # Load and stress testing
make test-metrics      # Performance metrics
```

```bash
make dev   # Start all services with auto-reload
```

```bash
make fmt       # Format code
make lint      # Lint code
make coverage  # Generate coverage report
```

- Create feature branch
- Implement changes
- Add tests
- Update documentation
- Submit pull request
This project represents a unique approach where AI systems actively participate in their own development and the creation of more advanced AI architectures.
The thinking mode provides unprecedented insight into AI decision-making processes, enabling trust and understanding.
Built-in ethical safeguards ensure all AI actions are evaluated against moral principles before execution.
Multi-level planning and execution capabilities that can handle complex, multi-step tasks intelligently.
The system grows and adapts through experience, demonstrating true learning capabilities.
We welcome contributions from the AI and research community! This project represents a collaborative effort between human intelligence and artificial intelligence.
- Fork the repository
- Create a feature branch
- Implement your changes
- Add comprehensive tests
- Update documentation
- Submit a pull request
- New AI capabilities - Extend the tool system
- Ethical frameworks - Improve the principles system
- Interface improvements - Enhance user experience
- Performance optimization - Improve system efficiency
- Documentation - Help others understand the system
This project is licensed under the MIT License with Attribution Requirement.
- ✅ Free to use for personal and commercial projects
- ✅ Free to modify and create derivative works
- ✅ Free to distribute and sell
- 📝 Must attribute Steven Fisher as the original author
- 📝 Must include this license file in derivative works
When using this software, you must:
- Include the original copyright notice: "Copyright (c) 2025 Steven Fisher"
- Display "Steven Fisher" in README files, credits, or documentation
- Include this LICENSE file in your project
- Preserve attribution in any derivative works
See the LICENSE file for complete terms.
This license ensures Steven Fisher receives proper credit while allowing maximum freedom for others to use and build upon this work.
- Steven Fisher - Project Direction and Vision
- ChatGPT - System Design and Architecture
- Cursor - Development Environment and Tools
- Claude - Implementation and Code Generation
- Open Source Community - Foundational technologies and libraries
"The best way to predict the future is to invent it, and the best way to invent the future is to have AI help us build it."
This project demonstrates that the future of AI development is not just human-led or AI-led, but a collaborative partnership between human creativity and artificial intelligence capabilities.