ContainMind 🐋🧠

A Universal Container Management MCP Server for Intelligent Container Operations

ContainMind is an MCP (Model Context Protocol) server that provides AI assistants with powerful container management capabilities across multiple container runtimes. It enables seamless inspection, monitoring, and analysis of containerized environments through natural language interactions.

🎯 What is ContainMind?

ContainMind bridges the gap between AI assistants (like Claude) and container runtimes, allowing you to manage and analyze your containers using conversational interfaces. It's a unified API that works with multiple container engines, providing real-time insights and automation capabilities.

🔥 Problem It Solves

The Challenge

Teams running modern containerized applications often face these issues:

  • Fragmented tooling: Different commands for Docker, Podman, and other runtimes
  • Complex debugging: Digging through logs and metrics across multiple containers
  • Performance visibility: Difficult to get quick insights into resource usage
  • Manual inspection: Time-consuming manual checks for container health and configuration
  • Context switching: Jumping between CLI tools and monitoring dashboards

The Solution

ContainMind provides:

  • Unified interface across Docker and Podman (with more runtimes coming)
  • AI-powered analysis through natural language queries
  • Real-time monitoring with easy-to-parse metrics
  • Automated diagnostics for troubleshooting container issues
  • Single entry point for all container operations

🚀 Features

Multi-Runtime Support

  • ✅ Docker - Full support for Docker Engine
  • ✅ Podman - Complete Podman compatibility
  • 🔄 Auto-detection - Automatically detects and connects to an available runtime (see the sketch below)
  • 🔌 Pluggable architecture - Easy to extend for additional runtimes
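
To make the auto-detection concrete, here is a minimal sketch. It assumes the `docker` Python SDK and common default socket paths; Podman exposes a Docker-compatible API on its own socket, so the same client class can reach both runtimes. The detect_runtime helper and the socket paths are illustrative assumptions, not ContainMind's actual internals.

# Minimal auto-detection sketch (illustrative; not ContainMind's source code).
# Assumes the `docker` SDK; adjust the socket paths for your system.
import docker

CANDIDATE_SOCKETS = {
    "docker": "unix:///var/run/docker.sock",
    "podman": "unix:///run/podman/podman.sock",
}

def detect_runtime():
    """Return (runtime_name, client) for the first socket that answers a ping."""
    for name, url in CANDIDATE_SOCKETS.items():
        try:
            client = docker.DockerClient(base_url=url)
            client.ping()  # cheap liveness check against the runtime API
            return name, client
        except Exception:
            continue
    raise RuntimeError("No container runtime detected (tried Docker and Podman)")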

Comprehensive Container Operations

📦 Resource Discovery

  • List all containers (running and stopped)
  • List container images
  • List volumes and their mount points
  • List networks and connected containers

🔍 Deep Inspection

  • Detailed container analysis (configuration, environment, mounts)
  • Real-time resource statistics (CPU, memory, network I/O)
  • Container logs with configurable tail length
  • System-wide information and capacity

⚡ Performance Monitoring

  • CPU usage percentage
  • Memory usage and limits
  • Network throughput (RX/TX)
  • Parallel stats collection for multiple containers (see the sketch below)
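
The parallel collection mentioned above can be pictured with the following sketch. It uses the `docker` SDK and the Docker Engine stats fields (cpu_stats, precpu_stats, memory_stats); treat the field names and helper functions as assumptions rather than ContainMind's exact implementation, since other runtimes may report slightly different data.

# Parallel stats sketch (illustrative; not ContainMind's source code).
from concurrent.futures import ThreadPoolExecutor

import docker

client = docker.from_env()

def cpu_percent(stats):
    """Approximate CPU % from a one-shot stats snapshot (Docker Engine API fields)."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"].get("system_cpu_usage", 0)
                    - stats["precpu_stats"].get("system_cpu_usage", 0))
    cores = stats["cpu_stats"].get("online_cpus", 1)
    return (cpu_delta / system_delta) * cores * 100.0 if system_delta > 0 else 0.0

def snapshot(container):
    stats = container.stats(stream=False)  # single snapshot instead of a live stream
    mem = stats.get("memory_stats", {})
    return {
        "name": container.name,
        "cpu_percent": round(cpu_percent(stats), 2),
        "memory_usage": mem.get("usage", 0),
        "memory_limit": mem.get("limit", 0),
    }

# Query all running containers concurrently instead of one by one.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(snapshot, client.containers.list()))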

🛠️ Available Tools

| Tool | Description | Use Case |
|------|-------------|----------|
| initialize_connection | Connect to container runtime | Initial setup; auto-detect or specify runtime |
| list_all_containers | List containers | Quick overview of all/running containers |
| list_all_images | List images | Check available images and sizes |
| list_all_volumes | List volumes | Identify storage usage and orphaned volumes |
| list_all_networks | List networks | Network topology and connectivity |
| analyze_container | Deep container inspection | Debug configuration, check mounts and env vars |
| get_container_stats | Resource usage metrics | Performance monitoring and optimization |
| get_container_logs | Retrieve container logs | Troubleshoot application issues |
| get_system_info | System information | Check overall capacity and utilization |
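
Each of these tools is exposed to the assistant over MCP. As a rough orientation, a tool definition could look like the sketch below, written against FastMCP (the framework invoked in the Claude Desktop configuration later in this README); the function body and defaults are illustrative, not ContainMind's source.

# Sketch of an MCP tool definition with FastMCP (illustrative only).
import docker
from fastmcp import FastMCP

mcp = FastMCP("ContainMind")
client = docker.from_env()

@mcp.tool()
def list_all_containers(include_stopped: bool = True):
    """List containers, optionally including stopped ones."""
    return [
        {"id": c.short_id, "name": c.name, "status": c.status, "image": c.image.tags}
        for c in client.containers.list(all=include_stopped)
    ]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what MCP clients expect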

💡 How ContainMind Helps

1. Issue Analysis

Scenario: Your application is crashing

You: "Why is my mongo_local_standard container showing high memory usage?"
AI: [Uses get_container_stats + analyze_container]
    "The container is using 89% of its 2GB memory limit. 
     Checking the configuration... no memory limit set in docker-compose.
     Recommendation: Set explicit memory limits and check for memory leaks."

2. Performance Tuning

Scenario: Optimizing resource allocation

You: "Which containers are using the most CPU?"
AI: [Uses get_container_stats with all_running=True]
    "Analysis of running containers:
     1. kafka-1: 45% CPU - consider adding more brokers
     2. redis-server: 12% CPU - within normal range
     Recommendation: Kafka is CPU-bound, consider horizontal scaling."

3. Quick Diagnostics

Scenario: Container won't start

You: "Check why weaviate_db failed to start"
AI: [Uses get_container_logs + analyze_container]
    "Log analysis shows port 8080 binding failure.
     Inspecting configuration... port already in use by another service.
     Recommendation: Change port mapping or stop conflicting service."

4. Resource Cleanup

Scenario: Disk space management

You: "Show me unused volumes taking up space"
AI: [Uses list_all_volumes + list_all_containers]
    "Found 45 anonymous volumes not attached to any containers.
     Total reclaimable space: ~3.2 GB
     Safe to remove: [lists volume IDs]"

5. Health Monitoring

Scenario: Proactive monitoring

You: "Give me a health report of my container environment"
AI: [Uses get_system_info + get_container_stats]
    "System Health Report:
     - 7 total containers (1 running, 6 stopped)
     - Memory: 8.2GB / 16GB used (51%)
     - CPU: Normal load across running containers
     - Network: No bottlenecks detected
     Alerts: 6 stopped containers may need attention."

6. Configuration Auditing

Scenario: Security and best practices

You: "Check environment variables for containers with sensitive data"
AI: [Uses analyze_container for each container]
    "Security audit complete:
     - mongo_local_standard: Contains DB credentials in env vars
     - redis-server: No authentication configured
     Recommendation: Use Docker secrets or external secret management."

📋 Installation & Setup

Prerequisites

  • Python 3.8+
  • Docker or Podman installed
  • Access to container runtime socket

Install Dependencies

pip install -r requirements.txt

Run the Server

python containmind.py

The server starts on http://127.0.0.1:8081 by default.

Configure with Claude Desktop

Add to your Claude Desktop configuration (claude_desktop_config.json):

{"mcpServers": {
"docker-Mcp": {
    "command": "uv",
    "args": [
        "run",
        "--with",
        "fastmcp",
        "fastmcp",
        "run",
        "path\to\proxy.py" 
    ],
    "env": {},
    "transport": "stdio"
}

    }
    
}

🔧 Usage Examples

Auto-detect Runtime

# ContainMind automatically detects Docker or Podman
initialize_connection()

Specify Runtime

# Force Docker
initialize_connection(backend="docker")

# Force Podman
initialize_connection(backend="podman")

# Custom socket
initialize_connection(base_url="unix:///run/podman/podman.sock")

Get Container Stats

# Single container
get_container_stats(container_id="mongo_local_standard")

# All running containers
get_container_stats(all_running=True, parallel=True)
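
Get Container Logs

The tail parameter name below is an assumption based on the "configurable tail length" feature; check the tool signature in your installation.

# Last 100 log lines of a container (parameter name assumed)
get_container_logs(container_id="weaviate_db", tail=100)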

🏗️ Architecture

┌──────────────────────────────────────────┐
│         AI Assistant (Claude)            │
└─────────────────┬────────────────────────┘
                  │ MCP Protocol
┌─────────────────▼────────────────────────┐
│          ContainMind Server              │
│  ┌───────────────────────────────────┐   │
│  │    Tool Interface Layer           │   │
│  └───────────────┬───────────────────┘   │
│  ┌───────────────▼───────────────────┐   │
│  │   Container Inspector             │   │
│  └───────────────┬───────────────────┘   │
│  ┌───────────────▼───────────────────┐   │
│  │   Backend Abstraction Layer       │   │
│  │  ┌────────┐      ┌────────┐       │   │
│  │  │ Docker │      │ Podman │       │   │
│  │  └───┬────┘      └───┬────┘       │   │
│  └──────┼───────────────┼────────────┘   │
└─────────┼───────────────┼────────────────┘
          │               │
    ┌─────▼─────┐    ┌────▼─────┐
    │  Docker   │    │  Podman  │
    │  Engine   │    │  Runtime │
    └───────────┘    └──────────┘
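
The Backend Abstraction Layer in the diagram is what keeps the rest of the server runtime-agnostic. A hypothetical sketch of such an interface is shown below; the class and method names are assumptions for illustration, not ContainMind's actual code.

# Hypothetical backend interface (names are assumptions, not ContainMind's code).
from abc import ABC, abstractmethod

import docker

class ContainerBackend(ABC):
    """Operations every runtime backend must provide to the tool layer."""

    @abstractmethod
    def list_containers(self, include_stopped: bool = True): ...

    @abstractmethod
    def get_stats(self, container_id: str): ...

    @abstractmethod
    def get_logs(self, container_id: str, tail: int = 100): ...

class DockerBackend(ContainerBackend):
    """Backed by the Docker Engine socket via the `docker` SDK."""

    def __init__(self, base_url="unix:///var/run/docker.sock"):
        self.client = docker.DockerClient(base_url=base_url)

    def list_containers(self, include_stopped=True):
        return self.client.containers.list(all=include_stopped)

    def get_stats(self, container_id):
        return self.client.containers.get(container_id).stats(stream=False)

    def get_logs(self, container_id, tail=100):
        return self.client.containers.get(container_id).logs(tail=tail).decode()

# Podman's Docker-compatible socket can reuse the same SDK with a different URL;
# adding containerd or CRI-O later would mean adding one more subclass.
class PodmanBackend(DockerBackend):
    def __init__(self, base_url="unix:///run/podman/podman.sock"):
        super().__init__(base_url=base_url)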

🔐 Security Considerations

  • ContainMind requires access to the container runtime socket
  • Runs with the same permissions as the user/process executing it
  • No authentication layer (relies on MCP transport security)
  • Recommended: Use in trusted environments or add authentication

🚧 Roadmap

  • Container lifecycle management (start/stop/restart)
  • Image building and management
  • Volume management operations
  • Network configuration tools
  • Container health checks
  • Web UI dashboard

🀝 Contributing

Contributions are welcome! Areas for improvement:

  • Additional container runtime support
  • Enhanced metrics collection
  • Performance optimizations
  • Documentation improvements

📄 License

This project is licensed under the MIT License.

🙏 Acknowledgments

Built with:


ContainMind - Making container management conversational, intelligent, and efficient. 🚀
