A Universal Container Management MCP Server for Intelligent Container Operations
ContainMind is an MCP (Model Context Protocol) server that provides AI assistants with powerful container management capabilities across multiple container runtimes. It enables seamless inspection, monitoring, and analysis of containerized environments through natural language interactions.
ContainMind bridges the gap between AI assistants (like Claude) and container runtimes, letting you manage and analyze your containers through a conversational interface. It exposes a unified API that works across multiple container engines, providing real-time insights and automation capabilities.
Modern containerized applications often face these issues:
- Fragmented tooling: Different commands for Docker, Podman, and other runtimes
- Complex debugging: Digging through logs and metrics across multiple containers
- Poor performance visibility: Hard to get quick insights into resource usage
- Manual inspection: Time-consuming manual checks for container health and configuration
- Context switching: Jumping between CLI tools and monitoring dashboards
ContainMind provides:
- Unified interface across Docker and Podman (with more runtimes coming)
- AI-powered analysis through natural language queries
- Real-time monitoring with easy-to-parse metrics
- Automated diagnostics for troubleshooting container issues
- Single entry point for all container operations
- Docker - Full support for Docker Engine
- Podman - Complete Podman compatibility
- Auto-detection - Automatically detects and connects to an available runtime
- Pluggable architecture - Easy to extend with additional runtimes
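Conceptually, auto-detection comes down to probing the usual runtime socket locations in priority order. A minimal sketch, assuming typical default socket paths (the paths and the `detect_runtime` helper are illustrative, not ContainMind's actual internals):

```python
import os

# Common runtime socket locations to probe, in priority order.
# These paths are typical defaults; actual systems may differ.
CANDIDATE_SOCKETS = [
    ("docker", "/var/run/docker.sock"),
    ("podman", "/run/podman/podman.sock"),
]

def detect_runtime():
    """Return the name of the first runtime whose socket exists, else None."""
    for name, path in CANDIDATE_SOCKETS:
        if os.path.exists(path):
            return name
    return None
```

If no socket is found, a server built this way can fall back to an explicit `backend=` argument, as shown in the usage examples below.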
- List all containers (running and stopped)
- List container images
- List volumes and their mount points
- List networks and connected containers
- Detailed container analysis (configuration, environment, mounts)
- Real-time resource statistics (CPU, memory, network I/O)
- Container logs with configurable tail length
- System-wide information and capacity
- CPU usage percentage
- Memory usage and limits
- Network throughput (RX/TX)
- Parallel stats collection for multiple containers
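For reference, CPU percentage is conventionally derived from the delta between two consecutive raw samples. A sketch following the field names of Docker's `/containers/{id}/stats` response (ContainMind's exact parsing may differ):

```python
def cpu_percent(stats: dict) -> float:
    """Compute CPU usage % from a Docker-style stats sample.

    Uses the delta between the current (cpu_stats) and previous
    (precpu_stats) readings, scaled by the number of online CPUs.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    if system_delta <= 0:
        return 0.0
    online_cpus = cpu.get("online_cpus", 1)
    return (cpu_delta / system_delta) * online_cpus * 100.0

# Synthetic sample: the container consumed 20% of total system CPU time
# across 4 CPUs during the sampling window.
sample = {
    "cpu_stats": {
        "cpu_usage": {"total_usage": 400_000_000},
        "system_cpu_usage": 2_000_000_000,
        "online_cpus": 4,
    },
    "precpu_stats": {
        "cpu_usage": {"total_usage": 200_000_000},
        "system_cpu_usage": 1_000_000_000,
    },
}
print(round(cpu_percent(sample), 1))  # 80.0
```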
| Tool | Description | Use Case |
|---|---|---|
| `initialize_connection` | Connect to container runtime | Initial setup; auto-detect or specify runtime |
| `list_all_containers` | List containers | Quick overview of all/running containers |
| `list_all_images` | List images | Check available images and sizes |
| `list_all_volumes` | List volumes | Identify storage usage and orphaned volumes |
| `list_all_networks` | List networks | Network topology and connectivity |
| `analyze_container` | Deep container inspection | Debug configuration, check mounts and env vars |
| `get_container_stats` | Resource usage metrics | Performance monitoring and optimization |
| `get_container_logs` | Retrieve container logs | Troubleshoot application issues |
| `get_system_info` | System information | Check overall capacity and utilization |
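Over the wire, each of these tools is invoked with a standard MCP `tools/call` request. For example, a client requesting stats for a single container would send something like (argument names mirror the usage examples later in this README):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_container_stats",
    "arguments": { "container_id": "mongo_local_standard" }
  }
}
```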
Scenario: Your application is crashing
You: "Why is my mongo_local_standard container showing high memory usage?"
AI: [Uses get_container_stats + analyze_container]
"The container is using 89% of its 2GB memory limit.
Checking the configuration... no memory limit set in docker-compose.
Recommendation: Set explicit memory limits and check for memory leaks."
Scenario: Optimizing resource allocation
You: "Which containers are using the most CPU?"
AI: [Uses get_container_stats with all_running=True]
"Analysis of running containers:
1. kafka-1: 45% CPU - consider adding more brokers
2. redis-server: 12% CPU - within normal range
Recommendation: Kafka is CPU-bound, consider horizontal scaling."
Scenario: Container won't start
You: "Check why weaviate_db failed to start"
AI: [Uses get_container_logs + analyze_container]
"Log analysis shows port 8080 binding failure.
Inspecting configuration... port already in use by another service.
Recommendation: Change port mapping or stop conflicting service."
Scenario: Disk space management
You: "Show me unused volumes taking up space"
AI: [Uses list_all_volumes + list_all_containers]
"Found 45 anonymous volumes not attached to any containers.
Total reclaimable space: ~3.2 GB
Safe to remove: [lists volume IDs]"
Scenario: Proactive monitoring
You: "Give me a health report of my container environment"
AI: [Uses get_system_info + get_container_stats]
"System Health Report:
- 7 total containers (1 running, 6 stopped)
- Memory: 8.2GB / 16GB used (51%)
- CPU: Normal load across running containers
- Network: No bottlenecks detected
Alerts: 6 stopped containers may need attention."
Scenario: Security and best practices
You: "Check environment variables for containers with sensitive data"
AI: [Uses analyze_container for each container]
"Security audit complete:
- mongo_local_standard: Contains DB credentials in env vars
- redis-server: No authentication configured
Recommendation: Use Docker secrets or external secret management."
- Python 3.8+
- Docker or Podman installed
- Access to container runtime socket
```shell
pip install -r requirements.txt
python containmind.py
```

The server starts on `http://127.0.0.1:8081` by default.
Add to your Claude Desktop configuration (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "docker-Mcp": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "fastmcp",
        "fastmcp",
        "run",
        "path\\to\\proxy.py"
      ],
      "env": {},
      "transport": "stdio"
    }
  }
}
```

```python
# ContainMind automatically detects Docker or Podman
initialize_connection()

# Force Docker
initialize_connection(backend="docker")

# Force Podman
initialize_connection(backend="podman")

# Custom socket
initialize_connection(base_url="unix:///run/podman/podman.sock")
```

```python
# Single container
get_container_stats(container_id="mongo_local_standard")

# All running containers
get_container_stats(all_running=True, parallel=True)
```
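The `parallel=True` path can be sketched with a thread pool issuing one stats call per container. Here `fetch_stats` is a stand-in for the real per-container call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_stats(container_id: str) -> dict:
    # Stand-in for the real per-container stats request.
    return {"id": container_id, "cpu_percent": 0.0}

def collect_stats(container_ids, max_workers=8):
    """Fetch stats for many containers concurrently; results follow input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_stats, container_ids))

print(collect_stats(["web", "db"]))
```

Because each stats call is I/O-bound (waiting on the runtime socket), threads are a natural fit and the speedup scales roughly with the number of containers, up to `max_workers`.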
```
┌─────────────────────────────────────────┐
│          AI Assistant (Claude)          │
└────────────────────┬────────────────────┘
                     │ MCP Protocol
┌────────────────────┼────────────────────┐
│           ContainMind Server            │
│  ┌───────────────────────────────────┐  │
│  │       Tool Interface Layer        │  │
│  └─────────────────┬─────────────────┘  │
│  ┌─────────────────┼─────────────────┐  │
│  │        Container Inspector        │  │
│  └─────────────────┬─────────────────┘  │
│  ┌─────────────────┼─────────────────┐  │
│  │    Backend Abstraction Layer      │  │
│  │   ┌────────┐        ┌────────┐    │  │
│  │   │ Docker │        │ Podman │    │  │
│  │   └───┬────┘        └───┬────┘    │  │
│  └──────┼──────────────────┼─────────┘  │
└─────────┼──────────────────┼────────────┘
          │                  │
    ┌─────▼─────┐      ┌─────▼─────┐
    │  Docker   │      │  Podman   │
    │  Engine   │      │  Runtime  │
    └───────────┘      └───────────┘
```
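The backend abstraction layer amounts to one interface with a driver per runtime, which is what makes additional runtimes easy to plug in. A minimal sketch, with illustrative class and method names (not ContainMind's actual classes):

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class ContainerBackend(ABC):
    """Common interface that each runtime driver implements."""

    @abstractmethod
    def list_containers(self, all: bool = True) -> List[Dict]:
        ...

class DockerBackend(ContainerBackend):
    def list_containers(self, all: bool = True) -> List[Dict]:
        # Would call the Docker Engine API here.
        return []

class PodmanBackend(ContainerBackend):
    def list_containers(self, all: bool = True) -> List[Dict]:
        # Would call the Podman libpod REST API here.
        return []

# Registering a new runtime is just another entry in this mapping.
BACKENDS = {"docker": DockerBackend, "podman": PodmanBackend}

def get_backend(name: str) -> ContainerBackend:
    return BACKENDS[name]()
```

The tool layer above only ever talks to `ContainerBackend`, so supporting a new engine means writing one driver class and registering it.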
- ContainMind requires access to the container runtime socket
- Runs with the same permissions as the user/process executing it
- No authentication layer (relies on MCP transport security)
- Recommended: Use in trusted environments or add authentication
- Container lifecycle management (start/stop/restart)
- Image building and management
- Volume management operations
- Network configuration tools
- Container health checks
- Web UI dashboard
Contributions are welcome! Areas for improvement:
- Additional container runtime support
- Enhanced metrics collection
- Performance optimizations
- Documentation improvements
This project is licensed under the MIT License.
Built with:
- FastMCP - MCP server framework
- Docker SDK for Python - Container API client
- Anthropic Claude - AI assistant integration
ContainMind - Making container management conversational, intelligent, and efficient.
