Intelligent automation and management tools for modern infrastructure operations.
This project showcases AI-first approaches to operations, featuring awesh, the Awe-Inspired Workspace Environment Shell, which serves as a "free cursor" for shell-native AI assistance, plus supporting Model Context Protocol (MCP) servers for various infrastructure platforms.
💡 Core Vision: AI assistance in the terminal without IDE bloat - the benefits of AI-powered development without editor overhead or opinionated tool constraints.
📍 Note: The awesh project has been moved to a standalone repository: awesh
awesh is an AI-aware interactive shell that provides intelligent assistance while preserving all the power and familiarity of traditional bash operations. It's a "free cursor" for shell-native AI assistance - bringing AI-powered development to your terminal without IDE bloat.
📖 Visit the awesh project → for full documentation, installation guide, and usage examples.
AIOps: Artificial Intelligence for IT Operations - A comprehensive guide to the AI revolution in IT operations, documenting real-world transformations and practical implementation strategies. Written by the creator of this toolkit, it provides the theoretical foundation and strategic insights behind these tools.
AI is Baseline + Human Creativity = Breakthrough Tools
This project uses AI as the baseline, then pushes further with human creativity to achieve breakthroughs beyond what AI training alone provides. We believe:
- AI is Baseline: AI provides the superior baseline capabilities we build upon
- Human Creativity Pushes Further: We use human creativity to go beyond AI's training limitations
- Beyond Training Data: Human creativity introduces new concepts that weren't in AI's training data
- AI is the Starting Point, Not the End: AI serves as our baseline; human creativity is what creates innovation
- Push Past Boundaries: Human creativity pushes past what AI learned from training data
🎯 Our Approach:
- Use AI as the superior baseline
- Push further with human creativity beyond training patterns
- Implement creative approaches that go beyond what AI learned
- Create novel behaviors that training data couldn't anticipate
🌟 The Result: Tools like awesh that represent breakthrough innovation - using AI as the base layer, then implementing non-traditional techniques that contradict conventional training to create AI-aware shells with behaviors no training data could have anticipated.
Open source thrives on experimentation and innovation. AI provides the base infrastructure; non-traditional, training-contradicting techniques are what create genuine breakthroughs.
"AI by default, Bash when I mean Bash"
An intelligent shell that seamlessly blends natural language AI interaction with traditional bash operations. Built by Ops, for Ops - designed for systems administrators, DevOps engineers, and infrastructure professionals who live in the terminal.
Direct natural language to Kubernetes API
A Model Context Protocol server that converts natural language prompts directly to Kubernetes API calls, bypassing kubectl entirely. Ideal for infrastructure automation and monitoring.
Key Features:
- Natural Language Processing: Convert plain English to Kubernetes operations
- Direct API Calls: Uses Kubernetes Python client for direct cluster communication
- Smart Intent Recognition: Automatically detects what you want to do
- Rich Output: Human-readable summaries with raw data
- Local Cluster Support: Works with your local k3d/k3s cluster
aiops/
├── kubernetes/ # Kubernetes MCP server
│ ├── smart_k8s_mcp.py # Natural language K8s server
│ └── interactive_client.py
├── deployment/ # Deployment MCP and automation
├── credential_store/ # Secure credential management
├── executor/ # Command execution framework
├── interaction/ # User interaction components
├── nlp/ # Natural language processing
├── planner/ # Task planning and orchestration
└── state_store/ # State management
github.com/joebertj/awesh # AI-aware interactive shell (standalone repository)
├── awesh.c # C frontend
├── awesh_backend/ # Python backend
├── awesh_sandbox.c # Security sandbox
├── security_agent.c # Security middleware
├── deployment_mcp.py # Build and deployment automation
└── README.md # Full documentation
For awesh installation and usage instructions, visit the standalone awesh repository, which is now maintained with its own documentation and installation guide.
- Install dependencies:
  pip3 install kubernetes
- Ensure kubectl is configured:
  kubectl cluster-info
- Run interactive mode:
  cd kubernetes/
  python3 interactive_client.py
Example prompts you can try:
- "Show me the cluster health"
- "What nodes do I have?"
- "Get pods in default namespace"
- "Show me the services"
- "Check cluster status"
- "Get pods from kube-system namespace"
- "Describe pod traefik-5d45fc8cc9-h7vj8"
- "Show me the logs for pod coredns-ccb96694c-wxnc2"
- "Show me all deployments"
- "Get deployment status in kube-system"
- "Check deployment health"
Run the MCP server directly:
python3 smart_k8s_mcp.py

Use the basic MCP server for traditional tool calls:

python3 kubernetes_mcp_server.py

- Natural Language Input: You type a prompt like "Show me the cluster health"
- Intent Recognition: The server parses your prompt and identifies the intent
- Parameter Extraction: Automatically extracts namespaces, pod names, etc.
- API Call: Makes direct Kubernetes API calls using the Python client
- Smart Output: Provides human-readable summaries with raw data
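A minimal sketch of that flow, assuming keyword-based matching and a hypothetical handle_prompt helper (the actual server's parsing is more sophisticated than this):

```python
import re
from kubernetes import client, config

def handle_prompt(prompt: str) -> str:
    """Toy intent router: map a natural-language prompt to a direct API call."""
    config.load_kube_config()  # the real server also supports in-cluster config
    v1 = client.CoreV1Api()
    text = prompt.lower()

    # Parameter extraction: pull a namespace out of the prompt if one is mentioned
    match = re.search(r"in (?:the )?([a-z0-9-]+) namespace|from ([a-z0-9-]+) namespace", text)
    namespace = next((g for g in (match.groups() if match else ()) if g), "default")

    # Intent recognition: crude keyword matching stands in for the server's NLP
    if "pod" in text:
        pods = v1.list_namespaced_pod(namespace=namespace)
        running = sum(1 for p in pods.items if p.status.phase == "Running")
        return f"{running}/{len(pods.items)} pods running in {namespace}"
    if "node" in text:
        nodes = v1.list_node()
        return f"{len(nodes.items)} nodes in the cluster"
    return "Sorry, I don't recognize that request yet."

print(handle_prompt("Get pods in default namespace"))
```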
- ✅ Get cluster overview and health
- ✅ List all nodes with status
- ✅ List all namespaces
- ✅ Component status monitoring
- ✅ List pods by namespace
- ✅ Get pod details and status
- ✅ Retrieve pod logs
- ✅ Pod health monitoring
- ✅ Pod creation and management
- ✅ Pod binding and eviction
- ✅ List services by namespace
- ✅ Service configuration details
- ✅ Port and endpoint information
- ✅ Service creation and management
- ✅ Service proxy operations
- ✅ List deployments by namespace
- ✅ Deployment status and replicas
- ✅ Rolling update information
- ✅ Deployment scaling and updates
- ✅ Deployment history and rollbacks
- ✅ Deployment creation and management
- ✅ ConfigMaps & Secrets: Management and listing
- ✅ Persistent Volumes: Storage management
- ✅ RBAC: Role and role binding management
- ✅ Networking: Ingress, Network Policies
- ✅ Storage: Storage classes, CSI drivers
- ✅ Batch Jobs: CronJobs and Jobs
- ✅ Autoscaling: HPA management
- ✅ Policy: Pod disruption budgets
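For example, a prompt such as "Scale deployment traefik to 3 replicas" ultimately reduces to a single apps/v1 API call; a minimal sketch with the official Python client:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scaling a deployment is one patch against its scale subresource
apps.patch_namespaced_deployment_scale(
    name="traefik",
    namespace="kube-system",
    body={"spec": {"replicas": 3}},
)
```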
The server automatically detects your Kubernetes configuration:
- kubeconfig file (default: ~/.kube/config)
- In-cluster service account (if running inside cluster)
- Environment variables (KUBECONFIG, etc.)
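A sketch of that fallback order using the official client, which already honours the KUBECONFIG environment variable:

```python
from kubernetes import client, config
from kubernetes.config.config_exception import ConfigException

def load_cluster_config() -> client.CoreV1Api:
    """Try kubeconfig first (honours KUBECONFIG), then in-cluster credentials."""
    try:
        config.load_kube_config()        # ~/.kube/config or $KUBECONFIG
    except ConfigException:
        config.load_incluster_config()   # service account token when running inside a pod
    return client.CoreV1Api()
```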
🤖 Processing: Show me the cluster health
--------------------------------------------------
🏥 **Cluster Health Overview**
🖥️ **Nodes**: 2 nodes are running
• 2/2 nodes are ready
📁 **Namespaces**: 5 namespaces
• 5/5 namespaces are active
📦 **Pods**: 8 pods across all namespaces
• 7/8 pods are running
📊 Raw Data:
{
"nodes": [...],
"namespaces": [...],
"pods": [...]
}
- "No module named 'kubernetes'"
  pip3 install kubernetes
- "Could not load kubeconfig"
  - Ensure kubectl is configured: kubectl cluster-info
  - Check your kubeconfig file: kubectl config view
- "Failed to connect to cluster"
  - Verify your cluster is running: kubectl get nodes
  - Check cluster status: kubectl cluster-info
Enable debug logging by modifying the logging level in the server files:
logging.basicConfig(level=logging.DEBUG)

Your Kubernetes MCP server has access to the full Kubernetes API surface:
- 🔧 Core V1: Pods, Services, Nodes, Namespaces, ConfigMaps, Secrets
- 🚀 Apps V1: Deployments, StatefulSets, DaemonSets, ReplicaSets
- 🌐 Networking V1: Ingress, Network Policies, Service CIDRs
- 🔐 RBAC V1: Roles, RoleBindings, ClusterRoles, ClusterRoleBindings
- 💾 Storage V1: Storage Classes, CSI Drivers, Volume Attachments
- ⚡ Batch V1: Jobs, CronJobs
- 📈 Autoscaling V1: Horizontal Pod Autoscalers
- 🛡️ Policy V1: Pod Disruption Budgets
- Create: Deployments, Pods, Services, ConfigMaps, Secrets
- List: All resources across namespaces
- Get: Detailed resource information
- Update: Resource modifications and scaling
- Delete: Resource cleanup
- Watch: Real-time resource monitoring
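The Watch operation maps directly onto the official client's watch helper; a minimal sketch that streams pod events from the default namespace:

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream pod events for up to 60 seconds; each event is ADDED / MODIFIED / DELETED
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=60):
    pod = event["object"]
    print(f"{event['type']}: {pod.metadata.name} ({pod.status.phase})")
```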
- "Show me all deployments"
- "Get deployment status in kube-system"
- "Scale deployment traefik to 3 replicas"
- "Check deployment health"
- "Show deployment history"
- "Create a new nginx deployment"
- "List all storage classes"
- "Show RBAC roles in kube-system"
The Deployment MCP provides comprehensive CI/CD automation for awesh with two main modes: CI Build (development) and Production Install (deployment). It handles complete pipelines with syntax checking, process management, git operations, and sanity testing.
- 🏗️ CI/CD Pipelines: Separate build (CI) and install (deploy) workflows
- 🔍 Syntax Checking: Validates C code and Python code before deployment
- 🔨 Build Management: Clean builds, compilation with proper flags
- 🛑 Process Management: Kill running awesh processes and clean up sockets
- 📦 Deployment: Install binaries to ~/.local/bin with backup
- 🧪 Sanity Testing: Test socket communication and backend functionality
- 📝 Git Integration: Automated git pull, commit, and push operations
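As an illustration of the syntax-checking step, a sketch using only the standard library plus the system compiler (assuming gcc is on the PATH; this is not the script's exact implementation):

```python
import py_compile
import subprocess
from pathlib import Path

def check_syntax(repo_root: str) -> bool:
    """Return True if every C and Python source file in the repo parses cleanly."""
    ok = True
    for c_file in Path(repo_root).rglob("*.c"):
        # -fsyntax-only parses the file without producing an object file
        result = subprocess.run(["gcc", "-fsyntax-only", str(c_file)], capture_output=True)
        ok &= result.returncode == 0
    for py_file in Path(repo_root).rglob("*.py"):
        try:
            py_compile.compile(str(py_file), doraise=True)
        except py_compile.PyCompileError:
            ok = False
    return ok
```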
The Deployment MCP is a standalone Python script that doesn't require external MCP libraries:
cd deployment/
python3 deployment_mcp.py [command]

# CI Build Pipeline (Development)
# Checks → Bins → Git push
python3 deployment_mcp.py build
# Production Install Pipeline (Deployment)
# Git pull → Skip build → Kill procs → Copies
python3 deployment_mcp.py install
# Clean Install Pipeline (Development)
# Checks → Kill procs → Build → Deploy → Git push (no git pull)
python3 deployment_mcp.py clean_install

# Check syntax for C and Python code
python3 deployment_mcp.py syntax_check
# Build awesh (incremental)
python3 deployment_mcp.py build_only
# Build awesh (clean build)
python3 deployment_mcp.py build_clean
# Kill running awesh processes
python3 deployment_mcp.py kill
# Force kill processes (SIGKILL)
python3 deployment_mcp.py kill_force
# Deploy binary to ~/.local/bin
python3 deployment_mcp.py deploy_only
# Test deployment and backend communication
python3 deployment_mcp.py test
# Git operations
python3 deployment_mcp.py git_pull # Pull latest changes
python3 deployment_mcp.py git_push # Commit and push changes

For development - checks, bins, git push
- 📋 Checks: Validates all C and Python code syntax
- 🔨 Bins: Clean build of C frontend + Python backend installation
- 📝 Git Push: Commit changes and push to repository
For deployment - git pull, skip build, kills procs, copies
- 📥 Git Pull: Pull latest changes from repository
- 🛑 Kill Procs: Terminates existing awesh processes
- 📦 Copies: Install binary to ~/.local/bin with backup
For development - build and deploy without git pull
- 📋 Checks: Validates all C and Python code syntax
- 🛑 Kill Procs: Terminates existing awesh processes
- 🔨 Build: Clean build of C frontend + Python backend installation
- 📦 Deploy: Install binary to ~/.local/bin with backup
- 📝 Git Push: Commit changes and push to repository
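The deploy-with-backup step shared by these pipelines is conceptually a copy plus a backup; a sketch assuming the built binary sits at ./awesh and installs to ~/.local/bin (paths and naming here are illustrative):

```python
import shutil
from pathlib import Path

def deploy_with_backup(built_binary: str = "./awesh") -> Path:
    """Copy the freshly built binary into ~/.local/bin, backing up any existing copy."""
    target = Path.home() / ".local" / "bin" / "awesh"
    target.parent.mkdir(parents=True, exist_ok=True)
    if target.exists():
        shutil.copy2(target, target.with_suffix(".bak"))  # keep the previous binary as awesh.bak
    shutil.copy2(built_binary, target)
    target.chmod(0o755)                                   # ensure it stays executable
    return target
```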
$ python3 deployment_mcp.py full_deploy
🚀 Starting full deployment pipeline...
📋 Step 1: Syntax Check
🔍 Checking C syntax...
✅ awesh.c: Syntax OK
🔍 Checking Python syntax...
✅ server.py: Syntax OK
✅ ai_client.py: Syntax OK
🛑 Step 2: Kill Existing Processes
🛑 Terminated awesh (PID: 12345)
🧹 Removed socket: /home/user/.awesh.sock
🔨 Step 3: Build
🧹 Cleaning build...
🔨 Building C frontend...
✅ C frontend built successfully
📦 Installing Python backend...
✅ Python backend installed
📦 Step 4: Deploy
💾 Backed up existing awesh to /home/user/.local/bin/awesh.bak
✅ Deployed awesh to /home/user/.local/bin/awesh
✅ Binary is executable and ready
🧪 Step 5: Test Deployment
✅ Binary exists and is executable
🧪 Testing backend socket communication...
✅ Socket connection successful
✅ STATUS command works: AI_LOADING
✅ Command execution works
✅ Backend sanity test passed
✅ Deployment test passed
📝 Step 6: Git Commit & Push
📝 Git: Adding changes...
📝 Git: Committing changes...
📝 Git: Pushing to remote...
✅ Changes committed and pushed successfully
🎉 Deployment pipeline completed successfully!

For awesh development, use the Deployment MCP to ensure consistent builds and deployments:
# During development - quick test
python3 deployment_mcp.py build && python3 deployment_mcp.py deploy
# Before committing - full validation
python3 deployment_mcp.py full_deploy
# Debugging backend issues
python3 deployment_mcp.py kill_force && python3 deployment_mcp.py test

The Deployment MCP ensures reliable, repeatable deployments and catches issues early in the development cycle.
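The backend sanity test in the sample output above amounts to a Unix-socket round trip; a sketch assuming a socket at ~/.awesh.sock and a STATUS command, both of which appear in the sample run (the real test may check more):

```python
import socket
from pathlib import Path

def backend_sanity_check(sock_path: str = str(Path.home() / ".awesh.sock")) -> bool:
    """Connect to the backend socket, send STATUS, and expect a non-empty reply."""
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.settimeout(5)
            s.connect(sock_path)
            s.sendall(b"STATUS\n")
            reply = s.recv(1024)
            return bool(reply)  # e.g. b"AI_LOADING" in the sample run
    except OSError:
        return False
```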
AIOps represents a paradigm shift from traditional infrastructure management to AI-first operations. The goal is to make infrastructure as conversational and intuitive as possible while maintaining the precision and reliability that operations teams require.
AI Base Layer + Experimental Innovation: We leverage AI as our base infrastructure layer, then experiment with non-traditional, training-contradicting techniques to create breakthrough solutions. AI provides the foundational capabilities; experimental techniques that contradict training patterns are what create genuine innovation.
- 🐚 awesh: AI-aware shell (current showcase)
- ☸️ Kubernetes MCP: Natural language Kubernetes management (available)
- 🔒 Security MCP: AI-powered security analysis and remediation
- 📊 Monitoring MCP: Intelligent alerting and incident response
- 🚀 CI/CD MCP: Natural language deployment pipelines
- ☁️ Cloud Provider MCPs: AWS, GCP, Azure natural language management
- 📈 Analytics Engine: Cross-platform operational intelligence
- awesh v1.0: Complete AI-aware shell with full MCP integration
- Multi-MCP Support: Connect multiple MCP servers simultaneously
- Advanced NLP: Context-aware command interpretation
- Workflow Automation: AI-generated operational runbooks
- Predictive Operations: Proactive issue detection and resolution
- Team Collaboration: Shared AI context and knowledge bases
This project demonstrates several key principles:
- Natural Language Interface: Operations should be as easy as having a conversation
- Context Awareness: AI should understand your infrastructure and history
- Safety by Design: AI suggestions with human approval workflows
- Gradual Adoption: Works alongside existing tools and processes
- Knowledge Sharing: AI learns from team practices and tribal knowledge
AIOps leverages the Model Context Protocol to provide secure, standardized AI tool execution across multiple infrastructure platforms. Each MCP server specializes in a specific domain while maintaining consistent interfaces and security policies.
Natural language to Kubernetes API - Direct cluster communication
Our flagship MCP server that converts plain English into direct Kubernetes API calls, bypassing kubectl entirely for more efficient and AI-friendly cluster management.
🚀 Key Features:
- Direct API Access: Uses Kubernetes Python client for native cluster communication
- Natural Language Processing: "Show me unhealthy pods" → API calls + human-readable output
- Smart Intent Recognition: Automatically detects operations from conversational input
- Rich Contextual Output: Human summaries with raw data for AI consumption
- Multi-Namespace Support: Seamlessly works across cluster namespaces
- Real-time Monitoring: Live cluster state analysis and reporting
📋 Supported Operations:
- Cluster Health: Overall cluster status, node health, component monitoring
- Pod Management: List, describe, logs, create, delete, scale operations
- Service Discovery: Service listing, endpoint analysis, port mapping
- Deployment Control: Rollouts, scaling, history, rollback operations
- Resource Management: ConfigMaps, Secrets, PVs, Storage Classes
- RBAC & Security: Role analysis, permission checking, policy management
- Batch Operations: Jobs, CronJobs, scheduled task management
🔧 Usage:
cd kubernetes/
python3 interactive_client.py
# Try these natural language prompts:
"Show me the cluster health"
"What pods are failing in kube-system?"
"Scale the traefik deployment to 3 replicas"
"Show me all services and their endpoints"📖 Full Kubernetes MCP Documentation →
AI-powered deployment automation and pipeline management
Advanced deployment orchestration with natural language controls for CI/CD pipelines, release management, and deployment strategies.
🎯 Planned Features:
- Pipeline Orchestration: "Deploy version 2.1.3 to staging"
- Release Management: Automated rollback, canary deployments, blue-green strategies
- Multi-Environment Control: Development, staging, production deployment flows
- Integration Hub: GitHub Actions, GitLab CI, Jenkins, ArgoCD connectivity
- Deployment Analytics: Success rates, performance metrics, failure analysis
Will be adapted from the deployment automation components in ~/AI/kubernetes_web.
Intelligent test execution and quality assurance automation
Comprehensive testing automation with AI-driven test selection, execution, and result analysis for continuous quality assurance.
🎯 Planned Features:
- Smart Test Selection: "Run tests affected by the API changes"
- Quality Gate Management: Automated pass/fail criteria with AI analysis
- Test Environment Provisioning: Dynamic test infrastructure creation
- Result Intelligence: AI-powered failure analysis and debugging suggestions
- Coverage Analysis: Gap identification and test recommendation
Will be implemented in a vi-based development environment and adapted from the ~/AI/kubernetes_web test automation framework once it is completed.
All MCP servers in AIOps follow strict security and operational standards:
- 🛡️ Policy Enforcement: Configurable allow-lists for commands and resources
- 📊 Audit Logging: Complete operation trails with redacted sensitive data
- ⏱️ Resource Limits: CPU, memory, and timeout controls for all operations
- 🔐 Authentication: Integration with existing cluster RBAC and credentials
- 🚨 Safety Controls: Dry-run modes and confirmation workflows for destructive operations
- 📈 Monitoring: Built-in metrics and health checks for MCP server performance
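As a hedged illustration of the policy-enforcement and dry-run controls, one way an MCP server could gate operations against a configurable allow-list (the field names here are illustrative, not a documented configuration format):

```python
from dataclasses import dataclass, field

@dataclass
class OperationPolicy:
    """Illustrative allow-list policy: only listed verbs run; destructive verbs need confirmation."""
    allowed_verbs: set[str] = field(default_factory=lambda: {"get", "list", "watch"})
    destructive_verbs: set[str] = field(default_factory=lambda: {"create", "scale", "delete"})
    dry_run: bool = True

    def authorize(self, verb: str, confirmed: bool = False) -> bool:
        if verb not in self.allowed_verbs | self.destructive_verbs:
            return False  # not on the allow-list at all
        if verb in self.destructive_verbs and self.dry_run and not confirmed:
            return False  # destructive operations require explicit confirmation
        return True

policy = OperationPolicy()
print(policy.authorize("list"))    # True
print(policy.authorize("delete"))  # False until confirmed=True
```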
The AIOps MCP framework provides:
- 🏗️ Server Templates: Rapid development of new infrastructure MCP servers
- 🧪 Testing Utilities: Comprehensive test suites for MCP server validation
- 📚 Documentation Tools: Auto-generated API docs and usage examples
- 🔄 Hot Reloading: Development-friendly server restart and configuration updates
- 📊 Performance Profiling: Built-in metrics and performance analysis tools
During development of the Test Suite MCP, we encountered limitations with Cursor's AI tools that prevented proper implementation:
🚫 Opinionated Tool Prompts
- Cursor's AI tools introduce their own prompts and rules without user control
- These tool-level prompts conflict with our custom test suite logic and safety rules
- The AI assistant violates the specific rules and constraints we set for our MCP servers
- This makes it impossible to implement domain-specific AI behavior that contradicts the tool's opinions
📋 Specific Issue:
- Our Test Suite MCP requires strict rule adherence for safety-critical testing scenarios
- Cursor's tool prompts override our custom behavioral constraints
- The AI ignores project-specific rules in favor of the tool's generic guidelines
💡 Development Philosophy: As vi users, we prefer simple, direct tools that don't impose their own opinions. IDEs are often overkill - vi is sufficient for most development tasks. Even modern alternatives like neovim introduce unnecessary complexity. The best tools get out of your way and let you work.
🤔 The "Free Cursor" Vision: We use Cursor but don't really leverage any of the VSCode features - we just want the AI assistance without the IDE overhead. What we need is a "free cursor" that's purely shell-based: AI assistance in the terminal without the bloated editor interface or opinionated tool prompts.
This is essentially what awesh represents - a shell-native AI assistant that provides the benefits of AI-powered development without the constraints of IDE-based tools.
🔧 Solution:
- Test Suite MCP development moved to vi + terminal environment
- Clean development without IDE tool interference
- Direct control over AI behavior and constraints
- awesh as the prototype for shell-native AI assistance
This experience reinforces why simple, unopinionated tools are superior for specialized development work.
We welcome contributions that advance AI-powered operations and break new ground:
- New MCP Servers: Add support for additional platforms
- Enhanced NLP: Improve natural language understanding
- Safety Features: Better guardrails and validation
- Documentation: Help others adopt AIOps practices
- Novel AI Applications: Push beyond conventional AI use cases
- Human-AI Collaboration: Show how creativity enhances AI capabilities
🧠 AI-Foundation Development Welcome: We encourage and celebrate using AI's superiority as your foundation. Start with AI's superior capabilities, then apply your creativity to transcend training data boundaries. The best contributions use AI as the base and add human concepts that go beyond what AI learned.
Note: When contributing MCP servers with custom AI behavior, consider using simple tools like vi + terminal that don't impose their own AI opinions on your specialized use cases.
This project is open source and available under the Apache License 2.0.
AIOps: Where artificial intelligence meets infrastructure operations. Making the complex simple, the manual automatic, and the reactive proactive.