An MCP server that exposes Performance Co-Pilot (PCP) metrics to AI agents via pmproxy. Enables Claude and other LLMs to query live and historical performance data from any PCP-monitored host.
No PCP infrastructure? No problem. The bundled compose stack generates a week of realistic synthetic performance data — a SaaS production host with daily traffic patterns, morning ramps, lunch lulls, and periodic CPU/disk spikes — seeds it into pmproxy's time-series backend, and has everything ready for Claude to analyse.
```bash
podman compose up -d
```

This runs six services in order:
- pmlogsynth-generator — generates PCP archives from profiles/scenarios/saas-diurnal-week.yml
- redis-stack — time-series backend (Valkey/Redis, port 6379)
- pmlogsynth-seeder — loads the archives into the time-series store
- pcp — pmcd + pmproxy, ready to serve queries (port 44322)
- grafana — Grafana with PCP plugin and auto-provisioned datasources (port 3000)
- mcp-grafana — MCP server for Grafana, SSE transport (port 8000)
The generator and seeder are one-shot jobs; allow ~30–60 seconds for them to complete. Check progress with:
```bash
podman compose logs -f pmlogsynth-generator pmlogsynth-seeder
```

Once seeded, verify data is queryable:

```bash
curl -s "http://localhost:44322/series/query?expr=kernel.all.cpu.user" | head -c 200
```

Open http://localhost:3000 — no login required (anonymous admin is enabled). Navigate to Connections → Data sources to see the auto-provisioned PCP Valkey (historical) and PCP Vector (live) datasources.
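The /series/query endpoint used above is step one of pmseries' two-step flow: the query resolves an expression to opaque series identifiers, and /series/values then fetches the actual samples for an identifier over a time window. A minimal Python sketch of building those request URLs (parameter names as documented for pmproxy's REST API; the base URL is this compose stack's default):

```python
from urllib.parse import urlencode

BASE = "http://localhost:44322"

def query_url(expr: str) -> str:
    """Step 1: resolve a pmseries expression to series identifiers."""
    return f"{BASE}/series/query?{urlencode({'expr': expr})}"

def values_url(series: str, start: str, finish: str, interval: str) -> str:
    """Step 2: fetch samples for one series identifier over a time window."""
    params = {"series": series, "start": start, "finish": finish, "interval": interval}
    return f"{BASE}/series/values?{urlencode(params)}"

print(query_url("kernel.all.cpu.user"))
# → http://localhost:44322/series/query?expr=kernel.all.cpu.user
```

pmmcp's time-series tools wrap this flow for you; the sketch is only useful for poking at pmproxy directly with curl or a script.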
The mcp-grafana service exposes a Grafana MCP server at http://localhost:8000/sse for AI agents that need to create dashboards or query Grafana programmatically.
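For example, a Claude Code .mcp.json entry pointing at that SSE endpoint might look like the following (the type/url keys are Claude Code's convention; other MCP clients use different configuration shapes):

```json
{
  "mcpServers": {
    "grafana": {
      "type": "sse",
      "url": "http://localhost:8000/sse"
    }
  }
}
```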
```bash
git clone <repository-url>
cd pmmcp
uv sync
```

Add to .mcp.json in your project root (or ~/.claude/mcp.json for global config):
```json
{
  "mcpServers": {
    "pmmcp": {
      "command": "uv",
      "args": ["run", "pmmcp", "--pmproxy-url", "http://localhost:44322"]
    }
  }
}
```

Restart Claude Code (or run /mcp to reload) and confirm pmmcp appears in the connected servers list.
The seeded dataset is saas-prod-01 — a simulated production host with a week of
realistic diurnal traffic. Try these to get a feel for what pmmcp can do:
Explore the data:
What hosts and metrics are available?
Spot the daily pattern:
Show me CPU utilisation on saas-prod-01 over the past 7 days. Are there any recurring spikes?
Drill into an incident:
There's a CPU and disk spike that seems to happen every day on saas-prod-01.
When exactly does it occur, how severe is it, and how long does it last?
Compare periods:
Compare the morning peak to the overnight baseline on saas-prod-01 across CPU, memory, disk, and network.
Use a prompt template for a guided investigation workflow:
/investigate_subsystem subsystem=cpu host=saas-prod-01
```bash
podman compose down --volumes
```

The --volumes flag purges the generated archive data so the next up starts fresh.
pmmcp gives AI agents 9 MCP tools and 7 MCP prompt templates for performance investigation. See Investigation Flow Architecture for how the coordinator-specialist pattern works.
Tools
| Tool | Description |
|---|---|
| pcp_get_hosts | List monitored hosts with labels and metadata |
| pcp_discover_metrics | Browse the metric namespace tree or search by keyword |
| pcp_get_metric_info | Get full metadata for one or more metrics |
| pcp_fetch_live | Fetch current metric values from a live host |
| pcp_fetch_timeseries | Fetch historical time-series data with auto-interval resolution |
| pcp_query_series | Execute raw PCP series query expressions |
| pcp_compare_windows | Statistical comparison of two time windows |
| pcp_search | Full-text search across metric names and help text |
| pcp_derive_metric | Create computed metrics on the fly |
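pcp_compare_windows is the workhorse for before/after questions. As an illustration of the kind of summary it produces (sample numbers invented for this sketch; the real tool fetches both windows from pmproxy for you):

```python
from statistics import mean, stdev

# Invented samples: CPU user % for an overnight baseline vs a morning peak
baseline = [12.1, 11.8, 12.4, 12.0, 11.9]
comparison = [38.5, 41.2, 39.9, 40.4, 40.0]

b_mean, c_mean = mean(baseline), mean(comparison)
pct_change = 100.0 * (c_mean - b_mean) / b_mean
print(f"baseline {b_mean:.2f}% -> comparison {c_mean:.2f}% "
      f"({pct_change:+.0f}% change, spread {stdev(comparison):.2f})")
# → baseline 12.04% -> comparison 40.00% (+232% change, spread 0.99)
```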
Prompt templates
Invoke any prompt from your MCP client to get a guided investigation workflow. All prompts follow a discovery-first pattern and include guard clauses for missing tools, no-metrics-found, and out-of-retention timeranges.
| Prompt | Required args | Optional args | What it does |
|---|---|---|---|
| session_init | (none) | host, timerange | Registers derived metrics, then points to coordinate_investigation |
| coordinate_investigation | request | host, time_of_interest, lookback | Dispatches 6 specialists in parallel, synthesises unified root-cause narrative |
| specialist_investigate | subsystem | request, host, time_of_interest, lookback | Deep domain-expert investigation for one subsystem |
| investigate_subsystem | subsystem | host, timerange, symptom | Discovery-first investigation of a single subsystem (cpu, memory, disk, network, process, or general) |
| incident_triage | symptom | host, timerange | Maps a symptom to likely subsystems, confirms host-specific vs fleet-wide scope, delivers ranked findings with recommended actions |
| compare_periods | baseline_start, baseline_end, comparison_start, comparison_end | host, subsystem, context | Broad-scan comparison between two time windows, ranked by magnitude of change, with root-cause hypothesis |
| fleet_health_check | (none) | timerange, subsystems, detail_level | Checks all fleet hosts across default subsystems and produces a host-by-subsystem summary with OK/WARN/CRIT indicators |
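For example, triaging a symptom uses the same argument syntax as the investigate_subsystem invocation shown earlier (quoting rules for multi-word values depend on your MCP client):

```
/incident_triage symptom=cpu-saturation host=saas-prod-01
```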
- PCP installed and running on at least one monitored host
- pmproxy running and accessible (default port: 44322)
- Time-series tools (pcp_fetch_timeseries, pcp_query_series, pcp_compare_windows) require pmproxy configured with a Valkey/Redis backend
- Python 3.11+ with uv
- Claude Code or another MCP client
```bash
git clone <repository-url>
cd pmmcp
uv sync
```

Add to .mcp.json:
```json
{
  "mcpServers": {
    "pmmcp": {
      "command": "uv",
      "args": ["run", "pmmcp", "--pmproxy-url", "http://your-pmproxy-host:44322"]
    }
  }
}
```

See Running pmmcp below for all CLI flags and environment variables.
| Flag | Default | Description |
|---|---|---|
| --pmproxy-url | (env) | pmproxy base URL; overrides PMPROXY_URL |
| --timeout | 30.0 | HTTP request timeout in seconds |
| --transport | stdio | MCP transport: stdio or streamable-http |
| --host | 127.0.0.1 | Bind host for HTTP transport |
| --port | 8080 | Bind port for HTTP transport |
| Variable | Default | Description |
|---|---|---|
| PMPROXY_URL | (required) | pmproxy base URL |
| PMPROXY_TIMEOUT | 30.0 | HTTP request timeout in seconds |
| PMMCP_TRANSPORT | stdio | MCP transport mode |
| PMMCP_HOST | 127.0.0.1 | Bind host for HTTP transport |
| PMMCP_PORT | 8080 | Bind port for HTTP transport |
| PMMCP_GRAFANA_FOLDER | pmmcp-triage | Grafana folder for investigation dashboards |
| PMMCP_REPORT_DIR | ~/.pmmcp/reports | Output directory for HTML fallback reports |
Precedence: CLI flag > environment variable > default.
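That precedence rule can be sketched in a few lines (illustrative only; resolve is a hypothetical helper, not part of the pmmcp API):

```python
import os

def resolve(cli_value, env_var, default):
    """CLI flag beats environment variable beats built-in default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)

os.environ["PMPROXY_TIMEOUT"] = "60.0"
print(resolve("15.0", "PMPROXY_TIMEOUT", "30.0"))  # CLI flag wins: 15.0
print(resolve(None, "PMPROXY_TIMEOUT", "30.0"))    # falls back to env: 60.0
del os.environ["PMPROXY_TIMEOUT"]
print(resolve(None, "PMPROXY_TIMEOUT", "30.0"))    # falls back to default: 30.0
```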
For shared team access, run pmmcp in HTTP mode:
```bash
# Direct
uv run pmmcp --transport streamable-http --host 0.0.0.0 --port 8080 --pmproxy-url http://your-pmproxy:44322

# Docker
docker run -e PMPROXY_URL=http://your-pmproxy:44322 pmmcp:latest

# Compose (includes full PCP stack)
docker compose up -d
```

MCP client config for a remote pmmcp server:
```json
{
  "mcpServers": {
    "pmmcp": {
      "url": "http://pmmcp-host:8080/mcp"
    }
  }
}
```

The /healthcheck endpoint (HTTP mode only) returns JSON with pmproxy connectivity status:

```bash
curl http://localhost:8080/healthcheck
```

See CONTRIBUTING.md.
"Connection refused"
- Verify pmproxy is running: systemctl status pmproxy
- Check the URL and port in your MCP configuration
- Ensure firewall allows access to port 44322: curl http://your-pmproxy-host:44322/series/sources
"No time series data available"
- Time-series queries require pmproxy's [pmseries] section to be configured with Valkey/Redis
- Verify: curl "http://your-pmproxy-host:44322/series/query?expr=kernel.all.load"
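On the pmproxy host, the relevant section of /etc/pcp/pmproxy/pmproxy.conf typically looks like the following (option names can differ across PCP versions; check pmproxy(1) and the shipped config on your system):

```ini
[pmseries]
# Enable the time-series REST API
enabled = true
# Valkey/Redis server spec(s), host:port
servers = localhost:6379
```

Restart pmproxy after editing so the new backend configuration takes effect.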
"No metrics found"
- Verify PCP collectors are running: pminfo -f kernel.all.load
- Check pmproxy connectivity: curl "http://your-pmproxy-host:44322/pmapi/metric?prefix=kernel"
Slow responses
- Reduce the time window or use a coarser interval
- Use fewer metrics per query
- Check pmproxy and Valkey/Redis performance independently
