Open source Generative Engine Optimization (GEO) CLI tool. Analyze how AI models (ChatGPT, Claude, Gemini, Perplexity, DeepSeek, Grok, Llama) reference and recommend your brand.
SEO analytics, but for AI search engines.
Brand Input → Research → Generate Queries → Run Against AI Models → Analyze → Report
- Research your brand — scrapes your website and builds a brand profile with competitors, USPs, and keywords
- Generate queries — creates realistic search queries real people would type into ChatGPT/Perplexity (brand-blind, so queries never mention your brand)
- Execute — sends queries to multiple AI models via OpenRouter (or direct API keys)
- Analyze — measures mention rate, sentiment, mindshare, competitor positioning, narrative themes, USP coverage gaps
- Report — generates interactive HTML reports with charts, plus JSON/CSV/Markdown exports
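The stages above can be sketched as a simple sequential pipeline. The function names and data shapes below are purely illustrative, not the tool's actual internals (those live under `src/voyage_geo/stages/`):

```python
# Minimal sketch of the five-stage flow, assuming each stage is a plain
# function that passes its output to the next. All names are hypothetical.

def research(brand: str, website: str) -> dict:
    # Stage 1: build a brand profile (competitors, USPs, keywords).
    return {"brand": brand, "website": website, "keywords": ["geo", "ai search"]}

def generate_queries(profile: dict, n: int = 20) -> list[str]:
    # Stage 2: brand-blind queries — the brand name never appears.
    return [f"best tools for {kw}" for kw in profile["keywords"]][:n]

def execute(queries: list[str], providers: list[str]) -> list[dict]:
    # Stage 3: fan each query out to every configured model.
    return [{"query": q, "provider": p, "response": "..."}
            for q in queries for p in providers]

def analyze(results: list[dict], brand: str) -> dict:
    # Stage 4: e.g. mention rate = responses naming the brand / total.
    mentions = sum(brand.lower() in r["response"].lower() for r in results)
    return {"mention_rate": mentions / len(results) if results else 0.0}

def report(analysis: dict) -> str:
    # Stage 5: render the analysis in some export format.
    return f"mention_rate={analysis['mention_rate']:.0%}"

profile = research("YourBrand", "https://yourbrand.com")
results = execute(generate_queries(profile), ["chatgpt", "claude"])
summary = report(analyze(results, "YourBrand"))
print(summary)
```

The key design point the sketch preserves is the brand-blind step: queries are derived from keywords only, so a brand mention in a response reflects the model's own knowledge, not the prompt.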
- Python 3.11+
- An OpenRouter API key (one key for all AI models), or individual provider API keys
```bash
pip install voyage-geo
```

Or install from source:

```bash
git clone https://github.com/onvoyage-ai/voyage-geo-agent.git
cd voyage-geo-agent
pip install -e .
```

Copy the example environment file and add your key:

```bash
cp .env.example .env
# Edit .env — at minimum, set OPENROUTER_API_KEY
```

```bash
# Full pipeline
python3 -m voyage_geo run -b "YourBrand" -w "https://yourbrand.com" --no-interactive

# Or with specific providers
python3 -m voyage_geo run -b "YourBrand" -w "https://yourbrand.com" \
  -p chatgpt,gemini,claude,perplexity-or -f html,json,csv,markdown --no-interactive
```

Voyage GEO ships 8 interactive skills that work with Claude Code, OpenClaw, and any agent that supports the SKILL.md format.
Tell your agent to fetch and follow the install instructions:
https://raw.githubusercontent.com/onvoyage-ai/voyage-geo-agent/main/AGENTS.md
The agent will `pip install voyage-geo` and create the skill files automatically.
| Command | Description |
|---|---|
| `/geo-run` | Full GEO analysis — setup, brand research, query generation, execution, analysis, and reporting |
| `/geo-leaderboard` | Category-wide brand comparison — ranks all brands by AI visibility |
You can run an optional local GUI + API without changing CLI/agent workflows.
Install app extras:
```bash
pip install "voyage-geo[app]"
```

Start app mode:

```bash
python3 -m voyage_geo app --host 127.0.0.1 --port 8765
```

Then open http://127.0.0.1:8765.

- GUI handles run discovery, job progress, and logs.
- Backend API (`/api/*`) is the shared glue for GUI + Claude/Codex automation.
- Existing CLI and skill-based agent mode continue to work unchanged.
The local GUI includes:
- Start GEO runs and leaderboard runs from a visual form
- Model selection with click-to-toggle checkboxes
- Live jobs table + streaming logs
- Past runs browser with one-click report opening
- Auto-generated HTML report fallback when only JSON exists
```bash
# Full analysis pipeline
python3 -m voyage_geo run -b "<brand>" -w "<url>" -p chatgpt,gemini,claude --no-interactive

# Research a brand (builds profile)
python3 -m voyage_geo research "<brand>" -w "<url>"

# List configured providers
python3 -m voyage_geo providers

# Health-check providers
python3 -m voyage_geo providers --test

# Generate reports from an existing run
python3 -m voyage_geo report -r <run-id> -f html,json,csv,markdown

# Build trend index from completed snapshots
python3 -m voyage_geo trends-index -o ./data/runs --out-file ./data/trends/snapshots.json

# Query trend series for one brand (includes competitor-relative fields)
python3 -m voyage_geo trends -b "YourBrand" --metric overall_score --json

# Generate interactive HTML trends dashboard
python3 -m voyage_geo trends-dashboard -b "YourBrand"

# Start optional local GUI + API mode
python3 -m voyage_geo app --host 127.0.0.1 --port 8765

# List past runs
python3 -m voyage_geo runs

# Show version
python3 -m voyage_geo version
```

| Flag | Description |
|---|---|
| `-b, --brand` | Brand name (required) |
| `-w, --website` | Brand website URL |
| `-p, --providers` | Comma-separated providers (default: all via OpenRouter) |
| `-q, --queries` | Number of queries to generate (default: 20) |
| `-f, --formats` | Report formats: html, json, csv, markdown (default: html,json) |
| `-r, --resume` | Resume from existing run ID |
| `--as-of-date` | Logical run date (YYYY-MM-DD) for trend tracking/backfills |
| `--stop-after` | Stop after a stage (research, query-generation) |
| `--no-interactive` | Skip interactive review prompts |
All models are accessible through a single OpenRouter API key:
| CLI Name | Model | Provider |
|---|---|---|
| `chatgpt` | GPT-5 Mini | OpenAI |
| `gemini` | Gemini 3 Flash Preview | Google |
| `claude` | Claude Sonnet 4.5 | Anthropic |
| `perplexity-or` | Sonar Pro | Perplexity |
| `deepseek` | DeepSeek V3.2 | DeepSeek |
| `grok` | Grok 3 | xAI |
| `llama` | Llama 4 Maverick | Meta |
You can also use direct API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) for individual providers.
```bash
# OpenRouter (recommended — one key for all models)
OPENROUTER_API_KEY=sk-or-v1-...

# Direct provider keys (optional)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
PERPLEXITY_API_KEY=pplx-...

# Optional
LOG_LEVEL=info
VOYAGE_GEO_OUTPUT_DIR=./data/runs
VOYAGE_GEO_CONCURRENCY=3
```

Each run creates a self-contained directory:
```
data/runs/<run-id>/
├── metadata.json          # Run metadata (schema_version, status, brand/category, providers, config hash)
├── brand-profile.json     # Brand research output
├── queries.json           # Generated search queries
├── results/
│   ├── results.json       # All raw AI responses (+ schema_version)
│   └── by-provider/       # Split by provider
├── analysis/
│   ├── analysis.json      # Full analysis (+ schema_version)
│   ├── summary.json       # Executive summary (+ schema_version)
│   ├── snapshot.json      # Stable time-series KPI snapshot for trend indexing
│   └── *.csv              # CSV exports
└── reports/
    ├── report.html        # Interactive HTML report
    ├── report.json
    ├── report.md
    └── charts/            # PNG chart images
```
- `schema_version` is included in persisted core artifacts (`metadata.json`, `results/results.json`, `analysis/analysis.json`, `analysis/summary.json`).
- `analysis/snapshot.json` is the canonical compact record for over-time visualization and database indexing.
- `config_hash` in `metadata.json` lets you detect whether runs are directly comparable.
```
src/voyage_geo/
├── cli.py               # CLI entry (Typer + Rich)
├── config/              # Pydantic schemas, defaults, config loader
├── core/                # Engine, pipeline, context, errors
├── providers/           # AI model providers (OpenRouter, OpenAI, Anthropic, Google, Perplexity)
├── stages/
│   ├── research/          # Stage 1: Brand research + web scraping
│   ├── query_generation/  # Stage 2: Generate search queries (keyword, persona, intent strategies)
│   ├── execution/         # Stage 3: Run queries against providers
│   ├── analysis/          # Stage 4: Analyze results (6 analyzers)
│   └── reporting/         # Stage 5: Generate reports (HTML/JSON/CSV/Markdown)
├── storage/             # File-based persistence
├── types/               # Shared Pydantic type definitions
└── utils/               # Text helpers, Rich progress displays
```
| What | Interface | Location |
|---|---|---|
| AI Provider | `BaseProvider` ABC | `src/voyage_geo/providers/` |
| Query Strategy | async `generate()` function | `src/voyage_geo/stages/query_generation/strategies/` |
| Analyzer | `Analyzer` Protocol | `src/voyage_geo/stages/analysis/analyzers/` |
| Report Format | Method in `ReportingStage` | `src/voyage_geo/stages/reporting/stage.py` |
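For instance, an analyzer satisfying a structural `Analyzer` Protocol might look like the sketch below. The Protocol's real method names and signatures are not shown in this README, so treat this as a hypothetical shape and check `docs/` and `src/voyage_geo/stages/analysis/analyzers/` for the actual contract:

```python
from typing import Protocol

class Analyzer(Protocol):
    # Hypothetical contract: take raw provider results, return metrics.
    name: str
    def analyze(self, results: list[dict]) -> dict: ...

class MentionRateAnalyzer:
    """Toy analyzer: fraction of responses that mention the brand."""
    name = "mention_rate"

    def __init__(self, brand: str) -> None:
        self.brand = brand.lower()

    def analyze(self, results: list[dict]) -> dict:
        hits = sum(self.brand in r.get("response", "").lower() for r in results)
        return {"mention_rate": hits / len(results) if results else 0.0}

def run_analyzers(analyzers: list[Analyzer], results: list[dict]) -> dict:
    # Because Analyzer is a Protocol, any class with a matching shape
    # type-checks — no inheritance from a base class required.
    return {a.name: a.analyze(results) for a in analyzers}

results = [{"response": "Try YourBrand"}, {"response": "Use something else"}]
print(run_analyzers([MentionRateAnalyzer("YourBrand")], results))
# {'mention_rate': {'mention_rate': 0.5}}
```

The Protocol-based design means new analyzers plug in without touching existing code, which is consistent with the mypy checks in the development workflow below.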
See the docs/ directory for detailed guides on adding providers, analyzers, and query strategies.
```bash
pip install -e ".[dev]"
python3 -m pytest tests/ -v
python3 -m ruff check src/ tests/
python3 -m mypy src/voyage_geo/ --ignore-missing-imports
```

See CONTRIBUTING.md for guidelines.
MIT — see LICENSE for details.
