WPGen is a complete Python-based tool that generates WordPress themes from natural language descriptions. Simply describe your website, and WPGen will create a fully functional WordPress theme with all necessary files, then push it to GitHub.
- 🎨 Graphical User Interface: Modern Gradio-based GUI with drag-and-drop file uploads
- 🖼️ Multi-Modal AI: Upload design mockups and screenshots - AI analyzes visual layouts and styles
- 📄 Document Processing: Upload content files (PDF, Markdown, Text) to guide theme generation
- 💬 Natural Language Input: Describe your website in plain English
- 🤖 AI-Powered Generation: Uses OpenAI GPT-4 Vision, Anthropic Claude 3+, or local LLMs (LM Studio, Ollama) for intelligent theme creation
- 🔒 Local LLM Support: Run 100% locally with LM Studio or Ollama - no cloud API keys required!
- 📦 Complete WordPress Themes: Generates all necessary files (style.css, functions.php, templates, etc.)
- 🎭 Theme Identity: Every theme includes a valid style.css header and auto-generated screenshot.png (from your uploads or a branded placeholder)
- ✨ Optional Features: WooCommerce support, custom Gutenberg blocks, dark mode toggle, animated preloader
- 🚀 Always-On UX: Smooth page transitions and mobile-first, thumb-friendly navigation in every theme
- 🐙 GitHub Integration: Automatically pushes generated themes to GitHub repositories
- 🖥️ Three Interfaces: Graphical UI, Web UI, or CLI - choose your preference
- 🏗️ Modular Architecture: Clean, extensible codebase
- 🚀 Deployment Ready: Optional GitHub Actions workflows for automated deployment
- Python 3.10 or higher (3.10, 3.11, 3.12 supported)
- One of the following AI providers:
  - OpenAI API key, OR
  - Anthropic API key, OR
  - Local LLM via LM Studio or Ollama (free, no API key needed!)
- GitHub personal access token (optional, for GitHub integration)
- Git installed on your system (optional, for GitHub integration)
```bash
git clone https://github.com/blueibear/wpgen.git
cd wpgen

python -m venv venv

# On Windows
venv\Scripts\activate

# On macOS/Linux
source venv/bin/activate
```

WPGen supports optional dependency groups for different use cases:
Basic installation (CLI only):

```bash
pip install -e .
```

For development (includes testing and linting tools):

```bash
pip install -e .[dev]
```

For web UI and Gradio GUI:

```bash
pip install -e .[ui]
```

For WordPress REST API integration:

```bash
pip install -e .[wp]
```

For GitHub integration:

```bash
pip install -e .[git]
```

Full installation (all features):

```bash
pip install -e .[dev,ui,git,wp]
```

This flexible installation allows you to install only what you need. Contributors should use the full installation.
```bash
wpgen init
```

This creates a `.env` file. Edit it and add your API keys:
```bash
# LLM Provider API Keys
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# GitHub Configuration
GITHUB_TOKEN=your_github_token_here
```

Edit `config.yaml` to customize settings:
- Choose between OpenAI or Anthropic
- Configure output directory
- Set GitHub repository options
- Adjust logging preferences
The easiest way to use WPGen is through the graphical interface:
```bash
wpgen gui
```

Or use the module launcher:

```bash
python -m wpgen gui
```

Then open your browser to http://localhost:7860.
Features:
- ✅ Upload design mockups and screenshots
- ✅ Upload content documents (PDF, Markdown, Text)
- ✅ Guided Mode for structured theme specifications
- ✅ Real-time generation status
- ✅ Visual file tree preview
- ✅ One-click GitHub push
Alongside the natural-language prompt, Guided Mode lets you specify:
- Brand basics: Site name, tagline, primary goal (inform/convert/sell)
- Pages: Select top-level pages (Home, About, Blog, Contact, Services, etc.)
- Style: Mood (modern-minimal, playful, brutalist, elegant), color palette (hex codes), typography (sans/serif/mono)
- Layout: Header style (centered/split/stacked), hero type (image/video/text), sidebar position, container width
- Components: Blog, cards, gallery, testimonials, pricing, FAQ, contact form, newsletter, CTA, breadcrumbs
- Accessibility: Keyboard navigation, high-contrast mode, reduced-motion support
- Integrations: WooCommerce, SEO, analytics, newsletter
- Performance: LCP (Largest Contentful Paint) target in milliseconds
These explicit choices override AI inferences and are translated into CSS variables, template parts, and generator options for more consistent, production-ready themes.
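For example, a palette and typography choice might surface in the generated theme as CSS custom properties. A minimal illustrative sketch (the option shape and `to_css_variables` helper are hypothetical, not WPGen's actual API):

```python
# Illustrative sketch: how guided choices could map to CSS variables.
# The options dict shape and to_css_variables() are hypothetical, not WPGen's API.

def to_css_variables(options: dict) -> str:
    """Render guided style choices as a :root CSS custom-property block."""
    fonts = {
        "sans": "system-ui, sans-serif",
        "serif": "Georgia, serif",
        "mono": "ui-monospace, monospace",
    }
    lines = [f"  --color-{name}: {value};"
             for name, value in options.get("palette", {}).items()]
    lines.append(f"  --font-body: {fonts[options.get('typography', 'sans')]};")
    return ":root {\n" + "\n".join(lines) + "\n}"

print(to_css_variables({
    "palette": {"primary": "#0f766e", "surface": "#f8fafc"},
    "typography": "serif",
}))
# :root {
#   --color-primary: #0f766e;
#   --color-surface: #f8fafc;
#   --font-body: Georgia, serif;
# }
```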
WPGen now includes advanced optional features you can enable via checkboxes:
- WooCommerce support & styling: Adds WooCommerce template compatibility, basic product loop styles, and shop page support (theme works even without WooCommerce plugin installed)
- Custom Gutenberg blocks:
- Featured Products: Showcase product highlights
- Lifestyle Image: Large image block with overlay text
- Promo Banner: Call-to-action banner with custom styling
- Light/Dark mode toggle: Floating toggle button with localStorage persistence and `prefers-color-scheme` support
- Animated loading logo (preloader): Smooth page preloader with spinner (auto-hides after load, max 3s timeout)
Always-on defaults (included in every theme):
- Smooth page transitions: CSS opacity fade on navigation + hover transitions
- Thumb-friendly mobile navigation: Minimum 44×44px tap targets, responsive hamburger menu, mobile-first CSS
Note on generation time: When the GUI displays "🏗️ Generating WordPress theme files…", theme generation can take a couple of minutes depending on complexity. Please keep the tab open during this process.
Windows sometimes resolves localhost to IPv6. Use IPv4:
```bash
set GRADIO_SERVER_NAME=127.0.0.1
wpgen gui --server-name 127.0.0.1 --server-port 7860
```

CLI flags:

- `--server-name` - Host to bind (default 0.0.0.0)
- `--server-port` - Port to bind (default 7860)
- `--share` - Create a Gradio public share link
Create a public share link:
```bash
wpgen gui --share
```

Custom port:

```bash
wpgen gui --server-port 8080
```

Using environment variables:

```bash
export GRADIO_SERVER_NAME=127.0.0.1
export GRADIO_SERVER_PORT=7860
wpgen gui
```

📖 See GUI_FEATURES.md for complete GUI documentation.
wpgen generate "Create a dark-themed photography portfolio site with a blog and contact form"wpgen generate --interactivewpgen generate "Your description" --no-pushwpgen generate "Your description" --repo-name my-custom-themewpgen generate "Your description" --config my-config.yamlwpgen serveThen open your browser to http://localhost:5000
The web interface provides:
- Visual form for entering descriptions
- Quick example prompts
- Real-time generation status
- Analysis-only mode to preview requirements
Here are some example prompts to get you started:
Photography Portfolio
Create a dark-themed photography portfolio site with a blog and contact form
Corporate Website
Build a modern corporate website with services page, team page, testimonials, and a light blue color scheme
Minimal Blog
Design a minimal blog theme with clean typography, full-width layout, and sidebar
E-commerce
Create an e-commerce ready theme with WooCommerce support, product showcase, and shopping cart
Magazine
Build a magazine-style theme with featured posts, multiple categories, and advertisement areas
```
wpgen/
├── wpgen/ # Main package
│ ├── __init__.py
│ ├── llm/ # LLM provider abstractions
│ │ ├── base.py # Base provider interface
│ │ ├── openai_provider.py # OpenAI implementation
│ │ └── anthropic_provider.py # Anthropic implementation
│ ├── parsers/ # Prompt parsing
│ │ └── prompt_parser.py
│ ├── generators/ # WordPress code generation
│ │ └── wordpress_generator.py
│ ├── github/ # GitHub integration
│ │ └── integration.py
│ └── utils/ # Utilities
│ └── logger.py
├── web/ # Flask web application
│ ├── app.py
│ ├── templates/
│ │ └── index.html
│ └── static/
├── .github/
│ └── workflows/
│ └── deploy.yml # Deployment workflow template
├── config.yaml # Configuration file
├── main.py # CLI entry point
├── requirements.txt # Python dependencies
├── .env.example # Example environment variables
└── README.md # This file
```
The config.yaml file contains all configuration options:
```yaml
llm:
  # Options: "openai", "anthropic", "local-lmstudio", "local-ollama"
  provider: "openai"

openai:
  model: "gpt-4-turbo-preview"
  max_tokens: 4096
  temperature: 0.7

anthropic:
  model: "claude-3-5-sonnet-20241022"
  max_tokens: 4096
  temperature: 0.7

# Local LLM providers (no API key required!) - Dual-Model Configuration
local-lmstudio:
  # Brains model (text-only reasoning)
  brains_model: "Meta-Llama-3.1-8B-Instruct"
  brains_base_url: "http://localhost:1234/v1"
  # Vision model (image analysis, optional)
  vision_model: "Llama-3.2-Vision-11B-Instruct"
  vision_base_url: "http://localhost:1234/v1"

local-ollama:
  # Brains model (text-only reasoning)
  brains_model: "llama3.1:8b-instruct"
  brains_base_url: "http://localhost:11434/v1"
  # Vision model (image analysis, optional)
  vision_model: "llama3.2-vision:11b-instruct"
  vision_base_url: "http://localhost:11434/v1"
```

See the "Using Local LLMs with LM Studio or Ollama (Dual-Model)" section below for complete setup instructions.
```yaml
wordpress:
  theme_prefix: "wpgen"
  wp_version: "6.4"
  include_sample_content: true
  theme_type: "standalone"  # or "child"
  author: "WPGen"
  license: "GPL-2.0-or-later"

github:
  api_url: "https://api.github.com"
  repo_name_pattern: "wp-{theme_name}-{timestamp}"
  auto_create: true
  private: false
  default_branch: "main"

output:
  output_dir: "output"
  clean_before_generate: false
  keep_local_copy: true

deployment:
  enabled: false
  method: "github_actions"  # or "ftp", "ssh"
  ftp:
    host: ""
    port: 21
    username: ""
    remote_path: "/public_html/wp-content/themes"
  ssh:
    host: ""
    port: 22
    username: ""
    remote_path: "/var/www/html/wp-content/themes"
    key_file: "~/.ssh/id_rsa"
```

- Go to OpenAI Platform
- Sign in or create an account
- Navigate to API Keys
- Click "Create new secret key"
- Copy the key and add it to your `.env` file
- Go to Anthropic Console
- Sign in or create an account
- Navigate to API Keys
- Click "Create Key"
- Copy the key and add it to your `.env` file
WPGen uses secure authentication via GIT_ASKPASS - your token is never embedded in Git remote URLs or stored in Git config.
- Go to GitHub Settings > Tokens
- Click "Generate new token (classic)"
- Select minimal scopes needed:
  - `repo` (for repository creation and push)
  - `workflow` (optional, only if using GitHub Actions)
- Click "Generate token"
- Copy the token and add it to your `.env` file
Security Note: WPGen uses a temporary GIT_ASKPASS script to provide credentials securely (a sketch of the technique follows this list). Your token will never appear in:
- Git remote URLs
- Git configuration files
- Log output (automatically redacted)
- Command history
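For the curious, the general GIT_ASKPASS technique looks roughly like this. This is a simplified POSIX-only sketch of the idea, not WPGen's actual implementation:

```python
# Simplified sketch of the GIT_ASKPASS technique (illustrative only, not
# WPGen's implementation). Git invokes the askpass program whenever it needs
# credentials, so the token never touches the remote URL or git config.
import os
import stat
import subprocess
import tempfile

def push_with_askpass(repo_dir: str, token: str) -> None:
    # Write a throwaway script that prints the token on demand.
    # (Real implementations inspect git's prompt argument and handle Windows.)
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(f'#!/bin/sh\necho "{token}"\n')
        askpass_path = f.name
    os.chmod(askpass_path, stat.S_IRWXU)  # owner-only read/write/execute
    try:
        env = dict(os.environ, GIT_ASKPASS=askpass_path, GIT_TERMINAL_PROMPT="0")
        subprocess.run(["git", "push", "origin", "main"],
                       cwd=repo_dir, env=env, check=True)
    finally:
        os.remove(askpass_path)  # never leave the token on disk
```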
WPGen now supports running 100% locally with dual-model configuration: separate brains (text-only reasoning) and vision (image analysis) models. No cloud API keys required! Both LM Studio and Ollama provide OpenAI-compatible API servers.
- Privacy: All theme generation happens on your machine
- Cost: No API usage fees after initial setup
- Offline: Works without internet connection
- Control: Full control over model selection and parameters
- Dual-Model: Use separate brains (text) and vision (images) models for optimal performance
WPGen's local LLM support uses two models:

- Brains Model (Text-Only)
  - Used for: Prompt analysis, code generation without images, text-based reasoning
  - Examples: `Llama-3.1-8B-Instruct`, `Qwen2.5-14B-Instruct`
  - Faster, lighter, handles all text-only tasks

- Vision Model (Image-Capable)
  - Used for: Design analysis, image-guided code generation, mockup interpretation
  - Examples: `Llama-3.2-Vision-11B-Instruct`, `qwen2-vl:7b-instruct`, `llava:13b`
  - Required ONLY when uploading design images/mockups
Automatic Routing: WPGen automatically routes requests to the appropriate model:
- Images present → Uses vision model
- Text-only → Uses brains model
Vision Model is Optional: If you only provide text prompts (no images), you don't need a vision model.
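The routing rule itself is simple; here is a minimal sketch (the function name and default model names are illustrative, not the actual CompositeLLMProvider code):

```python
# Hypothetical sketch of dual-model routing; names are illustrative.
from typing import Any

def pick_model(messages: list[dict[str, Any]],
               brains_model: str = "llama3.1:8b-instruct",
               vision_model: str = "llama3.2-vision:11b-instruct") -> str:
    """Route to the vision model only when a message contains image content."""
    for message in messages:
        content = message.get("content")
        if isinstance(content, list):  # multi-part content (text + images)
            if any(part.get("type") == "image_url" for part in content):
                return vision_model
    return brains_model

# Text-only → brains model
assert pick_model([{"role": "user", "content": "Build a blog theme"}]).startswith("llama3.1")
# Message with an image part → vision model
assert "vision" in pick_model([{"role": "user", "content": [
    {"type": "text", "text": "Extract colors"},
    {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
]}])
```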
LM Studio provides a user-friendly interface for running local LLMs with an OpenAI-compatible server.
1. Install LM Studio
   - Download from lmstudio.ai
   - Available for Windows, macOS, and Linux

2. Download Both Models

   Brains Model (Text-Only - Required):
   - Open LM Studio's model search
   - Download one of:
     - `Meta-Llama-3.1-8B-Instruct` (balanced, ~8GB RAM)
     - `Qwen2.5-14B-Instruct` (stronger reasoning, ~14GB RAM)
     - `Mixtral-8x7B-Instruct` (best quality, needs GPU ~30GB VRAM)

   Vision Model (Image Analysis - Optional):
   - Download one of:
     - `Llama-3.2-Vision-11B-Instruct` (recommended, ~11GB RAM)
     - `Qwen2-VL-7B-Instruct` (lighter, ~7GB RAM)
     - `Phi-3.5-Vision-Instruct` (lightweight, ~3.8GB RAM)

3. Start the OpenAI-Compatible Server
   - In LM Studio, go to the "Local Server" tab
   - Load your brains model first (e.g., `Meta-Llama-3.1-8B-Instruct`)
   - Click "Start Server"
   - Default endpoint: `http://localhost:1234/v1`
   - Note: LM Studio can switch models on the fly. When WPGen requests vision, manually switch to your vision model in LM Studio, or run two instances on different ports.
4. Configure WPGen (Dual-Model)

   Edit `config.yaml`:

   ```yaml
   llm:
     provider: "local-lmstudio"
     temperature: 0.4
     max_tokens: 2048
     timeout: 60

   local-lmstudio:
     # Brains model (text-only reasoning)
     brains_model: "Meta-Llama-3.1-8B-Instruct"
     brains_base_url: "http://localhost:1234/v1"
     # Vision model (for image analysis)
     vision_model: "Llama-3.2-Vision-11B-Instruct"
     vision_base_url: "http://localhost:1234/v1"  # Same server, switch models manually
   ```

   For separate servers (recommended for production):

   ```yaml
   local-lmstudio:
     brains_model: "Meta-Llama-3.1-8B-Instruct"
     brains_base_url: "http://localhost:1234/v1"   # First LM Studio instance
     vision_model: "Llama-3.2-Vision-11B-Instruct"
     vision_base_url: "http://localhost:1235/v1"   # Second LM Studio instance on a different port
   ```

5. Generate Your Theme
   ```bash
   # CLI (text-only, uses brains model)
   wpgen generate "Modern portfolio with dark mode" --provider local-lmstudio

   # GUI with image uploads (uses both brains + vision models)
   wpgen gui
   # 1. Select "local-lmstudio" from LLM Provider
   # 2. Upload design mockups
   # 3. Generate (automatically routes to vision model for images)
   ```

Brains Models (Text-Only Reasoning):
| Model | Best For | RAM | Settings |
|---|---|---|---|
| `Meta-Llama-3.1-8B-Instruct` | General theming, balanced | ~8GB | temp: 0.4-0.6, max_tokens: 2048 |
| `Qwen2.5-14B-Instruct` | Complex layouts, stronger reasoning | ~14GB | temp: 0.3-0.5, max_tokens: 4096 |
| `Mixtral-8x7B-Instruct` | Best quality, detailed specs | ~30GB VRAM | temp: 0.3-0.5, max_tokens: 4096 |
Vision Models (Image Analysis):
| Model | Best For | RAM | Settings |
|---|---|---|---|
| `Llama-3.2-Vision-11B-Instruct` | Best balance, design analysis | ~11GB | temp: 0.4-0.6, max_tokens: 2048 |
| `Qwen2-VL-7B-Instruct` | Lighter, good color/layout extraction | ~7GB | temp: 0.4-0.6, max_tokens: 2048 |
| `Phi-3.5-Vision-Instruct` | Fastest, basic mockup analysis | ~3.8GB | temp: 0.5-0.7, max_tokens: 2048 |
Brains Model (Text):
```
You are WPGen's Theme Architect. Generate precise WordPress theme requirements (pages, template parts, color tokens, typography, accessibility defaults). Be deterministic: choose defaults when input is missing. Output clean, production-ready specs.
```
Vision Model (Images):
```
You are a design analyst. Extract layout patterns, color palette (hex codes), typography vibe, spacing density, and component list from design mockups. Output concise, structured findings: colors, fonts, layout type, key components.
```
Ollama is a powerful CLI tool for running LLMs locally with excellent model management and easy switching between models.
1. Install Ollama

   ```bash
   # macOS/Linux
   curl -fsSL https://ollama.ai/install.sh | sh

   # Or download from https://ollama.ai/download
   # Windows: Download installer from ollama.ai
   ```
2. Pull Both Models

   Brains Model (Text-Only - Required):

   ```bash
   # Recommended baseline
   ollama pull llama3.1:8b-instruct

   # Or stronger alternatives:
   ollama pull qwen2.5:14b-instruct   # Better reasoning
   ollama pull mixtral:8x7b-instruct  # Best quality (needs GPU)
   ```

   Vision Model (Image Analysis - Optional):

   ```bash
   # Recommended for design analysis
   ollama pull llama3.2-vision:11b-instruct

   # Or alternatives:
   ollama pull qwen2-vl:7b-instruct  # Lighter, good color extraction
   ollama pull llava:13b             # Mature vision model
   ```

   View all available models at ollama.ai/library
3. Verify the Ollama Server

   Ollama automatically runs as a service on port 11434:

   ```bash
   # Check if running
   curl http://localhost:11434/v1/models

   # If not running, start manually:
   ollama serve
   ```

   The OpenAI-compatible API is available at `/v1`.
4. Configure WPGen (Dual-Model)

   Edit `config.yaml`:

   ```yaml
   llm:
     provider: "local-ollama"
     temperature: 0.4
     max_tokens: 2048
     timeout: 60

   local-ollama:
     # Brains model (text-only reasoning)
     brains_model: "llama3.1:8b-instruct"
     brains_base_url: "http://localhost:11434/v1"
     # Vision model (for image analysis)
     vision_model: "llama3.2-vision:11b-instruct"
     vision_base_url: "http://localhost:11434/v1"  # Same server, Ollama auto-switches
   ```

   Ollama automatically switches models - no need to run multiple instances!
5. Generate Your Theme

   ```bash
   # CLI (text-only, uses brains model)
   wpgen generate "Minimal blog with sidebar" --provider local-ollama

   # GUI with image uploads (uses both brains + vision models)
   wpgen gui
   # 1. Select "local-ollama" from LLM Provider
   # 2. Upload design mockups
   # 3. Generate (automatically routes to vision model for images)
   ```

Brains Models (Text-Only Reasoning):
| Model | Best For | RAM/VRAM | Settings |
|---|---|---|---|
| `llama3.1:8b-instruct` | General theming, baseline | ~8GB | temp: 0.4-0.6, max_tokens: 2048 |
| `qwen2.5:14b-instruct` | Complex layouts, better reasoning | ~14GB | temp: 0.3-0.5, max_tokens: 4096 |
| `mixtral:8x7b-instruct` | Best quality, detailed specs | ~30GB VRAM | temp: 0.3-0.5, max_tokens: 4096 |
Vision Models (Image Analysis):
| Model | Best For | RAM/VRAM | Settings |
|---|---|---|---|
| `llama3.2-vision:11b-instruct` | Best balance, design analysis | ~11GB | temp: 0.4-0.6, max_tokens: 2048 |
| `qwen2-vl:7b-instruct` | Lighter, good color/layout extraction | ~7GB | temp: 0.4-0.6, max_tokens: 2048 |
| `llava:13b` | Mature vision model, reliable mockup analysis | ~13GB | temp: 0.4-0.6, max_tokens: 2048 |
Brains Model (Text):
```
System: You are WPGen's Theme Architect. Generate precise WordPress theme requirements and file plans.
- Use user prompt + guided options.
- Define tokens: --color-primary, --color-surface, --radius-lg; fonts and fallbacks.
- Output page templates and template-parts required.
- Use accessible, mobile-first patterns with smooth transitions.
```
Vision Model (Images):
```
System: You are a design analyst for WordPress themes. Extract these details from design mockups:
- Color palette (hex codes for primary, secondary, background, text)
- Typography (font families, sizes, weights)
- Layout patterns (grid, flexbox, columns)
- Component list (header, hero, cards, footer, forms)
- Spacing/density (tight, balanced, spacious)
Output structured, concise findings.
```
WPGen's CompositeLLMProvider automatically routes requests to the appropriate model:
```python
from openai import OpenAI

# Create two clients (brains + vision)
brains_client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
vision_client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

# WPGen routes automatically:
# - Text-only request → uses brains_model
# - Request with images → uses vision_model

# Example text-only (brains):
response = brains_client.chat.completions.create(
    model="llama3.1:8b-instruct",  # Brains model
    messages=[
        {"role": "system", "content": "You are WPGen's Theme Architect."},
        {"role": "user", "content": "Build a modern portfolio theme..."},
    ],
    temperature=0.4,
    max_tokens=2048,
)

# Example with images (vision):
response = vision_client.chat.completions.create(
    model="llama3.2-vision:11b-instruct",  # Vision model
    messages=[
        {"role": "user", "content": [
            {"type": "text", "text": "Extract colors and layout from this mockup"},
            {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}},
        ]},
    ],
    temperature=0.4,
    max_tokens=2048,
)
```
```bash
# Text-only generation (uses brains model)
wpgen generate "Dark portfolio theme" --provider local-lmstudio
# Or with Ollama
wpgen generate "Corporate website" --provider local-ollama
# GUI allows image uploads (automatically uses vision model when images present)
wpgen gui
```

1. Launch the Gradio GUI:

   ```bash
   wpgen gui
   ```

2. Expand the "🤖 LLM Provider" accordion

3. Select your provider:
   - `local-lmstudio` for LM Studio
   - `local-ollama` for Ollama

4. (Optional) Override the base URL or model name

5. Describe your theme and click "🚀 Generate WordPress Theme"
The GUI will show the selected provider in the status: 🤖 Initializing AI provider (local-ollama)...
```yaml
llm:
  provider: "local-ollama"
```

This automatically uses:

- LM Studio: `http://localhost:1234/v1` with `Meta-Llama-3.1-8B-Instruct`
- Ollama: `http://localhost:11434/v1` with `llama3.1:8b-instruct`
```yaml
llm:
  provider: "local-lmstudio"

local-lmstudio:
  base_url: "http://192.168.1.100:1234/v1"  # Remote LM Studio server
  model: "Qwen2.5-14B-Instruct"
  temperature: 0.3
  max_tokens: 4096
  timeout: 120
```

LM Studio Issues:
- "Connection refused"
  - Ensure the LM Studio server is running (green indicator)
  - Check that port 1234 is not blocked by a firewall
  - Try: `curl http://localhost:1234/v1/models`

- "Model not found"
  - The model name in config.yaml must match the loaded model in LM Studio
  - Check the model name in LM Studio's server tab

- Slow generation
  - Use GPU acceleration (Settings > Hardware)
  - Try a smaller model (8B instead of 14B)
  - Reduce `max_tokens` in config
Ollama Issues:

- "Connection refused"
  - Start Ollama: `ollama serve`
  - Check if it's running: `ps aux | grep ollama`
  - Verify the endpoint: `curl http://localhost:11434/v1/models`

- "Model not found"
  - Pull the model first: `ollama pull llama3.1:8b-instruct`
  - List installed models: `ollama list`
  - Use the exact tag format shown by `ollama list`

- Out of memory
  - Use a smaller model: `ollama pull llama3.1:8b-instruct`
  - Set `OLLAMA_NUM_GPU=0` to use CPU only
  - Close other applications
Hardware Recommendations:
- Minimum: 8GB RAM, CPU-only (slow but works)
- Recommended: 16GB RAM, NVIDIA GPU with 8GB+ VRAM
- Optimal: 32GB RAM, NVIDIA GPU with 24GB+ VRAM (RTX 3090/4090)
Speed vs Quality:
- Fast (1-2 min/theme): `llama3.1:8b-instruct`
- Balanced (3-5 min/theme): `qwen2.5:14b-instruct`
- Quality (5-10 min/theme): `mixtral:8x7b-instruct`
GPU Acceleration:
- LM Studio: Enable in Settings > Hardware > Use GPU
- Ollama: Automatically uses GPU if available (CUDA/Metal/ROCm)
| Feature | Cloud (OpenAI/Anthropic) | Local (LM Studio/Ollama) |
|---|---|---|
| Cost | Pay per token (~$0.01-0.10/theme) | Free after setup |
| Privacy | Data sent to cloud | 100% local |
| Quality | Excellent (GPT-4, Claude 3.5) | Good (Llama 3.1, Mixtral) |
| Speed | Fast (2-10 sec/theme) | Slower (1-10 min/theme) |
| Setup | API key only | Download models (~4-40GB) |
| Hardware | None | 8GB+ RAM, GPU recommended |
| Offline | No | Yes |
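To put the Cost row in perspective, here is a back-of-envelope estimate; the per-token prices are illustrative assumptions, not quoted rates:

```python
# Back-of-envelope cloud cost estimate for one theme generation.
# Prices per 1M tokens are illustrative assumptions, not current rates.
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_m: float = 2.50,
                  price_out_per_m: float = 10.00) -> float:
    """Return estimated USD cost for a single generation call."""
    return (input_tokens / 1_000_000) * price_in_per_m \
         + (output_tokens / 1_000_000) * price_out_per_m

# A theme run might send ~3k prompt tokens and receive ~4k tokens of code
print(f"~${estimate_cost(3_000, 4_000):.3f} per theme")  # ~$0.048
```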
Test local providers with both text-only and vision scenarios:
```bash
# ===== LM Studio Dual-Model Test =====
# 1. Download models in LM Studio:
# - Meta-Llama-3.1-8B-Instruct (brains)
# - Llama-3.2-Vision-11B-Instruct (vision)
# 2. Start server on port 1234 with brains model loaded
# 3. Update config.yaml:
# provider: local-lmstudio
# brains_model: Meta-Llama-3.1-8B-Instruct
# brains_base_url: http://localhost:1234/v1
# vision_model: Llama-3.2-Vision-11B-Instruct
# vision_base_url: http://localhost:1234/v1
# Test 1: Text-only generation (uses brains model)
wpgen generate "Modern blog with dark mode" --provider local-lmstudio
# Expected: Theme generates using brains model, no vision needed
# Test 2: GUI with image upload (uses vision model)
wpgen gui
# 1. Select local-lmstudio provider
# 2. Upload a design mockup image
# 3. Generate
# Expected: GUI shows "Dual-model: Brains + Vision" in status
# Manually switch to vision model in LM Studio when image analysis starts
# Test 3: Missing vision model error
# 1. In GUI, clear vision_model field
# 2. Upload an image
# 3. Try to generate
# Expected: Clear error message telling user to set vision model or remove images
# ===== Ollama Dual-Model Test =====
# 1. Pull both models:
ollama pull llama3.1:8b-instruct
ollama pull llama3.2-vision:11b-instruct
# 2. Ensure Ollama server running:
curl http://localhost:11434/v1/models
# 3. Update config.yaml:
# provider: local-ollama
# brains_model: llama3.1:8b-instruct
# vision_model: llama3.2-vision:11b-instruct
# (both use http://localhost:11434/v1)
# Test 1: Text-only (uses brains model)
wpgen generate "Minimal portfolio" --provider local-ollama
# Expected: Theme generates locally, logs show brains model usage
# Test 2: GUI with image (uses vision model)
wpgen gui
# 1. Select local-ollama
# 2. Upload mockup
# 3. Generate
# Expected: Ollama automatically switches to vision model when analyzing images
# Test 3: Verify automatic routing
# Check logs for model switching: brains → vision → brains
# ===== Verify GUI Hover Tooltips =====
# 1. Launch GUI: wpgen gui
# 2. Hover over each control and verify info tooltips appear:
# - Website Description
# - LLM Provider dropdown
# - Brains Model/Base URL
# - Vision Model/Base URL
# - All Guided Mode fields (Site name, Tagline, Goal, etc.)
# - Optional Features (WooCommerce, Gutenberg blocks, Dark mode, Preloader)
# - Image upload
# - Text upload
# - Generation Options (Push to GitHub, Deploy to WordPress, Repo name)
# Expected: Every control has a clear 1-2 line tooltip explaining its purpose
```

- LM Studio: lmstudio.ai | Docs
- Ollama: ollama.ai | Model Library | GitHub
- Recommended Models: DataCamp 2024 Guide | Collabnix 2025 Roundup
When you generate a theme, WPGen creates the following files:
```
theme-name/
├── style.css # Theme stylesheet with header
├── functions.php # Theme functions and setup
├── index.php # Main template file
├── header.php # Header template
├── footer.php # Footer template
├── sidebar.php # Sidebar template
├── single.php # Single post template
├── page.php # Static page template
├── archive.php # Archive template
├── search.php # Search results template
├── 404.php # 404 error page
├── page-{custom}.php # Custom page templates
├── screenshot.txt # Screenshot placeholder
├── README.md # Theme documentation
├── .gitignore # Git ignore file
└── wp-config-sample.php # WordPress config sample
```
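Before deploying, you can sanity-check that the essentials are present. A small hypothetical helper (not shipped with WPGen; the `output/theme-name` path is illustrative):

```python
# Hypothetical sanity check (not part of WPGen): verify a generated theme
# contains the files WordPress requires before zipping or deploying it.
from pathlib import Path

REQUIRED = ["style.css", "index.php", "functions.php"]

def check_theme(theme_dir: str) -> list[str]:
    """Return a list of required files missing from the theme directory."""
    root = Path(theme_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

missing = check_theme("output/theme-name")
print("Theme looks complete" if not missing else f"Missing: {missing}")
```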
If deployment is enabled in config.yaml, WPGen will create a .github/workflows/deploy.yml file in your theme directory. This workflow can deploy your theme via FTP or SSH.
In your GitHub repository, add these secrets:
For FTP deployment:
- `FTP_HOST`: FTP server hostname
- `FTP_USERNAME`: FTP username
- `FTP_PASSWORD`: FTP password
For SSH deployment:
- `SSH_HOST`: SSH server hostname
- `SSH_USERNAME`: SSH username
- `SSH_PRIVATE_KEY`: SSH private key
- `SSH_REMOTE_PATH`: Remote path on server
- Navigate to the output directory
- Create a ZIP file of your theme directory (a Python sketch follows this list)
- In WordPress admin, go to Appearance > Themes > Add New > Upload Theme
- Upload the ZIP file and activate
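For step 2, Python's standard library can build the ZIP. A minimal sketch; the `output/theme-name` path is illustrative:

```python
# Minimal sketch for step 2, using only the standard library.
# The "output/theme-name" path is illustrative.
import shutil

# Creates theme-name.zip with the theme folder at the archive root,
# which is the layout the WordPress theme uploader expects.
archive = shutil.make_archive("output/theme-name", "zip",
                              root_dir="output", base_dir="theme-name")
print(f"Created {archive}")
```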
```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Run tests
pytest tests/

# Run with coverage
pytest --cov=wpgen tests/
```

This project follows PEP 8 guidelines. Format your code with:

```bash
black wpgen/
isort wpgen/
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Error: "API key is required"
- Make sure you've created a `.env` file (run `wpgen init`)
- Verify your API key is correctly set in `.env`
- Check that you're using the correct provider (OpenAI or Anthropic)
Error: "Failed to push to GitHub"
- Verify your GitHub token has `repo` and `workflow` scopes
- Check that the repository name is valid
- Ensure you have permission to create repositories
Error: "Failed to generate code"
- Check your internet connection
- Verify your API key is valid and has credits
- Try a simpler prompt first
- Check the logs in `logs/wpgen.log`
Error: "Address already in use"
- Another process is using port 5000
- Change the port in `config.yaml` under `web.port`
Logs are written to logs/wpgen.log by default. You can configure logging in config.yaml:
```yaml
logging:
  level: "INFO"  # DEBUG, INFO, WARNING, ERROR, CRITICAL
  log_file: "logs/wpgen.log"
  format: "json"  # or "text"
  console_output: true
  colored_console: true
```

- Never commit your `.env` file to version control
- Keep your API keys secret
- Use environment variables for sensitive data
- Generated themes follow WordPress security best practices
- Review generated code before deploying to production
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Python 3.10+
- Uses OpenAI GPT-4, Anthropic Claude, or local LLMs for AI generation
- Flask for web interface
- GitPython for Git operations
- WordPress coding standards
For issues, questions, or contributions:
- Open an issue on GitHub
- Check the documentation
- Review existing issues for solutions
If you find WPGen useful, consider supporting the project.
Future enhancements planned:
- Support for more LLM providers
- Theme customization wizard
- Plugin generation
- Theme preview before generation
- Batch theme generation
- Custom template library
- WordPress.org theme repository compliance
- Docker support
- CI/CD integration examples
- Initial release
- OpenAI and Anthropic support
- CLI and Web UI
- GitHub integration
- WordPress theme generation
- Deployment workflows
Made with ❤️ by WPGen
Problem: `ModuleNotFoundError` when importing `wpgen`

Solution: Install dependencies:

```bash
pip install -r requirements.txt
```

Problem: `OPENAI_API_KEY environment variable is required`
Solution: Set your API key:
```bash
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...
```

Problem: Configuration validation failed
Solution: Run the validation endpoint to see detailed errors:

```bash
curl -X POST http://localhost:5000/api/config/validate
```

Fix the reported fields in `config.yaml`.
Problem: Cannot push to GitHub
Solution: Verify your token has the `repo` scope and is set:

```bash
export GITHUB_TOKEN=ghp_...
```

Problem: LLM requests time out
Solution: Increase the timeout in `config.yaml`:

```yaml
llm:
  timeout: 120  # seconds
```

Problem: "File too large" error
Solution: Adjust limits via environment variables:
```bash
export WPGEN_MAX_UPLOAD_SIZE=52428800  # 50MB in bytes
export WPGEN_MAX_IMAGE_SIZE=10485760   # 10MB
```

Problem: Theme generates but has validation warnings
Solution: Enable strict mode in config to see all issues:
```yaml
validation:
  enabled: true
  strict: true
```

For more help, see the Support section above.
