Pulls documentation from any website and converts it into clean, AI-ready Markdown. Fast, type-safe, secure, and optimized for building knowledge bases or training datasets.
NEW in v1.3.0: Rich structured metadata extraction (Open Graph, JSON-LD) for enhanced AI/RAG integration.
v1.2.0: 15 major features including language filtering, deduplication, auto-indexing, multi-source configuration, and more. Real-world testing shows 58% size reduction with automatic optimization.
Unlike tools like wget or httrack, docpull extracts only the main content, removing ads, navbars, and clutter. Output is clean Markdown with optional YAML frontmatter—ideal for RAG systems, offline docs, or ML pipelines.
- Works on any documentation site
- Smart extraction of main content
- Async + parallel fetching (up to 10× faster)
- Optional JavaScript rendering via Playwright
- Sitemap + link crawling
- Rate limiting, timeouts, content-type checks
- Saves docs in structured Markdown with YAML metadata
- Built-in Stripe profile as reference implementation (custom profiles easily added)
- Structured Metadata: Extract Open Graph, JSON-LD, and microdata during fetch
- Enhanced Frontmatter: Adds author, description, keywords, images, publish dates, and more
- AI/RAG Ready: Richer context for embeddings and retrieval systems
- Opt-in Feature: Enabled with the `--rich-metadata` flag
- Language Filtering: Auto-detect and filter by language (skip 352+ translation files)
- Deduplication: Remove duplicates with SHA-256 hashing (save 10+ MB on duplicate content)
- Auto-Index Generation: Create navigable INDEX.md with tree/TOC/categories/stats
- Size Limits: Control file and total download size (skip/truncate oversized files)
- Multi-Source Configuration: Configure multiple docs in one YAML file
- Selective Crawling: Include/exclude URL patterns for targeted fetching
- Content Filtering: Remove verbose sections (Examples, Changelog, etc.)
- Format Conversion: Output to Markdown, TOON (compact), JSON, or SQLite
- Smart Naming: 4 naming strategies (full, short, flat, hierarchical)
- Metadata Extraction: Extract titles, URLs, stats to metadata.json
- Update Detection: Only download changed files (checksums, ETags)
- Incremental Mode: Resume interrupted downloads with checkpointing
- Hooks & Plugins: Python plugin system for custom processing
- Git Integration: Auto-commit changes with customizable messages
- Archive Mode: Create tar.gz/zip archives for distribution
Real-world impact: Testing with 1,914 files (31 MB) → 13 MB (58% reduction) with all optimizations enabled.
```bash
pip install docpull
docpull --doctor  # verify installation
```
```bash
# Basic usage
docpull https://aptos.dev
docpull stripe  # use a built-in profile

# NEW: Simple optimization (v1.2.0)
docpull https://code.claude.com/docs --language en --create-index

# NEW: Rich metadata extraction (v1.3.0)
docpull https://docs.anthropic.com --rich-metadata --create-index

# NEW: Advanced optimization (v1.2.0)
docpull https://aptos.dev \
  --deduplicate \
  --keep-variant mainnet \
  --max-file-size 200kb \
  --create-index

# NEW: Multi-source configuration (v1.2.0)
docpull --sources-file examples/multi-source-optimized.yaml
```

For sites that require JavaScript, install the optional Playwright extra:

```bash
pip install "docpull[js]"
python -m playwright install chromium
```
```bash
docpull https://site.com --js
```

Python API:

```python
from docpull import GenericAsyncFetcher

fetcher = GenericAsyncFetcher(
    url_or_profile="https://aptos.dev",
    output_dir="./docs",
    max_pages=100,
    max_concurrent=20,
)
fetcher.fetch()
```

Core options:

- `--doctor` – verify installation and dependencies
- `--max-pages N` – limit crawl size
- `--max-depth N` – restrict link depth
- `--max-concurrent N` – control parallel fetches
- `--js` – enable Playwright rendering
- `--output-dir DIR` – output directory
- `--rate-limit X` – seconds between requests
- `--no-skip-existing` – re-download existing files
- `--dry-run` – test without downloading

Optimization and workflow options (v1.2.0+):

- `--language LANG` – filter by language (e.g., `en`)
- `--exclude-languages LANG [LANG ...]` – exclude languages
- `--deduplicate` – remove duplicate files
- `--keep-variant PATTERN` – keep files matching pattern when deduplicating
- `--max-file-size SIZE` – max file size (e.g., `200kb`, `1mb`)
- `--max-total-size SIZE` – max total download size
- `--include-paths PATTERN [PATTERN ...]` – only crawl matching URLs
- `--exclude-paths PATTERN [PATTERN ...]` – skip matching URLs
- `--exclude-sections NAME [NAME ...]` – remove sections by header name
- `--format {markdown,toon,json,sqlite}` – output format
- `--naming-strategy {full,short,flat,hierarchical}` – file naming strategy
- `--create-index` – generate INDEX.md with navigation
- `--extract-metadata` – extract metadata to metadata.json
- `--rich-metadata` – extract rich structured metadata (Open Graph, JSON-LD) during fetch
- `--update-only-changed` – only download changed files
- `--incremental` – enable incremental mode with resume
- `--git-commit` – auto-commit changes
- `--git-message MSG` – commit message template
- `--archive` – create compressed archive
- `--archive-format {tar.gz,tar.bz2,tar.xz,zip}` – archive format
- `--sources-file PATH` – multi-source configuration file
See `docpull --help` for the complete list of options.
Async fetching drastically reduces runtime:
| Pages | Sync | Async | Speedup |
|---|---|---|---|
| 50 | ~50s | ~6s | 8× faster |
Higher concurrency yields even better results.
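The speedup comes from the standard bounded-concurrency pattern: issue many requests at once, but cap how many are in flight. A minimal sketch of that pattern (with a simulated fetch in place of a real HTTP client; this illustrates the technique, not docpull's internals):

```python
import asyncio

async def fetch_page(url: str, semaphore: asyncio.Semaphore) -> str:
    """Fetch one page; a stand-in sleep simulates network latency."""
    async with semaphore:
        # A real fetcher would await an async HTTP request here.
        await asyncio.sleep(0.01)
        return f"<html>{url}</html>"

async def fetch_all(urls: list[str], max_concurrent: int = 10) -> list[str]:
    # The semaphore caps in-flight requests at max_concurrent while
    # gather() runs the rest of the work concurrently, preserving order.
    semaphore = asyncio.Semaphore(max_concurrent)
    return await asyncio.gather(*(fetch_page(u, semaphore) for u in urls))

pages = asyncio.run(fetch_all([f"https://example.com/page{i}" for i in range(50)]))
```

With a 10-millisecond "network" delay, 50 pages complete in roughly 5 waves instead of 50 sequential waits.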
Each downloaded page becomes a Markdown file:
```markdown
---
url: https://stripe.com/docs/payments
fetched: 2025-11-13
---

# Payment Intents
...
```

With `--rich-metadata`, the frontmatter includes Open Graph, JSON-LD, and other structured metadata:
```markdown
---
url: https://stripe.com/docs/payments
fetched: 2025-11-13
title: Accept a payment
description: Learn how to accept payments with the Payment Intents API
author: Stripe
keywords: [payments, api, stripe, checkout]
image: https://stripe.com/img/docs-preview.png
type: article
site_name: Stripe Documentation
---

# Payment Intents
...
```

The directory layout mirrors the target site's structure.
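Because every saved file carries this frontmatter, downstream pipelines can recover page metadata without extra tooling. A minimal stdlib sketch that splits a saved file into metadata and body (it assumes the flat `key: value` frontmatter shown above; a real pipeline would use a YAML parser for nested fields):

```python
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split a frontmatter-prefixed Markdown file into (metadata, body)."""
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")  # split on the first colon only
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body.lstrip("\n")

doc = """---
url: https://stripe.com/docs/payments
fetched: 2025-11-13
---

# Payment Intents
"""
meta, body = split_frontmatter(doc)
```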
```yaml
output_dir: ./docs
rate_limit: 0.5
sources:
  - stripe                    # Built-in profile
  - https://docs.example.com  # Or any URL
```

Run with:
```bash
docpull --config config.yaml
```

A multi-source configuration with per-source options:

```yaml
sources:
  anthropic:
    url: https://docs.anthropic.com
    language: en
    max_file_size: 200kb
    create_index: true
    rich_metadata: true  # Extract Open Graph, JSON-LD metadata
  claude-code:
    url: https://code.claude.com/docs
    language: en  # Skips 352 translation files!
    create_index: true
  aptos:
    url: https://aptos.dev
    deduplicate: true
    keep_variant: mainnet  # Skips 304 duplicates!
    max_file_size: 200kb
    include_paths:
      - "build/guides/*"

output_dir: ./docs
rate_limit: 0.5
git_commit: true
git_message: "Update docs - {date}"
extract_metadata: true
archive: true
```

Run with:

```bash
docpull --sources-file config.yaml
```

See the examples/ directory for more configuration examples.
docpull includes a Stripe profile as a reference. Create custom profiles for other sites:
```python
from docpull.profiles.base import SiteProfile

MY_PROFILE = SiteProfile(
    name="mysite",
    domains={"docs.mysite.com"},
    include_patterns=["/docs/", "/api/"],
    sitemap_url="https://docs.mysite.com/sitemap.xml",
    rate_limit=0.5,
)
```

Want to contribute profiles? Submit a PR with your custom profile! Popular ones may be added to the core or a community profiles repository.
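Conceptually, profile selection is a domain-to-profile lookup. A hypothetical sketch of that dispatch (the registry dict, helper name, and `"generic"` fallback are illustrative, not docpull's actual API):

```python
from urllib.parse import urlparse

# Hypothetical registry mapping hostnames to profile names; docpull's
# real registry lives inside its profiles package.
PROFILES = {
    "stripe.com": "stripe",
    "docs.stripe.com": "stripe",
    "docs.mysite.com": "mysite",
}

def resolve_profile(url: str) -> str:
    """Return a registered profile name, or "generic" for unknown hosts."""
    host = urlparse(url).hostname or ""
    return PROFILES.get(host, "generic")
```

Unknown hosts fall through to generic extraction, which is why the tool works on any documentation site even without a dedicated profile.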
- HTTPS-only
- Blocks private network IPs
- 50MB page size limit
- Timeout controls
- Validates content-type
- Playwright sandboxing
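The private-network guard protects against SSRF-style fetches. An illustrative stdlib sketch of such a check (this is a simplified illustration, not docpull's actual implementation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject non-HTTPS URLs and hosts resolving to private/loopback IPs."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```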
- Installation issues: run `docpull --doctor` to diagnose problems
- Missing dependencies: see TROUBLESHOOTING.md for common fixes
- Site requires JS: install Playwright and use `--js`
- Slow or rate limited: lower concurrency or raise `--rate-limit`
- Large sites: set `--max-pages`
For detailed troubleshooting, see TROUBLESHOOTING.md.
Automatically detect and filter documentation by language:
```bash
# English only (auto-detects /en/, _en_, docs_en_, etc.)
docpull https://code.claude.com/docs --language en --create-index
```

Impact: Claude Code docs downloaded in 9 languages = 352 unnecessary files for English-only users.
Remove duplicate files based on content hash:
```bash
# Keep mainnet version, skip testnet/devnet duplicates
docpull https://aptos.dev --deduplicate --keep-variant mainnet --create-index
```

Impact: Aptos Move reference docs across 3 environments = 304 duplicate files (~10 MB saved).
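Content-hash deduplication boils down to grouping files by their SHA-256 digest and keeping one path per group. A sketch of the idea behind `--deduplicate`/`--keep-variant` (the function and in-memory file dict are illustrative, not docpull's implementation):

```python
import hashlib

def deduplicate(files: dict[str, bytes], keep_variant: str = "mainnet") -> dict[str, bytes]:
    """Keep one file per unique content hash, preferring paths containing keep_variant."""
    by_hash: dict[str, str] = {}
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        current = by_hash.get(digest)
        # First file wins unless a later one matches the preferred variant.
        if current is None or (keep_variant in path and keep_variant not in current):
            by_hash[digest] = path
    return {path: files[path] for path in by_hash.values()}

files = {
    "aptos/testnet/move.md": b"# Move reference",
    "aptos/mainnet/move.md": b"# Move reference",  # same bytes -> duplicate
    "aptos/mainnet/intro.md": b"# Intro",
}
kept = deduplicate(files)
```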
Convert to different formats for various use cases:
```bash
# TOON format (40-60% size reduction, optimized for LLMs)
docpull https://docs.anthropic.com --format toon --language en

# SQLite database with full-text search
docpull https://docs.anthropic.com --format sqlite --language en

# Structured JSON
docpull https://docs.anthropic.com --format json --language en
```

Only download changed files:
```bash
docpull https://docs.anthropic.com \
  --incremental \
  --update-only-changed \
  --git-commit \
  --git-message "Update docs - {date}"
```

Use case: regular documentation updates with minimal bandwidth usage.
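Checksum-based change detection compares each page's hash against a persisted manifest from the previous run. A sketch of the idea behind `--update-only-changed` (the manifest shape and helper are illustrative; docpull's real manifest format and ETag handling are not shown here):

```python
import hashlib

def changed_since_last_run(path: str, content: bytes, manifest: dict) -> bool:
    """Report whether content differs from the recorded checksum, then record it."""
    digest = hashlib.sha256(content).hexdigest()
    changed = manifest.get(path) != digest
    manifest[path] = digest  # update the manifest either way
    return changed

manifest: dict[str, str] = {}
first = changed_since_last_run("docs/intro.md", b"# Intro v1", manifest)   # new file
second = changed_since_last_run("docs/intro.md", b"# Intro v1", manifest)  # unchanged
third = changed_since_last_run("docs/intro.md", b"# Intro v2", manifest)   # modified
```

Pages whose checksum matches the manifest can be skipped entirely, which is what keeps repeat runs cheap.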
Combine all optimizations:
```bash
docpull --sources-file examples/multi-source-optimized.yaml
```

See the examples/ directory for comprehensive configuration examples.
Real-world results: Testing with 4 documentation sources (Anthropic, Claude Code, Aptos, Shelby):
- Before: 1,914 files, 31 MB, no navigation
- After: 1,250 files, 13 MB (58% reduction), full indexes generated
- One command instead of 4+ separate commands with manual optimization
This release adds rich structured metadata extraction for better AI/RAG integration.
New Feature:
- Rich Metadata Extraction: Extract Open Graph, JSON-LD, microdata, and other structured metadata during fetch
- Adds author, description, keywords, images, publish dates, and more to frontmatter
- Enhances AI/RAG systems with richer context
- Enabled with the `--rich-metadata` flag or `rich_metadata: true` in config
- Powered by the extruct library
Example enhanced frontmatter:
```markdown
---
url: https://docs.example.com/guide
fetched: 2025-11-20
title: Getting Started Guide
description: Learn the basics of our platform
author: John Doe
keywords: [tutorial, guide, api]
image: https://docs.example.com/og-image.png
type: article
published_time: 2024-01-15T10:00:00Z
---
```

Backward Compatible: All existing workflows continue to work unchanged. Rich metadata is opt-in.
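To give a feel for what structured-metadata extraction involves, here is a minimal stdlib sketch that collects Open Graph `<meta>` tags from a page (docpull itself uses the extruct library, as noted above; this parser is purely illustrative):

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect <meta property="og:*" content="..."> tags into a dict."""

    def __init__(self):
        super().__init__()
        self.og: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if prop.startswith("og:") and attrs.get("content"):
            self.og[prop[3:]] = attrs["content"]  # strip the "og:" prefix

html = """
<head>
  <meta property="og:title" content="Getting Started Guide">
  <meta property="og:type" content="article">
</head>
"""
parser = OpenGraphParser()
parser.feed(html)
```

Fields collected this way (`title`, `type`, `image`, and so on) map directly onto the enhanced frontmatter keys shown above.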
This release adds 15 major features across 4 phases. See CHANGELOG.md for complete release notes.
Highlights:
- Multi-source YAML configuration
- Language filtering with auto-detection
- SHA-256 based deduplication
- Auto-index generation (tree, TOC, categories, stats)
- 4 output formats (Markdown, TOON, JSON, SQLite)
- Incremental mode with resume capability
- Git integration and archive creation
- Python plugin/hook system
Backward Compatible: All v1.0+ workflows continue to work unchanged.
MIT License - see LICENSE file for details