Archi is a powerful Go-based command-line tool that analyzes directory structures and generates AI-powered insights about your project's architecture. It creates detailed reports, visualizations, and architectural recommendations by leveraging AI to understand file contents and folder organization.
Archi relies on the AI Queuer project for request orchestration and batching when communicating with AI services. AI Queuer (https://github.com/Hydevs-Corp/ai-queuer) handles queued/batched requests to the AI API, retry behavior, and provider-specific plumbing, so Archi can focus on analysis and report generation.
There are currently no plans for Archi to integrate other AI providers directly; however, AI Queuer plans to add multi-provider support, which will let Archi target multiple AI backends through the queuer without changing its core request logic.
- 🔍 Smart Directory Analysis: Recursively scans directories and analyzes file contents
- 🤖 AI-Powered Insights: Uses Mistral AI models to understand and describe files and folders
- 📊 Multiple Output Formats: Generates JSON, Markdown, and detailed reports
- 🖼️ Image Analysis: Supports analysis of images using vision AI
- 📄 Document Support: Reads and analyzes DOCX, XLSX, PDF, and text files
- ⚡ Estimation Mode: Quickly estimates processing time before full analysis
- 🏗️ Architecture Analysis: Provides detailed architectural recommendations
- ⚙️ Modern CLI: Built with Cobra for intuitive command structure
- 🔧 Flexible Config: YAML/JSON configuration with environment variable support
- 🧵 Batched Requests: Control concurrency with a configurable batch size
Here are some of the features and improvements planned for future releases:
- Architecture Generation: Optionally create the recommended folder architecture directly on the filesystem.
- Enhanced Visualization: Generate interactive diagrams (e.g., using D3.js or Mermaid.js) of the folder structure and dependencies.
- Cost Estimation Improvements: Refine cost and time estimations based on file types and token counts.
- Context Analysis: Provide a way to supply predefined context to the file, folder, and architecture analysis.
- Metadata Injection: Inject file metadata into the context analysis.
- Advanced File Outlines: Introduce file-specific outlines that flag security vulnerabilities and redundant code.
- Flat Analysis Mode: Add a "flat analysis" mode to get a file architecture overview without deep content analysis, while retaining duplicate file warnings.
- Expanded Media Support: Add support for more media file types, including audio and video formats.
- Go 1.25 or later
- An AI API service running (default: http://localhost:3005)
- The AI service should support:
  - `/ask` endpoint for text analysis
  - `/analyze-image` endpoint for image analysis
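Before running Archi you can sanity-check that the AI service is reachable. The sketch below POSTs a minimal prompt to the `/ask` endpoint using the request shape documented in the API section later in this README; the URL and model name are the defaults used throughout this document and may differ in your setup.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Minimal /ask request body, matching the contract shown later in this README.
	body, _ := json.Marshal(map[string]any{
		"history": []map[string]string{{"role": "user", "content": "ping"}},
		"model":   "mistral-small-2501",
	})

	resp, err := http.Post("http://localhost:3005/ask", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("AI service unreachable:", err)
		return
	}
	defer resp.Body.Close()

	raw, _ := io.ReadAll(resp.Body)
	fmt.Println("status:", resp.Status)
	fmt.Println("response:", string(raw))
}
```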
- Clone the repository:

```bash
git clone https://github.com/Hydevs-Corp/archi
cd archi
```

- Install dependencies:

```bash
go mod tidy
```

- Build the application:

```bash
go build
```

Download the latest release for your platform from the Releases page.
- Download the `archi-windows-amd64.exe` file from the latest release
- Place it in a directory of your choice (e.g., `C:\tools\archi\`)
- Optionally, add the directory to your system PATH for global access
Using the Windows binary:
```bash
# Run from the same directory
archi-windows-amd64.exe --help
# If added to PATH, you can use it globally
archi-windows-amd64.exe estimate C:\your\project
# Analyze current directory
archi-windows-amd64.exe
# Generate architectural recommendations
archi-windows-amd64.exe architecture
```

Windows Command Examples:

```bash
# Quick estimation
archi-windows-amd64.exe estimate
# Analyze a specific directory
archi-windows-amd64.exe "C:\Users\YourName\Documents\MyProject"
# Generate reports with custom config
archi-windows-amd64.exe --config config.yaml "C:\path\to\project"
# Folders only analysis (faster)
# Configure via config.yaml (mode: folder-only) or set env var:
set ARCHI_MODE=folder-only
archi-windows-amd64.exe "C:\your\project"
```

Windows Configuration Notes:
- Configuration files can be placed in the same directory as the executable
- Use forward slashes (`/`) or double backslashes (`\\`) in paths within configuration files
- Environment variables work the same way:

```bash
set ARCHI_APIBASEURL=http://localhost:3005
```
Create a `config.yaml` file from the example:

```bash
cp config.yaml.example config.yaml
```

Example `config.yaml`:

```yaml
# API Configuration
apiBaseURL: "http://localhost:3005"
# Output Configuration
defaultOutputDir: "."
jsonOutputFile: "output.json"
markdownOutputFile: "output.md"
reportOutputFile: "report.md"
estimationFile: "estimation.md"
# AI Model Configuration (single or multi-model)
# Option A: Single model (string) — only for Mistral when sent as a string
fileAnalysisModel: "mistral-small-2501"
folderAnalysisModel: "mistral-small-2501"
architectureAnalysisModel: "mistral-small-2501"
imageAnalysisModel: "magistral-small-2509"
# Option B: Multi-model (takes precedence over the single-model keys above if non-empty)
# fileAnalysisModels:
# - provider: "gemini"
# model: "gemini-2.5-flash"
# - provider: "mistral"
# model: "mistral-small-2503"
# folderAnalysisModels: []
# architectureAnalysisModels: []
# imageAnalysisModels: []
# Processing Configuration
maxFileSize: 1048576 # 1MB in bytes
requestDelay: "200ms"
batchSize: 5 # Number of concurrent requests per batch
# Analysis Mode
# Set how analysis runs: "full", "description-only" (no content in JSON), or "folder-only" (folders only)
mode: "full"
```

You can also use JSON configuration:

```bash
cp config.json.example config.json
```

Configuration Parameters:
- `apiBaseURL`: Base URL for the AI API service
- `defaultOutputDir`: Directory where output files will be written
- `jsonOutputFile`: Name of the JSON output file containing the tree structure
- `markdownOutputFile`: Name of the Markdown output file with tree visualization
- `reportOutputFile`: Name of the architectural analysis report file
- `estimationFile`: Name of the estimation report file (estimate mode)
- `fileAnalysisModel`: AI model to use for individual file content analysis
- `folderAnalysisModel`: AI model to use for folder content analysis
- `architectureAnalysisModel`: AI model to use for architectural analysis
- `imageAnalysisModel`: AI model to use for image analysis
- `fileAnalysisModels` / `folderAnalysisModels` / `architectureAnalysisModels` / `imageAnalysisModels`: arrays of `{ provider, model }` entries. When provided, these arrays are sent to the API instead of the single string.
- `maxFileSize`: Maximum file size to process (in bytes)
- `requestDelay`: Delay between API requests to avoid overwhelming the service
- `batchSize`: Number of concurrent requests per batch (default: 5)
- `concurrency`: Object controlling concurrency behavior. Contains two fields:
  - `archiAnalysis`: Number of goroutines used to analyze chunks in parallel (default: 4, clamped to 32)
  - `reportChunking`: Number of goroutines used to combine groups during reduction (default: 4, clamped to 32)
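For orientation, the keys above could be modeled by a Go structure along the following lines. This is an illustrative sketch of the configuration shape (the field names, `mapstructure` tags, and types are assumptions that simply mirror the documented keys), not necessarily the definition in `internal/config/config.go`.

```go
package config

import "time"

// ModelEntry is one { provider, model } pair used by the multi-model options.
type ModelEntry struct {
	Provider string `mapstructure:"provider"`
	Model    string `mapstructure:"model"`
}

// Concurrency mirrors the nested `concurrency` object.
type Concurrency struct {
	ArchiAnalysis  int `mapstructure:"archiAnalysis"`  // goroutines for chunk analysis (default 4, clamped to 32)
	ReportChunking int `mapstructure:"reportChunking"` // goroutines for report reduction (default 4, clamped to 32)
}

// Config mirrors the YAML/JSON keys documented above.
type Config struct {
	APIBaseURL                 string        `mapstructure:"apiBaseURL"`
	DefaultOutputDir           string        `mapstructure:"defaultOutputDir"`
	JSONOutputFile             string        `mapstructure:"jsonOutputFile"`
	MarkdownOutputFile         string        `mapstructure:"markdownOutputFile"`
	ReportOutputFile           string        `mapstructure:"reportOutputFile"`
	EstimationFile             string        `mapstructure:"estimationFile"`
	FileAnalysisModel          string        `mapstructure:"fileAnalysisModel"`
	FolderAnalysisModel        string        `mapstructure:"folderAnalysisModel"`
	ArchitectureAnalysisModel  string        `mapstructure:"architectureAnalysisModel"`
	ImageAnalysisModel         string        `mapstructure:"imageAnalysisModel"`
	FileAnalysisModels         []ModelEntry  `mapstructure:"fileAnalysisModels"`
	FolderAnalysisModels       []ModelEntry  `mapstructure:"folderAnalysisModels"`
	ArchitectureAnalysisModels []ModelEntry  `mapstructure:"architectureAnalysisModels"`
	ImageAnalysisModels        []ModelEntry  `mapstructure:"imageAnalysisModels"`
	MaxFileSize                int64         `mapstructure:"maxFileSize"`
	RequestDelay               time.Duration `mapstructure:"requestDelay"` // e.g. "200ms"
	BatchSize                  int           `mapstructure:"batchSize"`
	Mode                       string        `mapstructure:"mode"`
	Concurrency                Concurrency   `mapstructure:"concurrency"`
}
```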
You can override any configuration using environment variables with the `ARCHI_` prefix:

```bash
export ARCHI_APIBASEURL="http://localhost:3005"
export ARCHI_REQUESTDELAY="300ms"
export ARCHI_MAXFILESIZE="2097152"
# For nested config keys, Viper maps dots to underscores. Use these env vars to set the nested concurrency fields:
export ARCHI_CONCURRENCY_ARCHIANALYSIS="8"
export ARCHI_CONCURRENCY_REPORTCHUNKING="4"
```
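The prefix and the dot-to-underscore mapping are standard Viper behavior. The sketch below shows the kind of setup that produces it; this is an assumption about the configuration wiring, not necessarily the exact code in `internal/config`.

```go
package config

import (
	"strings"

	"github.com/spf13/viper"
)

func bindEnv() {
	// Every key gets the ARCHI_ prefix, e.g. apiBaseURL -> ARCHI_APIBASEURL.
	viper.SetEnvPrefix("ARCHI")
	// Nested keys use dots internally; replacing them with underscores lets
	// concurrency.archiAnalysis be set via ARCHI_CONCURRENCY_ARCHIANALYSIS.
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	// Pick up matching environment variables automatically.
	viper.AutomaticEnv()
}
```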
Archi uses a modern CLI interface with subcommands:

```bash
# Show help
./archi --help
# Show help for specific commands
./archi estimate --help
./archi architecture --help
```

```bash
# Analyze current directory
./archi
# Analyze specific directory
./archi /path/to/project
# Analysis modes are configured via config.yaml (mode: full | description-only | folder-only)
# Example: set mode: "folder-only" in config.yaml to only include folders
```

```bash
# Get quick file/folder counts and time estimation
./archi estimate
# Estimate a specific directory
./archi estimate /path/to/project
```

```bash
# Generate architectural recommendations (requires existing analysis)
./archi architecture
# Alternative aliases
./archi arch
./archi archi
```

Global flag:

- `--config string`: Path to configuration file (YAML or JSON)
- Quick estimation (no AI analysis, fast):
```bash
./archi estimate
```

- Full analysis with custom config:

```bash
./archi --config my-config.yaml /path/to/project
```

- Folders only (faster, focuses on structure):

```bash
# Option A: use a config file with `mode: folder-only`
./archi --config my-config.yaml
# Option B: set environment variable for this run
ARCHI_MODE="folder-only" ./archi
```

- Generate architectural report (run after basic analysis):

```bash
./archi architecture
```

- Analysis without file content (smaller output files):

```bash
# Option A: use a config file with `mode: description-only`
./archi --config my-config.yaml
# Option B: set environment variable for this run
ARCHI_MODE="description-only" ./archi
```

- Using environment variables:

```bash
ARCHI_APIBASEURL="http://custom-api:3005" ./archi
# You can also control analysis mode via env var
ARCHI_MODE="folder-only" ./archi
```

Archi writes the following output files:

- `output.json`: Complete directory tree with AI descriptions in JSON format
- `output.md`: Human-readable tree visualization in Markdown
- `estimation.md`: Time estimation report (with `estimate`)
- `report.md`: Architectural analysis and recommendations (with `architecture`)
The tool processes various file types:
- Text files: Content extracted and analyzed
- Documents: DOCX, XLSX, PDF files are parsed
- Images: JPG, PNG, GIF, BMP analyzed with vision AI
- Code files: All programming languages supported
- Binary files: Skipped or analyzed by type
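As a rough illustration of how this kind of extension-based routing can work, here is a small Go sketch; the extension lists are examples only, and this is not the actual logic in `internal/analyzer/filereaders.go`.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// kindOf decides how a file would be handled based on its extension.
func kindOf(path string) string {
	switch strings.ToLower(filepath.Ext(path)) {
	case ".jpg", ".jpeg", ".png", ".gif", ".bmp":
		return "image (vision AI)"
	case ".docx", ".xlsx", ".pdf":
		return "document (parsed, then text analysis)"
	case ".exe", ".dll", ".so", ".bin":
		return "binary (skipped or analyzed by type only)"
	default:
		return "text/code (content extracted and analyzed)"
	}
}

func main() {
	for _, p := range []string{"main.go", "logo.png", "report.pdf", "tool.exe"} {
		fmt.Printf("%-12s -> %s\n", p, kindOf(p))
	}
}
```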
Typical workflows:

```bash
# Get quick overview
./archi estimate
# Full analysis if time permits
./archi
# Generate architectural recommendations
./archi architecture
```

```bash
# Start with estimation
./archi estimate /large/project
# Analyze structure only first
ARCHI_MODE="folder-only" ./archi /large/project
# Full analysis with content
./archi /large/project
# Generate final report
./archi architecture
```

```bash
# Generate comprehensive documentation
./archi --config doc-config.yaml /project
# Create architectural report
./archi architecture
```

```bash
# Quick check in CI pipeline
./archi estimate .
# Generate reports for documentation
./archi --config ci-config.yaml .
./archi architecture
# Upload results to documentation system
```

Performance notes:

- File processing: ~4 seconds per file for AI analysis
- Folder processing: ~7 seconds per folder for AI analysis
- Request delay: Configurable delay between API calls (default: 200ms)
- Batch size: Controls concurrency for file and folder analyses (default: 5)
- Large projects: Use `./archi estimate` first to estimate time
- Memory usage: Large files are truncated to 5000 characters for analysis
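As a rough, illustrative calculation based on the figures above: a project with 100 files and 20 folders would take on the order of 100 × 4 s + 20 × 7 s ≈ 540 s (about 9 minutes) if requests ran strictly one at a time. Batching (`batchSize: 5`) and the configured request delay change the real figure, which is why `./archi estimate` is the better guide for a specific project.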
Archi follows Go best practices:

```
├── cmd/ # CLI commands (Cobra)
│ ├── architecture.go # Architecture analysis command
│ ├── estimate.go # Estimate command
│ └── root.go # Root command & CLI setup
├── internal/ # Private application packages
│ ├── analyzer/ # Core analysis logic
│ │ ├── ai_client.go # AI API communication
│ │ ├── analyzer.go # Main analysis orchestration
│ │ ├── filereaders.go # File content extraction
│ │ ├── output.go # Output generation
│ │ └── types.go # Core type definitions
│ ├── app/ # Application orchestration
│ │ └── app.go # High-level app logic
│ └── config/ # Configuration management
│ └── config.go # Viper-based configuration
├── main.go # Simple entry point (7 lines)
├── config.yaml.example # YAML configuration template
└── go.mod # Go module with dependencies
```

Common issues:

- API connection errors:
  - Ensure the AI service is running on the configured URL
  - Check firewall settings
  - Verify the API endpoints are available
- Large file processing:
  - Files over 5000 characters are truncated for analysis
  - Set `mode: description-only` to reduce output size
  - Consider `mode: folder-only` for structure analysis
- Permission errors:
  - Ensure read permissions on analyzed directories
  - Check write permissions for the output directory
- Memory issues:
  - Use `./archi estimate` for very large projects
  - Process subdirectories separately
  - Use custom configuration with appropriate request delays

Performance tips:

- Start with `./archi estimate` to understand scope
- Use `mode: folder-only` for structural analysis
- Configure the output directory to an SSD for faster writes
The tool expects an AI service with these endpoints:
`POST /ask`

```json
{
  "history": [{ "role": "user", "content": "..." }],
  "model": "mistral-small-2501"
}
```

`POST /analyze-image`

```json
{
  "image": "base64-encoded-image",
  "model": "mistral-small-2501"
}
```
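For example, the base64 payload for `/analyze-image` can be built as in the sketch below; the local file name (`logo.png`) is a hypothetical example, and this is not code from Archi itself.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Read an image and base64-encode it, as /analyze-image expects.
	raw, err := os.ReadFile("logo.png") // hypothetical example file
	if err != nil {
		panic(err)
	}
	body, _ := json.Marshal(map[string]string{
		"image": base64.StdEncoding.EncodeToString(raw),
		"model": "mistral-small-2501",
	})

	resp, err := http.Post("http://localhost:3005/analyze-image", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```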
Archi includes a comprehensive help system:

```bash
# Main help
./archi --help
# Command-specific help
./archi estimate --help
./archi architecture --help
# See all available commands
./archi help
```

Available commands:

- `estimate` - Estimate files, folders, and processing time (alias: `count`)
- `architecture` (aliases: `arch`, `archi`) - Generate architectural recommendations
- `completion` - Generate shell completion scripts
- `help` - Help about any command
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2025 Hydevs
For issues and questions:
- Check the troubleshooting section
- Review configuration options
- Examine output files for error messages
- Ensure AI service is properly configured