local-AI-infra-generation leverages local Large Language Models (LLMs) to analyze code repositories and automatically generate infrastructure files such as Dockerfiles and docker-compose.yml. All processing is performed locally, ensuring privacy and control over your codebase.
- **Codebase Embedding:** Index and embed your codebase for semantic search and retrieval.
- **Natural Language Q&A:** Ask questions about your codebase and receive context-aware answers.
- **Automated Infrastructure Generation:** Generate Dockerfiles and docker-compose.yml files tailored to your project.
- **Multi-language Support:** Works with Python, JavaScript, TypeScript, and Go projects.
- **Python 3.11+**
  Ensure you have Python 3.11 or higher installed. Check with:

  ```shell
  python --version
  ```

- **Ollama**
  Download and install Ollama for local LLM inference.

- **C/C++ Build Tools**
  Required for building tree-sitter-languages.
Create and activate a virtual environment:

```shell
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

Install the package from this repository:

```shell
pip install .
```

This will install the `infra-gen` command-line tool and all necessary dependencies.
Make sure the Ollama server is running in the background:

```shell
ollama serve &
```

Once installed, you can use the `infra-gen` command:

```shell
infra-gen --help
```

- **Embed a Project:**

  ```shell
  infra-gen embed /path/to/your/project
  ```

- **Ask a Question:**

  ```shell
  infra-gen ask "How does authentication work?" --project your_project_name
  ```

- **List Embedded Projects:**

  ```shell
  infra-gen list
  ```

- **Generate Full Infrastructure (Dockerfile, Compose, etc.):**

  ```shell
  infra-gen generate-infra /path/to/your/project --output ./infra
  ```

- **Generate Only a Dockerfile:**

  ```shell
  infra-gen generate-docker --project your_project_name
  ```

- **Generate Only a docker-compose.yml:**

  ```shell
  infra-gen generate-compose --project your_project_name
  ```
The tool uses a `config.yaml` file for settings. The configuration is loaded in the following order of priority:

- **Via the `--config` flag:** Provide a direct path to a `.yaml` file.

  ```shell
  infra-gen --config /path/to/my-config.yaml embed /path/to/project
  ```

- **User-level config:** Place a file at `~/.config/infra-generator/config.yaml`.
- **Default package config:** If no other config is found, a default version bundled with the package is used.
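The lookup order above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation; the function name and the bundled-default location are assumptions.

```python
from pathlib import Path

def resolve_config_path(cli_path=None):
    """Return the first existing config file, following the priority order above."""
    candidates = []
    if cli_path:  # 1. explicit --config flag
        candidates.append(Path(cli_path))
    # 2. user-level config
    candidates.append(Path.home() / ".config" / "infra-generator" / "config.yaml")
    # 3. default bundled with the package (a plain path stands in for it here)
    candidates.append(Path("config.yaml"))
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None
```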
You can customize model names, ChromaDB storage directories, Ollama URLs, and more in your custom config file.
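As an illustration, a custom config file might look like the following. The key names here are assumptions for the sketch; check the bundled default config for the actual schema.

```yaml
# Hypothetical config.yaml — keys are illustrative, not the tool's actual schema
llm_model: "llama3"                  # Ollama model used for generation
embedding_model: "nomic-embed-text"  # model used to embed the codebase
ollama_url: "http://localhost:11434" # Ollama's default API address
chroma_dir: "~/.local/share/infra-generator/chroma"  # ChromaDB storage directory
```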
If you want to contribute to the development of this tool, you can install it in editable mode.
- Clone the repository:

  ```shell
  git clone https://github.com/yourusername/local-AI-infra-generation.git
  cd local-AI-infra-generation
  ```

- Create and activate a virtual environment:

  ```shell
  python -m venv .venv
  source .venv/bin/activate
  ```

- Install in editable mode:

  ```shell
  pip install -e .
  ```
This allows you to make changes to the source code and have them reflected immediately when you run the infra-gen command.
TODO: Add unit tests and instructions for running them.
- **Ollama not found:**
  Ensure Ollama is installed and available in your PATH.

- **tree-sitter language `.so` files missing:**
  If you encounter errors about missing `.so` files, ensure tree-sitter-languages is installed and built correctly.

- **Model download issues:**
  The first run will download required models. Ensure you have a stable internet connection.
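When debugging any of the issues above, it can help to first confirm that the Ollama server is actually reachable. A minimal sketch, assuming Ollama's default API address of `http://localhost:11434` (the function name is illustrative, not part of this tool):

```python
import urllib.request
import urllib.error

def ollama_reachable(url="http://localhost:11434", timeout=2.0):
    """Return True if an HTTP server answers at `url`.

    A running Ollama server responds at its API root with "Ollama is running".
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, start the server with `ollama serve` and try again.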
- Add comprehensive unit and integration tests.
- Improve error handling and user feedback.
- Add support for more programming languages (e.g., Java, Rust).
- Enhance prompt templates for better infrastructure generation.
- Add a web or GUI interface.
- Document the API for programmatic usage.
- Support for private model registries and custom LLMs.
- Optimize embedding and retrieval for large codebases.
- Add CI/CD pipeline for automated testing and deployment.
This project is licensed under the MIT License.