A command-line tool to generate content for a target file using Large Language Models (LLMs), leveraging existing repository files as context. Currently supports Google Gemini and local Ollama models.
Don't miss the showcase!
- Ollama Support! You can now use local Ollama models for content generation. This allows for offline usage and greater control over your LLM.
- Improved Logging: Structured JSON logging provides better debugging and monitoring capabilities.
- More Gemini Models: You can now specify different Gemini models, including the latest `gemini-2.0-flash`.
- Generates content based on a given prompt.
- Uses specified files from a repository as context for content generation.
- Supports specifying the LLM provider (Gemini or Ollama) and model to use.
- Outputs generated content to the console or a file.
- Uses structured JSON logging for better debugging and monitoring.
- Python 3.8 or higher
- pip (Python package installer)
- For Gemini: A Google Gemini API Key (from Google AI Studio)
- For Ollama: A local installation of Ollama and a compatible model pulled (e.g., `ollama pull qwen2.5-coder:1.5b`). A few quick version checks are sketched after this list.
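If you want to verify these prerequisites up front, the following commands are a minimal sketch (they assume `python`, `pip`, and `ollama` are already on your `PATH`):

```bash
python --version    # should report Python 3.8 or higher
pip --version       # confirms the package installer is available
ollama --version    # only relevant if you plan to use the Ollama provider
```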
- Clone the Repository:

  ```bash
  gh repo clone deniskropp/gemini-repo-cli
  cd gemini-repo-cli
  ```

- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Install the CLI tool:

  ```bash
  pip install -e .
  ```

  (Run this from the same directory where `setup.py` exists.)
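To confirm the install worked, you can check that the console script is on your `PATH`; the `pip show` line is a sketch that assumes the package is registered under the same name as the repository:

```bash
command -v gemini-repo-cli   # should print the path of the installed script
pip show gemini-repo-cli     # should print the package metadata (assumed package name)
```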
- Set the Gemini API Key:

  You must set your Google Gemini API key as an environment variable:

  ```bash
  export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
  ```

  (Replace `YOUR_GEMINI_API_KEY` with your actual API key.) You might want to add this line to your `.bashrc` or `.zshrc` file for persistence, as sketched below.
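One way to persist the key, assuming a bash shell (use `~/.zshrc` for zsh):

```bash
echo 'export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"' >> ~/.bashrc
source ~/.bashrc   # reload the profile in the current shell
```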
- Ensure Ollama is running:

  Start your local Ollama server. The default host is `http://localhost:11434`.

- Pull a model:

  Use the `ollama pull` command to download a model to your local machine. For example:

  ```bash
  ollama pull qwen2.5-coder:1.5b
  ```
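Before invoking the CLI with `--provider ollama`, it can help to confirm that the server is reachable and the model is present. `ollama list` and the `/api/tags` endpoint are standard Ollama facilities, not part of this tool:

```bash
ollama list                            # the pulled model (e.g., qwen2.5-coder:1.5b) should appear here
curl http://localhost:11434/api/tags   # lists locally available models over the HTTP API
```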
```bash
gemini-repo-cli <repo_name> <target_file_name> <prompt> [--provider <provider>] [--files <file_path1> <file_path2> ...] [--output <output_file>] [--gemini-api-key <api_key>] [--gemini-model <model_name>] [--ollama-model <model_name>] [--ollama-host <host>] [--debug]
```
- `<repo_name>`: The name of the repository (used for context).
- `<target_file_name>`: The name of the target file to generate.
- `<prompt>`: The prompt to guide the content generation.
- `--provider <provider>`: The LLM provider to use. Choices are `gemini` (default) or `ollama`.
- `--files <file_path1> <file_path2> ...`: A list of file paths to include in the prompt as context (space-separated).
- `--output <output_file>`: The path to the file where the generated content will be written. If not provided, output goes to stdout.
- `--debug`: Enable debug logging.
- `--gemini-api-key <api_key>`: The Google Gemini API key. If not provided, it is read from the `GEMINI_API_KEY` environment variable.
- `--gemini-model <model_name>`: The name of the Gemini model to use. Defaults to `gemini-2.0-flash`. Consider trying `gemini-1.5-pro-latest` for more advanced tasks.
- `--ollama-model <model_name>`: The name of the Ollama model to use. Defaults to `qwen2.5-coder:1.5b`.
- `--ollama-host <host>`: The Ollama host URL (e.g., `http://localhost:11434`). If not provided, the default or the `OLLAMA_HOST` environment variable is used.
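As noted for `--gemini-api-key` and `--ollama-host`, both values can also come from environment variables instead of flags; for example:

```bash
export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"   # used when --gemini-api-key is omitted
export OLLAMA_HOST="http://localhost:11434"   # used when --ollama-host is omitted
```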
- Generate content and print to stdout:

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --files utils.py helper.py
  ```

- Generate content and write to a file:

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --files utils.py helper.py --output factorial.py
  ```

- Specify the API key and model name:

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --gemini-api-key YOUR_API_KEY --gemini-model gemini-1.5-pro-latest
  ```

- Generate content using Ollama and print to stdout:

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --provider ollama --files utils.py helper.py
  ```

- Specify the Ollama model:

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --provider ollama --ollama-model codellama:34b --files utils.py
  ```

- Specify the Ollama host (if not the default):

  ```bash
  gemini-repo-cli my-project my_new_file.py "Implement a function to calculate the factorial of a number." --provider ollama --ollama-host http://my-ollama-server:11434 --files utils.py
  ```
- Gemini API Key Issues: Double-check that your `GEMINI_API_KEY` environment variable is correctly set and that the API key is valid.
- Ollama Connection Errors: Ensure that your Ollama server is running and accessible at the specified host and port. Verify that the model you are trying to use has been pulled.
- Context Issues: Make sure the file paths specified with `--files` are correct and that the files exist in your repository.
- General Errors: Enable debug logging with the `--debug` flag for more detailed error messages. A few quick diagnostic commands are sketched below.
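These checks are a minimal diagnostic sketch for the issues above; the example prompt is a placeholder, and `ollama list` / the `/api/tags` endpoint are standard Ollama facilities rather than part of this tool:

```bash
echo "$GEMINI_API_KEY"                 # should print your key, not an empty line
ollama list                            # the model you want to use should be listed
curl http://localhost:11434/api/tags   # confirms the Ollama server is reachable
gemini-repo-cli my-project my_new_file.py "Describe the change." --provider ollama --debug
```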
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License. See the LICENSE file for details.