Focus on writing your code and let LLMs write the documentation for you.
With just a few keystrokes in your terminal, using either the OpenAI API or 100% local LLMs without any data leaks.
Built with langchain, llama.cpp and treesitter.
- 📝 Create documentation comment blocks for all methods in a file
- e.g. Javadoc, JSDoc, Docstring, Rustdoc etc.
- ✍️ Create inline documentation comments in method bodies
- 🌳 Treesitter integration (see the sketch below)
- 💻 Local LLM support
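The Treesitter integration is what locates the methods before any comments are generated. The snippet below is a minimal sketch of that idea, not the project's actual code, assuming the tree-sitter and tree-sitter-python Python packages are installed:

```python
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

# Parse a Python source file and collect every function definition node.
parser = Parser(Language(tspython.language()))
with open("example.py", "rb") as f:
    tree = parser.parse(f.read())

def collect_functions(node, found):
    if node.type == "function_definition":
        found.append(node)
    for child in node.children:
        collect_functions(child, found)

functions = []
collect_functions(tree.root_node, functions)

for fn in functions:
    name = fn.child_by_field_name("name").text.decode()
    # start_point is (row, column) with zero-based rows
    print(f"{name} starts at line {fn.start_point[0] + 1}")
```

Each node's source range is what can then be handed to the LLM as context for the documentation comment.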
Note
Documentation will only be added to files without unstaged changes, so nothing is overwritten.
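A rough sketch of how such a guard can be implemented (not the project's actual code; the helper name is hypothetical):

```python
import subprocess

def has_unstaged_changes(file_path: str) -> bool:
    # `git diff --name-only` lists files whose working tree differs
    # from the index, i.e. files with unstaged changes.
    result = subprocess.run(
        ["git", "diff", "--name-only", "--", file_path],
        capture_output=True,
        text=True,
        check=True,
    )
    return bool(result.stdout.strip())

# Hypothetical usage: skip files that would otherwise be overwritten.
if has_unstaged_changes("src/example.py"):
    print("Skipping file: it has unstaged changes.")
```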
Create documentation for every method in the file with the GPT-3.5 Turbo model:
aicomments <RELATIVE_FILE_PATH>
Also create documentation comments in the method body:
aicomments <RELATIVE_FILE_PATH> --inline
Use the GPT-4 model (default is GPT-3.5):
aicomments <RELATIVE_FILE_PATH> --gpt4
Guided mode: confirm documentation generation for each method:
aicomments <RELATIVE_FILE_PATH> --guided
Use a local LLM on your machine:
aicomments <RELATIVE_FILE_PATH> --local_model <MODEL_PATH>
Note
For how to download models from Hugging Face for local usage, see Local LLM usage.
Important
The results of using a local LLM depend heavily on the selected model. To get results similar to GPT-3.5/4 you need to select very large models, which require powerful hardware.
- Python
- Typescript
- Javascript
- Java
- Rust
- Kotlin
- Go
- C++
- C
- Scala
- Python >= 3.9
Create your personal OpenAI API key and add it as $OPENAI_API_KEY to your environment with:
export OPENAI_API_KEY=<YOUR_API_KEY>
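Once exported, the key is read from the environment by the OpenAI client, so it never has to be passed on the command line. A minimal sketch of this (not the project's actual code), assuming the langchain-openai package:

```python
import os
from langchain_openai import ChatOpenAI

# ChatOpenAI picks up OPENAI_API_KEY from the environment by default.
assert "OPENAI_API_KEY" in os.environ, "export OPENAI_API_KEY first"

llm = ChatOpenAI(model="gpt-3.5-turbo")
response = llm.invoke(
    "Write a one-line docstring for:\ndef add(a, b):\n    return a + b"
)
print(response.content)
```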
Install with pipx:
pipx install doc-comments-ai
It is recommended to use pipx for installation; nonetheless it is also possible to use pip.
When using a local LLM, no API key is required. On first usage of --local_model you will be asked for confirmation to install llama-cpp-python with its dependencies.
The installation process will take care of a hardware-accelerated build tailored to your hardware and OS. For further details see installation-with-hardware-acceleration.
The most convenient way to download a model from Hugging Face for local usage is the huggingface-cli:
huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.Q5_K_M.gguf
This will download the codellama-13b-python.Q5_K_M model to ~/.cache/huggingface/.
After the download has finished, the absolute path of the .gguf file is printed to the console, which can be used as the value for --local_model.
Important
Since llama.cpp is used, the model must be in the .gguf format.
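For reference, such a .gguf file can be loaded directly with llama-cpp-python. This is a minimal sketch, not the project's actual code; replace the placeholder with the absolute path printed by huggingface-cli, and adjust parameters like n_ctx to your hardware:

```python
from llama_cpp import Llama

# Placeholder: the absolute .gguf path printed after the download.
model_path = "<ABSOLUTE_PATH_TO_GGUF_FILE>"

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm(
    "Write a Python docstring for:\ndef add(a, b):\n    return a + b\n",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```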