GPTCLI is a CLI client written entirely in Python for accessing the LLM of your choice, without the need for web or desktop apps.
- PyPI:
  - Open your preferred terminal.
  - Install the project from PyPI: `pip install dbc-gptcli`.
  - Run `gptcli [mistral|openai] [chat|se]`.
  - For more info on usage, check the built-in help docs: `gptcli -h`, `gptcli [mistral|openai] [-h|--help]`, or `gptcli [mistral|openai] [chat|se] [-h|--help]`.
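  Put together, a typical first run looks like this (the provider and mode are illustrative; any of the combinations above work):

  ```shell
  pip install dbc-gptcli
  gptcli -h
  gptcli openai chat
  ```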
- Docker:
  - Open your preferred terminal.
  - Start Docker via the desktop app or by running `sudo systemctl start docker`.
  - Pull the Docker image with `docker pull deathbychocolate/gptcli:latest`.
  - Start and enter a container with `docker run --rm -it --entrypoint /bin/bash deathbychocolate/gptcli:latest`.
  - Run `gptcli [mistral|openai] [chat|se]` or `python3 gptcli/main.py [mistral|openai] [chat|se]`.
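  For example, to go from a fresh machine to a chat session inside the container (assuming Docker is already running, and with the provider chosen here purely for illustration):

  ```shell
  docker pull deathbychocolate/gptcli:latest
  docker run --rm -it --entrypoint /bin/bash deathbychocolate/gptcli:latest
  gptcli mistral chat
  ```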
You need valid API keys to communicate with the AI models.
For OpenAI:
- Create an OpenAI account here: https://chat.openai.com/
- Generate an OpenAI API key here: https://platform.openai.com/api-keys
For Mistral AI:
- Create a Mistral AI account here: https://chat.mistral.ai/chat
- Generate a Mistral AI API key here: https://console.mistral.ai/api-keys
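Once generated, a key can be handed to GPTCLI at launch via the `--key` flag described under Chat mode below. A minimal sketch, assuming the providers' conventional environment variable names (not necessarily the ones GPTCLI itself sets):

```shell
# OPENAI_API_KEY is the provider's conventional variable name,
# used here purely for illustration
export OPENAI_API_KEY="sk-..."
gptcli openai chat --key "$OPENAI_API_KEY"
```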
The project uses the APIs of LLM providers to perform chat completions. It does this by converting message objects to JSON payloads and sending them over HTTPS POST requests. For now, GPTCLI supports purely text-based LLMs.
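Under the hood, each exchange amounts to an HTTPS POST like the following (a minimal sketch against OpenAI's public chat-completions endpoint, with an illustrative model name; GPTCLI's actual payloads may carry more fields, such as prior context messages):

```shell
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```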
GPTCLI provides access to two LLM providers, Mistral AI and OpenAI. Each provider offers two modes for communicating with the LLM of your choosing, Chat and Single-Exchange:
Chat mode allows the user to hold a conversation, similar to ChatGPT, by creating a MESSAGE-REPLY thread. You can say hello and get a reply, hold multiline conversations, and load the last conversation you had with the LLM provider (OpenAI or Mistral AI). If you want to know about the in-chat commands, you can view them by asking for help.
Chat mode also automatically:
- Stores chats locally as one-line `json` files; toggle this via the `--store` and `--no-store` flags.
- Uses previously sent messages as context; toggle this via the `--context` and `--no-context` flags.
- Loads the provider's API key into environment variables; you may override this behaviour by providing a different key with the `--key` flag.
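For example, combining these flags (an illustrative sketch; `gptcli [mistral|openai] chat -h` lists the authoritative options):

```shell
# Chat with OpenAI without writing the conversation to disk
gptcli openai chat --no-store

# Chat with Mistral AI, ignoring prior context and passing a key explicitly;
# MISTRAL_API_KEY is a conventional variable name used here for illustration
gptcli mistral chat --no-context --key "$MISTRAL_API_KEY"
```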
Single-Exchange is functionally similar to Chat mode, but it allows only one exchange of messages (one message sent from the client side and one response message from the server side) before exiting. This encourages loading all the context and instructions into a single message. It is also more suitable for automating multiple calls to the API with different payloads and flags.
This mode does not store chats locally. Users are expected to implement their own solution via piping or similar, as sketched below.
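A minimal sketch, assuming the reply is written to stdout (how the message itself is supplied is documented in `gptcli [mistral|openai] se -h`):

```shell
# Capture a single reply for later processing
gptcli openai se > reply.txt
```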
- Send text-based messages to the Mistral AI API.
- Send text-based messages to the OpenAI API.
- Store API keys locally.
- Allow context retention for chats with all providers.
- Allow streaming of text-based messages for all providers.
- Allow storage of chats locally for all providers.
- Allow loading of chats from local storage as context for all providers.
- Add in-chat commands.
- Add multiline option for chat mode.
- Add spinner animation for chat mode.
- Add OCR as a new mode.
- Send OCR queries for images and PDF documents to the Mistral AI API.
- Allow storage of OCR results locally.
- Send OCR queries for images and PDF documents to the OpenAI API.
- Add FTS for chats in storage.
- Add FTS for OCR results in storage.
- Add role-based messages for Mistral AI: `user`, `system`, `assistant`, `developer`, `tool`, `function`.
- Add role-based messages for OpenAI: `user`, `system`, `assistant`, `developer`, `tool`, `function`.
| Abbreviation | Definition |
|---|---|
| OCR | Optical Character Recognition |
| FTS | Full Text Search |
- GPTCLI does not use any software developed by OpenAI or Mistral AI, except for counting tokens.
- GPTCLI prioritizes features that make the CLI useful and easy to use.
- GPTCLI aims to eventually have all the features of its WebApp counterparts in the terminal.