DeepShell is a powerful and versatile command-line program that seamlessly blends the familiar environment of your local shell with the immense knowledge and capabilities of Large Language Models (LLMs). Imagine having direct access to the world's most advanced AI models—from local Ollama instances to cloud-based services like Google's Gemini—all unified within a single, efficient terminal interface.
Designed for developers, researchers, and power users, DeepShell abstracts away the complexity of API integrations. It offers a streamlined pathway to query both open-source and proprietary LLMs, transforming your command prompt into a conduit for deep AI intelligence.
- **Multi-LLM Support:**
  - Seamlessly connect to Ollama servers (local or remote).
  - Integrate with the Google Gemini API.
- **Conversational Memory & Customization:**
  - Engage in multi-turn conversations using the interactive mode (`-i`).
  - Set the conversation history limit (defaults to 25 turns).
  - Toggle response streaming for immediate (plain-text) or complete (Markdown-rendered) output. Streaming is disabled by default to preserve formatting.
  - Enable or disable Markdown rendering for each LLM service individually.
- **Unified & Interactive Configuration:**
  - A central, user-friendly settings menu (`-s`) guides you through all configuration tasks.
  - Manages LLM service details, including server addresses (Ollama) and API keys (Gemini).
  - Stores configuration securely in `~/.deepshell/deepshell.conf`.
- **Flexible Service & Model Management:**
  - Easily switch between configured LLM services (`-l`).
  - Quickly jump back to the previously used LLM service (`-j`).
  - List available models from your connected LLM service and change the default model per service (`-m`).
- **Advanced Gemini API Key Management:**
  - Store and manage multiple Gemini API keys with user-defined nicknames.
  - Easily add new keys or set an active key from your stored list (`-set-key`).
  - Display the currently active Gemini API key's nickname and value (`-show-key`).
  - Quickly check your Gemini API key status and get a link to your usage dashboard (`-gq`).
- **Intuitive User Experience:**
  - Send queries directly from your command line (`-q`).
  - Beautiful Markdown rendering for LLM responses in the terminal, powered by `rich`.
  - Engaging progress animation while waiting for the LLM.
  - Clear, colored console output for enhanced readability.
  - Well-formatted and alphabetized help messages (`-h`).
- **Prerequisites:**
  - Python 3.7 or higher.
  - `pip` (the Python package installer).
- **Clone the Repository:**
  ```bash
  git clone https://github.com/ashes00/deepshell.git
  cd deepshell
  ```
- **Install Dependencies:** The required Python modules are listed in `modules.txt`. You can install them manually or use the provided development setup script.
  ```bash
  pip install -r <(grep -vE "^\s*#|^\s*$" modules.txt)
  ```
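  Process substitution (`<(...)`) requires bash or zsh. If your shell doesn't support it, this two-step equivalent does the same thing:
  ```bash
  # Strip comment and blank lines from modules.txt, then install the rest
  grep -vE "^\s*#|^\s*$" modules.txt > /tmp/deepshell-reqs.txt
  pip install -r /tmp/deepshell-reqs.txt
  ```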
- **Run DeepShell:**
  - From source: `python3 main.py [OPTIONS]`
  - As an executable (if you've built one): `./deepshell [OPTIONS]`
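  The repository doesn't prescribe a build method here, but if you want a standalone executable, PyInstaller is one common option (a sketch, not the project's official build process):
  ```bash
  # Build a single-file executable from the entry point (assumes PyInstaller)
  pip install pyinstaller
  pyinstaller --onefile --name deepshell main.py
  ./dist/deepshell --version
  ```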
The first time you run DeepShell, or anytime you want to manage settings, use the `-s` or `--setup` flag:

```bash
./deepshell -s
```
This launches a comprehensive, interactive menu that allows you to:
- **Add or Reconfigure LLM Services:**
  - For Ollama: Enter your server address (e.g., `http://localhost:11434`) and select a default model from those available on your server (see the quick connectivity check below).
  - For Gemini: Manage your API keys (add, remove, set active) and select a default model from the Gemini API.
- Switch the active LLM service.
- Change the default model for the currently active service.
- Manage Gemini API keys specifically.
- View your current configuration or delete it entirely.
- **Toggle Markdown Rendering:** Enable or disable Markdown formatting for the active service's responses.
- **Set Interactive History Limit:** Change the number of conversation turns remembered in interactive mode.
- **Toggle Response Streaming:** Enable or disable streaming responses. (Note: Markdown is not supported in streaming mode.)
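Before pointing DeepShell at an Ollama server, you can verify that the server is reachable and see which models it serves. This uses Ollama's standard `/api/tags` endpoint (assuming the default address; adjust host and port as needed):

```bash
# List the models available on a local Ollama server
curl http://localhost:11434/api/tags
```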
Your settings will be saved to `~/.deepshell/deepshell.conf`.
```bash
# Query the active LLM
./deepshell -q "What are the benefits of using a CLI for LLM interaction?"
./deepshell --query "Write a python function to calculate a factorial"

# Enter the main settings menu
./deepshell -s              # or --setup

# Switch active service or configure services (shortcut to a settings sub-menu)
./deepshell -l              # or --llm

# Quickly jump to the previously used LLM service
./deepshell -j              # or --jump-llm

# Change the default model for the active service (shortcut)
./deepshell -m              # or --model-change

# Interactively manage Gemini API keys (add, remove, set active)
./deepshell -set-key        # or --set-api-key

# Show the active Gemini API key nickname and value
./deepshell -show-key       # or --show-api-key

# Check Gemini API key status and get quota info
./deepshell -gq             # or --gemini-quota

# Display the currently active configuration details
./deepshell -show-config    # or --show-full-conf

# Delete the entire configuration file (use with caution!)
./deepshell -d              # or --delete-config

# Show the help message
./deepshell -h              # or --help

# Show the program's version
./deepshell -v              # or --version

# Start an interactive chat session
./deepshell -i              # or --interactive
```
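Because `-q` takes the query as an ordinary argument, it composes with normal shell plumbing. A small sketch (assumes DeepShell writes the response to stdout, and `notes.txt` is your own file):

```bash
# Feed file contents into a query and capture the response
./deepshell -q "Summarize the following notes: $(cat notes.txt)" > summary.md
```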
DeepShell stores its configuration in a JSON file located at `~/.deepshell/deepshell.conf`. While you can view this file, it's recommended to manage settings through DeepShell's command-line options for safety and ease of use.

An example configuration might look like this:
```json
{
  "active_llm_service": "gemini",
  "previous_active_llm_service": "ollama",
  "llm_services": {
    "ollama": {
      "server_address": "http://localhost:11434",
      "model": "llama3:latest",
      "render_markdown": true
    },
    "gemini": {
      "api_keys": [
        {
          "nickname": "personal-key",
          "key": "BIsa8y..."
        }
      ],
      "active_api_key_nickname": "personal-key",
      "model": "models/gemini-1.5-flash",
      "render_markdown": true
    }
  }
}
```
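If you just want a quick, read-only look at the file without opening an editor, Python's built-in JSON tool pretty-prints it (edit settings via `deepshell -s` rather than by hand):

```bash
# Pretty-print the current configuration
python3 -m json.tool ~/.deepshell/deepshell.conf
```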
- **Ollama:** Connect to any Ollama instance serving models like Llama, Mistral, etc.
- **Google Gemini:** Access Gemini models (e.g., `gemini-1.5-pro`, `gemini-1.5-flash`) via the Google AI Studio API.
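If a model you want isn't yet available on your Ollama server, pull it first so it shows up in DeepShell's model list (`llama3` here is just an example name):

```bash
# Download a model onto the Ollama server
ollama pull llama3
```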
- Add the directory containing the `deepshell` executable to your `PATH` (see the one-shot alternative after this list):
  ```bash
  nano ~/.bashrc
  export PATH=$PATH:/home/user/APPS-DIR
  ```
- Create aliases for `ds`, `dsq`, and `dsi` for quick keyboard actions:
  ```bash
  nano ~/.bashrc
  alias dsq="deepshell -q"
  alias ds="deepshell"
  alias dsi="deepshell -i"
  ```
- Save the `.bashrc` file (`Ctrl+S` to save, then `Ctrl+X` to exit nano).
- Reload your `.bashrc` so the new commands take effect:
  ```bash
  source ~/.bashrc
  ```
- Use the `dsq` alias to quickly query the LLM:
  ```bash
  dsq "What is the best LLM?"
  ```
- Use the `ds` alias to quickly access features with options:
  ```bash
  ds -v
  ```
- Use the `dsi` alias to enter interactive mode:
  ```bash
  dsi
  ```
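As an alternative to editing `.bashrc` by hand, the whole setup above can be applied in one shot (a sketch; replace `/home/user/APPS-DIR` with wherever you put the executable):

```bash
# Append the PATH entry and aliases to ~/.bashrc, then reload it
cat >> ~/.bashrc <<'EOF'
export PATH=$PATH:/home/user/APPS-DIR
alias ds="deepshell"
alias dsq="deepshell -q"
alias dsi="deepshell -i"
EOF
source ~/.bashrc
```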
Happy Querying!