A command-line interface for Ollama LLM models, designed for seamless integration with shell scripts and command-line workflows.
- Command-line interface for text generation and chat
- System roles for specialized tasks (shell commands, git commits, etc.)
- Chat history persistence
- Shell pipeline integration
- Multiple LLM provider support (currently Ollama)
⚠️ Important: GraalVM native image build may have some limitations:
- Build process might fail on some platforms (especially Windows) due to missing development tools
- GraalVM environment variables and paths must be properly configured
- Reflection and dynamic class loading may require additional configuration
- Build times vary significantly across machines
Use the latest GraalVM version and ensure all dependencies are installed.
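A minimal sanity check for the GraalVM environment mentioned above (the install path is an assumption; use your own):

```bash
# Assumed install location; adjust to where your GraalVM 21+ actually lives.
export GRAALVM_HOME=/opt/graalvm-jdk-21
export JAVA_HOME="$GRAALVM_HOME"
export PATH="$GRAALVM_HOME/bin:$PATH"

java -version            # should report a GraalVM build of Java 21+
native-image --version   # should be available for the native build
```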
⚠️ REQUIRES JAVA 21+ GRAALVM SDK
- Build the native image: `./gradlew clean nativeBuild`
- The executable will be created at `${projectRoot}/build/native/nativeCompile/jllama`
- Run it with: `./jllama chat "Hello, help me please"`
⚠️ REQUIRES JAVA 21+
- Build the project: `./gradlew clean build`
- The JAR file will be at `${projectRoot}/build/libs/jllama-cli-*-all.jar`
- Run it with: `java -jar jllama-cli-*-all.jar chat "Hello, help me please"`
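If you use the JAR build, a small shell alias lets you invoke it the same way as the native executable in the examples below (the path is an assumption; point it at your actual build output):

```bash
# Hypothetical path; adjust to wherever your build places the JAR.
alias jllama='java -jar "$HOME/jllama/build/libs/"jllama-cli-*-all.jar'
```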
- Native executable or JAR file (platform testing in progress)
The application creates a default configuration file on first launch.
- Alternatively, a default `jllama.yaml` can be found in the project root.

`~/.jllama/jllama.yaml`:
providers:
ollama:
baseUrl: "http://localhost:11434" # Ollama server URL
modelName: null # ⚠️ Set your installed model name
- `modelName`: set this to an installed Ollama model. For example, to use Mistral:
  `ollama pull mistral`
  Then update the config: `modelName: "mistral"`
- `baseUrl`: change this if your Ollama server uses a different address.
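Putting that together, a working configuration for the Mistral example might look like this (assuming a local Ollama server on the default port):

```yaml
# ~/.jllama/jllama.yaml
providers:
  ollama:
    baseUrl: "http://localhost:11434"  # default local Ollama address
    modelName: "mistral"               # must match a model shown by `ollama list`
```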
jllama "Hello, are you there?"
If you see "model not found":
- Check if the model is installed (
ollama list
) - Verify the model name in config
- Ensure Ollama server is running (
ollama serve
)
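A quick way to run through these checks from the shell:

```bash
ollama list                            # is the model installed?
grep modelName ~/.jllama/jllama.yaml   # does the config point at it?
ollama serve                           # start the server if needed (in a separate terminal)
jllama "Hello, are you there?"         # try again
```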
Basic usage:
jllama "write a simple program" # chat is the default command
Use a role:
jllama --role-name "role name" # or -r
Start a new chat:
jllama --new-chat # or -n
Generate text:
jllama generate "Your prompt here"
Chat mode:
jllama chat "Your message"
The CLI comes with predefined roles:
- `default` - General-purpose helpful assistant
- `shell` - Generates shell commands for your OS
- `git-autocommit` - Creates commit messages based on changes
- `describe-shell` - Explains shell commands
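For instance, the shell role turns a plain-language request into a command for your OS (the exact output depends on your model; the prompt below is just an illustration):

```bash
jllama -r shell generate "find all files larger than 100 MB in the current directory"
```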
Generate a commit message based on git changes:
jllama -r git-autocommit generate "$(git diff)"
Get an explanation of a shell command:
jllama -r describe-shell generate "ls -la | grep '^d'"
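This combines naturally with the shell pipeline integration mentioned above, for example committing directly with the generated message (a sketch; it assumes the role returns a plain single-line commit message):

```bash
git add -A
# Use --cached so the diff reflects what is actually staged.
git commit -m "$(jllama -r git-autocommit generate "$(git diff --cached)")"
```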
Set the LLM provider (currently only Ollama is supported, and it is the default):
jllama -p ollama generate "Hello"
- Default provider: Ollama (http://localhost:11434)
- Default role: general-purpose assistant
- Chat history is automatically preserved
The application is built using:
- Micronaut framework for dependency injection and HTTP client
- Picocli for command-line parsing
- Reactive programming with Project Reactor
- YAML for configuration
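As a rough illustration of how such a Picocli command is typically wired up (a hypothetical sketch only; the class, options, and behaviour below are assumptions, not the project's actual source):

```java
import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;
import picocli.CommandLine.Parameters;

import java.util.concurrent.Callable;

// Hypothetical sketch of a Picocli chat subcommand; names and behaviour
// do not reflect the real jllama source.
@Command(name = "chat", description = "Send a chat message to the configured LLM provider")
class ChatCommand implements Callable<Integer> {

    @Option(names = {"-r", "--role-name"}, description = "System role to apply")
    String roleName = "default";

    @Option(names = {"-n", "--new-chat"}, description = "Start a new chat, discarding history")
    boolean newChat;

    @Parameters(index = "0", description = "The message to send")
    String message;

    @Override
    public Integer call() {
        // The real application would call the Ollama provider through
        // Micronaut's HTTP client; here we simply echo the input.
        System.out.printf("[role=%s, newChat=%s] %s%n", roleName, newChat, message);
        return 0;
    }

    public static void main(String[] args) {
        int exitCode = new CommandLine(new ChatCommand()).execute(args);
        System.exit(exitCode);
    }
}
```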
Planned features:
- OpenAI API integration
- Mistral AI support
- Maybe others :)
- Image generation/processing
- "Suffix" support
- Advanced parameter configuration (temperature, top_p, etc.)
- Model Context Protocol (MCP) support for standardized LLM interactions
- Context window management
- Embeddings API
- API key management
License: MIT