A terminal-based LLM chat client written in Lua that connects to Ollama for local language model inference. Features peer-to-peer operation exchange, distributed inference across multiple machines, recursive task decomposition, and a rich terminal UI with markdown rendering and syntax highlighting.
- Interactive chat with any Ollama-hosted model, including streaming responses and thinking/reasoning mode
- Peer-to-peer mode — two nodes exchange operations and execute against their own local contexts, tracking divergences rather than forcing consensus
- Distributed LLM inference — split model layers across multiple GPUs/machines with pipeline parallelism
- Recursive task decomposition — break complex tasks into independent sub-tasks, each with its own LLM context and full tool access
- Tool system — automatic discovery of executable tools, with built-in file I/O, code writing, and custom tool support
- Rich terminal UI — real-time markdown rendering, syntax highlighting (Lua, C, Bash, Python, etc.), table formatting, and interactive model selection
- Blind mode — hide input while typing for voice input or privacy
- Context management — local resource access including filesystem, environment, processes, and system info
- Clone the repository:

  ```shell
  git clone https://github.com/gabrilend/bot-chat-api.git
  cd chatbot
  ```

- Install dependencies:

  ```shell
  ./scripts/install-libs.sh
  ```

- Initialize configuration:

  ```shell
  ./chatbot.lua --init
  ```

- Edit `config/library_config.lua` to point at your Ollama instance:

  ```lua
  return {
      host = "localhost",
      port = 11434,
      model = "llama3",
      timeout = 120,
  }
  ```

```shell
# Start an interactive chat session
./chatbot.lua

# Select a specific model
./chatbot.lua --model gemma2

# Hide input while typing (blind/speak mode)
./chatbot.lua --blind

# Listen for a peer connection
./chatbot.lua --peer-listen=9000

# Connect to a peer
./chatbot.lua --peer-connect=192.168.1.10:9000
```

| Variable | Description |
|---|---|
| `CHAT_HOST` | Override Ollama host |
| `CHAT_PORT` | Override Ollama port |
| `CHAT_MODEL` | Override default model |
| `CHATBOT_DEBUG=1` | Enable debug logging |
| `CHATBOT_BLIND=1` | Enable blind mode |
| `PEER_LISTEN` | Port to listen for peers |
| `PEER_CONNECT` | `host:port` to connect to a peer |
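The variables take effect per invocation, so a single session can be pointed at a remote Ollama host without editing the config file. The sketch below illustrates the override semantics only; the helper names are hypothetical and the real resolution happens inside the Lua client, with fallbacks mirroring the defaults shown earlier:

```shell
# Hypothetical sketch of env-var override semantics (resolve_host/resolve_port
# are illustrative names, not part of the chatbot; defaults match the sample config).
resolve_host() { echo "${CHAT_HOST:-localhost}"; }
resolve_port() { echo "${CHAT_PORT:-11434}"; }

CHAT_HOST=10.0.0.5            # override only the host for this session
echo "$(resolve_host):$(resolve_port)"   # prints 10.0.0.5:11434
```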
```
chatbot.lua            CLI entry point, model selection UI
core/
  chat.lua             Chat client, tool discovery, peer integration
  ui.lua               Terminal UI, markdown rendering, syntax highlighting
  peer.lua             WebSocket peer connection management
  operation.lua        Operation abstraction (everything is a tool call)
  executor.lua         Execute operations against local context
  context.lua          Local resource access (fs, env, proc, sys)
  divergence.lua       Track result divergences between peers
  transport.lua        TCP/WebSocket transport layer
  tasklist.lua         Recursive task decomposition (make_list tool)
distributed/
  coordinator.lua      Distributed inference session management
  tensor.lua           Tensor serialization for network transfer
config/
  chatbot_config.lua   Application settings
  library_config.lua   Ollama/model settings
libs/                  Bundled dependencies (luasocket, dkjson, cJSON, etc.)
wrappers/              C, Bash, and Lua API bindings
docs/                  Guides for tools, configuration, and design
```
Two nodes connect over WebSocket and exchange operations. Each node executes operations against its own local context. When results differ, divergences are tracked — never forcibly reconciled. This preserves each node's local truth while maintaining awareness of the other's perspective.
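In miniature, the bookkeeping looks like the sketch below. Real peers exchange operations over WebSocket and execute them locally; here the two results are hard-coded stand-ins, and the log file name and record format are illustrative assumptions, not the client's actual wire format:

```shell
# Conceptual sketch: the same operation runs in each peer's own context, and
# differing results are recorded rather than reconciled (all names illustrative).
op="read_file:/etc/hostname"
result_a="node-a"    # stand-in for peer A's local result
result_b="node-b"    # stand-in for peer B's local result

if [ "$result_a" != "$result_b" ]; then
  echo "divergence op=$op a=$result_a b=$result_b" >> divergences.log
fi
cat divergences.log
```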
Model layers can be split between two machines using pipeline parallelism. A coordinator assigns layer ranges and manages the inference session while activation tensors are serialized and transferred over the network. Tokens stream to both peers as they are generated.
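As a back-of-the-envelope illustration of a layer assignment (the 32-layer count and the even split are assumptions for the example, not the coordinator's actual policy):

```shell
# Assign contiguous layer ranges to two pipeline stages (assumed even split).
TOTAL_LAYERS=32
SPLIT=$((TOTAL_LAYERS / 2))
echo "peer A: layers 0-$((SPLIT - 1))"                 # prints: peer A: layers 0-15
echo "peer B: layers $SPLIT-$((TOTAL_LAYERS - 1))"     # prints: peer B: layers 16-31
```

Each peer then only holds its own range in memory; activations cross the network once per token at the stage boundary.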
Tools are executables that respond to `--tool-info` with a JSON description and accept JSON arguments on stdin. The chatbot automatically discovers tools in `libs/tools/` and project-level `tools/` directories. See `docs/tools-guide.md` for details on creating custom tools.
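A minimal custom tool under that contract might look like the sketch below. The exact JSON description schema is an assumption here (see `docs/tools-guide.md` for the real one); only the `--tool-info` flag and JSON-on-stdin behavior come from the text above:

```shell
# Write a toy tool that follows the assumed contract: --tool-info prints a JSON
# description; otherwise JSON arguments arrive on stdin. Schema fields are guesses.
cat > /tmp/greet-tool <<'EOF'
#!/bin/sh
if [ "$1" = "--tool-info" ]; then
  echo '{"name":"greet","description":"Greets a user","args":{"name":"string"}}'
  exit 0
fi
args=$(cat)                       # a real tool would parse the JSON properly
echo "greet received: $args"
EOF
chmod +x /tmp/greet-tool

/tmp/greet-tool --tool-info                 # describe the tool
echo '{"name":"Ada"}' | /tmp/greet-tool     # invoke it with JSON args
```

Dropping such an executable into a `tools/` directory is what makes it discoverable, per the paragraph above.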
`config/chatbot_config.lua`:

```lua
return {
    output_line_width = 100,   -- terminal text wrapping width
    format_tables = true,      -- render markdown tables
    show_vision_debug = false, -- show vision model descriptions
}
```

See `docs/configuration.md` for the full reference.
See the repository for license details.