⚠️ This project is archived. Modern language models now have built-in reasoning capabilities, making the traditional Chain of Thought approach implemented in this tool unnecessary. Please use the sister project chat-llm instead, which leverages these built-in reasoning capabilities.
Query LLM was a CLI tool for querying large language models (LLMs) using the Chain of Thought method. It supported both cloud-based LLM services and locally hosted LLMs.
Basic usage:

```bash
./query-llm.js
echo "Your question here?" | ./query-llm.js
```

This tool supported various local LLM servers (llama.cpp, Ollama, LM Studio, etc.) and cloud services (OpenAI, Groq, OpenRouter, etc.).
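The Chain of Thought method the tool used can be illustrated with a minimal sketch. This is not the actual `query-llm.js` implementation; the function names and prompt wording here are assumptions, showing only the general shape of a CoT request to an OpenAI-compatible endpoint:

```javascript
// Illustrative sketch of Chain of Thought prompting (not the actual
// query-llm implementation; prompt text and names are assumptions).

// Build a messages array that instructs the model to reason step by step
// before answering, which was the core idea behind CoT prompting.
function buildChainOfThoughtMessages(question) {
  return [
    {
      role: 'system',
      content:
        'You are a careful assistant. Think through the problem ' +
        'step by step, then state the final answer.'
    },
    { role: 'user', content: question }
  ];
}

// Assemble the JSON body for an OpenAI-compatible /chat/completions call.
function buildRequestBody(model, question) {
  return JSON.stringify({
    model,
    messages: buildChainOfThoughtMessages(question)
  });
}

console.log(buildRequestBody('llama3.2', 'Which is larger, 9.9 or 9.11?'));
```

Sending this body (e.g. with `fetch()` to `LLM_API_BASE_URL + '/chat/completions'`) would elicit an explicit reasoning trace from older models; newer reasoning models do this internally, which is why the approach is now unnecessary.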
Configuration was done via environment variables:

```bash
# Example for a local server
export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
export LLM_CHAT_MODEL="llama3.2"

# Example for a cloud service
export LLM_API_BASE_URL=https://api.openai.com/v1
export LLM_API_KEY="your-api-key"
export LLM_CHAT_MODEL="gpt-4o-mini"
```

Many modern language models now have built-in reasoning capabilities that make explicit Chain of Thought prompting unnecessary in most cases. These models can perform complex reasoning internally and generate more accurate responses without step-by-step guidance.
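For reference, a tool configured this way would typically read those environment variables at startup. A hypothetical sketch (the variable names match the README; the fallback defaults are assumptions, not the tool's documented behavior):

```javascript
// Hypothetical sketch of reading the configuration environment variables.
// The fallback defaults below are assumptions for illustration.
function readConfig(env = process.env) {
  return {
    baseURL: env.LLM_API_BASE_URL || 'http://127.0.0.1:8080/v1',
    apiKey: env.LLM_API_KEY || '',      // typically optional for local servers
    model: env.LLM_CHAT_MODEL || 'llama3.2'
  };
}

// Example: the cloud-service configuration from above.
const config = readConfig({
  LLM_API_BASE_URL: 'https://api.openai.com/v1',
  LLM_API_KEY: 'your-api-key',
  LLM_CHAT_MODEL: 'gpt-4o-mini'
});
console.log(config.model); // gpt-4o-mini
```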
For current LLM interaction needs, please use chat-llm, which is designed to work with these newer models and their built-in reasoning capabilities.
This tool was created when Chain of Thought prompting was a necessary technique for improving reasoning in earlier LLM generations. As language models have evolved, this explicit approach has become largely redundant.
For any questions or historical reference, the code remains available in this archived repository.