LLM-LOGR is a lightweight utility for tracing, logging, and viewing LLM (Large Language Model) API calls locally.
It provides structured logging of inputs, outputs, latency, and metadata for OpenAI API interactions.
- Log OpenAI API requests and responses
- JSON-based logging format
- Streamlit interface for viewing logged calls
- Local-first design with no external dependencies
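To make "structured logging" concrete, one logged call might look like the record below. The field names here are an assumption for illustration only, not LLM-LOGR's actual schema:

```python
import json

# Hypothetical shape of a single logged call; the real LLM-LOGR
# field names and layout may differ.
record = {
    "timestamp": "2024-05-01T12:00:00",
    "user_id": "u123",
    "model": "gpt-3.5-turbo",
    "input": [{"role": "user", "content": "Explain ISO 42001 in one line"}],
    "output": "ISO 42001 is an AI management system standard.",
    "latency_ms": 812,
}

print(json.dumps(record, indent=2))
```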
- Clone the repository:

  ```bash
  git clone
  cd llm-logr
  ```
- Create and activate a virtual environment:

  ```bash
  python3 -m venv logr-venv
  source logr-venv/bin/activate  # Windows: logr-venv\Scripts\activate
  ```
- Install the requirements:

  ```bash
  pip install -r requirements.txt
  ```
- Install the package locally (editable mode):

  ```bash
  pip install -e .
  ```
- Add your OpenAI API key to a `.env` file in the project root:

  ```
  OPENAI_API_KEY=your-openai-key-here
  ```
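The key in `.env` needs to end up in the process environment. The package may handle this itself (for example via python-dotenv); as a minimal stdlib sketch of what that loading step does (`load_env` is illustrative, not part of LLM-LOGR):

```python
import os

def load_env(path=".env"):
    """Read KEY=value lines from a .env file into os.environ.

    Illustrative only -- python-dotenv does this more robustly
    (quoting, export prefixes, variable expansion, etc.).
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines and comments; keep only KEY=value pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```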
The example below uses the legacy `openai.ChatCompletion` interface, which requires `openai<1.0` (it was removed in the 1.x releases of the OpenAI Python library):

```python
from llm_logr.core.log_openai_call import track
import openai

response = track(
    openai.ChatCompletion.create,
    user_id="u123",
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain ISO 42001 in one line"}],
)
```
Logged results are saved in `logs/llm_logs.json`.
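Conceptually, a wrapper like `track` times the call and appends a record to the log file. The sketch below is an illustrative reimplementation under that assumption, not LLM-LOGR's actual code (`track_sketch`, `log_path`, and the record fields are all made up for the example):

```python
import json
import time
from pathlib import Path

def track_sketch(fn, *, user_id, log_path="logs/llm_logs.json", **kwargs):
    """Call fn(**kwargs), measure latency, and append a JSON record."""
    start = time.perf_counter()
    response = fn(**kwargs)
    latency_ms = (time.perf_counter() - start) * 1000

    record = {
        "user_id": user_id,
        "kwargs": dict(kwargs),
        "response": str(response),
        "latency_ms": round(latency_ms, 2),
    }

    # Append to a single JSON list on disk, creating it on first use.
    path = Path(log_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    logs = json.loads(path.read_text()) if path.exists() else []
    logs.append(record)
    path.write_text(json.dumps(logs, indent=2))
    return response
```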
Run the Streamlit app to browse the logged calls:

```bash
streamlit run llm_logr/web/app.py
```
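If you prefer to inspect the logs without the UI, the file can also be read directly. This assumes `logs/llm_logs.json` holds a JSON list of call records, which is an assumption about the format; `summarize_logs` is a helper written for this example:

```python
import json
from pathlib import Path

def summarize_logs(path="logs/llm_logs.json", last=5):
    """Print a quick summary of the most recent logged calls."""
    log_file = Path(path)
    if not log_file.exists():
        print("No logs yet")
        return []
    logs = json.loads(log_file.read_text())
    print(f"{len(logs)} logged calls")
    for entry in logs[-last:]:  # show only the most recent entries
        print(entry)
    return logs
```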
- Update `requirements.txt` by running:

  ```bash
  pip freeze > requirements.txt
  ```
- Make sure `.env`, `logr-venv/`, and `llm_logr.egg-info/` are listed in `.gitignore`.