x01AI/llm-logr — Capture and Log interactions in your llm apps
LLM-LOGR

LLM-LOGR is a lightweight utility for tracing, logging, and viewing LLM (Large Language Model) API calls locally.
It provides structured logging of inputs, outputs, latency, and metadata for OpenAI API interactions.

Features

  • Log OpenAI API requests and responses
  • JSON-based logging format
  • Streamlit interface for viewing logged calls
  • Local-first design: logs stay on your machine, with no external services required

Setup

  1. Clone the repository:
git clone 
cd llm-logr
  2. Create and activate a virtual environment:
python3 -m venv logr-venv
source logr-venv/bin/activate  # Windows: logr-venv\Scripts\activate
  3. Install the requirements:
pip install -r requirements.txt
  4. Install the package locally (editable mode):
pip install -e .
  5. Add your OpenAI API key to a .env file in the root directory:
OPENAI_API_KEY=your-openai-key-here

Usage

Log an OpenAI call

from llm_logr.core.log_openai_call import track
import openai

response = track(
    openai.ChatCompletion.create,
    user_id="u123",
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain ISO 42001 in one line"}]
)

Logged results are saved in logs/llm_logs.json.

View logged calls

Run the Streamlit app:

streamlit run llm_logr/web/app.py

Development Notes

  • Update requirements.txt by running:
pip freeze > requirements.txt
  • Make sure .env, logr-venv/, and llm_logr.egg-info/ are listed in .gitignore.
