
Commit 525d441: Update readme (#38)
Parent: 1da9342

18 files changed: +405 / -1584 lines

CLAUDE.md

Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
# Lamoom Python Project Guide

## Build/Test/Lint Commands
- Install deps: `poetry install`
- Run all tests: `poetry run pytest --cache-clear -vv tests`
- Run specific test: `poetry run pytest tests/path/to/test_file.py::test_function_name -v`
- Run with coverage: `make test`
- Format code: `make format` (runs black, isort, flake8, mypy)
- Individual formatting:
  - Black: `make make-black`
  - isort: `make make-isort`
  - Flake8: `make flake8`
  - Autopep8: `make autopep8`
## Code Style Guidelines
- Python 3.9+ compatible code
- Type hints required for all functions and methods
- Classes: PascalCase with descriptive names
- Functions/Variables: snake_case
- Constants: UPPERCASE_WITH_UNDERSCORES
- Import organization with isort:
  1. Standard library imports
  2. Third-party imports
  3. Local application imports
- Error handling: Use specific exception types
- Logging: Use the logging module with appropriate levels
- Use dataclasses for structured data when applicable
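For illustration, a minimal hypothetical sketch of these conventions; the names `PromptCallResult` and `parse_latency` are invented for this example and are not part of the codebase:

```python
# Hypothetical example; these names are illustrative and not part of the Lamoom codebase.
import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)

MAX_RETRIES = 3  # Constants use UPPERCASE_WITH_UNDERSCORES


@dataclass
class PromptCallResult:
    """Structured data as a dataclass; class names are PascalCase."""

    prompt_id: str
    content: str
    latency_ms: float


def parse_latency(raw_value: str) -> float:
    """Functions are snake_case and fully type-hinted; public APIs carry docstrings."""
    try:
        return float(raw_value)
    except ValueError:  # Catch specific exception types, not bare Exception.
        logger.warning("Invalid latency value: %r", raw_value)
        return 0.0
```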
## Project Conventions
- Use poetry for dependency management
- Add tests for all new functionality
- Maintain >80% test coverage (current min: 81%)
- Follow pre-commit hook guidelines
- Document public APIs with docstrings

README.md

Lines changed: 70 additions & 6 deletions
@@ -4,14 +4,50 @@
Lamoom is a dynamic, all-in-one library for managing and optimizing prompts and for building tests from ideal answers for large language models (LLMs), in production and in R&D. It facilitates dynamic data integration, latency and cost metrics visibility, and efficient load distribution across multiple AI models.

## Features

- **CI/CD testing**: Generates tests based on the context and an ideal answer (usually written by a human).
- **Dynamic Prompt Development**: Avoid budget overruns with dynamic data.
- **Multi-Model Support**: Seamlessly integrate with various LLMs like OpenAI, Anthropic, and more.
- **Real-Time Insights**: Monitor interactions and request/response metrics in production.
- **Prompt Testing and Evolution**: Quickly test and iterate on prompts using historical data.
- **Smart Prompt Caching**: Cache prompts for 5 minutes to reduce latency while keeping them up to date.
- **Asynchronous Logging**: Record interactions without blocking the main execution flow.

## Core Functionality

### Prompt Management and Caching
Lamoom implements an efficient prompt caching system with a 5-minute TTL (Time-To-Live):
- **Automatic Updates**: When you call a prompt, Lamoom checks whether a newer version exists on the server.
- **Cache Invalidation**: Prompts are automatically refreshed after 5 minutes to ensure up-to-date content.
- **Local Fallback**: If the server is unavailable, Lamoom falls back to the locally defined prompt.
- **Version Control**: Track prompt versions between local and server instances.

![Lamoom Call Flow](docs/sequence_diagrams/pngs/lamoom_call.png)

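As a rough illustration of the caching behavior described above, here is a minimal sketch of a 5-minute TTL cache; the class `PromptCache` and its methods are hypothetical and do not reflect Lamoom's actual internals:

```python
# Hypothetical sketch of a 5-minute TTL prompt cache; not Lamoom's real implementation.
import time
from typing import Optional

CACHE_TTL_SECONDS = 300  # 5 minutes


class PromptCache:
    """Stores prompt bodies and treats entries older than the TTL as stale."""

    def __init__(self) -> None:
        self._entries: dict[str, tuple[float, str]] = {}

    def get(self, prompt_id: str) -> Optional[str]:
        entry = self._entries.get(prompt_id)
        if entry is None:
            return None
        cached_at, body = entry
        if time.time() - cached_at > CACHE_TTL_SECONDS:
            # Stale: the caller should re-fetch from the server, or fall back
            # to the locally defined prompt if the server is unavailable.
            del self._entries[prompt_id]
            return None
        return body

    def put(self, prompt_id: str, body: str) -> None:
        self._entries[prompt_id] = (time.time(), body)
```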
### Test Generation and CI/CD Integration
Lamoom supports two methods for test creation:
1. **Inline Test Generation**: Add `test_data` with an ideal answer during normal LLM calls to automatically generate tests.
2. **Direct Test Creation**: Use the `create_test()` method to explicitly create tests for specific prompts.

Tests automatically compare LLM responses to ideal answers, helping maintain prompt quality as models or prompts evolve.

![Test Creation Flow](docs/sequence_diagrams/pngs/lamoom_test_creation.png)

### Logging and Analytics
Interaction logging happens asynchronously using a worker pattern:
- **Performance Metrics**: Automatically track latency, token usage, and cost.
- **Complete Context**: Store the full prompt, context, and response for analysis.
- **Non-Blocking**: Logging happens in the background without impacting response times.

![Logging Flow](docs/sequence_diagrams/pngs/lamoom_save_user_interactions.png)

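For intuition, a minimal sketch of the background-worker pattern described above; `AsyncLogWorker` is a hypothetical name, and this is not Lamoom's actual worker implementation:

```python
# Hypothetical sketch of non-blocking logging via a background worker thread.
import queue
import threading
from typing import Any


class AsyncLogWorker:
    """Accepts interaction records on a queue and ships them from a daemon thread."""

    def __init__(self) -> None:
        self._queue: "queue.Queue[dict[str, Any]]" = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def log_interaction(self, record: dict[str, Any]) -> None:
        # Returns immediately, so the caller's response path is never blocked.
        self._queue.put(record)

    def _run(self) -> None:
        while True:
            record = self._queue.get()
            # A real worker would send the record (prompt, context, response,
            # latency, tokens, cost) to the logging service here.
            print("shipping log record for prompt:", record.get("prompt_id"))
```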
### Feedback Collection
Improve prompt quality through explicit feedback:
- **Ideal Answer Addition**: Associate ideal answers with previous responses using `add_ideal_answer()`.
- **Continuous Improvement**: Use feedback to automatically generate new tests and refine prompts.

![Feedback Flow](docs/sequence_diagrams/pngs/lamoom_add_ideal_answer.png)

## Installation

@@ -68,6 +104,7 @@ Lamoom(api_token='your_api_token')

## Usage Examples

### Basic Usage
```python
from lamoom import Lamoom, Prompt

@@ -80,17 +117,44 @@ prompt.add("You're {name}. Say Hello and ask what's their name.", role="system")

# Call AI model with Lamoom
context = {"name": "John Doe"}
response = client.call(prompt.id, context, "openai/gpt-4o")
print(response.content)
```

### Creating Tests While Using Prompts
```python
# Call with test_data to automatically generate tests
response = client.call(prompt.id, context, "openai/gpt-4o", test_data={
    'ideal_answer': "Hello, I'm John Doe. What's your name?",
    'behavior_name': "gemini"
})
```

### Creating Tests Explicitly
```python
# Create a test directly
client.create_test(
    prompt_id="greet_user",
    test_context={"name": "John Doe"},
    ideal_answer="Hello, I'm John Doe. What's your name?"
)
```

### Adding Feedback to Previous Responses
```python
# Add an ideal answer to a previous response for quality assessment
client.add_ideal_answer(
    response_id="greet_user#1620000000000",
    ideal_answer="Hello, I'm John Doe. What's your name?"
)
```

### Monitoring and Management
- **Test Dashboard**: Review created tests and scores at https://cloud.lamoom.com/tests
- **Prompt Management**: Update prompts and rerun tests for published or saved versions
- **Analytics**: View logs with metrics (latency, cost, tokens) at https://cloud.lamoom.com/logs

The system is designed to allow prompt updates without redeploying code: simply publish a new prompt version online, and the library will automatically fetch and use it.

## Best Security Practices
For production environments, it is recommended to store secrets securely rather than directly in your codebase. Consider using a secret management service or encrypted environment variables.
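As one example of this practice, the API token from the configuration snippet above (`Lamoom(api_token=...)`) can be read from an environment variable; the variable name `LAMOOM_API_TOKEN` is only an illustration:

```python
import os

from lamoom import Lamoom

# Read the token from the environment instead of hard-coding it in source control.
# LAMOOM_API_TOKEN is a hypothetical variable name chosen for this example.
client = Lamoom(api_token=os.environ["LAMOOM_API_TOKEN"])
```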

docs/getting_started_notebook.ipynb

Lines changed: 166 additions & 0 deletions
Large diffs are not rendered by default.
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
@startuml

title Lamoom Feedback Flow: Adding ideal answers to existing responses

note over Client,LamoomService: Process of providing feedback on previous responses

Client->>Lamoom: add_ideal_answer(response_id, ideal_answer)
activate Lamoom

Lamoom->>LamoomService: update_response_ideal_answer(api_token, log_id, ideal_answer)
activate LamoomService

LamoomService->>LamoomService: Prepare feedback data

LamoomService->>LamoomService: PUT /lib/logs
note right of LamoomService: Server updates existing log with:\n- Ideal answer for comparison\n- Used for quality assessment\n- Creating training data\n- Generating automated tests

LamoomService-->>Lamoom: Return feedback submission result
deactivate LamoomService

Lamoom-->>Client: Return feedback submission result
deactivate Lamoom

@enduml
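To make the diagrammed flow concrete, here is a rough client-side sketch. The `PUT /lib/logs` endpoint and the `update_response_ideal_answer(api_token, log_id, ideal_answer)` signature come from the diagram above; the base URL, authorization header, and JSON payload fields are assumptions for illustration only:

```python
# Rough illustration of the diagrammed feedback flow; the base URL, auth header,
# and JSON payload fields are assumptions, not the documented Lamoom Service API.
import requests

BASE_URL = "https://cloud.lamoom.com"  # assumed


def update_response_ideal_answer(api_token: str, log_id: str, ideal_answer: str) -> dict:
    """Attach an ideal answer to an existing log entry via PUT /lib/logs (per the diagram)."""
    response = requests.put(
        f"{BASE_URL}/lib/logs",
        headers={"Authorization": api_token},  # header scheme assumed
        json={"log_id": log_id, "ideal_answer": ideal_answer},  # field names assumed
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```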
Binary image files added: 37.4 KB, 54.8 KB, 48.9 KB (not rendered).
