Lamoom is a dynamic, all-in-one library for managing and optimizing prompts for large language models (LLMs) and for generating tests based on ideal answers, in both production and R&D. It provides dynamic data integration, visibility into latency and cost metrics, and efficient load distribution across multiple AI models.
## Features

- **CI/CD Testing**: Generates tests from the context and an ideal answer (usually written by a human).
- **Dynamic Prompt Development**: Avoid budget exceptions when working with dynamic data.
- **Multi-Model Support**: Seamlessly integrate with various LLMs, including OpenAI, Anthropic, and more.
- **Real-Time Insights**: Monitor interactions and request/response metrics in production.
- **Prompt Testing and Evolution**: Quickly test and iterate on prompts using historical data.
- **Smart Prompt Caching**: Cache prompts for 5 minutes to reduce latency while keeping them up to date.
- **Asynchronous Logging**: Record interactions without blocking the main execution flow.

## Core Functionality
### Prompt Management and Caching

Lamoom implements an efficient prompt caching system with a 5-minute TTL (Time-To-Live); a minimal sketch of this behavior follows the list below:

- **Automatic Updates**: When you call a prompt, Lamoom checks whether a newer version exists on the server.
- **Cache Invalidation**: Prompts are automatically refreshed after 5 minutes to ensure up-to-date content.
- **Local Fallback**: If the server is unavailable, Lamoom falls back to the locally defined prompt.
- **Version Control**: Track prompt versions between local and server instances.
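
The sketch below illustrates the caching behavior described above as a small TTL cache with a local fallback. It is not the Lamoom implementation: the `fetch_latest_prompt` callable and the cache layout are assumptions made purely for illustration.

```python
import time

PROMPT_CACHE_TTL_SECONDS = 5 * 60  # 5-minute TTL, as described above

# prompt_id -> (fetched_at, prompt_text)
_prompt_cache = {}


def get_prompt(prompt_id, local_prompt, fetch_latest_prompt):
    """Return the freshest known prompt text for `prompt_id`.

    `fetch_latest_prompt(prompt_id)` is a hypothetical callable that returns
    the latest prompt text from the server and raises if the server is down.
    """
    now = time.time()
    cached = _prompt_cache.get(prompt_id)

    # Cache hit within the TTL: no server round-trip.
    if cached and now - cached[0] < PROMPT_CACHE_TTL_SECONDS:
        return cached[1]

    try:
        # Cache miss or expired entry: ask the server for the latest version.
        text = fetch_latest_prompt(prompt_id)
        _prompt_cache[prompt_id] = (now, text)
        return text
    except Exception:
        # Local fallback: server unavailable, so use the stale cache entry
        # if there is one, otherwise the locally defined prompt.
        return cached[1] if cached else local_prompt
```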

```python
    # ...
    'ideal_answer': "Hello, I'm John Doe. What's your name?",
    'behavior_name': "gemini"
})
```
### Creating Tests Explicitly

```python
# Create a test directly
client.create_test(
    prompt_id="greet_user",
    test_context={"name": "John Doe"},
    ideal_answer="Hello, I'm John Doe. What's your name?"
)
```
### Adding Feedback to Previous Responses

```python
# Add an ideal answer to a previous response for quality assessment
client.add_ideal_answer(
    response_id="greet_user#1620000000000",
    ideal_answer="Hello, I'm John Doe. What's your name?"
)
```
### Monitoring and Management

- **Test Dashboard**: Review created tests and scores at https://cloud.lamoom.com/tests
- **Prompt Management**: Update prompts and rerun tests for published or saved versions
- **Analytics**: View logs with metrics (latency, cost, tokens) at https://cloud.lamoom.com/logs

The system is designed to allow prompt updates without code redeployment: simply publish a new prompt version online, and the library will automatically fetch and use it.
## Best Security Practices
For production environments, it is recommended to store secrets securely and not directly in your codebase. Consider using a secret management service or encrypted environment variables.
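
As a minimal example of this practice, the snippet below reads the API token from an environment variable rather than hardcoding it. The `LAMOOM_API_TOKEN` variable name and the `Lamoom(api_token=...)` constructor call are assumptions for illustration; use whatever initialization the library actually exposes.

```python
import os

from lamoom import Lamoom  # import path assumed for illustration

# The token comes from the environment (or a secret manager) rather than
# from a hardcoded string committed to the repository.
client = Lamoom(api_token=os.environ["LAMOOM_API_TOKEN"])
```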

```mermaid
    %% ...
    LamoomService->>LamoomService: Prepare feedback data
    LamoomService->>LamoomService: PUT /lib/logs
    note right of LamoomService: Server updates existing log with:\n- Ideal answer for comparison\n- Used for quality assessment\n- Creating training data\n- Generating automated tests
    LamoomService-->>Lamoom: Return feedback submission result
    deactivate LamoomService
    Lamoom-->>Client: Return feedback submission result
```