
Commit 29fec65

Rename to lamoom (#31)
1 parent: 66ea39f


65 files changed, +602 -1248 lines

.env.example

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,5 +1,5 @@
-FLOW_PROMPT_API_TOKEN=
+LAMOOM_API_TOKEN=
 AZURE_KEYS=
 OPENAI_API_KEY=
 BEARER_TOKEN=
-FLOW_PROMPT_API_URI=
+LAMOOM_API_URI=
```
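For reference, a minimal sketch of how the renamed variables might be read at runtime, following the `dotenv.load_dotenv(dotenv.find_dotenv())` pattern used elsewhere in this commit (in `examples/evaluate_prompts_quality/flow_prompt_service.py`); only the variable names come from the diff, the loading code here is illustrative:

```python
import os

import dotenv

# Load variables from a local .env file, as the example service module does
dotenv.load_dotenv(dotenv.find_dotenv())

api_token = os.getenv('LAMOOM_API_TOKEN')  # was FLOW_PROMPT_API_TOKEN
api_uri = os.getenv('LAMOOM_API_URI')      # was FLOW_PROMPT_API_URI
```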

.github/workflows/run-unit-tests.yaml

Lines changed: 2 additions & 1 deletion
```diff
@@ -17,9 +17,10 @@ jobs:
          echo CLAUDE_API_KEY=${{ secrets.CLAUDE_API_KEY }} >> .env
          echo GEMINI_API_KEY=${{ secrets.GEMINI_API_KEY }} >> .env
          echo OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }} >> .env
+         echo LAMOOM_API_URI=${{ secrets.LAMOOM_API_URI }} >> .env
+         echo LAMOOM_API_TOKEN=${{ secrets.LAMOOM_API_TOKEN }} >> .env
          echo FLOW_PROMPT_API_URI=${{ secrets.FLOW_PROMPT_API_URI }} >> .env
          echo FLOW_PROMPT_API_TOKEN=${{ secrets.FLOW_PROMPT_API_TOKEN }} >> .env
-         cat .env

      - name: Install dependencies
        run: |
```

.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -43,7 +43,7 @@ repos:

  # - id: flake8
  #   name: Validate with flake8
- #   entry: poetry run flake8 flow_prompt
+ #   entry: poetry run flake8 lamoom
  #   language: system
  #   pass_filenames: false
  #   types: [python]
```

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -1,5 +1,5 @@
 ### API Url
 Add your own API url in `.env` if needed:
 ```
-FLOW_PROMPT_API_URI=your_api_uri
+LAMOOM_API_URI=your_api_uri
 ```
````

Makefile

Lines changed: 5 additions & 4 deletions
```diff
@@ -1,4 +1,4 @@
-PROJECT_FOLDER = 'flow_prompt'
+PROJECT_FOLDER = 'lamoom'

 flake8:
 	flake8 ${PROJECT_FOLDER}
@@ -19,16 +19,16 @@ isort-check:
 	isort --settings-path pyproject.toml --check-only .

 autopep8:
-	for f in `find flow_prompt -name "*.py"`; do autopep8 --in-place --select=E501 $f; done
+	for f in `find lamoom -name "*.py"`; do autopep8 --in-place --select=E501 $f; done

 lint:
 	poetry run isort --settings-path pyproject.toml --check-only .

 test:
-	poetry run pytest -vv tests \
+	poetry run pytest --cache-clear -vv tests \
 		--cov=${PROJECT_FOLDER} \
 		--cov-config=.coveragerc \
-		--cov-fail-under=77 \
+		--cov-fail-under=81 \
 		--cov-report term-missing

 .PHONY: format
@@ -52,6 +52,7 @@ clean-pyc:
 clean-test:
 	rm -f .coverage
 	rm -fr htmlcov/
+	rm -rf .pytest_cache


 publish-test-prerelease:
```

README.md

Lines changed: 32 additions & 66 deletions
````diff
@@ -2,7 +2,7 @@

 ## Introduction

-Flow Prompt is a dynamic, all-in-one library designed for managing and optimizing prompts and making tests based on the ideal answer for large language models (LLMs) in production and R&D. It facilitates budget-aware operations, dynamic data integration, latency and cost metrics visibility, and efficient load distribution across multiple AI models.
+Lamoom is a dynamic, all-in-one library for managing and optimizing prompts and for creating tests based on an ideal answer for large language models (LLMs), in production and in R&D. It facilitates dynamic data integration, latency and cost metrics visibility, and efficient load distribution across multiple AI models.

 ## Features

@@ -17,7 +17,7 @@ Flow Prompt is a dynamic, all-in-one library designed for managing and optimizin
 Install Flow Prompt using pip:

 ```bash
-pip install flow-prompt
+pip install lamoom
 ```

 ## Authentication
````

````diff
@@ -26,128 +26,94 @@ pip install flow-prompt
 ```python
 # setting as os.env
 os.setenv('OPENAI_API_KEY', 'your_key_here')
-# or creating flow_prompt obj
-FlowPrompt(openai_key="your_key", openai_org="your_org")
+# or creating lamoom obj
+Lamoom(openai_key="your_key", openai_org="your_org")
 ```

 ### Azure Keys
 Add Azure keys to accommodate multiple realms:
 ```python
 # setting as os.env
 os.setenv('AZURE_KEYS', '{"name_realm":{"url": "https://baseurl.azure.com/","key": "secret"}}')
-# or creating flow_prompt obj
-FlowPrompt(azure_keys={"realm_name":{"url": "https://baseurl.azure.com/", "key": "your_secret"}})
 ```

 ### Model Agnostic:
 Mix models easily, and distribute the load across models. The system will automatically distribute your load based on the weights. We support:
 - Claude
 - Gemini
-- OpenAI (Azure OpenAI models)
+- OpenAI (with Azure OpenAI models)
+- Nebius (Llama, DeepSeek, Mistral, Mixtral, Dolphin, Qwen, and others)
 ```
+from lamoom import LamoomModelProviders
+
 def_behaviour = behaviour.AIModelsBehaviour(attempts=[
-    AttemptToCall(
-        ai_model=OpenAIModel(
-            model='gpt-4o',
-            max_tokens=128_000,
-        ),
-        weight=100
-    ),
-    AttemptToCall(
-        ai_model=AzureAIModel(
-            realm='useast',
-            deployment_id='gpt-4o',
-            max_tokens=128_000,
-        ),
-        weight=100
-    ),
-    AttemptToCall(
-        ai_model=ClaudeAIModel(
-            model='claude-3-5-sonnet-20240620',
-            max_tokens=200_000,
-        ),
-        weight=100
-    ),
-    AttemptToCall(
-        ai_model=GeminiAIModel(
-            model='gemini-1.5-pro',
-            max_tokens=1_000_000,
-        ),
-        weight=100
-    )
+    AttemptToCall(provider='openai', model='gpt-4o', weight=100),
+    AttemptToCall(provider='azure', realm='useast-1', deployment_id='gpt-4o', weight=100),
+    AttemptToCall(provider='azure', realm='useast-2', deployment_id='gpt-4o', weight=100),
+    AttemptToCall(provider=LamoomModelProviders.anthropic, model='claude-3-5-sonnet-20240620', weight=100),
+    AttemptToCall(provider=LamoomModelProviders.gemini, model='gemini-1.5-pro', weight=100),
+    AttemptToCall(provider=LamoomModelProviders.nebius, model='deepseek-ai/DeepSeek-R1', weight=100)
 ])

-response_llm = fp.call(agent.id, context, def_behaviour)
+response_llm = client.call(agent.id, context, def_behaviour)
 ```

-### FlowPrompt Keys
+### Lamoom Keys
 Obtain an API token from Flow Prompt and add it:

 ```python
 # As an environment variable:
-os.setenv('FLOW_PROMPT_API_TOKEN', 'your_token_here')
+os.setenv('LAMOOM_API_TOKEN', 'your_token_here')
 # Via code:
-FlowPrompt(api_token='your_api_token')
+Lamoom(api_token='your_api_token')
 ```

 ### Add Behaviours:
 - use OPENAI_BEHAVIOR
 - or add your own Behaviour: you can set a max number of attempts, and if you use several AI models and the first attempt fails with a retryable error, the next attempt is called, chosen by weight.
 ```
-from flow_prompt import OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR
-flow_behaviour = OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR
+from lamoom import OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR
+behaviour = OPENAI_GPT4_0125_PREVIEW_BEHAVIOUR
 ```
 or:
 ```
-from flow_prompt import behaviour
-flow_behaviour = behaviour.AIModelsBehaviour(
+from lamoom import behaviour
+behaviour = behaviour.AIModelsBehaviour(
     attempts=[
-        AttemptToCall(
-            ai_model=AzureAIModel(
-                realm='us-east-1',
-                deployment_name="gpt-4-1106-preview",
-                max_tokens=C_128K,
-                support_functions=True,
-            ),
-            weight=100,
-        ),
-        AttemptToCall(
-            ai_model=OpenAIModel(
-                model="gpt-4-1106-preview",
-                max_tokens=C_128K,
-                support_functions=True,
-            ),
-            weight=100,
-        ),
+        AttemptToCall(provider='azure', realm='useast-1', deployment_id='gpt-4o', weight=100),
+        AttemptToCall(provider='azure', realm='useast-2', deployment_id='gpt-4o', weight=100),
     ]
 )
 ```

 ## Usage Examples:

 ```python
-from flow_prompt import FlowPrompt, PipePrompt
+from lamoom import Lamoom, Prompt

-# Initialize and configure FlowPrompt
-flow = FlowPrompt(openai_key='your_api_key', openai_org='your_org')
+# Initialize and configure Lamoom
+client = Lamoom(openai_key='your_api_key', openai_org='your_org')

 # Create a prompt
-prompt = PipePrompt('greet_user')
+prompt = Prompt('greet_user')
 prompt.add("You're {name}. Say Hello and ask what's their name.", role="system")

-# Call AI model with FlowPrompt
+# Call AI model with Lamoom
 context = {"name": "John Doe"}
 # test_data - optional parameter used for generating tests
-response = flow.call(prompt.id, context, flow_behaviour, test_data={
+response = client.call(prompt.id, context, behaviour, test_data={
     'ideal_answer': "Hello, I'm John Doe. What's your name?",
     'behavior_name': "gemini"
 })
 print(response.content)
 ```
-- To review your created tests and score please go to https://cloud.flow-prompt.com/tests. You can update there Prompt and rerun tests for a published version, or saved version. If you will update and publish version online - library will automatically use the new updated version of the prompt. It's made for updating prompt without redeployment of the code, which is costly operation to do if it's required to update just prompt.
+- To review your created tests and scores, go to https://cloud.lamoom.com/tests. There you can update a prompt and rerun tests for a published or saved version. If you update and publish a version online, the library automatically uses the new version of the prompt, so a prompt can be changed without redeploying code, which is a costly operation when only the prompt needs updating.

-- To review logs please proceed to https://cloud.flow-prompt.com/logs, there you can see metrics like latency, cost, tokens;
+- To review logs, go to https://cloud.lamoom.com/logs, where you can see metrics like latency, cost, and tokens.

 ## Best Security Practices
 For production environments, it is recommended to store secrets securely and not directly in your codebase. Consider using a secret management service or encrypted environment variables.
````
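The weight-based fallback the README describes can be pictured with a small sketch. This is not Lamoom's implementation, just a minimal model of the idea under stated assumptions: attempts are plain dicts, `call_model` is a hypothetical stand-in for a provider call, and any exception counts as a retryable error.

```python
import random

def call_model(attempt: dict, prompt: str) -> str:
    # Hypothetical stand-in for a real provider call
    return f"[{attempt['provider']}] response to: {prompt!r}"

def call_with_fallback(prompt: str, attempts: list) -> str:
    """Pick attempts at random in proportion to 'weight'; on failure, retry with the rest."""
    remaining = list(attempts)
    while remaining:
        chosen = random.choices(remaining, weights=[a['weight'] for a in remaining], k=1)[0]
        remaining.remove(chosen)
        try:
            return call_model(chosen, prompt)
        except Exception:
            continue  # treat the failure as retryable and move to the next weighted pick
    raise RuntimeError('all attempts failed')

attempts = [
    {'provider': 'openai', 'model': 'gpt-4o', 'weight': 100},
    {'provider': 'azure', 'realm': 'useast-1', 'deployment_id': 'gpt-4o', 'weight': 100},
]
print(call_with_fallback("Say hello", attempts))
```

With equal weights the first pick is an even split between the two attempts; raising one weight biases traffic toward it without removing the fallback.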

examples/evaluate_prompts_quality/evaluate_prompt_quality.py

Lines changed: 6 additions & 6 deletions
```diff
@@ -1,20 +1,20 @@
 import logging
 import random
-from flow_prompt import FlowPrompt, behaviour, AttemptToCall, AzureAIModel, C_128K
+from lamoom import Lamoom, behaviour, AttemptToCall, AzureAIModel, C_128K
 from prompt import prompt_to_evaluate_prompt
-from flow_prompt_service import get_all_prompts, get_logs
+from lamoom_service import get_all_prompts, get_logs
 logger = logging.getLogger(__name__)



-flow_prompt = FlowPrompt()
+lamoom = Lamoom()

 gpt4_behaviour = behaviour.AIModelsBehaviour(
     attempts=[
         AttemptToCall(
             ai_model=AzureAIModel(
-                realm='westus',
-                deployment_id="gpt-4-turbo",
+                realm='useast',
+                deployment_id="gpt-4o",
                 max_tokens=C_128K,
                 support_functions=True,
             ),
@@ -41,7 +41,7 @@ def main():
             'prompt_data': prompt_chats,
             'prompt_id': prompt_id,
         }
-        result = flow_prompt.call(prompt_to_evaluate_prompt.id, context, gpt4_behaviour)
+        result = lamoom.call(prompt_to_evaluate_prompt.id, context, gpt4_behaviour)
         print(result.content)

 if __name__ == '__main__':
```

examples/evaluate_prompts_quality/flow_prompt_service.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -3,22 +3,22 @@
 import os
 import dotenv
 import requests
-from flow_prompt.settings import FLOW_PROMPT_API_URI
+from lamoom.settings import LAMOOM_API_URI

 dotenv.load_dotenv(dotenv.find_dotenv())

 BEARER_TOKEN = os.getenv('BEARER_TOKEN')


 def get_all_prompts():
-    response = requests.get(f'{FLOW_PROMPT_API_URI}/prompts', headers={'Authorization': f'Bearer {BEARER_TOKEN}'})
+    response = requests.get(f'{LAMOOM_API_URI}/prompts', headers={'Authorization': f'Bearer {BEARER_TOKEN}'})
     prompts = response.json()
     return prompts


 def get_logs(prompt_id):
     response = requests.get(
-        f'{FLOW_PROMPT_API_URI}/logs?prompt_id={prompt_id}&fields=response,context',
+        f'{LAMOOM_API_URI}/logs?prompt_id={prompt_id}&fields=response,context',
         headers={'Authorization': f'Bearer {BEARER_TOKEN}'}
     )
     logs = response.json()
```
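A short usage sketch for the two helpers above; it assumes `get_logs` ends by returning `logs` (the diff is truncated there) and that `/prompts` returns a list of objects with an `id` field, both assumptions about the API shape:

```python
# Hypothetical driver: list prompts, then pull logs for each
for prompt in get_all_prompts():
    prompt_id = prompt['id']  # 'id' field is an assumption about the response shape
    logs = get_logs(prompt_id)
    print(prompt_id, logs)
```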

examples/evaluate_prompts_quality/prompt.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,7 +1,7 @@

-from flow_prompt import PipePrompt
+from lamoom import Prompt

-prompt_to_evaluate_prompt = PipePrompt(id="prompt-improver")
+prompt_to_evaluate_prompt = Prompt(id="prompt-improver")

 prompt_to_evaluate_prompt.add(role="system", content="You're a prompt engineer, tasked with evaluating and improving prompt quality.")

```
examples/poetry.lock

Lines changed: 3 additions & 3 deletions
Some generated files are not rendered by default.
