
Commit 15cd59a: put example

hakimkhalafi committed Apr 17, 2024 (1 parent: 2176969)

Showing 4 changed files with 36 additions and 9 deletions.
22 changes: 14 additions & 8 deletions README.md
````diff
@@ -14,9 +14,11 @@ pip install paramount
 
 ```py
 @paramount.record()
-def my_ai_function(message_history, question, ...): # Inputs
-    # LLM invocations happen here
-    return llm_answer, llm_references, llm_message_history, ... # Outputs
+def my_ai_function(message_history, new_question): # Inputs
+    # <LLM invocations happen here>
+    new_message = {'role': 'user', 'content': new_question}
+    updated_history = message_history + [new_message]
+    return updated_history # Outputs.
 ```
 
 3. After `my_ai_function(...)` has run several times, launch the Paramount UI to evaluate results:
@@ -27,20 +29,24 @@ paramount
 ```
 
 Your SMEs can now evaluate recordings and track accuracy improvements over time.
 
-Paramount runs completely offline in your private environment.
+Paramount can run completely offline in your private environment.
+
+### Example
+
+After installation, run `python example.py` for a minimal working example.
 
 ### Configuration
 
 In order to set up successfully, define which input and output parameters represent the chat list used in the LLM.
 
 This is done via the `paramount.toml` configuration file that you add in your project root dir.
 
-It will be autogenerated for you with defaults if this file doesn't already exist on first run.
+It will be autogenerated for you with defaults if it doesn't already exist on first run.
 
 ```toml
 [record]
 enabled = true
-function_url = "http://localhost:9000" # The url to the flask app, for replay
+function_url = "http://localhost:9000" # The url to your LLM API flask app, for replay
 
 [db]
 type = "csv" # postgres also available
@@ -57,11 +63,11 @@ identifier_colname = ""
 
 # For the table display - define which columns should be shown
 meta_cols = ['recorded_at']
-input_cols = ['args__message_history', 'args__question'] # Matches my_ai_function() example
+input_cols = ['args__message_history', 'args__new_question'] # Matches my_ai_function() example
 output_cols = ['1', '2'] # 1 and 2 are indexes for llm_answer and llm_references in example above
 
 # For the chat display - describe how your chat structure is set up. This example uses OpenAI format.
-chat_list = "output__3" # Matches llm_message_history. Must be a list of dicts to display chat format
+chat_list = "output__1" # Matches output updated_history. Must be a list of dicts to display chat format
 chat_list_role_param = "role" # Key in list of dicts describing the role in the chat
 chat_list_content_param = "content" # Key in list of dicts describing the content
 ```
````
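To see how the updated config lines up with the new example function, here is a hypothetical recording row. The `recorded_at` value and the exact column schema are assumptions made for illustration, not Paramount's documented storage format; only the column names come from the `paramount.toml` above.

```python
# Hypothetical recorded row for my_ai_function(messages, "How do I bake a cake?").
# Column names mirror the paramount.toml above; the row schema itself is assumed.
messages = [
    {'role': 'system', 'content': 'You help users with their cake recipes.'},
    {'role': 'user', 'content': 'Hello!'},
    {'role': 'assistant', 'content': 'Hello, how can I help you?'},
]
question = "How do I bake a cake?"

row = {
    'recorded_at': '2024-04-17T12:00:00Z',              # meta_cols
    'args__message_history': messages,                  # input_cols
    'args__new_question': question,
    'output__1': messages + [{'role': 'user', 'content': question}],  # chat_list
}

# chat_list points at output__1, which is a list of role/content dicts,
# so the UI can render it in chat format.
print(row['output__1'][-1]['content'])  # How do I bake a cake?
```

Because `chat_list = "output__1"` now names the single returned history list, the chat display and the table display read from the same recorded column.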
19 changes: 19 additions & 0 deletions example.py
```diff
@@ -0,0 +1,19 @@
+import paramount
+
+messages = [
+    {'role': 'system', 'content': 'You help users with their cake recipes.'},
+    {'role': 'user', 'content': 'Hello!'},
+    {'role': 'assistant', 'content': 'Hello, how can I help you?'},
+]
+
+
+@paramount.record()
+def my_ai_function(message_history, new_question):
+    # The above inputs become "input_args__message_history", "input_args__new_question" in the recordings csv/table
+    # LLM invocations happen here
+    new_message = {'role': 'user', 'content': new_question}
+    updated_history = message_history + [new_message]
+    return updated_history  # Outputs. This becomes "output__1". Can be used as the chat_list param in paramount.toml
+
+
+my_ai_function(messages, "How do I bake a cake?")
```
2 changes: 2 additions & 0 deletions paramount/server/record.py
```diff
@@ -141,6 +141,8 @@ def wrapper(*args, **kwargs):
                     (str, bytes, bytearray, memoryview)):
                 # In order to not iterate over strings, bytes, or other uniterable objects.
                 serialized_result = [serialized_result]
+            if isinstance(serialized_result, list):  # In case of a singular list, treat it as one item
+                serialized_result = [serialized_result]
             for i, output in enumerate(serialized_result, start=1):
                 if isinstance(output, dict):
                     for key, value in output.items():
```
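The intent of the two lines added to `record.py` is easier to see in isolation. Below is a simplified sketch of the wrapping behavior, not Paramount's actual code: the `wrap_outputs` helper is invented for illustration, and it folds the two checks into an `if`/`elif` where the real code uses separate statements.

```python
def wrap_outputs(result):
    """Sketch of how a decorated function's return value becomes numbered outputs."""
    if isinstance(result, (str, bytes, bytearray, memoryview)):
        # Wrap so a string is not enumerated character by character.
        result = [result]
    elif isinstance(result, list):
        # The change in this commit: a bare list return (e.g. a chat history)
        # is one output, not many, so it is wrapped once more before enumeration.
        result = [result]
    return {f'output__{i}': out for i, out in enumerate(result, start=1)}

history = [{'role': 'user', 'content': 'Hi'}]
print(wrap_outputs(history))     # {'output__1': [{'role': 'user', 'content': 'Hi'}]}
print(wrap_outputs(('a', 'b')))  # {'output__1': 'a', 'output__2': 'b'}
```

With this in place, a function that returns only its updated message history is recorded as a single `output__1` column, which matches the `chat_list = "output__1"` setting in the README change.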
2 changes: 1 addition & 1 deletion setup.py
```diff
@@ -8,7 +8,7 @@
 
 setup(
     name='paramount',
-    version='0.3.7',
+    version='0.3.8',
     description='Business Evals for LLMs',
     long_description=long_description,
     long_description_content_type='text/markdown',
```
