"document": "Machine learning (ML) is a field of study in artificial intelligence concerned with the development and
115
+
study of statistical algorithms that can learn from data and generalize to unseen data, and
116
+
thus perform tasks without explicit instructions"
117
+
},
118
+
expected_response: "Machine leaning is a branch of computer science that explores the study and construction of
119
+
algorithms which can learn and make predictions on data.",
120
+
custom_attributes: {
121
+
"tag": "science"
122
+
},
123
+
custom_eval_metrics: {
124
+
"automation_rate": 0.5
125
+
},
126
+
cost: 0.01,
91
127
)
92
128
exceptExceptionas e:
93
129
ifisinstance(e, CustomException):
```

All the arguments for the InferenceLogger.log_inference() method are:
```python
Expected formats of prompt:

prompt=[{"role": "user", "content": "What is machine learning?"}] # for openai models
prompt={"text": "What is machine learning?"} # for other models
prompt="what is machine learning?" # for other models
```
- `response (optional)`: LLM response. This can be either a `string` or the `ChatCompletion` response object from OpenAI
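For example, either form works (a minimal sketch; `client` here is assumed to be an initialized `openai.OpenAI` client):
```python
# Pass the completion text directly as a string...
response="Machine learning is a branch of artificial intelligence."

# ...or pass the full ChatCompletion object returned by the OpenAI client
response=client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What is machine learning?"}],
)
```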
- `prompt_slug (optional)`: Identifier for the prompt used for inference. This is useful for segmenting inference calls by prompt
```python
prompt_slug="customer_query"
```
- `language_model_id (optional)`: Language model against which inference is made. Check out all supported models [here](/logging/supported_models)
```python
language_model_id="gpt-4-1106-preview"
```
- `functions (optional)`: functions for older versions of openai
```python
functions=[
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        # ... (parameter schema omitted)
    }
]
```
- `environment (optional)`: Environment your app is running in (ex: production, staging, etc). This is useful for segmenting inference calls by environment
```python
environment="production"
```
- `function_call_response (optional)`: function call response for older versions of openai
```python
function_call_response={
    "name": "get_current_weather",
    "arguments": "{\n\"location\": \"Boston, MA\"\n}"
}
```
- `tools (optional)`: tools for new versions of openai
```python
tools=[
    {
        "type": "function",
        "function": {
            # ... (function schema omitted)
        }
    }
]
```
- `tool_calls (optional)`: tool calls for new versions of openai
```python
tool_calls=[
    {
        "id": "call_abc123",
        "type": "function",
        # ... (function name and arguments omitted)
    }
]
```

If the tool_calls field is not present, we extract it from the openai completion response and log it in our database
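As a minimal sketch of that pattern, you can log the whole `ChatCompletion` object and let the tool calls be extracted on Athina's side (the `athina_logger` import path is an assumption; check your installed package):
```python
from openai import OpenAI
from athina_logger.inference_logger import InferenceLogger  # import path may differ

client = OpenAI()

# A minimal tools definition following the OpenAI tools schema
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]

prompt = [{"role": "user", "content": "What's the weather in Boston, MA?"}]
completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=prompt,
    tools=tools,
)

# No explicit tool_calls argument: they are extracted from the logged response
InferenceLogger.log_inference(
    prompt_slug="weather_query",
    prompt=prompt,
    response=completion,
    language_model_id="gpt-4-1106-preview",
)
```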
- `external_reference_id (optional)`: is useful if you want to associate your own internal identifier with the inference logged to Athina
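For example (the identifier below is an illustrative placeholder):
```python
external_reference_id="order-1234-request-5678"
```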
- `customer_id (optional)`: is your customer ID. This is useful for segmenting inference calls by customer
```python
customer_id="stripe"
```
- `customer_user_id (optional)`: is the end user ID. This is useful for segmenting inference calls by the end user
```python
customer_user_id="user@gmail.com"
```
- `cost (optional)`: is the cost incurred for this LLM inference call. Tip: If you log an entire OpenAI completion response to us, we'll automatically calculate the cost.
```python
cost=0.0123
```
- `session_id (optional)`: is the session or conversation ID. This is used for grouping different inferences into a conversation or chain. [Read more](/logging/grouping_inferences)
```python
session_id="c45g-1234-s6g4-43d3"
```
- `user_query (optional)`: is the user's query. For conversational applications, this is the user's last message
```python
user_query="what is machine learning?"
```
- `context (optional)`: is the context used as information for the prompt. For RAG applications, this is the "retrieved" data.
You may log context as a string or as an object (dictionary)
```python
context={"information": "Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy"}
context="Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy"
```
- `custom_attributes (optional)`: custom_attributes is an object (dictionary) where you can log your own custom attributes as key-value pairs with the inference
```python
custom_attributes={
    "name": "John Doe"
    # Any other custom_attribute
} # OPTIONAL;
```
- `custom_eval_metrics (optional)`: custom_eval_metrics is an object (dictionary) where you can log your own custom eval metrics of the LLM response as key-value pairs with the inference
```python
custom_eval_metrics={
    "automation_rate": 0.3
    # Any other custom_eval_metric
} # OPTIONAL;
```
<Callout>
Tip: For [evals](/evals/preset_evals/rag_evals), you must also log user_query and context
</Callout>
- `prompt_tokens (optional)`, `completion_tokens (optional)`, `total_tokens (optional)`: token counts for the inference call
- `response_time (optional)`: is the response time in milliseconds. This is useful for segmenting inference calls by response time
```python
prompt_tokens=50
completion_tokens=30
total_tokens=80
response_time=1200
```
<Callout>
Tip: If you log the entire OpenAI `ChatCompletion` response object to us, we'll automatically extract these token counts for you.
</Callout>
- `expected_response (optional)`: is the reference response to compare against for evaluation purposes
```python
expected_response="Machine Learning is a branch of computer science"
```
<Callout>
Tip: For grounded evals like [Answer Similarity](/evals/preset_evals/grounded_evals#answer_similarity), you must also log a reference response (string) to compare against.
</Callout>