}
```

##### Custom Eval Metrics

Optionally, you can also log custom eval metrics with your prompt. Pass each metric name and value as a key-value pair in the `custom_eval_metrics` object.

Note: A prompt run cannot have duplicate eval metric names.

```json
{
  // ...otherFields,
  "custom_eval_metrics": {
    "throughput": 100,
    "correctness": 0.5,
    "latency": 100
    // any other eval metric to be logged
  }
}
```
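As a minimal sketch, the request body above can be assembled programmatically before sending it to the logging endpoint. The `build_log_payload` helper below is hypothetical (only the `custom_eval_metrics` field comes from this API; the other field names are illustrative placeholders):

```python
import json


def build_log_payload(prompt_fields: dict, custom_eval_metrics: dict) -> str:
    """Assemble the JSON body for a prompt-run log request (hypothetical helper).

    Metric names are unique by construction, since a Python dict cannot
    hold duplicate keys; we additionally check that values are numeric.
    """
    for name, value in custom_eval_metrics.items():
        if not isinstance(value, (int, float)):
            raise TypeError(f"metric {name!r} must be numeric, got {type(value).__name__}")
    return json.dumps({**prompt_fields, "custom_eval_metrics": custom_eval_metrics})


body = build_log_payload(
    {"prompt": "Summarize this article", "response": "..."},  # illustrative fields
    {"throughput": 100, "correctness": 0.5, "latency": 100},
)
print(body)
```

The resulting string can then be sent as the JSON body of your logging request.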
##### Grounded Evals
For grounded evals like [Answer Similarity](/evals/preset_evals/grounded_evals#answer_similarity), you must also log a reference response (string) to compare against: