LLM-Based Evaluation - more control when mapping a variable, e.g. {{query}}, {{ground_truth}}, inside an Input/Output/Metadata observation #3352
Closed · stepanogil started this conversation in Ideas
Replies: 1 comment
-
Great feedback, thanks for taking the time to share this in such great detail! AzureOpenAI was added in the meantime and can now be used for evaluation in Langfuse. Being able to query (e.g. json-path) within an input/output object is already tracked here: https://github.com/orgs/langfuse/discussions/3237
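For illustration, a json-path style lookup over a trace input could look roughly like the sketch below (the jsonpath-ng library and the field names are purely illustrative assumptions, not Langfuse's actual implementation):

```python
from jsonpath_ng import parse  # pip install jsonpath-ng

# Hypothetical trace input: a conversation plus request metadata.
trace_input = {
    "conversation_history": [
        {"role": "user", "content": "What is your refund policy?"}
    ],
    "metadata": {"session_id": "abc-123"},
}

# Select only the first message's content instead of the whole input object.
expr = parse("$.conversation_history[0].content")
query = [match.value for match in expr.find(trace_input)][0]
print(query)  # -> "What is your refund policy?"
```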
-
Describe the feature or potential improvement
This is how you currently map variables in the model-based Evaluation (Beta) module:
i.e. you map the entire Input, Output, or Metadata object of a trace to a variable such as {{query}}.
This is an example of how I populate a Trace Input:
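(A minimal, hypothetical sketch of the shape; the field names like conversation_history are only assumptions for illustration:)

```json
{
  "conversation_history": [
    {"role": "user", "content": "What is your refund policy?"},
    {"role": "assistant", "content": "Our refund policy is ..."}
  ],
  "metadata": {
    "session_id": "abc-123",
    "user_id": "user-42"
  }
}
```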
This means that when I run evals, this whole JSON blob is passed on as the value of my {{query}}, which I don't want, as all the metadata just introduces a lot of noise into the eval.
What I want to do is map my {{query}} to a particular key-value pair inside the Input. Example below:
So I'd like another dropdown here where I can configure which key the {{query}} variable maps to.
So I can input something like:
`Input["conversation_history"][0]["content"]`
or something to this effect. Hope that makes sense! See the sketch below.
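As a rough sketch of what I mean (not Langfuse's actual implementation; the helper and field names are made up), the variable mapping could be resolved something like this:

```python
from typing import Any


def resolve_variable(trace_input: dict, path: list) -> Any:
    """Walk a list of keys/indexes, e.g. ["conversation_history", 0, "content"]."""
    value: Any = trace_input
    for step in path:
        value = value[step]
    return value


trace_input = {
    "conversation_history": [
        {"role": "user", "content": "What is your refund policy?"}
    ],
    "metadata": {"session_id": "abc-123"},
}

# {{query}} would then receive only the user message, not the whole blob.
query = resolve_variable(trace_input, ["conversation_history", 0, "content"])
print(query)  # -> "What is your refund policy?"
```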
Also, please support Azure OpenAI in Evaluation. It should be easy to implement, since it has the same function-calling implementation as OpenAI, which seems to be integral to Evaluation.
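For example, in the official openai Python SDK the Azure client is a drop-in replacement, so the same chat-completions and tools/function-calling calls work against it (endpoint, deployment name, and API version below are placeholders):

```python
from openai import AzureOpenAI

# Same interface as openai.OpenAI; only the client construction differs.
client = AzureOpenAI(
    api_key="...",
    api_version="2024-02-01",
    azure_endpoint="https://my-resource.openai.azure.com",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # Azure deployment name instead of model name
    messages=[{"role": "user", "content": "Score this answer from 0 to 1."}],
)
print(response.choices[0].message.content)
```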
That's it. More power! I'm a big Langfuse fan!
Additional information
No response