Description
[x] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
In some cases / with some models, the instructions to generate json result in something like
```
Here's the generated abstract conceptual question in the requested JSON format:

{"text": ...}

Would you like me to explain in more detail?
```
The built-in validator fails here because it cannot parse the JSON out of the surrounding conversational text, and it then makes several repeated requests to try to conform the response to the specified schema. If the failure stems from the LLM's default response style rather than a genuinely malformed answer, the agentic retry loop is essentially a waste of tokens.
Ragas version: 0.2.1
Python version:
Code to Reproduce
Use Anthropic Claude 3.5 Sonnet with the new testset generator
Error trace
error.txt
Expected behavior
Intelligent extraction of the JSON object from the text blob
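A minimal sketch of what that extraction could look like, using only the standard library: scan the blob for each `{` and attempt `json.JSONDecoder.raw_decode` from that position, returning the first object that parses. The function name `extract_json` is hypothetical, not part of Ragas.

```python
import json


def extract_json(blob: str):
    """Return the first parseable JSON object embedded in a text blob.

    Hypothetical helper: scans for each '{' and tries raw_decode from
    there, so leading/trailing conversational text is ignored.
    """
    decoder = json.JSONDecoder()
    for i, ch in enumerate(blob):
        if ch == "{":
            try:
                obj, _end = decoder.raw_decode(blob, i)
                return obj
            except json.JSONDecodeError:
                continue
    return None


reply = (
    "Here's the generated question in the requested JSON format:\n"
    '{"text": "example"}\n'
    "Would you like me to explain in more detail?"
)
print(extract_json(reply))  # → {'text': 'example'}
```

Parsing this way before invoking the schema validator would avoid the retry loop entirely for responses that merely wrap valid JSON in chat-style preamble.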