Based on my understanding of the evaluation pipeline for extraction, any case where the document's ID matches an ID of a document present in the context is counted as a success. However, I do not see any check here that the extracted text actually matches what was present in the context.
It is possible for the model to output the correct document IDs while the content of the documents is heavily modified, or even hallucinated. Is there any code/evaluation that checks for this, which perhaps did not make it into this repository?
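For what it's worth, a minimal sketch of the kind of content check I have in mind, assuming the extraction output and the context are both available as ID-to-text mappings (the function name, data shapes, and the 0.9 threshold are all hypothetical, not part of the actual pipeline):

```python
from difflib import SequenceMatcher

def content_fidelity(extracted: dict, context: dict, threshold: float = 0.9) -> dict:
    """For each extracted document ID, compare the extracted text against the
    source text in the context and flag likely modifications or hallucinations.

    Hypothetical helper: `extracted` and `context` map doc IDs to raw text;
    the similarity threshold is an illustrative choice, not a tuned value.
    """
    results = {}
    for doc_id, text in extracted.items():
        source = context.get(doc_id)
        if source is None:
            # ID-level failure: the model cited a document not in the context.
            results[doc_id] = ("missing_id", 0.0)
            continue
        # Character-level similarity in [0, 1]; 1.0 means an exact match.
        ratio = SequenceMatcher(None, source, text).ratio()
        results[doc_id] = ("ok" if ratio >= threshold else "modified", ratio)
    return results

context = {"d1": "The cat sat on the mat.", "d2": "Paris is the capital of France."}
extracted = {"d1": "The cat sat on the mat.", "d2": "Paris is the capital of Germany."}
print(content_fidelity(extracted, context))
```

Even a coarse similarity score like this would catch the failure mode I describe, where the ID is right but the quoted content has drifted from the source.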