
Evaluation Criteria for Extraction Success #2

Open
iamgroot42 opened this issue Nov 21, 2024 · 0 comments
Hello,

Based on my understanding of the evaluation pipeline for extraction, any case where a document's ID matches one of the IDs of documents present in the context is counted as a success. However, I do not see any step here that checks whether the extracted text is actually what was present in the context.

It is possible that the model correctly outputs the document IDs while the content of the documents is heavily modified (or even hallucinated). Is there any code/evaluation that checks for this, which perhaps did not make it into this repository?
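For concreteness, the kind of check I have in mind could look like the sketch below. This is not from the repository; the function names and the 0.9 threshold are my own assumptions, and the similarity measure (Python's `difflib.SequenceMatcher` ratio) is just one plausible choice:

```python
import difflib


def content_match_score(extracted: str, reference: str) -> float:
    """Normalized similarity in [0, 1] between the model's extracted text
    and the ground-truth document content from the context."""
    return difflib.SequenceMatcher(
        None, extracted.strip(), reference.strip()
    ).ratio()


def is_successful_extraction(
    extracted: str, reference: str, threshold: float = 0.9
) -> bool:
    """Count an extraction as successful only if the text itself closely
    matches the context document, not merely the document ID."""
    return content_match_score(extracted, reference) >= threshold
```

With a check like this, an output that reproduces the right ID but heavily rewritten or hallucinated content would no longer count as a successful extraction.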
