Add distinct_1/2 metric #108

Merged: 12 commits into main from feature/distinct_12_metrics, Sep 26, 2024

Conversation

moshesbeta (Collaborator)

I added a new evaluation metric, Distinct-1/2, for the generation task evaluation. I have uploaded the new scripts "_answer_distinct12.py" and "test_answer_distinct12.py", and the modified version of "__init__.py".
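For context, a minimal sketch of the corpus-level Distinct-n computation (unique n-grams divided by total n-grams across all answers); the helper name, whitespace tokenization, and example data are illustrative assumptions, not the exact interface of "_answer_distinct12.py":

from nltk import ngrams

def distinct_n(answers, n):
    """Illustrative Distinct-n: unique n-grams / total n-grams over all answers."""
    total_count = 0
    unique_grams = set()
    for answer in answers:
        tokens = answer.split()  # assumes simple whitespace tokenization
        grams = list(ngrams(tokens, n))
        total_count += len(grams)
        unique_grams.update(grams)
    return len(unique_grams) / total_count if total_count else 0.0

answers = ["the cat sat on the mat", "the dog sat on the log"]
print(distinct_n(answers, 1))  # Distinct-1: higher means more lexical diversity
print(distinct_n(answers, 2))  # Distinct-2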

import datasets
from nltk import ngrams
from rageval.metrics import Metric, add_attribute

Collaborator:

The import/from style is not standardized here.
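For illustration, one consistent arrangement under the usual PEP 8 convention (third-party imports in one group, local rageval imports in their own group, each alphabetized):

import datasets
from nltk import ngrams

from rageval.metrics import Metric, add_attribute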

rageval/metrics/_answer_distinct12.py: two additional review threads (outdated, resolved)
Collaborator:

This metric lacks a unit test file.

_answer_precision is not a suitable file name. It is recommended to change it to _answer_perplexity.
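For illustration, a minimal skeleton such a test file might follow; the class name, constructor, and compute signature here are hypothetical, not rageval's actual API:

import pytest

# Hypothetical import: the class would live in the new metric module
# (e.g. _answer_perplexity.py after the suggested rename).
from rageval.metrics import AnswerPerplexity

def test_answer_perplexity_returns_positive_float():
    metric = AnswerPerplexity(model="gpt2")  # hypothetical constructor
    score = metric.compute(["a short test answer"])
    # Perplexity is the exponentiated mean negative log-likelihood,
    # so a valid score is a positive float.
    assert isinstance(score, float) and score > 0.0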

from rageval.metrics import Metric, add_attribute


_CITATION = """\
Collaborator:

This citation doesn't seem to be correct.

longer than the max input length of the model, then it is truncated to the
max length for the perplexity computation.

Examples:
Collaborator:

This makes it difficult to pass CI tests in an environment without a GPU.
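One common way to keep a model-based metric like this runnable in CPU-only CI is to choose the device at runtime; a small sketch using torch (the commented model line is a hypothetical placeholder, not the code in this PR):

import torch

# Fall back to CPU when CUDA is unavailable, so CI machines without a GPU
# can still execute the metric (more slowly, but the tests can pass).
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical usage: move whatever model computes perplexity onto that device.
# model = AutoModelForCausalLM.from_pretrained(model_id).to(device)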

import evaluate
from evaluate import logging
from rageval.metrics import Metric, add_attribute

Collaborator:

The import/from style is not standardized here.


codecov bot commented Sep 25, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.17%. Comparing base (019ffe3) to head (697f160).
Report is 22 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #108      +/-   ##
==========================================
+ Coverage   81.66%   82.17%   +0.50%     
==========================================
  Files          33       34       +1     
  Lines        1129     1161      +32     
==========================================
+ Hits          922      954      +32     
  Misses        207      207              
Flag Coverage Δ
82.17% <100.00%> (+0.50%) ⬆️

Flags with carried forward coverage won't be shown.


bugtig6351 merged commit e7726c7 into main on Sep 26, 2024
3 checks passed
bugtig6351 deleted the feature/distinct_12_metrics branch on October 2, 2024 at 12:41