
Test eval metrics #112

Open
valedan opened this issue Mar 23, 2023 · 3 comments
Labels: evals (Model evaluations)

valedan commented Mar 23, 2023

We should ensure that the eval metrics are sensible, and that they are doing what we expect.

It probably makes sense to write unit tests for each of them.

Is more info needed to define acceptance criteria?
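
For reference, a minimal sketch of what such tests could look like with pytest, assuming a hypothetical `path_dist(a, b)` function in pathdist.py that returns a non-negative float; the real names and signatures would need to be checked against the module:

```python
# Hypothetical sketch: the actual metric names and signatures in
# pathdist.py should be confirmed before writing the real tests.
import pytest

from pathdist import path_dist  # hypothetical import; adjust to the real module


def test_identical_paths_have_zero_distance():
    # A distance metric should report zero between a path and itself.
    path = [(0, 0), (0, 1), (1, 1)]
    assert path_dist(path, path) == 0


def test_distance_is_non_negative():
    # Distances should never be negative.
    a = [(0, 0), (0, 1)]
    b = [(0, 0), (1, 0), (1, 1)]
    assert path_dist(a, b) >= 0


def test_distance_is_symmetric():
    # Argument order should not change the result if this is a true metric.
    a = [(0, 0), (0, 1)]
    b = [(0, 0), (1, 0)]
    assert path_dist(a, b) == pytest.approx(path_dist(b, a))
```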

valedan commented Mar 23, 2023

@rusheb Whoops, converting your draft to an issue made me the owner. 🤯

When you say "eval metrics" here, are you referring to the functions in pathdist.py?

valedan commented Mar 23, 2023

If that's the case, it should be fairly straightforward, but we should do it after #99 to avoid merge conflicts.

rusheb commented Mar 24, 2023

I think I initially raised this based on a comment somebody made in a meeting, so I'm not exactly sure what I was referring to. But it would make sense if it were the functions in pathdist.py.

valedan self-assigned this Apr 7, 2023
mivanit added the evals (Model evaluations) label Sep 4, 2023
mivanit added this to the "Fixing evals & training loop" milestone Dec 27, 2023