feat: add log exporting to e2e tests #308
Conversation
Force-pushed from 4742edf to 8f77076
Force-pushed from 8f77076 to 00e0231
Force-pushed from 00e0231 to 4d3e3a7
Force-pushed from 4d3e3a7 to 82d5711
lgtm!
Force-pushed from 1fd7c48 to 387828b
Force-pushed from 387828b to 039b743
@nathan-weinberg I've updated the CI scripts with your feedback. Please take another pass when you get a chance to make sure we didn't miss anything.
I'd like the version number commenting to be consistent with how it is everywhere else, but otherwise LGTM.
Can we squash the commits before merging? Great work on this @RobotSail, excited to see it in action!
Force-pushed from ab6151d to c809c73
@nathan-weinberg This has been squashed; I'll remove the hold since that was the only issue.
Currently, the training library runs through a series of end-to-end tests which ensure there are
no bugs in the code being tested. However, we do not perform any form of validation to ensure that
the training logic and quality have not diminished.
This presents an issue where we can potentially be "correct" in the sense that no hard errors are hit,
but invisible bugs may be introduced which cause models to regress in training quality, or allow other
bugs that plague the models themselves to seep in.
This commit fixes that problem by introducing the ability to export the training loss data itself
from the test and render the loss curve using matplotlib.
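For reference, a minimal sketch of what rendering an exported loss log with matplotlib can look like. The log file name, the `step` and `total_loss` field names, and the JSONL format are assumptions for illustration, not the exact schema this PR uses:

```python
import json
from pathlib import Path

import matplotlib

matplotlib.use("Agg")  # headless backend for CI runners
import matplotlib.pyplot as plt


def render_loss_curve(log_file: Path, output_file: Path) -> None:
    """Parse a JSONL training log and save the loss curve as a PNG."""
    steps, losses = [], []
    for line in log_file.read_text().splitlines():
        record = json.loads(line)
        # Skip records that carry no loss value (e.g. setup metadata).
        if "total_loss" in record:
            steps.append(record.get("step", len(steps)))
            losses.append(record["total_loss"])

    fig, ax = plt.subplots()
    ax.plot(steps, losses)
    ax.set_xlabel("step")
    ax.set_ylabel("loss")
    ax.set_title("e2e training loss")
    fig.savefig(output_file)


if __name__ == "__main__":
    # Hypothetical file names; the real test wires these up itself.
    render_loss_curve(Path("training_log.jsonl"), Path("loss_curve.png"))
```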
When the results are output, they can be found under the "Summary" tab of a GitHub Actions run.
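As a sketch of how results can land in that tab: GitHub Actions renders any Markdown appended to the file pointed at by the documented `GITHUB_STEP_SUMMARY` environment variable. The metric text below is illustrative only, not actual test output:

```python
import os


def append_to_summary(markdown: str) -> None:
    """Append Markdown to the run's Summary tab, if running in CI."""
    summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
    if summary_path is None:
        print(markdown)  # fall back to stdout outside of GitHub Actions
        return
    with open(summary_path, "a", encoding="utf-8") as f:
        f.write(markdown + "\n")


# Illustrative placeholder value, not a real result.
append_to_summary("## e2e training loss\n\nFinal loss: 0.42")
```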
Resolves #179
Signed-off-by: Oleg S <97077423+RobotSail@users.noreply.github.com>