feat(cli): model eval command support use-docker flag #1610
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             main    #1610      +/-   ##
============================================
+ Coverage   82.66%   89.69%    +7.03%
============================================
  Files         341       78     -263
  Lines       17678     9267    -8411
  Branches      931        0     -931
============================================
- Hits        14613     8312    -6301
+ Misses       2672      955    -1717
+ Partials      393        0     -393
```
Flags with carried forward coverage won't be shown.
LGTM
Description
The `model eval` command now supports the `--use-docker` flag.
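A sketch of how the new flag might be invoked, based only on the PR title; the working-directory argument and any other options shown are illustrative assumptions, not taken from the PR:

```console
$ # Hypothetical: evaluate the model in the current working directory,
$ # running the evaluation inside a Docker container built from the
$ # runtime image instead of directly on the host environment.
$ swcli model eval . --use-docker
```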
Tested with 4 commands (starwhale==0.3.3.dev1212202632):
Dockerfile for runtime1; the working directory is the pytorch runtime code:
Modules
Checklist