Allow for external serving to be used with mmlu #99
Conversation
Note: Tests will fail until lm_eval 0.4.4 is available
@JamesKunstle FYI
Cool patch btw! Nice way to get around the current limitations.
The CI failures are because of requiring lm_eval 0.4.4. They are promising that a new release should come out soon. We're in a holding pattern until then.
@Mergifyio rebase
Requires lm_eval >= 0.4.4
Signed-off-by: Dan McPherson <dmcphers@redhat.com>
✅ Branch has been successfully rebased
https://github.com/EleutherAI/lm-evaluation-harness/releases/tag/v0.4.4 has been released
…2065)
Like with mt_bench, the new logic supports vllm and llama-cpp and passes the base URL of the started instance to the mmlu library. This enables:
- Consistency with how we are serving, from a support perspective
- Support for larger models, with sharding across GPUs
- Multi-GPU support
- Potential for faster performance with higher serving throughput

Related: instructlab/eval#50 instructlab/eval#68
Corresponding Eval change: instructlab/eval#99
Resolves: #1792

**Checklist:**
- [ ] **Commit Message Formatting**: Commit titles and messages follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#summary) guidelines.
- [ ] [Changelog](https://github.com/instructlab/instructlab/blob/main/CHANGELOG.md) updated with breaking and/or notable changes for the next minor release.
- [ ] Documentation has been updated, if necessary.
- [x] Unit tests have been added, if necessary.
- [ ] Integration tests have been added, if necessary.

Approved-by: nathan-weinberg
Approved-by: alimaredia
Approved-by: leseb
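As a rough illustration of that flow only: the sketch below assumes the CLI has already started a vllm or llama-cpp server and shows how its base URL might be handed to the evaluator. The `MMLUEvaluator` constructor and `run()` signature shown are assumptions based on this PR's description, not the verified instructlab API.

```python
# Hypothetical sketch of the CLI-side flow; the MMLUEvaluator arguments and
# run() signature here are assumptions based on this PR, not verified API.
from instructlab.eval.mmlu import MMLUEvaluator


def evaluate_against_running_server(model_path: str, base_url: str):
    # base_url points at a vllm or llama-cpp instance the CLI already started,
    # e.g. "http://127.0.0.1:8000/v1".
    evaluator = MMLUEvaluator(model_path=model_path)
    # Pass the base URL of the started instance down to the mmlu library,
    # mirroring what mt_bench already does.
    overall_score, individual_scores = evaluator.run(server_url=base_url)
    return overall_score, individual_scores
```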
Requires lm_eval >= 0.4.4
As implemented, if server_url is passed, it is used for external serving; otherwise the old logic still applies (see the sketch below). Eventually we may want to remove the non-external serving option.
Resolves: #50
Resolves: #68
Corresponding cli change: instructlab/instructlab#2065
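For illustration, here is a minimal sketch of that branching, assuming lm_eval (lm-evaluation-harness) 0.4.x underneath. `run_mmlu`, its parameters, and the exact `model_args` strings are hypothetical stand-ins rather than the real instructlab/eval implementation, though `lm_eval.simple_evaluate` and the `hf` / `local-completions` model types are real lm_eval entry points.

```python
# Minimal sketch only: run_mmlu and the model_args strings here are
# hypothetical and may not match the actual instructlab/eval code.
import lm_eval


def run_mmlu(model_path, tasks, few_shots=5, server_url=None):
    if server_url:
        # External serving: point lm_eval's OpenAI-compatible client at an
        # already-running vllm or llama-cpp instance.
        model = "local-completions"
        model_args = f"base_url={server_url}/completions,model={model_path}"
    else:
        # Old behavior: load the model in-process.
        model = "hf"
        model_args = f"pretrained={model_path}"

    return lm_eval.simple_evaluate(
        model=model,
        model_args=model_args,
        tasks=tasks,
        num_fewshot=few_shots,
    )
```

Keeping the in-process `hf` path as the default means callers that never pass `server_url` see no change in behavior.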