* add token count, flexible batch size and kwargs to vllm class (see the first sketch after this list)
* add testing script for implementation
* fix batch size calculation
* small changes
* add revision test
* add argument to parser
* cast max model len to int
* remove script
* change version and release notes
* changed callback behaviour and implemented token count callback (see the token count callback sketch after this list)
* added super inits
* allow for splits not based on whitespace, such as newline breaks (see the split sketch after this list)
* include task descriptions
* add tokenizer-based token count to vllm class (also covered in the first sketch below)
* update test run script
* use classifiers accordingly
* small fix
* add storage path
* helpers should use the classifier
* use different model
* changes in opro test
* change get_predictor function
* fix callback calling
* change optimizer test run script
* small alignments
* small alignments
* small alignments
* some changes to match the current optimizer implementation
* changes in template and config
* allow for batching of prompt creation
* update release notes and version
* extend csvcallback functionality (see the CSV callback sketch after this list)
* change callback csv export
* change step time calculation
* small changes
* remove llm_test_run script
* update release notes
* fix issues in stepwise token calculation
* small fix
---------
Co-authored-by: finitearth <t.zehle@gmail.com>
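
The vllm-related commits (token count, flexible batch size, kwargs pass-through, tokenizer-based counting, casting max model len to int) suggest a wrapper along these lines. This is a minimal sketch against vLLM's public `LLM`/`SamplingParams` API; the class name, attributes, and batching scheme are assumptions, not the PR's actual code.

```python
from typing import Optional

from vllm import LLM, SamplingParams


class VLLMWrapper:
    """Hypothetical wrapper: counts tokens via the model's own tokenizer
    and generates in configurable batches, forwarding extra kwargs to LLM."""

    def __init__(self, model_name: str, max_model_len: int = 2048,
                 batch_size: Optional[int] = None, **llm_kwargs):
        # Cast defensively, mirroring the "cast max model len to int" commit.
        self.llm = LLM(model=model_name, max_model_len=int(max_model_len), **llm_kwargs)
        self.tokenizer = self.llm.get_tokenizer()
        self.batch_size = batch_size
        self.input_token_count = 0
        self.output_token_count = 0

    def generate(self, prompts: list[str], **sampling_kwargs) -> list[str]:
        params = SamplingParams(**sampling_kwargs)
        # With no batch size set, process all prompts in a single call.
        step = self.batch_size or len(prompts)
        texts = []
        for i in range(0, len(prompts), step):
            for out in self.llm.generate(prompts[i:i + step], params):
                # Count tokens on both sides of each request.
                self.input_token_count += len(out.prompt_token_ids)
                self.output_token_count += len(out.outputs[0].token_ids)
                texts.append(out.outputs[0].text)
        return texts
```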
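For "allow for splits not based on whitespace", one plausible reading is a configurable delimiter when parsing task descriptions. A sketch under that assumption; the function name and signature are invented for illustration:

```python
def split_description(text: str, delimiter: str = "\n") -> list[str]:
    """Split on an arbitrary delimiter (e.g. a newline) rather than
    assuming whitespace; empty fragments are dropped."""
    return [part.strip() for part in text.split(delimiter) if part.strip()]
```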
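The token count callback presumably reads the wrapper's counters after each optimizer step and stops the run once a budget is hit. A sketch under those assumptions; the `on_step_end` hook and the stop-by-returning-False convention are guesses, not the library's confirmed API:

```python
class TokenCountCallback:
    """Ends the optimization once a token budget is exhausted; expects an
    LLM object exposing input_token_count / output_token_count counters."""

    def __init__(self, llm, max_tokens: int):
        self.llm = llm
        self.max_tokens = max_tokens

    def on_step_end(self, optimizer) -> bool:
        total = self.llm.input_token_count + self.llm.output_token_count
        # Returning False is assumed to signal the optimizer to stop.
        return total < self.max_tokens
```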
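The csvcallback and step-time commits point at per-step CSV logging where step time is measured between consecutive steps rather than from the start of the run. A standalone sketch with illustrative column names:

```python
import csv
import time


class CSVCallback:
    """Appends one row per optimization step to a CSV file."""

    def __init__(self, path: str):
        self.path = path
        self.step = 0
        self.last_time = time.time()
        with open(self.path, "w", newline="") as f:
            csv.writer(f).writerow(["step", "step_seconds", "best_score"])

    def on_step_end(self, optimizer) -> bool:
        # Measure from the end of the previous step, not from run start
        # (cf. "change step time calculation").
        now = time.time()
        step_seconds, self.last_time = now - self.last_time, now
        self.step += 1
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow(
                [self.step, round(step_seconds, 2),
                 getattr(optimizer, "best_score", None)]
            )
        return True
```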