@mo374z mo374z commented Mar 9, 2025

  • New features for the vLLM wrapper (automatic batch-size determination, accepting kwargs)
  • Allow callbacks to terminate an optimization run
  • Add token-count functionality
  • Renamed the "Classificator" predictor to "FirstOccurenceClassificator"
  • Introduced the "MarkerBasedClassifcator"
  • Automatic task-description creation
  • Use the task description in prompt creation
  • Implement CSV callbacks
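The second bullet (callbacks that can terminate a run) can be read as the optimizer polling its callbacks after every step and stopping as soon as one returns a falsy value. A minimal sketch of that pattern — all names here (`Callback`, `on_step_end`, `total_tokens`) are illustrative assumptions, not the actual promptolution API:

```python
class Callback:
    """Hypothetical callback base: returning False from a hook stops the run."""

    def on_step_end(self, optimizer) -> bool:
        return True  # keep optimizing by default


class TokenBudgetCallback(Callback):
    """Stop once the LLM has consumed a token budget (illustrative only)."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens

    def on_step_end(self, optimizer) -> bool:
        # Assumes the optimizer tracks a total_tokens counter.
        return optimizer.total_tokens < self.max_tokens


def run_optimization(optimizer, callbacks, max_steps=100):
    """Run steps until a callback vetoes continuation or max_steps is hit."""
    for _ in range(max_steps):
        optimizer.step()
        if not all(cb.on_step_end(optimizer) for cb in callbacks):
            break  # a callback requested termination
```

The same hook shape would also cover the CSV callback from the last bullet: it could append a row per step and always return True.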

timo282 and others added 30 commits October 3, 2024 23:02
* chore: add codeowners file

* chore: add python poetry action and docs workflow

* chore: update pre-commit file

* chore: update docs

* chore: update logo

* chore: add cicd pipeline for automated deployment

* chore: update poetry version

* chore: fix action versioning

* chore: add gitattributes to ignore line count in jupyter notebooks

* chore: add and update docstrings

* chore: fix end of files

* chore: update action versions

* Update README.md

---------

Co-authored-by: mo374z <schlager.mo@t-online.de>
* chore: fix workflow execution

* chore: fix version check in CICD pipeline
* update gitignore

* initial implementation of opro

* formatting of prompt template

* added opro test run

* opro refinements

* fixed sampling error

* add docs to opro

* fix pre-commit issues

* fix pre-commit issues

* fixed end of line
* fixed pre-commit config and removed end-of-file line breaks in templates

* added /
* added prompt_creation.py

* change version
* Remove deepinfra file

* change langchain-community version
* renamed get_tasks to get_task and change functionality accordingly. moved templates and data_sets

* init

* move templates to templates.py

* Add nested asyncio to make it usable in notebooks

* Update README.md

* changed getting_started.ipynb and created helper functions

* added sampling of initial population

* fixed config

* fixed callbacks

* adjust runs

* fix run evaluation api token

* fix naming convention in opro, remove on epoch end for logger callback, allow numeric values in class names

* Update promptolution/llms/api_llm.py

Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>

* fixed comments

* Update pyproject.toml

* resolve comments

---------

Co-authored-by: mo374z <schlager.mo@t-online.de>
Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com>
* implemented random selector

* added random search selector

* increased version count

* fix typos

* Update promptolution/predictors/base_predictor.py

Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>

* Update promptolution/tasks/classification_tasks.py

Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>

* resolve comments

* resolve comments

---------

Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update release-notes.md

* Fix release note links
* Delete Experiment files

* Removed config necessities

* improved opro meta-prompts

* added read from data frame feature

* changed required python version to 3.9
* delete poetry.lock and upgrade transformers dependency

* Update release-notes.md
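Many of the commits above concern the OPRO optimizer, where an LLM proposes new prompts from a meta-prompt listing previously scored prompts. A hedged sketch of one such step — `propose_fn` and `evaluate_fn` stand in for the LLM call and the task evaluation, and the meta-prompt wording is invented for illustration, not taken from the repository's templates:

```python
def opro_step(scored_prompts, propose_fn, evaluate_fn, top_k=5):
    """One OPRO-style step: show the best prompts so far to a proposer,
    then score the new candidate and append it to the history."""
    # Keep only the top_k highest-scoring prompts for the meta-prompt.
    best = sorted(scored_prompts, key=lambda p: p[1], reverse=True)[:top_k]
    meta_prompt = (
        "Here are prompts with their scores:\n"
        + "\n".join(f"{score:.2f}: {prompt}" for prompt, score in best)
        + "\nPropose a better prompt."
    )
    candidate = propose_fn(meta_prompt)
    return scored_prompts + [(candidate, evaluate_fn(candidate))]
```

Repeating this step and stopping via a callback (or a fixed step budget) yields the overall optimization loop.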
mo374z and others added 14 commits March 5, 2025 19:11
* add token count, flexible batch size and kwargs to vllm class

* add testing script for implementation

* fix batch size calculation

* small changes

* add revision test

* add argument to parser

* max model len to int

* remove script

* Change version and Release notes

* changed callback behaviour and implemented token count callback

* added super inits

* allow for splits not based on whitespace (such as newline breaks)

* include task descriptions

* add tokenizer based token count to vllm class

* update test run script

* use classifiers accordingly

* small fix

* add storage path

* helpers should use classificator

* use different model

* changes in opro test

* change get_predictor function

* fix callback calling

* change optimizer test run script

* small alignments

* small alignments

* small alignments

* some changes to match the current optimizer implementation

* changes in template and config

* allow for batching of prompt creation

* update release notes and version

* extend csvcallback functionality

* change callback csv export

* change step time calculation

* small changes

* remove llm_test_run script

* update release notes

* fix issues in stepwise token calculation

* small fix

---------

Co-authored-by: finitearth <t.zehle@gmail.com>
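The automatic batch-size determination added to the vLLM wrapper (first commits in the list above) presumably estimates how many sequences fit in GPU memory at once. A hedged back-of-the-envelope version, assuming the dominant per-sequence cost is the KV cache (`max_model_len` tokens times bytes per token) — all names and constants here are illustrative, not the wrapper's actual formula:

```python
def auto_batch_size(
    free_mem_bytes: int,
    max_model_len: int,
    kv_bytes_per_token: int,
    safety_factor: float = 0.8,
    max_batch_size: int = 256,
) -> int:
    """Estimate how many sequences fit in free GPU memory.

    Each sequence can occupy up to max_model_len tokens of KV cache,
    so its worst-case footprint is max_model_len * kv_bytes_per_token.
    The safety factor leaves headroom for activations and fragmentation.
    """
    per_seq = max_model_len * kv_bytes_per_token
    usable = int(free_mem_bytes * safety_factor)
    return max(1, min(max_batch_size, usable // per_seq))
```

For example, with 8 GiB free, a 2048-token context, and 128 KiB of KV cache per token, this estimate lands in the mid-twenties; the clamp to at least 1 keeps the wrapper functional even when memory is tight.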
@mo374z mo374z requested review from finitearth and timo282 and removed request for finitearth March 9, 2025 17:06
@mo374z mo374z requested review from finitearth and timo282 March 9, 2025 19:10
@mo374z mo374z merged commit 8ecc6a8 into main Mar 9, 2025
4 checks passed