Fix/opro #38
Merged
Conversation
#### Added features

* new features for the VLLM Wrapper (accept seeding to ensure reproducibility; see the sketch after this list)
* fixes in the "MarkerBasedClassificator"
* fixes in prompt creation and task description handling
* generalize the Classificator
* add verbosity and callback handling in EvoPromptGA
* add timestamp to the callback
* removed datasets from repo
* changed task creation (now by default with a dataset)
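The seeding and callback changes above are the ones most likely to affect downstream users, so here is a minimal sketch of the idea. All names are hypothetical stand-ins chosen for illustration, not promptolution's actual API: a wrapper that takes a seed so repeated runs reproduce the same generations, and a callback that stamps every record with a timestamp.

```python
# Illustrative sketch only -- class and method names are invented for this example.
import random
from datetime import datetime, timezone


class SeededLLMWrapper:
    """Toy stand-in for a vLLM-style wrapper that accepts a seed,
    so that repeated optimization runs draw the same samples."""

    def __init__(self, seed: int = 42):
        self.rng = random.Random(seed)  # same seed -> same sequence of draws

    def generate(self, prompt: str) -> str:
        # A real backend would forward the seed to its sampling parameters instead.
        return f"{prompt} [draw={self.rng.random():.4f}]"


class TimestampedCallback:
    """Attaches a wall-clock timestamp to every record it logs."""

    def __init__(self):
        self.records = []

    def on_step_end(self, step: int, best_prompt: str, score: float) -> None:
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "best_prompt": best_prompt,
            "score": score,
        })


llm = SeededLLMWrapper(seed=7)
callback = TimestampedCallback()
callback.on_step_end(step=1, best_prompt=llm.generate("Classify the sentiment:"), score=0.81)
print(callback.records[0])
```

In a real backend the seed would be forwarded to the engine's sampling parameters rather than to Python's `random` module; the point is only that a fixed seed makes runs repeatable and that each callback record carries its own timestamp.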
finitearth commented on Mar 18, 2025
finitearth commented on Mar 19, 2025
finitearth (Owner, Author) left a comment:
lgtm
timo282 approved these changes on Mar 20, 2025
timo282 (Collaborator) reviewed on Mar 20, 2025:
Add release notes and version increase
Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
finitearth added a commit that referenced this pull request on Apr 22, 2025. Its commit message lists:
* Feature/workflows (#8)
* chore: add codeowners file
* chore: add python poetry action and docs workflow
* chore: update pre-commit file
* chore: update docs
* chore: update logo
* chore: add cicd pipeline for automated deployment
* chore: update poetry version
* chore: fix action versioning
* chore: add gitattributes to ignore line count in jupyter notebooks
* chore: add and update docstrings
* chore: fix end of files
* chore: update action versions
* Update README.md --------- Co-authored-by: mo374z <schlager.mo@t-online.de>
* Fix/workflows (#11)
* chore: fix workflow execution
* chore: fix version check in CICD pipeline
* Opro implementation (#7)
* update gitignore
* initial implementation of opro
* formatting of prompt template
* added opro test run
* opro refinements
* fixed sampling error
* add docs to opro
* fix pre commit issues#
* fix pre commit issues#
* fixed end of line
* Patch/pre commit config (#10)
* fixed pre commit config and removed end of file line breaks in tempaltes
* added /
* Feature/prompt generation (#12)
* added prompt_creation.py
* change version
* Create LICENSE (#14)
* Refactor/remove deepinfra (#16)
* Remove deepinfra file
* change langchain-community version
* Usability patches (#15)
* renamed get_tasks to get_task and change functionality accordingly. moved templates and data_sets
* init
* move templates to templates.py
* Add nested asyncio to make it useable in notebooks
* Update README.md
* changed getting_started.ipynb and created helper functions
* added sampling of initial population
* fixed config
* fixed callbacks
* adjust runs
* fix run evaluation api token
* fix naming convention in opro, remove on epoch end for logger callback, fixed to allow for numeric values in class names
* Update promptolution/llms/api_llm.py Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* fixed comments
* Update pyproject.toml
* resolve comments --------- Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com> Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com>
* Feature/examplar selection (#17)
* implemented random selector
* added random search selector
* increased version count
* fix typos
* Update promptolution/predictors/base_predictor.py Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update promptolution/tasks/classification_tasks.py Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* resolve comments
* resolve comments --------- Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Chore/docs release notes (#18)
* Update release-notes.md
* Fix release note links
* revert Chore/docs release notes (#18)" This reverts commit e23dd74.
* revert last commit
* updated release notes and read me
* Feature/read from df (#21)
* Delete Experiment files
* Removed config necessities
* improved opro meta-prompts
* added read from data frame feature
* changed required python version to 3.9
* Update pyproject.toml
* Update release-notes.md
* merge
* merge
* resolve merge mistakes
* delete duplicated lines
* Update release-notes.md (#24)
* Fix/dependencies (#28)
* delete poetry.lock and upgrade transformers dependency
* Update release-notes.md
* Add vllm as feature and a llm_test_run_script
* small fixes in vllm class
* differentiate between vllm and api inference
* set up experiment over multiple tasks and prompts
* change csv saving
* add base llm super class
* add changes from PR review
* change some VLLM params
* fix tensor parallel size to 1
* experiment with batch size
* experiment with larger batch sizes
* add continuous batch llm
* remove arg
* remove continuous batch inference try
* add batching to vllm
* add batching in script
* Add release notes and increase version number
* remove llm_test_run.py script
* change system prompt
* Fix/vllm (#33)
* add token count, flexible batch size and kwargs to vllm class
* add testing script for implementation
* fix batch size calculation
* small changes
* add revision test
* add argument to parser
* max model len to int
* remove script
* Change version and Release notes
* changed callback behaviour and impelemented token count callback
* added super inits
* allow for splits not based on white space (such as new line break etc)
* include task descriptions
* add tokenizer based token count to vllm class
* update test run script
* use classifiers accordingly
* small fix
* add storage path
* helpers should use classificator
* use different model
* changes in opro test
* change get_predictor function
* fix callback calling
* change optimizer test run script
* small alignments
* small alignments
* small alignments
* some changes to match the current optimizer implementation
* changes in template and config
* allow for batching of prompt creation
* update release notes and version
* extend csvcallback functionality
* change callback csv export
* change step time calculation
* small changes
* remove llm_test_run script
* update release notes
* fix issues in token stepswise calculation
* small fix --------- Co-authored-by: finitearth <t.zehle@gmail.com>
* implement changes from review
* add typing to token count callback
* Feature/deterministic (#35)
* make vllm class deterministic
* fixes in prompt creation
* Fix/prompt creation (#36)
* fixes in the "MarkerBasedClassificator"
* generalize the Classificator
* add verbosity and callback handling in EvoPromptGA
* add timestamp to the callback
* add arguements to test script
* added some feature notes
* Fix/template (#39)
* v1.3.1 (#37) #### Added features
* new features for the VLLM Wrapper (accept seeding to ensure reproducibility)
* fixes in the "MarkerBasedClassificator"
* fixes in prompt creation and task description handling
* generalize the Classificator
* add verbosity and callback handling in EvoPromptGA
* add timestamp to the callback
* removed datasets from repo
* changed task creation (now by default with a dataset)
* add generation prompt to vllm input
* allow for parquet as fileoutput callback
* added sys_prompts
* change usage of csv callbacks
* add system prompt to token counts
* fix merge issues
* drag system prompts from api to task
* added release notes --------- Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com>
* Fix/opro (#38)
* v1.3.1 (#37) #### Added features
* new features for the VLLM Wrapper (accept seeding to ensure reproducibility)
* fixes in the "MarkerBasedClassificator"
* fixes in prompt creation and task description handling
* generalize the Classificator
* add verbosity and callback handling in EvoPromptGA
* add timestamp to the callback
* removed datasets from repo
* changed task creation (now by default with a dataset)
* opro reimplementation according to the paper
* fine opro implementation
* opro test scripts alignment
* implement opro review
* small fix in score handling
* adjust hyperparameters
* add early stopping at convergence to opro (see the sketch after this list)
* Update promptolution/optimizers/opro.py Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com> --------- Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com> Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Refactor/generic api llm (#41)
* v1.3.2 (#40) #### Added features
* Allow for configuration and evaluation of system prompts in all LLM-Classes
* CSV Callback is now FileOutputCallback and able to write Parquet files
* Fixed LLM-Call templates in VLLM
* refined OPRO-implementation to be closer to the paper
* implement api calls
* removed default for system messages
* roll back renaming --------- Co-authored-by: mo374z <schlager.mo@t-online.de>
* Refactor/interfaces (#42)
* add token count, flexible batch size and kwargs to vllm class
* add testing script for implementation
* fix batch size calculation
* small changes
* add revision test
* add argument to parser
* max model len to int
* remove script
* Change version and Release notes
* changed callback behaviour and impelemented token count callback
* init
* small corrections
* changes to prevent merge conflicts
* small changes
* first tests
* add api_test
* a lot has changed for gangsters
* fix
* added
* removed
* fix
* Update promptolution/predictors/__init__.py Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* add testing to ci pipeline
* Update action.yml
* fix test dependencies in pipeline
* Add further test dependencies
* Refactor dependency groups
* Add getting started notebook to documentation
* Add workflow call trigger to docs workflow
* Add CI and Docs status badges
* Add temporary file for docs testing
* Remove temporary file for docs testing
* Update notebooks/getting_started.ipynb Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update notebooks/getting_started.ipynb Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* comments
* end2end
* added
* new
* more
* change
* update dependencies
* added literals to get optimizer
* added release notes --------- Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Timo Heiß <ti-heiss@t-online.de> Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Remove obsolete Docstring
* remove redundant init
* remove redundant init
* resolve merge chaos
* formatting
* remove obsolete reproduce experiments part in readme
* Update README.md --------- Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com> Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com> Co-authored-by: Timo Heiß <ti-heiss@t-online.de> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
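"add early stopping at convergence to opro" is the most substantive item in this log for the PR at hand. A rough sketch of the idea under generic assumptions (`optimize_prompt`, `score_fn`, and `propose_fn` are placeholders invented here, not promptolution's OPRO implementation): stop once the best score has failed to improve for a fixed number of consecutive steps.

```python
# Hedged sketch of convergence-based early stopping for an OPRO-style loop.
from typing import Callable, Tuple


def optimize_prompt(
    score_fn: Callable[[str], float],
    propose_fn: Callable[[str, float], str],
    initial_prompt: str,
    max_steps: int = 50,
    patience: int = 3,
    tol: float = 1e-4,
) -> Tuple[str, float]:
    best_prompt, best_score = initial_prompt, score_fn(initial_prompt)
    stale_steps = 0  # consecutive steps without a meaningful improvement
    for _ in range(max_steps):
        candidate = propose_fn(best_prompt, best_score)
        score = score_fn(candidate)
        if score > best_score + tol:
            best_prompt, best_score = candidate, score
            stale_steps = 0
        else:
            stale_steps += 1
        if stale_steps >= patience:  # converged: stop early
            break
    return best_prompt, best_score


# Toy usage: the score favors longer prompts, the proposer appends a hint.
print(optimize_prompt(
    score_fn=lambda p: min(len(p) / 100, 1.0),
    propose_fn=lambda p, s: p + " Think step by step.",
    initial_prompt="Classify the sentiment of the input.",
))
```

The tolerance guards against counting float-level noise as an improvement; the patience threshold is what "convergence" means in this sketch.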
finitearth added a commit that referenced this pull request on May 18, 2025. Its commit message repeats the history above (through "resolve merge chaos" and "formatting") and additionally lists:
* alternated task to work for capo
* bug fixes
* formatting
* update task interface and preds and eval cache (see the sketch after this list)
* fix vllm and capo
* fix nans when evaluate strategy
* fix tests
* finalized testing
* fix vllm tests
* fix vllm mocking
* fix capo docstring
* update getting_started notebook to include capo
* formatting
* formatting
* formatting
* formatting and docstring fix
* implemented comments
* implemented comments
* fix tests
* fix tests
* fix tests
* comments
* fix typing of test statistics --------- Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com> Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com> Co-authored-by: Timo Heiß <ti-heiss@t-online.de> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
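The "update task interface and preds and eval cache" entry refers to caching evaluations. As a purely illustrative sketch (`CachedEvaluator` and its methods are invented for this example, not the library's task interface), memoizing scores per prompt spares the optimizer from re-scoring prompts it has already seen:

```python
# Hypothetical sketch of a prompt-score evaluation cache; names are illustrative.
from typing import Callable, Dict


class CachedEvaluator:
    """Wraps an expensive scoring function and memoizes results per prompt."""

    def __init__(self, score_fn: Callable[[str], float]):
        self.score_fn = score_fn
        self.cache: Dict[str, float] = {}

    def evaluate(self, prompt: str) -> float:
        if prompt not in self.cache:  # only pay for unseen prompts
            self.cache[prompt] = self.score_fn(prompt)
        return self.cache[prompt]


calls = 0


def expensive_score(prompt: str) -> float:
    global calls
    calls += 1  # stands in for a full pass over the evaluation set
    return len(prompt) / 100


evaluator = CachedEvaluator(expensive_score)
evaluator.evaluate("Classify the sentiment.")
evaluator.evaluate("Classify the sentiment.")  # served from the cache
print(calls)  # -> 1
```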
mo374z added a commit that referenced this pull request on May 19, 2025. Its commit message again repeats the shared history above (through the repeated "fix tests" entries) and additionally lists:
* fixed docstrings and naming
* clear inits
* remove dummy classes
* standardize logging
* fix imports
* logging
* fix imports once more
* remove n_eval_samples argument from tests
* resolve merge issues
* merge from main
* comments
* Update coverage badge in README [skip ci]
* added realese notes
* fix import bug
* release notes
* Update readme and fix imports
* fix release notes
* changed v2.0.0.md
* formatting
* Update coverage badge in README [skip ci]
* fix imports once more
* apply pre-commit everywhere
* apply pre-commit everywwhere
* fix imports and typings
* fix helpers
* fix demo scripts
* formatting
* Update coverage badge in README [skip ci]
* fix tests
* Update coverage badge in README [skip ci]
* remove convergence speed from read me
* rename Opro to OPRO
* changed imports in isort
* Update API reference in docs
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* Update README.md Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com>
* fixed task description in notebook
* Update v2.0.0.md --------- Co-authored-by: Timo Heiß <87521684+timo282@users.noreply.github.com> Co-authored-by: mo374z <schlager.mo@t-online.de> Co-authored-by: Moritz Schlager <87517800+mo374z@users.noreply.github.com> Co-authored-by: Timo Heiß <ti-heiss@t-online.de> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: finitearth <finitearth@users.noreply.github.com>