Implementing Fine-Tuning and Prompt-Tuning for TabPFN #273

Conversation
@tleemann Kudos for making the effort! Please note that some work on fine-tuning is already being done here: https://github.com/LennartPurucker/finetune_tabpfn_v2. I'm not sure whether your approach differs, but this change looks quite large. In the referenced repo, the goal is not to interfere with the main code base for the purposes of fine-tuning.
@iivalchev Thanks for pointing to your codebase! Don't worry, I was in touch with the maintainers before creating this pull request.
Right! I am quite keen on the strategies for training on multiple datasets. Has anything been done in that regard?
Yes, that's a challenging task. The idea here was to preprocess the datasets offline for additional speed through parallelism (preprocessing uses numpy). The classifier now has a […]. Please have a look at the example in […].

Best,
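For readers following along, here is a minimal sketch of what an offline-preprocessing-plus-multi-dataset fine-tuning loop of this kind could look like. It is not the PR's actual API: `preprocess_dataset` and the small stand-in model are hypothetical, and the real code would operate on TabPFN's transformer instead.

```python
# Hypothetical sketch: numpy preprocessing is done offline (and could be
# parallelized, e.g. with multiprocessing/joblib), then a single training
# loop iterates over the cached tensors of several datasets.
import numpy as np
import torch
import torch.nn as nn

def preprocess_dataset(X: np.ndarray) -> np.ndarray:
    """Offline numpy preprocessing: per-feature standardization."""
    mean = X.mean(axis=0, keepdims=True)
    std = X.std(axis=0, keepdims=True) + 1e-8
    return (X - mean) / std

# Stand-in for several datasets with the same feature count.
rng = np.random.default_rng(0)
datasets = [
    (rng.normal(size=(128, 10)), rng.integers(0, 2, size=128))
    for _ in range(3)
]

# Offline step: preprocess once, cache as tensors.
cached = [
    (torch.as_tensor(preprocess_dataset(X), dtype=torch.float32),
     torch.as_tensor(y, dtype=torch.long))
    for X, y in datasets
]

# Stand-in for the TabPFN model being fine-tuned.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Fine-tuning loop over multiple preprocessed datasets.
for epoch in range(5):
    for X, y in cached:
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
```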
Cool! Will do so. Do you think test-time training could also be accommodated? And I assume fine-tuning for the regressor would also be supported?
Creating a pull request for fine-tuning and prompt tuning.
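As a rough illustration of the prompt-tuning idea (not the PR's implementation): the pretrained backbone stays frozen and only a small learnable prompt tensor, prepended to the input sequence, is optimized. The generic transformer below is a stand-in for TabPFN, and all names are illustrative.

```python
# Hedged sketch of prompt tuning: freeze the backbone, train only the prompt.
import torch
import torch.nn as nn

d_model, n_prompt = 32, 4
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, 2)

# Freeze all pretrained weights; only the prompt remains trainable.
for p in list(backbone.parameters()) + list(head.parameters()):
    p.requires_grad = False
prompt = nn.Parameter(torch.randn(1, n_prompt, d_model) * 0.02)

opt = torch.optim.Adam([prompt], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16, d_model)   # dummy batch of already-embedded rows
y = torch.randint(0, 2, (8,))     # dummy labels

for step in range(10):
    opt.zero_grad()
    # Prepend the shared prompt to every sequence in the batch.
    inp = torch.cat([prompt.expand(x.size(0), -1, -1), x], dim=1)
    out = backbone(inp)           # (batch, n_prompt + seq, d_model)
    logits = head(out[:, 0])      # read the prediction off the first token
    loss = loss_fn(logits, y)
    loss.backward()               # gradients flow only into `prompt`
    opt.step()
```

Reading the prediction off the first prompt token is just one possible design choice here; the point of the sketch is that gradient updates touch only the prompt parameters.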
What has not been done yet, but might be nice to have:
- Create tests
- Check typing with mypy