[WIP] Integration with DeepLabCut 3.0 - PyTorch Engine #121
Conversation
(while you're touching the benchmarking code, it might be worth it to just write it as actual pytest tests, which would give us better error output and a much better base to build on than the custom benchmarking/testing module we have now. just noticing you're extracting functions (which is great!!!! exactly what needs to be done) and it would be great to have proper test fixtures to be able to e.g. apply the same tests to a range of videos, different models, clean up files in temporary directories, etc.)
Thanks for the input @sneakers-the-rat!
Using pytest would be much better, I agree
totally agreed. no point in throwing out what we already have! pytest can help structure some of the setup, so e.g. we have a reliable way for different tests to request the same model, only download it once, etc. the end-to-end tests can be added in then as the base case where we request some model, some video, and maybe compare to some expected results - and then the tensorflow and torch versions can be two parameters to the same testing function :) edit: not to say "do everything in this one PR," that would make sense as a follow-on thing, just commenting here because i've been enjoying seeing this work happen <3
Agree! Let's do this @maximpavliv; it's what we have in cebra and it's extremely useful
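As a minimal sketch of what such a parametrized end-to-end test could look like (the fixture, the video fetching step, and the `run_inference` helper are hypothetical placeholders, not the actual benchmarking code in this repo):

```python
# Hypothetical sketch of a parametrized end-to-end test; the fixture contents
# and the run_inference helper are placeholders, not the real API.
import numpy as np
import pytest


@pytest.fixture(scope="session")
def test_video(tmp_path_factory):
    # Download (or generate) a short test video once per session.
    video_path = tmp_path_factory.mktemp("videos") / "test_video.avi"
    ...  # fetch the video here
    return video_path


@pytest.mark.parametrize("engine", ["tensorflow", "pytorch"])
def test_end_to_end_inference(engine, test_video):
    # run_inference stands in for the extracted benchmarking function.
    poses = run_inference(engine=engine, video=test_video)
    assert poses is not None
    assert np.all(np.isfinite(poses))
```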
HRNet-32 requires input images to have a shape that is a multiple of 32; this preprocessing step was missing. I replaced the single PyTorchRunner.transform attribute with a detector transform and a pose transform, each of which is built from the model config. I added an AutoPadToDivisor transform based on torchvision.transforms.functional.pad().
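For reference, a minimal sketch of what a pad-to-divisor transform could look like (illustration only, not necessarily the PR's exact implementation), built on torchvision.transforms.functional.pad():

```python
# Illustrative sketch only: pad an image tensor on the right/bottom so its
# height and width become multiples of `divisor` (e.g. 32 for HRNet-32).
import torch
import torchvision.transforms.functional as F


class AutoPadToDivisor:
    def __init__(self, divisor: int = 32):
        self.divisor = divisor

    def __call__(self, image: torch.Tensor) -> torch.Tensor:
        # image is expected to have shape (..., H, W)
        h, w = image.shape[-2], image.shape[-1]
        pad_h = (self.divisor - h % self.divisor) % self.divisor
        pad_w = (self.divisor - w % self.divisor) % self.divisor
        # F.pad takes padding as [left, top, right, bottom]
        return F.pad(image, [0, 0, pad_w, pad_h])
```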
This script will be used for both benchmarking and integration testing, so it needs to crash if inference isn't successful.
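For example (a hypothetical sketch, not the actual script), the benchmark loop could validate each result and raise instead of silently continuing:

```python
# Hypothetical sketch: fail loudly so CI marks the run as failed.
import numpy as np


def check_pose(pose):
    """Raise if a frame produced no valid pose instead of continuing silently."""
    if pose is None or not np.all(np.isfinite(pose)):
        raise RuntimeError("Inference failed: no valid pose returned")
    return pose
```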
- removing until a new one can be added properly
This pull request updates DeepLabCut-Live for models exported with DeepLabCut 3.0. TensorFlow models can still be used, and the code is siloed so that only the engine used to run the model is required as a package (i.e. no need to install TensorFlow if you want to run live pose estimation with PyTorch models).
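As a rough illustration of that siloing (the module paths and function name below are hypothetical, not the PR's actual layout), the heavy framework import can be deferred until the model type is known:

```python
# Hypothetical sketch of engine siloing: the framework import only happens
# once we know which kind of exported model we are loading.
def get_runner(model_type: str, model_path: str):
    if model_type == "pytorch":
        # importing here means TensorFlow never needs to be installed
        from dlclive.pose_estimation_pytorch import PyTorchRunner  # hypothetical path
        return PyTorchRunner(model_path)
    elif model_type == "tensorflow":
        from dlclive.pose_estimation_tensorflow import TensorFlowRunner  # hypothetical path
        return TensorFlowRunner(model_path)
    raise ValueError(f"Unknown model type: {model_type}")
```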
If you want to give this PR a try, you can install the code in your local conda environment by running:

pip install "git+https://github.com/DeepLabCut/DeepLabCut-live.git@dlclive3"
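After installing, usage should look roughly like the existing DLCLive API; the sketch below assumes a placeholder path to a DLC 3.0 exported model and a dummy frame, and the exact arguments may differ on this branch:

```python
# Sketch under assumptions: "exported_model_dir" and the frame are placeholders,
# and the new PyTorch path may take additional arguments.
import numpy as np
from dlclive import DLCLive

dlc_live = DLCLive("exported_model_dir")          # path to an exported model
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
dlc_live.init_inference(frame)                    # first call sets up the network
pose = dlc_live.get_pose(frame)                   # subsequent calls return keypoints
print(pose.shape)
```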