
[WIP] Integration with DeepLabCut 3.0 - PyTorch Engine #121

Open · wants to merge 65 commits into main

Conversation

n-poulsen (Collaborator)

This pull request updates DeepLabCut-Live to support models exported with DeepLabCut 3.0. TensorFlow models can still be used, and the code is siloed so that only the engine used to run a given model is required as a package (i.e. there is no need to install TensorFlow if you want to run live pose estimation with PyTorch models).

If you want to give this PR a try, you can install the code in your local conda environment by running:

pip install "git+https://github.com/DeepLabCut/DeepLabCut-live.git@dlclive3"

@sneakers-the-rat (Collaborator)

(while you're touching the benchmarking code, it might be worth it to just write it as actual pytest tests, which would give us better error output and a much better base to build on than the custom benchmarking/testing module we have now. just noticing you're extracting functions (which is great!!!! exactly what needs to be done) and it would be great to have proper test fixtures to be able to e.g. apply the same tests to a range of videos, different models, cleanup files in temporary directories, etc.)

@maximpavliv (Collaborator)

(while you're touching the benchmarking code, it might be worth it to just write it as actual pytest tests, which would give us better error output and a much better base to build on than the custom benchmarking/testing module we have now.

Thanks for the input @sneakers-the-rat!
I agree that pytest tests would be great for unit testing (we should add them sometime), but I think we should also keep the integration testing (which we currently do with the benchmarking script) to test the pipeline end-to-end. I'm definitely planning to run this integration test with different models, and potentially different videos, as you suggest!

@MMathisLab (Member)

Using pytest would be much better, I agree

@sneakers-the-rat (Collaborator) commented Jun 11, 2025

we should also keep the integration testing

totally agreed. no point in throwing out what we already have! pytest can help structure some of the setup, so e.g. we have a reliable way for different tests to request the same model, only download it once, etc. the end to end tests can be added in then as the base case where we request some model, some video, and maybe compare to some expected results - and then the tensorflow and torch versions can be two parameters to the same testing function :)

edit: not to say "do everything in this one PR," that would make sense as a follow-on thing, just commenting here because i've been enjoying seeing this work happen <3
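
For illustration, here is a minimal sketch of what such a parametrized end-to-end test could look like with pytest, assuming the existing DLCLive interface (DLCLive, init_inference, get_pose); the model paths, fixture name, and assertion are placeholders, not code from this PR:

import numpy as np
import pytest

# Placeholder paths; in practice each exported model would be downloaded once per test session.
EXPORTED_MODELS = ["path/to/exported_tf_model", "path/to/exported_pytorch_model"]

@pytest.fixture(scope="session", params=EXPORTED_MODELS, ids=["tensorflow", "pytorch"])
def model_path(request):
    return request.param

def test_end_to_end_pose(model_path):
    from dlclive import DLCLive

    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real video frame
    live = DLCLive(model_path)
    live.init_inference(frame)
    pose = live.get_pose(frame)
    assert pose is not None and len(pose) > 0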

@MMathisLab (Member)

Agree! Let's do this @maximpavliv; it's what we have in cebra and it's extremely useful.

HRNet-32 requires input images to have a shape that is a multiple of 32, and this preprocessing step was missing. I replaced the single PyTorchRunner.transform attribute with a detector transform and a pose transform, each built from the model config. I also added an AutoPadToDivisor transform based on torchvision.transforms.functional.pad().
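
As an illustration, a minimal sketch of what such a padding transform could look like; the class name matches the one described above, but the padding placement, fill value, and tensor layout are assumptions rather than the code in this PR:

import math
import torch
import torchvision.transforms.functional as F

class AutoPadToDivisor:
    """Pad an image on the right and bottom so its height and width
    are multiples of `divisor` (e.g. 32 for HRNet-32)."""

    def __init__(self, divisor: int = 32):
        self.divisor = divisor

    def __call__(self, image: torch.Tensor) -> torch.Tensor:
        h, w = image.shape[-2], image.shape[-1]
        pad_h = math.ceil(h / self.divisor) * self.divisor - h
        pad_w = math.ceil(w / self.divisor) * self.divisor - w
        # torchvision padding order is [left, top, right, bottom]
        return F.pad(image, [0, 0, pad_w, pad_h])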
maximpavliv and others added 5 commits June 13, 2025 15:13
This script will be used for both benchmarking and integration testing, so it needs to crash if inference isn't successful
- removing until a new one can be added properly