
slowfast_detection does not work with the demo code in tutorial #262

Open
botamochi6277 opened this issue May 9, 2024 · 0 comments

❓ Questions on how to use slowfast_r50_detection model

I already succeeded in running the slow_r50_detection model with the code in the detection tutorial.
After that, I tried to run slowfast_r50_detection with changed parameters, and it failed.

I changed only the following items:

  • the model class: from slow_r50_detection to slowfast_r50_detection
  • the preprocessing parameters: from inputs, inp_boxes, _ = ava_inference_transform(inp_imgs, predicted_boxes.numpy()) to inputs, inp_boxes, _ = ava_inference_transform(inp_imgs, predicted_boxes.numpy(), num_frames=32, slow_fast_alpha=4)

I got the following error:

...
preds = video_model(inputs.unsqueeze(0).to(device), inp_boxes.to(device))
                      ^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'unsqueeze'
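The failure can be reproduced without the model or any video data, since the problem is only that a Python list has no unsqueeze method. A minimal sketch, using dummy tensor shapes in place of the real transform output:

```python
import torch

# With slow_fast_alpha set, ava_inference_transform returns a LIST of
# two pathway tensors (shapes below are assumed stand-ins):
inputs = [torch.randn(3, 8, 256, 256),   # slow pathway (num_frames / alpha)
          torch.randn(3, 32, 256, 256)]  # fast pathway (num_frames)

msg = ""
try:
    inputs.unsqueeze(0)  # what the tutorial code does on a single Tensor
except AttributeError as err:
    msg = str(err)
print(msg)  # 'list' object has no attribute 'unsqueeze'
```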

This inputs is the clip output of ava_inference_transform. Please look at the outputs of the ava_inference_transform function: its clip output is a torch.Tensor when slow_fast_alpha is None, but a list[torch.Tensor] when slow_fast_alpha is 4.
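One possible workaround, sketched with dummy shapes (not the real transform output): SlowFast models take a list of two pathway tensors, so the batch dimension can be added to each element of the list rather than to the list itself.

```python
import torch

# Dummy stand-ins for the clip output of ava_inference_transform when
# slow_fast_alpha=4 and num_frames=32 (shapes are assumptions):
inputs = [torch.randn(3, 8, 256, 256),   # slow pathway
          torch.randn(3, 32, 256, 256)]  # fast pathway

# Unsqueeze each pathway tensor instead of the list:
inputs = [inp.unsqueeze(0) for inp in inputs]
print([tuple(inp.shape) for inp in inputs])

# The prediction line would then become something like (untested):
# preds = video_model([inp.to(device) for inp in inputs], inp_boxes.to(device))
```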

Do you have any idea how to convert clip into a format suitable for slowfast_r50_detection?
