❓ Questions on how to use slowfast_r50_detection model
I already succeeded in running the slow_r50_detection model with the code from the detection tutorial.
After that, I tried to run slowfast_r50_detection with changed parameters and failed.
I changed only the following items:
- the model class: from slow_r50_detection to slowfast_r50_detection
- the preprocessing parameters: from inputs, inp_boxes, _ = ava_inference_transform(inp_imgs, predicted_boxes.numpy()) to inputs, inp_boxes, _ = ava_inference_transform(inp_imgs, predicted_boxes.numpy(), num_frames=32, slow_fast_alpha=4)
I got the following error:
...
preds = video_model(inputs.unsqueeze(0).to(device), inp_boxes.to(device))
^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute 'unsqueeze'
This inputs is the clip output of ava_inference_transform. Looking at the outputs of the ava_inference_transform function: its clip output is a torch.Tensor when slow_fast_alpha is None, but a list[torch.Tensor] when slow_fast_alpha is 4.
Do you have any idea how to convert clip into a format suitable for slowfast_r50_detection?
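For reference, a possible workaround (a sketch only, based on the assumption that SlowFast detection models take a list of two pathway tensors [slow, fast] rather than a single tensor): batch each pathway tensor and move it to the device individually instead of calling unsqueeze on the list itself. The helper name prepare_slowfast_inputs is my own, not part of PyTorchVideo.

```python
import torch

def prepare_slowfast_inputs(inputs, device):
    """Add a batch dimension and move to device.

    Handles both the single-tensor case (slow_fast_alpha=None) and the
    two-pathway list case (slow_fast_alpha set), per the assumption above.
    """
    if isinstance(inputs, list):
        # SlowFast: one tensor per pathway; unsqueeze each separately.
        return [inp.unsqueeze(0).to(device) for inp in inputs]
    # Single-pathway models: a single clip tensor.
    return inputs.unsqueeze(0).to(device)
```

The forward call would then become something like preds = video_model(prepare_slowfast_inputs(inputs, device), inp_boxes.to(device)).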