Exception in thread "main" ai.djl.translate.TranslateException: ai.djl.engine.EngineException: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch.py", line 63, in forward
feat_s1 = torch.view(torch.permute(feat2, [1, 2, 0]), [1, -1, 128, 128])
feat_s0 = torch.view(torch.permute(feat, [1, 2, 0]), [1, -1, 256, 256])
_23 = (sam_prompt_encoder0).forward(point_coords, point_labels, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_24, _25, = _23
image_embeddings = torch.unsqueeze(torch.select(image_embed, 0, 0), 0)
File "code/torch/sam2/modeling/sam/prompt_encoder.py", line 78, in forward
_32 = annotate(List[Optional[Tensor]], [_31])
_33 = torch.add(torch.index(point_embedding2, _32), weight2)
_34 = torch.view(_33, [256])
~~~~~~~~~~ <--- HERE
_35 = annotate(List[Optional[Tensor]], [_31])
point_embedding3 = torch.index_put(point_embedding2, _35, _34)
Traceback of TorchScript, original code (most recent call last):
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(98): _embed_points
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(169): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(74): predict
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(62): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/jit/_trace.py(1275): trace_module
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(104): trace_model
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(111):
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py(18): execfile
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1535): _exec
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1528): run
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2218): main
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2236):
RuntimeError: shape '[256]' is invalid for input of size 512
at ai.djl.inference.Predictor.batchPredict(Predictor.java:197)
at ai.djl.inference.Predictor.predict(Predictor.java:133)
at SegmentAnything2.predict(SegmentAnything2.java:79)
at SegmentAnything2.main(SegmentAnything2.java:49)
Caused by: ai.djl.engine.EngineException: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch.py", line 63, in forward
feat_s1 = torch.view(torch.permute(feat2, [1, 2, 0]), [1, -1, 128, 128])
feat_s0 = torch.view(torch.permute(feat, [1, 2, 0]), [1, -1, 256, 256])
_23 = (sam_prompt_encoder0).forward(point_coords, point_labels, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_24, _25, = _23
image_embeddings = torch.unsqueeze(torch.select(image_embed, 0, 0), 0)
File "code/torch/sam2/modeling/sam/prompt_encoder.py", line 78, in forward
_32 = annotate(List[Optional[Tensor]], [_31])
_33 = torch.add(torch.index(point_embedding2, _32), weight2)
_34 = torch.view(_33, [256])
~~~~~~~~~~ <--- HERE
_35 = annotate(List[Optional[Tensor]], [_31])
point_embedding3 = torch.index_put(point_embedding2, _35, _34)
Traceback of TorchScript, original code (most recent call last):
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(98): _embed_points
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(169): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(74): predict
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(62): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/jit/_trace.py(1275): trace_module
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(104): trace_model
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(111):
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py(18): execfile
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1535): _exec
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1528): run
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2218): main
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2236):
RuntimeError: shape '[256]' is invalid for input of size 512
at ai.djl.pytorch.jni.PyTorchLibrary.moduleRunMethod(Native Method)
at ai.djl.pytorch.jni.IValueUtils.forward(IValueUtils.java:57)
at ai.djl.pytorch.engine.PtSymbolBlock.forwardInternal(PtSymbolBlock.java:146)
at ai.djl.nn.AbstractBaseBlock.forward(AbstractBaseBlock.java:79)
at ai.djl.nn.Block.forward(Block.java:127)
at ai.djl.inference.Predictor.predictInternal(Predictor.java:147)
at ai.djl.inference.Predictor.batchPredict(Predictor.java:172)
... 3 more
@canglaoshidaidui
This is a limitation of the traced model: during the JIT trace, the input shape is fixed. The model is currently traced with a single point and does not support box prompts. You have to re-trace the model with two points for your use case.
Description

Dear DJL team: SAM2 fails when more than one point and label are passed to the predictor.

Expected Behavior

Prediction succeeds when multiple points and labels are provided.

Error Message
(Same traceback as shown at the top of this issue.)