PyTorch conversion error - TypeError with bool #793
Labels
awaiting response (status: please respond to this issue to provide further clarification)
bug (type: unexpected behaviour that should be corrected)
PyTorch (traced)
Hi,
I'm trying to convert a model from https://github.com/ultralytics/yolov5/ (specifically yolov5l) and I get the following error:
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops: 61%|██████████████████████████ | 1399/2307 [00:13<00:13, 64.92 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 61%|██████████████████████████▎ | 1414/2307 [00:13<00:13, 64.09 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 71%|██████████████████████████████▍ | 1636/2307 [00:16<00:11, 59.18 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 72%|██████████████████████████████▊ | 1651/2307 [00:17<00:11, 56.47 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 100%|██████████████████████████████████████████▉| 2305/2307 [00:30<00:00, 75.65 ops/s]
Running MIL optimization passes: 100%|████████████████████████████████████████████| 13/13 [00:22<00:00, 1.71s/ passes]
Translating MIL ==> MLModel Ops: 31%|████████████▏ | 553/1807 [00:00<00:00, 547815.33 ops/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\_converters_entry.py", line 292, in convert
proto_spec = _convert(
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 122, in _convert
out = backend_converter(prog, **kwargs)
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 72, in __call__
return load(*args, **kwargs)
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\load.py", line 235, in load
convert_ops(
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\op_mapping.py", line 50, in convert_ops
mapper(const_context, builder, op)
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\op_mapping.py", line 881, in slice_by_index
builder.add_slice_dynamic(
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\models\neural_network\builder.py", line 5501, in add_slice_dynamic
spec_layer_params.endMasks.extend(end_masks)
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\containers.py", line 282, in extend
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\containers.py", line 282, in <listcomp>
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\type_checkers.py", line 142, in CheckValue
raise TypeError(message)
TypeError: True has type <class 'numpy.bool_'>, but expected one of: (<class 'bool'>, <class 'numbers.Integral'>)
The code used to convert the model is the following:
from models.common import *  # modules from the yolov5 repo
import torch
import coremltools as ct

img = torch.zeros((1, 3, 640, 640))  # dummy input at 640x640
path = r'C:\yolov5l.pt'  # raw string so the backslash is not treated as an escape
model = torch.load(path, map_location=torch.device('cpu'))['model'].float()
model.eval()
model.model[-1].export = True  # set Detect() layer export=True
y = model(img)  # dry run
ts = torch.jit.trace(model, img)
model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
It seems that a simple type cast would resolve this issue. I'm not familiar with the coremltools codebase, so hopefully someone reading this bug report knows where to apply the fix. If not, could you at least point me in the right direction so I can try to fix it myself?
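For what it's worth, the traceback suggests protobuf's repeated-field checker only accepts builtin bool or numbers.Integral, and numpy.bool_ is neither, so casting the mask values to native bools before they reach the protobuf setter should work. A minimal sketch of the idea (check_value and sanitize_masks are hypothetical names mimicking the checker in the traceback, not actual coremltools or protobuf API):

```python
import numbers
import numpy as np

def check_value(elem):
    # Hypothetical stand-in for protobuf's CheckValue in type_checkers.py:
    # only builtin bool or numbers.Integral are accepted; numpy.bool_ is
    # neither, which is why the conversion raises TypeError.
    if not isinstance(elem, (bool, numbers.Integral)):
        raise TypeError(
            "%r has type %s, but expected one of: bool, numbers.Integral"
            % (elem, type(elem))
        )
    return elem

def sanitize_masks(masks):
    # Workaround sketch: cast each numpy bool to a native Python bool
    # before it is handed to spec_layer_params.endMasks.extend(...).
    return [bool(m) for m in masks]

end_masks = [np.bool_(True), np.bool_(False)]
safe_masks = sanitize_masks(end_masks)  # builtin bools, pass the checker
```

If this is right, the cast probably belongs in slice_by_index in op_mapping.py (or in add_slice_dynamic in builder.py), where end_masks is built.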