PyTorch conversion error - TypeError with bool #793

Open

lp55 opened this issue Jul 15, 2020 · 3 comments

Labels
awaiting response: Please respond to this issue to provide further clarification (status)
bug: Unexpected behaviour that should be corrected (type)
PyTorch (traced)

Comments

lp55 commented Jul 15, 2020

Hi,

I'm trying to convert a model from https://github.com/ultralytics/yolov5/ (specifically yolov5l), and I get the following error:

WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops: 61%|██████████████████████████ | 1399/2307 [00:13<00:13, 64.92 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 61%|██████████████████████████▎ | 1414/2307 [00:13<00:13, 64.09 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 71%|██████████████████████████████▍ | 1636/2307 [00:16<00:11, 59.18 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 72%|██████████████████████████████▊ | 1651/2307 [00:17<00:11, 56.47 ops/s]WARNING:root:Saving value type of float16 into a builtin type of i8, might lose precision!
Converting Frontend ==> MIL Ops: 100%|██████████████████████████████████████████▉| 2305/2307 [00:30<00:00, 75.65 ops/s]
Running MIL optimization passes: 100%|████████████████████████████████████████████| 13/13 [00:22<00:00, 1.71s/ passes]
Translating MIL ==> MLModel Ops: 31%|████████████▏ | 553/1807 [00:00<00:00, 547815.33 ops/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\_converters_entry.py", line 292, in convert
    proto_spec = _convert(
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 122, in _convert
    out = backend_converter(prog, **kwargs)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\converter.py", line 72, in __call__
    return load(*args, **kwargs)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\load.py", line 235, in load
    convert_ops(
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\op_mapping.py", line 50, in convert_ops
    mapper(const_context, builder, op)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\converters\mil\backend\nn\op_mapping.py", line 881, in slice_by_index
    builder.add_slice_dynamic(
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\coremltools\models\neural_network\builder.py", line 5501, in add_slice_dynamic
    spec_layer_params.endMasks.extend(end_masks)
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\containers.py", line 282, in extend
    new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\containers.py", line 282, in <listcomp>
    new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
  File "C:\Miniconda3\envs\pytorch15\lib\site-packages\google\protobuf\internal\type_checkers.py", line 142, in CheckValue
    raise TypeError(message)
TypeError: True has type <class 'numpy.bool_'>, but expected one of: (<class 'bool'>, <class 'numbers.Integral'>)

The code used to convert the model is the following:

import torch
from models.common import *
import coremltools as ct

img = torch.zeros((1, 3, *[640, 640]))
path = r'C:\yolov5l.pt'

model = torch.load(path, map_location=torch.device('cpu'))['model'].float()
model.eval()
model.model[-1].export = True # set Detect() layer export=True
y = model(img)
ts = torch.jit.trace(model, img)

model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])

It seems that a simple type cast would resolve this issue. I'm not familiar with the coremltools codebase, so hopefully someone who reads this bug report knows where to fix it. If not, could you at least point me in the right direction so I can try to fix it myself?
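
For context, a minimal sketch of the check that appears to fail (assuming, as the traceback suggests, that protobuf validates these values against (bool, numbers.Integral)):

import numbers
import numpy as np

# The kind of value coremltools ends up passing for the end_masks entries:
val = np.bool_(True)
print(isinstance(val, (bool, numbers.Integral)))        # False -> protobuf raises TypeError
print(isinstance(bool(val), (bool, numbers.Integral)))  # True  -> a plain Python bool is accepted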

lp55 added the bug (Unexpected behaviour that should be corrected) label on Jul 15, 2020
DawerG (Collaborator) commented Jul 17, 2020

Thanks @lp55 for raising the issue. We are working on a fix.

For the time being, to unblock yourself, you might want to manually change the list of numpy bools to a list of Python bools for the end_masks object in the call spec_layer_params.endMasks.extend(end_masks), as sketched below.
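
A rough sketch of that change inside add_slice_dynamic (in coremltools/models/neural_network/builder.py, per the traceback above); the follow-up comment below shows the same edit applied:

# Hypothetical one-line workaround: cast each numpy bool to a plain Python bool
# before extending the protobuf field.
spec_layer_params.endMasks.extend(bool(v) for v in end_masks)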

lp55 (Author) commented Jul 20, 2020

@DawerG

Thanks, the conversion process works now. I'll test the converted model and check whether it behaves properly.
For anyone stuck on the same issue (until the proper fix is available), I simply added the following line:

end_masks_conv = [bool(val) for val in end_masks]

and changed

spec_layer_params.endMasks.extend(end_masks)

to

spec_layer_params.endMasks.extend(end_masks_conv)

inside the add_slice_dynamic function in builder.py.

Not pretty but it works :)

bhushan23 added the triaged (Reviewed and examined, release has been assigned if applicable) label on Aug 28, 2020
TobyRoseman removed the triaged (Reviewed and examined, release has been assigned if applicable) label on Sep 27, 2022
TobyRoseman (Collaborator) commented

Is this still an issue with the latest version of coremltools?

If it is still an issue, please give us complete steps to reproduce the problem. How was yolov5l.pt generated?

TobyRoseman added the awaiting response (Please respond to this issue to provide further clarification) label on Oct 24, 2022
Birch-san pushed a commit to Birch-san/coremltools that referenced this issue Nov 27, 2022