
Exception: ERROR: Unsupported operation type: QuantizeLinear #1197

Open
@howard-liang-B

Description

Code

import hls4ml

hls_model = hls4ml.converters.convert_from_onnx_model(
    model=model,
    output_dir='OutputDir/',
    io_type='io_stream',
    backend='Vitis',
    hls_config=config,
)
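
For reference, `model` and `config` in the snippet above would have been prepared roughly along these lines; the ONNX file name below is a placeholder, and the config dict simply mirrors the configuration printed in the output:

import onnx

# Load the quantized ONNX export (placeholder file name);
# the graph contains QuantizeLinear/DequantizeLinear nodes.
model = onnx.load('model_quantized.onnx')

# Minimal hls4ml config matching the printed settings.
config = {
    'Model': {
        'Precision': 'ap_fixed<16,6>',
        'ReuseFactor': 1,
        'Strategy': 'Latency',
    },
}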

Output

-----------------------------------
Configuration
Model
  Precision:         ap_fixed<16,6>
  ReuseFactor:       1
  Strategy:          Latency
-----------------------------------
Interpreting Model ...
Output layers:  ['Concat_18']
Input shape: [None, 3, 640, 640]
Topology:
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[13], line 16
     13 plotting.print_dict(config)
     14 print("-----------------------------------")
---> 16 hls_model = hls4ml.converters.convert_from_onnx_model(
     17     model=model,
     18     output_dir='OutputDir/',
     19     io_type='io_stream',
     20     backend='Vitis',
     21     hls_config=config,
     22 )

File c:\Users\howar\miniconda3\envs\AI_NPU_env\lib\site-packages\hls4ml\converters\__init__.py:366, in convert_from_onnx_model(model, output_dir, project_name, input_data_tb, output_data_tb, backend, hls_config, **kwargs)
    362 config['HLSConfig']['Model'] = _check_model_config(model_config)
    364 _check_hls_config(config, hls_config)
--> 366 return onnx_to_hls(config)

File c:\Users\howar\miniconda3\envs\AI_NPU_env\lib\site-packages\hls4ml\converters\onnx_to_hls.py:281, in onnx_to_hls(config)
    279 for node in graph.node:
    280     if node.op_type not in supported_layers:
--> 281         raise Exception(f'ERROR: Unsupported operation type: {node.op_type}')
    283     # If not the first layer then input shape is taken from last layer's output
    284     if layer_counter != 0:

Exception: ERROR: Unsupported operation type: QuantizeLinear

Please help me fix this error.
