Open
Description
Opened on May 17, 2024
Describe the issue
With the release of 0.18.0, we are having issues with the Transpose op:
> sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (/_operators.0/Transpose) Op (Transpose) [TypeInferenceError] Invalid attribute perm {1, 0}, input shape = {
Can you please point us to the changes that might have broken us? Thank you!
Please see microsoft/hummingbird#770
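For illustration, the condition the TypeInferenceError appears to describe can be sketched in plain Python (the helper name is ours; the real validation lives in ONNX Runtime's C++ shape inference, so this is a sketch, not its implementation): perm {1, 0} is only a valid permutation for a rank-2 input, so it is rejected when the inferred input shape is empty ({}, i.e. rank 0).

```python
def validate_transpose_perm(input_shape, perm):
    """Reject a perm attribute that is not a permutation of the input axes.

    Hypothetical helper mirroring the check implied by the error message;
    ONNX Runtime's actual implementation is in C++.
    """
    rank = len(input_shape)
    if sorted(perm) != list(range(rank)):
        raise ValueError(f"Invalid attribute perm {perm}, input shape = {input_shape}")

# perm [1, 0] is fine for a rank-2 input ...
validate_transpose_perm((3, 4), [1, 0])

# ... but fails when the inferred input shape is empty (rank 0),
# which matches the truncated "input shape = {" in the error above.
try:
    validate_transpose_perm((), [1, 0])
except ValueError as e:
    print(e)
```

If the model's Transpose input previously carried a concrete rank-2 shape, a stricter shape/type inference pass in the new release could surface exactly this failure at session-creation time.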
Urgency
This is blocking the Microsoft Hummingbird runners.
Target platform
all
Build script
This is part of the Hummingbird build, which depends on onnxruntime. Can you please point us to the relevant changes in your 0.18.0 build?
Error / output
self = <onnxruntime.capi.onnxruntime_inference_collection.InferenceSession object at 0x7fb91dde3e90>
providers = [], provider_options = [], disabled_optimizers = None

    def _create_inference_session(self, providers, provider_options, disabled_optimizers=None):
        available_providers = C.get_available_providers()

        # Tensorrt can fall back to CUDA if it's explicitly assigned. All others fall back to CPU.
        if "TensorrtExecutionProvider" in available_providers:
            if providers and any(
                provider == "CUDAExecutionProvider"
                or (isinstance(provider, tuple) and provider[0] == "CUDAExecutionProvider")
                for provider in providers
            ):
                self._fallback_providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
            else:
                self._fallback_providers = ["CPUExecutionProvider"]
        # MIGraphX can fall back to ROCM if it's explicitly assigned. All others fall back to CPU.
        elif "MIGraphXExecutionProvider" in available_providers:
            if providers and any(
                provider == "ROCMExecutionProvider"
                or (isinstance(provider, tuple) and provider[0] == "ROCMExecutionProvider")
                for provider in providers
            ):
                self._fallback_providers = ["ROCMExecutionProvider", "CPUExecutionProvider"]
            else:
                self._fallback_providers = ["CPUExecutionProvider"]
        else:
            self._fallback_providers = ["CPUExecutionProvider"]

        # validate providers and provider_options before other initialization
        providers, provider_options = check_and_normalize_provider_args(
            providers, provider_options, available_providers
        )
        session_options = self._sess_options if self._sess_options else C.get_default_session_options()
        self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)

        if self._model_path:
            sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
        else:
>           sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E           onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (/_operators.0/Transpose) Op (Transpose) [TypeInferenceError] Invalid attribute perm {1, 0}, input shape = {
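As an aside, the provider-fallback selection in the frame above can be condensed into a small self-contained sketch (the function name is ours; this mirrors only the quoted code, not the full implementation). It is unrelated to the failure itself, which occurs later in the C.InferenceSession constructor, but it shows why providers = [] yields a CPU-only fallback here.

```python
def select_fallback_providers(available_providers, requested_providers):
    """Sketch of the fallback logic quoted in the traceback:
    TensorRT falls back to CUDA only if CUDA was explicitly requested;
    MIGraphX falls back to ROCm only if ROCm was explicitly requested;
    everything else (including an empty request list) falls back to CPU.
    """
    def requested(name):
        # A provider may be given as a bare name or as a (name, options) tuple.
        return any(
            p == name or (isinstance(p, tuple) and p[0] == name)
            for p in requested_providers
        )

    if "TensorrtExecutionProvider" in available_providers:
        if requested("CUDAExecutionProvider"):
            return ["CUDAExecutionProvider", "CPUExecutionProvider"]
        return ["CPUExecutionProvider"]
    if "MIGraphXExecutionProvider" in available_providers:
        if requested("ROCMExecutionProvider"):
            return ["ROCMExecutionProvider", "CPUExecutionProvider"]
        return ["CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```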
Visual Studio Version
No response
GCC / Compiler Version
No response
Metadata
Labels
build issues; typically submitted using template
issues related to the CUDA execution provider
issues related to AMD MI GraphX execution provider
questions/issues related to ROCm execution provider
issues related to TensorRT execution provider
issues that have not been addressed in a while; categorized by a bot