8 fail, 2,944 skipped, 8,431 pass in 2h 45m 42s
Annotations
github-actions / Test Results
3 out of 24 runs failed: test_output_match_opinfo__ops_aten__scaled_dot_product_flash_attention_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at ..\aten\src\ATen\autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
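Note: the key list above is the op's dispatcher registration table: every DispatchKey that currently has a kernel or fallback for aten::_scaled_dot_product_flash_attention, with its registration site. On recent PyTorch builds the table can be probed programmatically; a minimal sketch, assuming the private binding torch._C._dispatch_has_kernel_for_dispatch_key exists on the build under test (private entry points can move between nightlies):

import torch

OP = "aten::_scaled_dot_product_flash_attention"
# Probe defensively: this is a private torch._C binding, not a stable API.
if hasattr(torch._C, "_dispatch_has_kernel_for_dispatch_key"):
    for key in ("CPU", "CUDA", "Meta"):
        print(key, torch._C._dispatch_has_kernel_for_dispatch_key(OP, key))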
onnxscript\tests\function_libs\torch_lib\ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox\test_torch_nightly\lib\site-packages\torch\testing\_internal\opinfo\core.py:1112: in __call__
return self.op(*args, **kwargs)
.nox\test_torch_nightly\lib\site-packages\torch\_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at ..\aten\src\ATen\autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
E BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
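All three torch-nightly failures above share one cause: run_test_output_match invokes aten::_scaled_dot_product_flash_attention eagerly to compute the reference output, and that nightly ships no CPU kernel for the op, so the dispatcher raises before any ONNX comparison runs. One hedged way to keep such samples from failing the suite is to catch the backend gap and skip; a sketch assuming pytest (call_or_skip_cpu is a hypothetical helper, not existing code):

import pytest

def call_or_skip_cpu(op, *args, **kwargs):
    """Run the reference op; skip the sample if this build has no CPU kernel."""
    try:
        return op(*args, **kwargs)
    except NotImplementedError as e:
        if "'CPU' backend" in str(e):
            pytest.skip(f"op not implemented for CPU on this build: {e}")
        raise

In run_test_output_match this would wrap the op(*inputs, **cpu_sample.kwargs) call shown in the frames above, turning a nightly-only backend gap into a skip instead of a failure.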
github-actions / Test Results
All 24 runs failed: test_output_match_opinfo__nn_functional_upsample_bilinear2d_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-experimental-torchlib-tracing-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-experimental-torchlib-tracing-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-experimental-torchlib-tracing-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-onnx-weekly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-onnx-weekly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-onnx-weekly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py311-ort-nightly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py311-ort-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py311-ort-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-windows-latest)/pytest.xml [took 0s]
Raw output
TypeError: 'NoneType' object is not subscriptable
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:590: in executor
return function(*args, **kwargs)
onnxscript/values.py:577: in __call__
return self.func(*args, **kwargs)
onnxscript/function_libs/torch_lib/ops/nn.py:2322: in aten_upsample_bilinear2d_vec
self, output_size, scale_factors[0], scale_factors[1], align_corners
E TypeError: 'NoneType' object is not subscriptable
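The traceback localizes the bug precisely: aten_upsample_bilinear2d_vec subscripts scale_factors unconditionally, but in the upsample_bilinear2d.vec overload output_size and scale_factors are mutually exclusive optionals, so whenever output_size is supplied scale_factors is None and scale_factors[0] raises. The identical failure recurs in the FullGraphCPU variant below. A minimal guard sketch (the two helper names are hypothetical stand-ins for the size-based and scale-based paths):

def aten_upsample_bilinear2d_vec(self, output_size, align_corners, scale_factors):
    # Per the upsample_bilinear2d.vec schema, exactly one of output_size and
    # scale_factors is non-None, so neither may be subscripted unconditionally.
    if scale_factors is not None:
        return _upsample_by_scales(self, scale_factors[0], scale_factors[1], align_corners)
    return _upsample_to_size(self, output_size, align_corners)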
github-actions / Test Results
1 out of 24 runs failed: test_output_match_opinfo__linalg_vector_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 34s]
Raw output
EOFError
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:540: in _capture_graph_and_evaluate_torch_script_evaluator
return _safe_ort_session_run(onnx_model.SerializeToString(), ort_inputs)
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:345: in _safe_ort_session_run
return_dict = manager.dict()
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/managers.py:723: in temp
token, exp = self._create(typeid, *args, **kwds)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/managers.py:606: in _create
conn = self._Client(self._address, authkey=self._authkey)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/connection.py:508: in Client
answer_challenge(c, authkey)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/connection.py:752: in answer_challenge
message = connection.recv_bytes(256) # reject large message
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/connection.py:216: in recv_bytes
buf = self._recv_bytes(maxlength)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/connection.py:414: in _recv_bytes
buf = self._recv(4)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/multiprocessing/connection.py:383: in _recv
raise EOFError
E EOFError
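Unlike the failures above, this is harness flakiness rather than an operator bug: _safe_ort_session_run executes the ORT session behind a multiprocessing.Manager, and the EOFError fires inside the manager's authentication handshake, which typically means the manager process died before answering (memory pressure or a crashed worker on the CI host). A hedged retry sketch, assuming only the stdlib; the wrapper is illustrative, not the helper's actual code:

import multiprocessing
import time

def manager_dict_with_retry(retries=3, delay=1.0):
    # Manager() forks a server process; if that server dies, the client side
    # surfaces it as EOFError during the handshake, so retry before giving up.
    for attempt in range(retries):
        try:
            manager = multiprocessing.Manager()
            return manager, manager.dict()
        except EOFError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)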
github-actions / Test Results
All 24 runs failed: test_output_match_opinfo__nn_functional_upsample_bilinear2d_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-experimental-torchlib-tracing-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-experimental-torchlib-tracing-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-experimental-torchlib-tracing-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-onnx-weekly-macos-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-onnx-weekly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-onnx-weekly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ort-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 2s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py311-ort-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py311-ort-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py311-ort-nightly-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py38-windows-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py39-windows-latest)/pytest.xml [took 0s]
Raw output
TypeError: 'NoneType' object is not subscriptable
onnxscript/tests/function_libs/torch_lib/ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript/tests/function_libs/torch_lib/ops_test_common.py:504: in _capture_graph_and_evaluate_torch_script_evaluator
symbolic_outputs = function(*onnxscript_args, **onnxscript_kwargs)
onnxscript/values.py:577: in __call__
return self.func(*args, **kwargs)
onnxscript/function_libs/torch_lib/ops/nn.py:2322: in aten_upsample_bilinear2d_vec
self, output_size, scale_factors[0], scale_factors[1], align_corners
E TypeError: 'NoneType' object is not subscriptable
github-actions / Test Results
3 out of 24 runs failed: test_output_match_opinfo__ops_aten__scaled_dot_product_flash_attention_cpu_float32 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 0s]
Raw output
NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:241 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_1.cpp:16033 [kernel]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:378 [backend fallback]
AutocastCUDA: registered at ..\aten\src\ATen\autocast_mode.cpp:248 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
onnxscript\tests\function_libs\torch_lib\ops_test.py:209: in run_test_output_match
torch_output = op(*inputs, **cpu_sample.kwargs)
.nox\test_torch_nightly\lib\site-packages\torch\testing\_internal\opinfo\core.py:1112: in __call__
return self.op(*args, **kwargs)
.nox\test_torch_nightly\lib\site-packages\torch\_ops.py:825: in __call__
return self_._op(*args, **(kwargs or {}))
E NotImplementedError: Could not run 'aten::_scaled_dot_product_flash_attention' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_scaled_dot_product_flash_attention' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
E
E Meta: registered at /dev/null:241 [kernel]
E BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
E Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
E FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
E Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
E Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
E Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
E Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
E ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
E ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
E AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradHIP: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMPS: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradIPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradVE: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMTIA: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradMeta: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_1.cpp:16340 [autograd kernel]
E Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_1.cpp:16033 [kernel]
E AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:378 [backend fallback]
E AutocastCUDA: registered at ..\aten\src\ATen\autocast_mode.cpp:248 [kernel]
E FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
E BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
E FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
E Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
E VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
E FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
E PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
E FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
E PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
E PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]
[The same traceback and dispatcher listing repeat verbatim for the other two failing runs and are omitted.]
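Note: this failure happens at dispatch time, before any ONNX comparison runs. The nightly torch build used by these runs registers aten::_scaled_dot_product_flash_attention only for the dispatch keys listed above (Meta, the Autograd keys, AutocastCUDA, and so on), with no CPU kernel, so simply invoking the op on CPU tensors raises NotImplementedError. A minimal sketch of the repro under that assumption (a hypothetical standalone script, not part of ops_test.py):

    import torch

    # CPU float32 tensors shaped (batch, num_heads, seq_len, head_dim).
    q = torch.randn(2, 4, 8, 16)
    k = torch.randn(2, 4, 8, 16)
    v = torch.randn(2, 4, 8, 16)

    try:
        # Calls the ATen op directly, just as OpInfo.__call__ does above.
        torch.ops.aten._scaled_dot_product_flash_attention(q, k, v)
    except NotImplementedError as err:
        # Raised on builds whose dispatcher table lacks a CPU entry.
        print("no CPU kernel registered:", err)

On a build that does ship the CPU kernel, the call returns normally, so the test would have to be skipped or xfailed only for the affected nightlies.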
github-actions / Test Results
1 out of 5 runs failed: test_export2python_produces_correct_onnx_script_model_1012_test_size (onnxscript.backend.onnx_export_test.TestOnnxBackEnd)
artifacts/Test Results (py38-windows-latest)/pytest.xml [took 0s]
Raw output
AssertionError: Unable to import 'onnxscript.tests.onnx_backend_test_code.test_size' (file: WindowsPath('D:/a/onnxscript/onnxscript/onnxscript/tests/onnx_backend_test_code/test_size.py'))
----
import numpy
from onnx import TensorProto
from onnx.helper import make_tensor
from onnxscript import script, external_tensor
from onnxscript.values import Opset
from onnxscript.onnx_types import FLOAT, INT64
from onnxscript.onnx_opset import opset19
@script()
def bck_test_size(x: FLOAT[3,4,5]) -> (INT64):
    y = opset19.Size(x)
    return y
onnxscript\backend\onnx_export_test.py:116: in extract_functions
mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\importlib\__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
E ModuleNotFoundError: No module named 'onnxscript.tests.onnx_backend_test_code.test_size'
The above exception was the direct cause of the following exception:
.nox\test\lib\site-packages\parameterized\parameterized.py:620: in standalone_func
return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:247: in test_export2python_produces_correct_onnx_script_model
functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:118: in extract_functions
raise AssertionError(
E AssertionError: Unable to import 'onnxscript.tests.onnx_backend_test_code.test_size' (file: WindowsPath('D:/a/onnxscript/onnxscript/onnxscript/tests/onnx_backend_test_code/test_size.py'))
E ----
E import numpy
E from onnx import TensorProto
E from onnx.helper import make_tensor
E from onnxscript import script, external_tensor
E from onnxscript.values import Opset
E from onnxscript.onnx_types import FLOAT, INT64
E from onnxscript.onnx_opset import opset19
E
E @script()
E def bck_test_size(x: FLOAT[3,4,5]) -> (INT64):
E     y = opset19.Size(x)
E     return y
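Note: the generated file test_size.py is reported as present on disk, yet importing it raises ModuleNotFoundError in 1 of 5 runs. One plausible cause (an assumption about this flake, not something the log proves) is the import system's directory caching: a module file written after Python has already cached a listing of its package directory can stay invisible until importlib.invalidate_caches() is called, which bites intermittently on filesystems with coarse mtime resolution. A self-contained sketch of the mechanism (hypothetical names throughout):

    import importlib
    import pathlib
    import sys
    import tempfile

    # Write a brand-new module file while the interpreter is running.
    tmp = pathlib.Path(tempfile.mkdtemp())
    sys.path.insert(0, str(tmp))
    (tmp / "freshly_written.py").write_text("VALUE = 1\n")

    # Without this call, import_module can miss the file it just wrote.
    importlib.invalidate_caches()
    print(importlib.import_module("freshly_written").VALUE)  # prints 1

If extract_functions writes the module and imports it in the same process, calling importlib.invalidate_caches() before importlib.import_module would rule this cause in or out.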
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_layer_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyFullGraphCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 4s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 1s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 1s]
Raw output
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3) => (float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] input_3, float16[1,2,3] _val_4, float16[1,1,1] _val_5, float16[1,1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2, input_3)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
Rank (input) => (return_val)
{
tmp = Shape (input)
return_val = Size (tmp)
}
<
domain: "pkg.onnxscript.torch_lib.common",
opset_import: ["" : 18]
>
IsScalar (input) => (return_val)
{
tmp = Shape (input)
tmp_0 = Size (tmp)
tmp_1 = Constant <value_int: int = 0> ()
return_val = Equal (tmp_0, tmp_1)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3) => (float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_3, float16[1,2,3] _val_7, float16[1,1,1] _val_8, float16[1,1,1] _val_9, float[1] _val_3, int64[3] _val_4, float[1,2,3] _val_5, float16[1,2,3] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -3> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_6, input_3)
}
[The Rank and IsScalar helper functions (domain pkg.onnxscript.torch_lib.common) are identical in every model; they are shown once in the first model above and omitted from the remaining dumps.]
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2) => (float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] input_2, float16[1,2,3] _val_3, float16[1,1,1] _val_4, float16[1,1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, input_2)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2,3] input_0, int64[3] input_1) => (float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8)
<float16[1,2,3] input_0, int64[3] input_1, float16[1,2,3] _val_6, float16[1,1,1] _val_7, float16[1,1,1] _val_8, float[1] _val_2, int64[3] _val_3, float[1,2,3] _val_4, float16[1,2,3] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -3> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -3, epsilon: float = 0.5, stash_type: int = 1> (input_0, _val_5)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3) => (float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,3] input_3, float16[2,2,3] _val_4, float16[2,1,1] _val_5, float16[2,1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, input_2, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3) => (float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_3, float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9, float[1] _val_3, int64[2] _val_4, float[2,3] _val_5, float16[2,3] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -2> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_6, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2) => (float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,3] input_2, float16[2,2,3] _val_3, float16[2,1,1] _val_4, float16[2,1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, input_2)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[2,2,3] input_0, int64[2] input_1) => (float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8)
<float16[2,2,3] input_0, int64[2] input_1, float16[2,2,3] _val_6, float16[2,1,1] _val_7, float16[2,1,1] _val_8, float[1] _val_2, int64[2] _val_3, float[2,3] _val_4, float16[2,3] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -2> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -2, epsilon: float = -0.5, stash_type: int = 1> (input_0, _val_5)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[1] _val_4, float16[1] _val_5, float16[1] _val_6)
<float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[1] _val_4, float16[1] _val_5, float16[1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_3) => (float16[1] _val_7, float16[1] _val_8, float16[1] _val_9)
<float16[1] input_0, int64[1] input_1, float16[1] input_3, float16[1] _val_7, float16[1] _val_8, float16[1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2) => (float16[1] _val_3, float16[1] _val_4, float16[1] _val_5)
<float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] _val_3, float16[1] _val_4, float16[1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1] input_0, int64[1] input_1) => (float16[1] _val_6, float16[1] _val_7, float16[1] _val_8)
<float16[1] input_0, int64[1] input_1, float16[1] _val_6, float16[1] _val_7, float16[1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3) => (float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3, float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_3) => (float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_3, float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9, float[1] _val_3, int64[1] _val_4, float[2] _val_5, float16[2] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2) => (float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5)
<float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[1,2] input_0, int64[1] input_1) => (float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8)
<float16[1,2] input_0, int64[1] input_1, float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8, float[1] _val_2, int64[1] _val_3, float[2] _val_4, float16[2] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6>
{
_val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_3) => (float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_3, float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
{
_val_3 = Constant <value_floats: floats = [1]> ()
_val_4 = Shape <start: int = -1> (input_0)
_val_5 = Expand (_val_3, _val_4)
_val_6 = CastLike (_val_5, input_0)
_val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2) => (float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5)
<float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5>
{
_val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
}
AssertionError: ONNX model is invalid. Model:
<
ir_version: 8,
opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
producer_name: "pytorch",
producer_version: "2.3.0"
>
main_graph (float16[0,1] input_0, int64[1] input_1) => (float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8)
<float16[0,1] input_0, int64[1] input_1, float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
{
_val_2 = Constant <value_floats: floats = [1]> ()
_val_3 = Shape <start: int = -1> (input_0)
_val_4 = Expand (_val_2, _val_3)
_val_5 = CastLike (_val_4, input_0)
_val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
}
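Note on the root cause shown in the tracebacks below: ONNX elem type 1 is FLOAT and 10 is FLOAT16. LayerNormalization computes its Mean and InvStdDev outputs in the type selected by stash_type, so with stash_type = 1 shape inference derives float32 for them, while every main_graph above declares those two outputs float16 (for example float16[1,1,1] _val_5 and _val_6). Hence the "[TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)". A minimal sketch that reproduces the checker failure (a hypothetical standalone model, assuming onnx with opset 18):

    import onnx
    from onnx import TensorProto, helper

    node = helper.make_node(
        "LayerNormalization",
        ["x", "scale"], ["y", "mean", "invstd"],
        axis=-1, epsilon=1e-5, stash_type=1,  # stash_type=1 => float32 statistics
    )
    graph = helper.make_graph(
        [node], "repro",
        [helper.make_tensor_value_info("x", TensorProto.FLOAT16, [1, 2]),
         helper.make_tensor_value_info("scale", TensorProto.FLOAT16, [2])],
        [helper.make_tensor_value_info("y", TensorProto.FLOAT16, [1, 2]),
         # Declared FLOAT16 (10) although inference yields FLOAT (1):
         helper.make_tensor_value_info("mean", TensorProto.FLOAT16, [1, 1]),
         helper.make_tensor_value_info("invstd", TensorProto.FLOAT16, [1, 1])],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
    onnx.checker.check_model(model, full_check=True)  # raises the InferenceError

The fix would be to declare the two statistics outputs as float32 or cast them back to the input dtype; which of the two torch_lib intends cannot be read off this log.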
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
[The E-prefixed model dumps inside the tracebacks below repeat the models listed above verbatim; they are omitted.]
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
on…0, int64[2] input_1, float16[2,3] input_3) => (float16[2,2,3] _val_7, float16[2,1,1] _val_8, float16[2,1,1] _val_9)
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_3) => (float16[1] _val_7, float16[1] _val_8, float16[1] _val_9)
E <float16[1] input_0, int64[1] input_1, float16[1] input_3, float16[1] _val_7, float16[1] _val_8, float16[1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1, float16[1] input_2) => (float16[1] _val_3, float16[1] _val_4, float16[1] _val_5)
E <float16[1] input_0, int64[1] input_1, float16[1] input_2, float16[1] _val_3, float16[1] _val_4, float16[1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1] input_0, int64[1] input_1) => (float16[1] _val_6, float16[1] _val_7, float16[1] _val_8)
E <float16[1] input_0, int64[1] input_1, float16[1] _val_6, float16[1] _val_7, float16[1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3) => (float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[2] input_3, float16[1,2] _val_4, float16[1,1] _val_5, float16[1,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_3) => (float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_3, float16[1,2] _val_7, float16[1,1] _val_8, float16[1,1] _val_9, float[1] _val_3, int64[1] _val_4, float[2] _val_5, float16[2] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1, float16[2] input_2) => (float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5)
E <float16[1,2] input_0, int64[1] input_1, float16[2] input_2, float16[1,2] _val_3, float16[1,1] _val_4, float16[1,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[1,2] input_0, int64[1] input_1) => (float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8)
E <float16[1,2] input_0, int64[1] input_1, float16[1,2] _val_6, float16[1,1] _val_7, float16[1,1] _val_8, float[1] _val_2, int64[1] _val_3, float[2] _val_4, float16[2] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3) => (float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[1] input_3, float16[0,1] _val_4, float16[0,1] _val_5, float16[0,1] _val_6>
E {
E _val_4, _val_5, _val_6 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_3) => (float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_3, float16[0,1] _val_7, float16[0,1] _val_8, float16[0,1] _val_9, float[1] _val_3, int64[1] _val_4, float[1] _val_5, float16[1] _val_6>
E {
E _val_3 = Constant <value_floats: floats = [1]> ()
E _val_4 = Shape <start: int = -1> (input_0)
E _val_5 = Expand (_val_3, _val_4)
E _val_6 = CastLike (_val_5, input_0)
E _val_7, _val_8, _val_9 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_6, input_3)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_0): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1, float16[1] input_2) => (float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5)
E <float16[0,1] input_0, int64[1] input_1, float16[1] input_2, float16[0,1] _val_3, float16[0,1] _val_4, float16[0,1] _val_5>
E {
E _val_3, _val_4, _val_5 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, input_2)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:528: in _capture_graph_and_evaluate_torch_script_evaluator
onnx.checker.check_model(onnx_model, full_check=True)
.nox\test_torch_nightly\lib\site-packages\onnx\checker.py:171: in check_model
C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
E onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:LayerNormalization, node name: LayerNormalization_4): [TypeInferenceError] Inferred elem type differs from existing elem type: (1) vs (10)
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:229: in run_test_output_match
function_output = function_executor(test_name, reference_torch_outputs)(
onnxscript\tests\function_libs\torch_lib\ops_test_common.py:530: in _capture_graph_and_evaluate_torch_script_evaluator
raise AssertionError(
E AssertionError: ONNX model is invalid. Model:
E <
E ir_version: 8,
E opset_import: ["" : 18, "pkg.onnxscript.torch_lib.common" : 1],
E producer_name: "pytorch",
E producer_version: "2.3.0"
E >
E main_graph (float16[0,1] input_0, int64[1] input_1) => (float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8)
E <float16[0,1] input_0, int64[1] input_1, float16[0,1] _val_6, float16[0,1] _val_7, float16[0,1] _val_8, float[1] _val_2, int64[1] _val_3, float[1] _val_4, float16[1] _val_5>
E {
E _val_2 = Constant <value_floats: floats = [1]> ()
E _val_3 = Shape <start: int = -1> (input_0)
E _val_4 = Expand (_val_2, _val_3)
E _val_5 = CastLike (_val_4, input_0)
E _val_6, _val_7, _val_8 = LayerNormalization <axis: int = -1, epsilon: float = 1e-05, stash_type: int = 1> (input_0, _val_5)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E Rank (input) => (return_val)
E {
E tmp = Shape (input)
E return_val = Size (tmp)
E }
E <
E domain: "pkg.onnxscript.torch_lib.common",
E opset_import: ["" : 18]
E >
E IsScalar (input) => (return_val)
E {
E tmp = Shape (input)
E tmp_0 = Size (tmp)
E tmp_1 = Constant <value_int: int = 0> ()
E return_val = Equal (tmp_0, tmp_1)
E }
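Every float16 variant above (weight and bias, weight only, bias only, neither; input shapes [2,2,3], [1], [1,2], [0,1]) fails the same type check. One illustrative way to make the declared types self-consistent (a sketch of the general technique, not the repository's actual fix) is to leave the statistics at the float32 type that stash_type = 1 implies and Cast them down only where float16 stats are wanted:

    import onnx
    from onnx import TensorProto, helper

    # Keep Mean/InvStdDev at their inferred float32 type internally, then
    # Cast to float16 so the declared graph outputs match what is produced.
    ln = helper.make_node(
        "LayerNormalization",
        ["X", "Scale"], ["Y", "Mean32", "InvStdDev32"],
        axis=-1, epsilon=1e-5, stash_type=1,
    )
    casts = [
        helper.make_node("Cast", ["Mean32"], ["Mean"], to=TensorProto.FLOAT16),
        helper.make_node("Cast", ["InvStdDev32"], ["InvStdDev"], to=TensorProto.FLOAT16),
    ]
    graph = helper.make_graph(
        [ln, *casts], "layer_norm_fp16_stats",
        inputs=[
            helper.make_tensor_value_info("X", TensorProto.FLOAT16, [1, 2]),
            helper.make_tensor_value_info("Scale", TensorProto.FLOAT16, [2]),
        ],
        outputs=[
            helper.make_tensor_value_info("Y", TensorProto.FLOAT16, [1, 2]),
            helper.make_tensor_value_info("Mean", TensorProto.FLOAT16, [1, 1]),
            helper.make_tensor_value_info("InvStdDev", TensorProto.FLOAT16, [1, 1]),
        ],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 18)])
    # The declared output types now match what inference derives.
    onnx.checker.check_model(model, full_check=True)

Whether the stats should end up float16 or float32 depends on what the PyTorch kernel under comparison returns, which is precisely what the dtype mismatches in the next annotation are about.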
github-actions / Test Results
All 3 runs failed: test_output_match_opinfo__native_layer_norm_cpu_float16 (onnxscript.tests.function_libs.torch_lib.ops_test.TestOutputConsistencyEagerCPU)
artifacts/Test Results (py310-torch-nightly-macos-latest)/pytest.xml [took 2s]
artifacts/Test Results (py310-torch-nightly-ubuntu-latest)/pytest.xml [took 0s]
artifacts/Test Results (py310-torch-nightly-windows-latest)/pytest.xml [took 1s]
Raw output
AssertionError: Output 1 mismatch
onnxscript\tests\function_libs\torch_lib\ops_test.py:266: in run_test_output_match
torch.testing.assert_close(
E AssertionError: The values for attribute 'dtype' do not match: torch.float32 != torch.float16.
The above exception was the direct cause of the following exception:
onnxscript\tests\function_libs\torch_lib\ops_test.py:280: in run_test_output_match
raise AssertionError(f"Output {j} mismatch") from e
E AssertionError: Output 1 mismatch
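These eager-mode failures are the runtime counterpart of the type-inference errors above: output 1 of native_layer_norm is the mean, which the ONNX LayerNormalization function returns in float32 (stash_type = 1) while this torch nightly's CPU kernel returns it in the input dtype. A minimal sketch of the PyTorch side (assuming a nightly with float16 CPU support for native_layer_norm, as in this run):

    import torch

    # native_layer_norm returns (output, mean, rstd). On the CPU builds in
    # this run the statistics come back in the input dtype (float16); the
    # ONNX function emits float32, hence "torch.float32 != torch.float16".
    x = torch.randn(2, 3, dtype=torch.float16)
    out, mean, rstd = torch.native_layer_norm(x, [3], None, None, 1e-5)
    print(out.dtype, mean.dtype, rstd.dtype)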