*Note that this re-initializes the InferenceSession, which may trigger model recompilation and hardware re-initialization.*
### C/C++ API
All the options shown below are passed to the SessionOptionsAppendExecutionProvider_OpenVINO() API by populating the OrtOpenVINOProviderOptions struct, as in the example shown below:
**Note: This API has been deprecated. Please use the key-value mechanism mentioned above to set the 'device_type' option.**
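A minimal C++ sketch of the struct-based call described above. The "GPU_FP32" device string and the "model.onnx" path are illustrative placeholders; valid device strings and the available struct fields depend on your hardware and ONNX Runtime version.

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "openvino-example");
  Ort::SessionOptions session_options;

  // Deprecated struct-based API: populate OrtOpenVINOProviderOptions
  // and append the OpenVINO Execution Provider to the session options.
  OrtOpenVINOProviderOptions ov_options{};
  ov_options.device_type = "GPU_FP32";  // illustrative device string
  session_options.AppendExecutionProvider_OpenVINO(ov_options);

  // "model.onnx" is a placeholder path.
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}
```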
When ONNX Runtime is built with the OpenVINO Execution Provider, a target hardware option needs to be provided. This build-time option becomes the default target hardware on which the EP schedules inference. However, this target may be overridden at runtime to schedule inference on different hardware, as shown below.
Note: This dynamic hardware selection is optional. The EP falls back to the build-time default selection if no hardware option is specified at runtime.
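A minimal sketch of such a runtime override, reusing the struct-based API and the `session_options` object from the example above (the "GPU_FP32" device string is illustrative; with the key-value mechanism, the equivalent would be to set the 'device_type' option):

```cpp
// Override the build-time default target at runtime by setting
// device_type before appending the OpenVINO Execution Provider.
OrtOpenVINOProviderOptions ov_options{};
ov_options.device_type = "GPU_FP32";  // illustrative: schedule inference on GPU instead of the build-time default
session_options.AppendExecutionProvider_OpenVINO(ov_options);
```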
The table below lists the ONNX layers supported and validated with the OpenVINO Execution Provider, along with the Intel hardware support for each layer. CPU refers to Intel<sup>®</sup>
* Improved throughput that multiple devices can deliver (compared to single-device execution)
* More consistent performance, since the devices share the inference burden (so that if one device becomes too busy, another device can take more of the load)
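As a sketch of selecting the Multi-Device plugin through the Execution Provider, assuming a device_type string of the form "MULTI:&lt;device1&gt;,&lt;device2&gt;" and reusing the `session_options` object from the earlier example (the exact syntax and supported device names depend on the OpenVINO and ONNX Runtime versions):

```cpp
// Share inference across multiple devices via OpenVINO's Multi-Device
// plugin by passing a "MULTI:" device list as the device type.
OrtOpenVINOProviderOptions ov_options{};
ov_options.device_type = "MULTI:GPU,CPU";  // illustrative device list
session_options.AppendExecutionProvider_OpenVINO(ov_options);
```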
For more information on the Multi-Device plugin of OpenVINO, please refer to the following: