For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/python).
In the application's CMake configuration, link the ExecuTorch libraries, including the XNNPACK backend, into the target:

```
target_link_libraries(
    # ... executorch and extension libraries ...
    xnnpack_backend)
```
#### Runtime APIs
Both high-level and low-level C++ APIs are provided. The low-level APIs are platform independent, do not dynamically allocate memory, and are most suitable for resource-constrained embedded systems. The high-level APIs are provided as a convenience wrapper around the lower-level APIs, and make use of dynamic memory allocation and standard library constructs to reduce verbosity.
```cpp
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

// Load the model.
Module module("/path/to/model.pte");

// Create an input tensor.
float input[1 * 3 * 224 * 224];
auto tensor = from_blob(input, {1, 3, 224, 224});

// Perform an inference.
const auto result = module.forward(tensor);

if (result.ok()) {
  // ... retrieve and use the output tensors ...
}
```
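The example above uses the high-level `Module` API. The lower-level flow has the caller supply every buffer up front; the following is a minimal sketch only, using the `Program`/`Method` types from the core runtime headers, where the buffer sizes and model path are placeholders and error handling is elided:

```cpp
// Sketch of the low-level, allocation-free flow. Buffer sizes below are
// placeholders; a real application sizes them from the method's metadata.
#include <executorch/extension/data_loader/file_data_loader.h>
#include <executorch/runtime/executor/program.h>
#include <executorch/runtime/platform/runtime.h>

using executorch::extension::FileDataLoader;
using executorch::runtime::Error;
using executorch::runtime::HierarchicalAllocator;
using executorch::runtime::MemoryAllocator;
using executorch::runtime::MemoryManager;
using executorch::runtime::Program;
using executorch::runtime::Span;

int main() {
  executorch::runtime::runtime_init();

  // Parse the program from a .pte file.
  auto loader = FileDataLoader::from("/path/to/model.pte");
  auto program = Program::load(&loader.get());

  // Caller-owned buffers: one pool for runtime structures, one for the
  // memory-planned tensors. The core runtime allocates nothing itself.
  static uint8_t method_pool[4 * 1024];
  static uint8_t planned_pool[1024 * 1024];
  Span<uint8_t> planned_spans[] = {{planned_pool, sizeof(planned_pool)}};

  MemoryAllocator method_allocator(sizeof(method_pool), method_pool);
  HierarchicalAllocator planned_memory({planned_spans, 1});
  MemoryManager memory_manager(&method_allocator, &planned_memory);

  // Load and execute the "forward" method.
  auto method = program->load_method("forward", &memory_manager);
  // ... populate inputs via method->set_input(...) ...
  return method->execute() == Error::Ok ? 0 : 1;
}
```

Because all memory is caller-owned, this flow suits firmware and other environments where heap allocation is unavailable or undesirable.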
For more information on the C++ APIs, see [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md) and [Managing Tensor Memory in C++](extension-tensor.md).
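One distinction the tensor guide draws is between owning tensors and views over caller-managed memory. A brief sketch, assuming the `make_tensor_ptr` overloads described there:

```cpp
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

// Owning tensor: the TensorPtr takes ownership of the data, so it
// remains valid independent of any caller-side buffers.
auto owned = make_tensor_ptr({2, 3}, {1.f, 2.f, 3.f, 4.f, 5.f, 6.f});

// Non-owning view (as in the example above): the caller must keep
// `buffer` alive for as long as the tensor is in use.
float buffer[6] = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};
auto view = from_blob(buffer, {2, 3});
```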
For complete examples of building and running a C++ application, please refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp).