Fix typos in docs. (#5613)
Summary:
Pull Request resolved: #5613

.

Reviewed By: kirklandsign

Differential Revision: D63355422

fbshipit-source-id: 994db615dc7b9f34d2f0c13cb36fbcdefe4172a7
shoumikhin authored and facebook-github-bot committed Sep 24, 2024
1 parent 99ee547 commit 5c56f96
Showing 2 changed files with 29 additions and 29 deletions.
6 changes: 3 additions & 3 deletions docs/source/extension-module.md
@@ -60,7 +60,7 @@ const auto error = module.load_method("forward");
assert(module.is_method_loaded("forward"));
```
- Note: the `Program` is loaded automatically before any `Method` is loaded. Subsequent attemps to load them have no effect if one of the previous attemps was successful.
+ Note: the `Program` is loaded automatically before any `Method` is loaded. Subsequent attempts to load them have no effect if one of the previous attempts was successful.

You can also force-load the "forward" method with a convenience syntax:
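A minimal sketch of that call (assuming the `load_forward()` convenience method):

```cpp
// Load the Program and the "forward" method in one call.
const auto error = module.load_forward();
```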

@@ -72,7 +72,7 @@ assert(module.is_method_loaded("forward"));
### Querying for Metadata
- Get a set of method names that a Module contains udsing the `method_names()` function:
+ Get a set of method names that a Module contains using the `method_names()` function:
```cpp
const auto method_names = module.method_names();
@@ -98,7 +98,7 @@ if (method_meta.ok()) {
if (input_meta.ok()) {
assert(input_meta->scalar_type() == ScalarType::Float);
}
- const auto output_meta = meta->output_tensor_meta(0);
+ const auto output_meta = method_meta->output_tensor_meta(0);

if (output_meta.ok()) {
assert(output_meta->sizes().size() == 1);
52 changes: 26 additions & 26 deletions docs/source/extension-tensor.md
@@ -2,7 +2,7 @@

**Author:** [Anthony Shoumikhin](https://github.com/shoumikhin)

- Tensors are fundamental data structures in ExecuTorch, representing multi-dimensional arrays used in computations for neural networks and other numerical algorithms. In ExecuTorch, the [Tensor](https://github.com/pytorch/executorch/blob/main/runtime/core/portable_type/tensor.h) class doesn’t own its metadata (sizes, strides, dim_order) or data, keeping the runtime lightweight. Users are responsible for supplying all these memory buffers and ensuring that the metadata and data outlive the `Tensor` instance. While this design is lightweight and flexible, especially for tiny embedded systems, it places a significant burden on the user. However, if your environment requires minimal dynamic allocations, a small binary footprint, or limited C++ standard library support, you’ll need to accept that trade-off and stick with the regular `Tensor` type.
+ Tensors are fundamental data structures in ExecuTorch, representing multi-dimensional arrays used in computations for neural networks and other numerical algorithms. In ExecuTorch, the [Tensor](https://github.com/pytorch/executorch/blob/main/runtime/core/portable_type/tensor.h) class doesn’t own its metadata (sizes, strides, dim_order) or data, keeping the runtime lightweight. Users are responsible for supplying all these memory buffers and ensuring that the metadata and data outlive the `Tensor` instance. While this design is lightweight and flexible, especially for tiny embedded systems, it places a significant burden on the user. If your environment requires minimal dynamic allocations, a small binary footprint, or limited C++ standard library support, you’ll need to accept that trade-off and stick with the regular `Tensor` type.

Imagine you’re working with a [`Module`](extension-module.md) interface, and you need to pass a `Tensor` to the `forward()` method. You would need to declare and maintain at least the sizes array and data separately, sometimes the strides too, often leading to the following pattern:
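For reference, a sketch of that manual pattern (assuming the portable `TensorImpl` constructor; sizes, strides, and values are illustrative):

```cpp
// All metadata and data must outlive the Tensor created below.
float data[] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
TensorImpl::SizesType sizes[] = {2, 3};
TensorImpl::DimOrderType dim_order[] = {0, 1};
TensorImpl::StridesType strides[] = {3, 1};

TensorImpl tensor_impl(
    ScalarType::Float,
    std::size(sizes),
    sizes,
    data,
    dim_order,
    strides);

module.forward(Tensor(&tensor_impl));
```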

@@ -33,7 +33,7 @@ You must ensure `sizes`, `dim_order`, `strides`, and `data` stay valid. This mak
To alleviate these issues, ExecuTorch provides `TensorPtr` and `TensorImplPtr` via the new [Tensor Extension](https://github.com/pytorch/executorch/tree/main/extension/tensor) that manage the lifecycle of tensors and their implementations. These are essentially smart pointers (`std::unique_ptr<Tensor>` and `std::shared_ptr<TensorImpl>`, respectively) that handle the memory management of both the tensor's data and its dynamic metadata.
- Now, users no longer need to worry about metadata lifetimes separately. Data ownership is determined based on whether the it is passed by pointer or moved into the `TensorPtr` as an `std::vector`. Everything is bundled in one place and managed automatically, enabling you to focus on actual computations.
+ Now, users no longer need to worry about metadata lifetimes separately. Data ownership is determined based on whether it is passed by pointer or moved into the `TensorPtr` as an `std::vector`. Everything is bundled in one place and managed automatically, enabling you to focus on actual computations.
Here’s how you can use it:
@@ -50,24 +50,24 @@ auto tensor = make_tensor_ptr(
module.forward(tensor);
```
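For illustration, the elided call might look like this sketch (sizes and values are made up):

```cpp
auto tensor = make_tensor_ptr(
    {2, 3},                                      // sizes
    {42.0f, 42.0f, 42.0f, 42.0f, 42.0f, 42.0f}); // data

module.forward(tensor);
```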

- The data is now owned by the tensor instance because it's provided as a vector. To create a non-owning `TensorPtr` just pass the data by pointer. The `type` is deduced automatically from the data vector (`float`). `strides` and `dim_order` are computed automatically to the default values based on the `sizes` if not specified explicitly as extra arguments.
+ The data is now owned by the tensor instance because it's provided as a vector. To create a non-owning `TensorPtr` just pass the data by pointer. The `type` is deduced automatically based on the data vector (`float`). `strides` and `dim_order` are computed automatically to the default values based on the `sizes` if not specified explicitly as additional arguments.

- `EValue` in `Module::forward()` accepts `TensorPtr` directly, ensuring seamless integration. `EValue` can now be constructed implicitly with a smart pointer to any type that it can hold, so `TensorPtr` gets dereferenced implicitly and `EValue` holding a `Tensor` that the `TensorPtr` pointed at is passed to the `forward()`.
+ `EValue` in `Module::forward()` accepts `TensorPtr` directly, ensuring seamless integration. `EValue` can now be constructed implicitly with a smart pointer to any type that it can hold. This allows `TensorPtr` to be dereferenced implicitly when passed to `forward()`, and `EValue` will hold the `Tensor` that the `TensorPtr` points to.

## API Overview

The new API revolves around two main smart pointers:

- `TensorPtr`: `std::unique_ptr` managing a `Tensor` object. Since each `Tensor` instance is unique, `TensorPtr` ensures exclusive ownership.
- - `TensorImplPtr`: `std::shared_ptr` managing a `TensorImpl` object. Multiple `Tensor` instances can share the same `TensorImpl`, so `TensorImplPtr` uses shared ownership.
+ - `TensorImplPtr`: `std::shared_ptr` managing a `TensorImpl` object. Multiple `Tensor` instances can share the same `TensorImpl`, so `TensorImplPtr` ensures shared ownership.

### Creating Tensors

There are several ways to create a `TensorPtr`.

- ### Creating Scalar Tensors
+ #### Creating Scalar Tensors

- You can create a scalar tensor, i.e. a tensor with zero dimensions or with one of sizes being zero.
+ You can create a scalar tensor, i.e. a tensor with zero dimensions or with one of the sizes being zero.

*Providing A Single Data Value*
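A sketch of that overload (the scalar type is deduced from the value; here `int`):

```cpp
// Scalar tensor holding a single value 42 of type int.
auto tensor = make_tensor_ptr(42);
```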

@@ -85,7 +85,7 @@ auto tensor = make_tensor_ptr(42, ScalarType::Float);

Now the integer 42 will be cast to float and the tensor will contain a single value 42 of type float.

- #### Owning a Data Vector
+ #### Owning Data from a Vector

When you provide sizes and data vectors, `TensorPtr` takes ownership of both the data and the sizes.
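A sketch of such a call (both vectors are moved into the `TensorPtr`):

```cpp
auto tensor = make_tensor_ptr(
    {2, 3},                                // sizes, owned by the tensor
    {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f}); // data, owned by the tensor
```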

@@ -101,19 +101,19 @@ The type is deduced automatically as `ScalarType::Float` from the data vector.
*Providing Data Vector with a Type*
- If you provide data of one type but specify a different scalar type, the data will be cast to the specified type.
+ If you provide data of one type but specify a different scalar type, the data will be cast to the given type.
```cpp
auto tensor = make_tensor_ptr(
{1, 2, 3, 4, 5, 6}, // data (int)
ScalarType::Double); // double scalar type
```

- In this example, even though the data vector contains integers, we specify the scalar type as `Double`. The integers are cast to doubles, and the new data vector is owned by the `TensorPtr`. The `sizes` argument is skipped in this example, so the input data vector's size is used. Note that we forbid the opposite cast, when a floating point type casts to an integral type, because that loses precision. Similarly, casting other types to `Bool` isn't allowed.
+ In this example, even though the data vector contains integers, we specify the scalar type as `Double`. The integers are cast to double, and the new data vector is owned by the `TensorPtr`. Since the `sizes` argument is skipped in this example, the tensor is one-dimensional with a size equal to the length of the data vector. Note that the reverse cast, from a floating-point type to an integral type, is not allowed because that loses precision. Similarly, casting other types to `Bool` is disallowed.

*Providing Data Vector as `std::vector<uint8_t>`*

- You can also provide raw data as a `std::vector<uint8_t>`, specifying the sizes and scalar type. The data will be reinterpreted according to the provided type.
+ You can also provide raw data in the form of a `std::vector<uint8_t>`, specifying the sizes and scalar type. The data will be reinterpreted according to the provided type.

```cpp
std::vector<uint8_t> data = /* raw data */;
@@ -125,7 +125,7 @@ auto tensor = make_tensor_ptr(
The `data` vector must be large enough to accommodate all the elements according to the provided sizes and scalar type.
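A fuller sketch of that overload (buffer contents and sizes are illustrative):

```cpp
// Six floats' worth of raw bytes, reinterpreted as a 2x3 float tensor.
std::vector<uint8_t> data(6 * sizeof(float));
auto tensor = make_tensor_ptr(
    {2, 3},              // sizes
    std::move(data),     // raw byte buffer, owned by the tensor
    ScalarType::Float);  // interpret the bytes as floats
```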
- #### Non-Owning a Raw Data Pointer
+ #### Non-Owning Data from Raw Pointer
You can create a `TensorPtr` that references existing data without taking ownership.
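A sketch of the non-owning overload (the caller keeps ownership of `data`):

```cpp
float data[] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
auto tensor = make_tensor_ptr(
    {2, 3},              // sizes
    data,                // borrowed pointer; must outlive the tensor
    ScalarType::Float);  // float scalar type
```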
@@ -143,7 +143,7 @@ The `TensorPtr` does not own the data, you must ensure the `data` remains valid.

*Providing Raw Data with Custom Deleter*

- If you want `TensorPtr` to manage the lifetime of the data, you can provide a custom deleter.
+ If you want the `TensorPtr` to manage the lifetime of the data, you can provide a custom deleter.

```cpp
auto* data = new double[6]{1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
@@ -159,7 +159,7 @@ The `TensorPtr` will call the custom deleter when it is destroyed, i.e. when the
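A sketch of the deleter overload (assuming the deleter is accepted as a trailing argument):

```cpp
auto* data = new double[6]{1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
auto tensor = make_tensor_ptr(
    {2, 3},              // sizes
    data,                // data pointer
    ScalarType::Double,  // double scalar type
    TensorShapeDynamism::DYNAMIC_BOUND,
    [](void* ptr) { delete[] static_cast<double*>(ptr); });
```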
#### Sharing Existing Tensor
- You can create a `TensorPtr` by wrapping an existing `TensorImplPtr`, and the latter can be created with the same collection of APIs as `TensorPtr`. Any changes made to `TensorImplPtr` or any `TensorPtr` sharing the same `TensorImplPtr` get reflected in for all.
+ You can create a `TensorPtr` by wrapping an existing `TensorImplPtr`, and the latter can be created with the same collection of APIs as `TensorPtr`. Any changes made to `TensorImplPtr` or any `TensorPtr` sharing the same `TensorImplPtr` are reflected across all of them.
*Sharing Existing TensorImplPtr*
@@ -171,7 +171,7 @@ auto tensor = make_tensor_ptr(tensor_impl);
auto tensor_copy = make_tensor_ptr(tensor_impl);
```

- Both `tensor` and `tensor_copy` share the underlying `TensorImplPtr`, reflecting changes in data but not in metadata.
+ Both `tensor` and `tensor_copy` share the underlying `TensorImplPtr`, reflecting changes to data but not to metadata.

Also, you can create a new `TensorPtr` that shares the same `TensorImplPtr` as an existing `TensorPtr`.
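A sketch of that (assuming `make_tensor_ptr()` accepts an existing `TensorPtr`):

```cpp
auto tensor_shared = make_tensor_ptr(tensor);  // shares the underlying TensorImplPtr
```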

@@ -192,7 +192,7 @@ Tensor original_tensor = /* some existing tensor */;
auto tensor = make_tensor_ptr(original_tensor);
```

- Now the newly created `TensorPtr` references the same data as the original tensor, but has its own metadata copy, so can interpret or "view" the data differently, but any modifications to the data will be reflected for the original `Tensor` too.
+ Now the newly created `TensorPtr` references the same data as the original tensor, but has its own metadata copy, so it can interpret or "view" the data differently, and any modifications to the data will be reflected in the original `Tensor` as well.

### Cloning Tensors

@@ -211,15 +211,15 @@ auto original_tensor = make_tensor_ptr();
auto tensor = clone_tensor_ptr(original_tensor);
```

- Note that regardless of whether the original `TensorPtr` owns the data or not, the newly created `TensorPtr` will own a copy of the data.
+ Note that, regardless of whether the original `TensorPtr` owns the data or not, the newly created `TensorPtr` will own a copy of the data.

### Resizing Tensors

The `TensorShapeDynamism` enum specifies the mutability of a tensor's shape:

- `STATIC`: The tensor's shape cannot be changed.
- - `DYNAMIC_BOUND`: The tensor's shape can be changed, but can never contain more elements than it had at creation based on the initial sizes.
- - `DYNAMIC`: The tensor's shape can be changed arbitrarily. Note that currently `DYNAMIC` is an alias of `DYNAMIC_BOUND`.
+ - `DYNAMIC_BOUND`: The tensor's shape can be changed but cannot contain more elements than it originally had at creation based on the initial sizes.
+ - `DYNAMIC`: The tensor's shape can be changed arbitrarily. Note that, currently, `DYNAMIC` is an alias for `DYNAMIC_BOUND`.

When resizing a tensor, you must respect its dynamism setting. Resizing is only allowed for tensors with `DYNAMIC` or `DYNAMIC_BOUND` shapes, and you cannot resize a `DYNAMIC_BOUND` tensor to contain more elements than it had initially.

@@ -233,19 +233,19 @@ auto tensor = make_tensor_ptr(
// Number of elements: 6

resize_tensor_ptr(tensor, {2, 2});
- // The tensor's sizes are now {2, 2}
+ // The tensor sizes are now {2, 2}
// Number of elements is 4 < initial 6

resize_tensor_ptr(tensor, {1, 3});
- // The tensor's sizes are now {1, 3}
+ // The tensor sizes are now {1, 3}
// Number of elements is 3 < initial 6

resize_tensor_ptr(tensor, {3, 2});
- // The tensor's sizes are now {3, 2}
+ // The tensor sizes are now {3, 2}
// Number of elements is 6 == initial 6

resize_tensor_ptr(tensor, {6, 1});
- // The tensor's sizes are now {6, 1}
+ // The tensor sizes are now {6, 1}
// Number of elements is 6 == initial 6
```
@@ -378,7 +378,7 @@ This also applies when using functions like `set_input()` or `set_output()` that
## Interoperability with ATen
- If your code is compiled with the preprocessor flag `USE_ATEN_LIB` turned on, all the `TensorPtr` APIs will use `at::` APIs under the hood. E.g. `TensorPtr` becomes a `std::unique_ptr<at::Tensor>` and `TensorImplPtr` becomes `c10::intrusive_ptr<at::TensorImpl>`. This allows for seamless integration with [PyTorch ATen](https://pytorch.org/cppdocs) library.
+ If your code is compiled with the preprocessor flag `USE_ATEN_LIB` enabled, all the `TensorPtr` APIs will use `at::` APIs under the hood. E.g. `TensorPtr` becomes a `std::unique_ptr<at::Tensor>` and `TensorImplPtr` becomes `c10::intrusive_ptr<at::TensorImpl>`. This allows for seamless integration with the [PyTorch ATen](https://pytorch.org/cppdocs) library.
### API Equivalence Table
@@ -421,6 +421,6 @@ Here's a table matching `TensorPtr` creation functions with their corresponding
## Conclusion
- The `TensorPtr` and `TensorImplPtr` in ExecuTorch simplifies tensor memory management by bundling the data and dynamic metadata into smart pointers. This design eliminates the need for users to manage multiple pieces of data and ensures safer and more maintainable code.
+ The `TensorPtr` and `TensorImplPtr` in ExecuTorch simplify tensor memory management by bundling the data and dynamic metadata into smart pointers. This design eliminates the need for users to manage multiple pieces of data and ensures safer and more maintainable code.
- By providing interfaces similar to PyTorch's ATen library, ExecuTorch makes it easier for developers to adopt the new API without a steep learning curve.
+ By providing interfaces similar to PyTorch's ATen library, ExecuTorch simplifies the adoption of the new API, allowing developers to transition without a steep learning curve.
