Fix typos under aten directory (pytorch#87754)
This PR fixes typos in `.md` files under the `aten` directory.

Pull Request resolved: pytorch#87754
Approved by: https://github.com/kit1980
kiszk authored and pytorchmergebot committed Oct 26, 2022
1 parent 4080b1d commit 58dc95b
Showing 2 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions aten/src/ATen/native/README.md
@@ -445,7 +445,7 @@ By default, ATen code generation will generate device check,
which will ensure all the tensor parameters passed to kernel are
on the same device.

- However, in some cases, checking the device is unncessary, because,
+ However, in some cases, checking the device is unnecessary, because,
e.g., you call a function allows to work on multiple devices.
In that case, code generation of the device check can be disabled by adding
`device_check: NoCheck` to your function definition.
@@ -556,7 +556,7 @@ Here're steps to follow to decide the right dispatch keyword:
Note: to support training, you're required to write a formula in
derivatives.yaml since your backend implementations don't support autograd.
- - Yes: you're likely calling other `at::` ops in the implemetation. Go to step 2.
+ - Yes: you're likely calling other `at::` ops in the implementation. Go to step 2.
2. Think about training: does your kernel support autograd? [check autograd support](#will-your-function-be-automatically-differentiable)
- Yes: in other words, you're providing a `CompositeImplicitAutograd` kernel which supports both inference and autograd.
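
For context, a minimal sketch (an editor's illustration, not code from this repository; the name `softplus_ish` is made up) of the composite style referred to above: the kernel is written entirely in terms of other differentiable `at::` ops, so autograd support comes for free and no derivatives.yaml formula is needed.

```cpp
// Hypothetical composite-style kernel: built only from other at:: ops, so
// the autograd graph is derived automatically from the ops it calls.
#include <ATen/ATen.h>

at::Tensor softplus_ish(const at::Tensor& x) {
  // log1p(exp(x)) uses differentiable at:: ops only; no explicit
  // derivative formula is required for this function itself.
  return at::log1p(at::exp(x));
}
```
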
@@ -610,7 +610,7 @@ It shows for a certain operator, what the computed dispatch table looks like after registration.
4. TODO: AutogradCPUOrCUDA
Note that in native_functions.yaml you can mix using backend keywords and alias keywords above for one op:
- - direct registration to backend always has higher precendence than alias
+ - direct registration to backend always has higher precedence than alias
- DO NOT provide multiple alias keywords to the same op: alias keywords have precedence `CompositeExplicitAutograd > CompositeImplicitAutograd`,
e.g. adding both `CompositeImplicitAutograd` and `CompositeExplicitAutograd` kernels for one op will completely ignore `CompositeImplicitAutograd` kernel for
both inference and training. Thus this will trigger an error when native_functions.yaml is parsed.
4 changes: 2 additions & 2 deletions aten/src/ATen/native/cpu/README.md
@@ -64,7 +64,7 @@ within 256bit & 512bits registers. vec defines various operators such as
As an example `ReduceOpsKernel.cpp` implements a generic `kernel_` that reduces
an entire array using a given associative binary operation such as +.

- More explicity, calling `kernel_` with template argument `std::plus` will cause
+ More explicitly, calling `kernel_` with template argument `std::plus` will cause
it to sum up the entire array into a single value.
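
To make the idea concrete, here is a small standalone sketch (an editor's illustration with made-up names, not the actual `kernel_` from `ReduceOpsKernel.cpp`) of reducing an array with an associative binary operation passed as a template argument:

```cpp
#include <functional>
#include <numeric>
#include <vector>

// Generic reduction over a buffer using an associative binary op,
// in the spirit of the kernel_ described above (names are hypothetical).
template <typename T, typename Op>
T reduce_all(const std::vector<T>& data, T init, Op op) {
  return std::accumulate(data.begin(), data.end(), init, op);
}

int main() {
  std::vector<float> xs{1.0f, 2.0f, 3.0f, 4.0f};
  // Instantiating with std::plus sums the whole array into one value (10).
  float total = reduce_all(xs, 0.0f, std::plus<float>{});
  return total == 10.0f ? 0 : 1;
}
```
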

`ReduceOpsKernel.cpp` uses the `CPU_CAPABILITY_*` macros to "know" under which
@@ -73,7 +73,7 @@ generic code, which will be compiled under multipled compilation settings.

`../ReduceOps.cpp` now includes the header `ReduceOpsKernel.h`, which contains
a generic definition of `sumImplAll`. This function allows the user to reduce
- over a dimension or all dimensions. The appropiate capability is chosen at
+ over a dimension or all dimensions. The appropriate capability is chosen at
runtime using cpuinfo. If the current platform has AVX2, `sumImpl` will be set
to `sumImplAll<CPUCapability::AVX2>`.
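
A rough sketch of that runtime-selection pattern (an editor's illustration; `CPUCapability`, `sumImplAll`, and the capability query below are stand-ins rather than the real ATen definitions):

```cpp
#include <cstdio>

enum class CPUCapability { DEFAULT, AVX2 };

// Compile-time specialized implementation; real kernels would differ per
// capability, but both fall back to a plain loop here for brevity.
template <CPUCapability cap>
float sumImplAll(const float* data, int n) {
  float acc = 0.0f;
  for (int i = 0; i < n; ++i) acc += data[i];
  return acc;
}

// Stand-in for a cpuinfo-style query reporting whether AVX2 is available.
bool cpu_has_avx2() { return false; }

int main() {
  // Pick the specialization once at runtime, then call through the pointer.
  float (*sumImpl)(const float*, int) =
      cpu_has_avx2() ? &sumImplAll<CPUCapability::AVX2>
                     : &sumImplAll<CPUCapability::DEFAULT>;
  float xs[4] = {1.0f, 2.0f, 3.0f, 4.0f};
  std::printf("sum = %f\n", sumImpl(xs, 4));
  return 0;
}
```
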
