From 58dc95b321631f40d2f18915f7cb6a68bdbd8607 Mon Sep 17 00:00:00 2001
From: Kazuaki Ishizaki
Date: Wed, 26 Oct 2022 19:29:05 +0000
Subject: [PATCH] Fix typos under aten directory (#87754)

This PR fixes typos in `.md` files under aten directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87754
Approved by: https://github.com/kit1980
---
 aten/src/ATen/native/README.md     | 6 +++---
 aten/src/ATen/native/cpu/README.md | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/aten/src/ATen/native/README.md b/aten/src/ATen/native/README.md
index 01a25e3a978cc..c355423ea7501 100644
--- a/aten/src/ATen/native/README.md
+++ b/aten/src/ATen/native/README.md
@@ -445,7 +445,7 @@ By default, ATen code generation will generate device check,
 which will ensure all the tensor parameters passed to kernel are
 on the same device.
 
-However, in some cases, checking the device is unncessary, because,
+However, in some cases, checking the device is unnecessary, because,
 e.g., you call a function allows to work on multiple devices.
 In that case, code generation of the device check can be disabled by adding
 `device_check: NoCheck` to your function definition.
@@ -556,7 +556,7 @@ Here're steps to follow to decide the right dispatch keyword:
     Note: to support training, you're required to write a formula in
     derivatives.yaml since your backend implementations don't support autograd.
 
-  - Yes: you're likely calling other `at::` ops in the implemetation. Go to step 2.
+  - Yes: you're likely calling other `at::` ops in the implementation. Go to step 2.
 
 2. Think about training: does your kernel support autograd? [check autograd support](#will-your-function-be-automatically-differentiable)
    - Yes: in other words, you're providing a `CompositeImplicitAutograd` kernel which supports both inference and autograd.
@@ -610,7 +610,7 @@ It shows for a certain operator, what the computed dispatch table looks like aft
   4. TODO: AutogradCPUOrCUDA
 
 Note that in native_functions.yaml you can mix using backend keywords and alias keywords above for one op:
-  - direct registration to backend always has higher precendence than alias
+  - direct registration to backend always has higher precedence than alias
   - DO NOT provide multiple alias keywords to the same op: alias keywords have precedence `CompositeExplicitAutograd > CompositeImplicitAutograd`,
     e.g. adding both `CompositeImplicitAutograd` and `CompositeExplicitAutograd` kernels for one op will completely ignore `CompositeImplicitAutograd` kernel for
     both inference and training. Thus this will trigger an error when native_functions.yaml is parsed.
diff --git a/aten/src/ATen/native/cpu/README.md b/aten/src/ATen/native/cpu/README.md
index ab2f9d3d02609..2cf6fa0a13320 100644
--- a/aten/src/ATen/native/cpu/README.md
+++ b/aten/src/ATen/native/cpu/README.md
@@ -64,7 +64,7 @@ within 256bit & 512bits registers. vec defines various operators such as
 
 As an example `ReduceOpsKernel.cpp` implements a generic `kernel_` that reduces
 an entire array using a given associative binary operation such as +.
-More explicity, calling `kernel_` with template argument `std::plus` will cause
+More explicitly, calling `kernel_` with template argument `std::plus` will cause
 it to sum up the entire array into a single value.
 
 `ReduceOpsKernel.cpp` uses the `CPU_CAPABILITY_*` macros to "know" under which
@@ -73,7 +73,7 @@ generic code, which will be compiled under multipled compilation settings.
 
 `../ReduceOps.cpp` now includes the header `ReduceOpsKernel.h`, which contains
 a generic definition of `sumImplAll`. This function allows the user to reduce
-over a dimension or all dimensions. The appropiate capability is chosen at
+over a dimension or all dimensions. The appropriate capability is chosen at
 runtime using cpuinfo. If the current platform has AVX2, `sumImpl` will be set
 to `sumImplAll`.
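
As context for the cpu/README.md hunks above, here is a minimal, self-contained sketch of the pattern those paragraphs describe: a reduction templated on an associative binary operation, where instantiating it with `std::plus` sums the whole array. The `reduce_all` helper below is a hypothetical stand-in for illustration only, not ATen's actual `kernel_` or `sumImplAll`.

// Toy illustration (assumed names, not ATen code) of a reduction kernel
// parameterized by an associative binary operation, in the spirit of the
// `kernel_` template described in cpu/README.md.
#include <functional>
#include <iostream>
#include <vector>

// Reduce a contiguous array with the given binary operation `op`,
// starting from `init`.
template <typename scalar_t, typename Op>
scalar_t reduce_all(const std::vector<scalar_t>& data, scalar_t init, Op op) {
  scalar_t acc = init;
  for (const scalar_t& v : data) {
    acc = op(acc, v);
  }
  return acc;
}

int main() {
  std::vector<float> data = {1.0f, 2.0f, 3.0f, 4.0f};
  // Using std::plus as the operation sums the entire array into one value,
  // mirroring what the README says about calling `kernel_` with `std::plus`.
  float total = reduce_all(data, 0.0f, std::plus<float>());
  std::cout << "sum = " << total << "\n";  // prints: sum = 10
  return 0;
}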