Don't use vector accessor methods to do pointer math; unblock platform010 (pytorch#73414)

Summary:
Pull Request resolved: pytorch#73414

The code here previously used this creative pointer math
```
const auto end = reinterpret_cast<uintptr_t>(&managed_tensor_storage_impls_.at(managed_tensor_storage_impls_.size()));
```
This has the form
```
const auto end = &A[N];
```
This works just fine if `A` is a C-style array, since `&A[N]` can be transformed to `(A + N)`, where `A` decays to a simple pointer, without ever dereferencing.

But this is C++ and `A` is a `std::vector`, so `A[N]` calls the accessor method, reaches into an illegal place in memory, and then we take the address of the result. (Or so I deduce.)

We sidestep the issue by using `data()` to get the desired memory address directly.

Test Plan: Sandcastle

Reviewed By: meyering

Differential Revision: D34468166

fbshipit-source-id: d1bcbdceddd7c1da8204f90a446793945ebe9a34
(cherry picked from commit e93cc37)
r-barnes authored and pytorchmergebot committed Feb 26, 2022
1 parent 12890ab commit 33ca944
Showing 1 changed file with 2 additions and 1 deletion.
torch/csrc/jit/runtime/static/memory_planner.h
```diff
@@ -171,7 +171,8 @@ class MemoryPlanner {
     const auto start =
         reinterpret_cast<uintptr_t>(managed_tensor_storage_impls_.data());
     const auto end = reinterpret_cast<uintptr_t>(
-        &managed_tensor_storage_impls_[managed_tensor_storage_impls_.size()]);
+        managed_tensor_storage_impls_.data() +
+        managed_tensor_storage_impls_.size());
     return impl_p >= start && impl_p < end;
   }
```

