Commit f6ff5d8
Append parameters when checking graphs for TorchScript Methods (pytorch#13553)

Summary:
Also, add an assertion in the GraphExecutor to make sure we don't
access memory out of bounds.
Pull Request resolved: pytorch#13553

Differential Revision: D12924796

Pulled By: soumith

fbshipit-source-id: ea2a134084538484178b8ebad33d6716a8e1d633
apaszke authored and facebook-github-bot committed Nov 6, 2018
1 parent f3c197d commit f6ff5d8
Showing 2 changed files with 6 additions and 2 deletions.
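
Context for the diffs below: Method::run pushes the module's parameter tensors (member_inputs) onto the stack before executing, but graph_for previously forwarded only the user-supplied inputs. Since GraphExecutorImpl::graphFor reads the trailing num_inputs elements of the stack, the shorter stack could be read out of bounds. The standalone sketch below (not PyTorch code; MiniExecutor, MiniMethod, and the int stand-ins for tensors are all hypothetical) models the mismatch and the shape of the fix.

```cpp
// Minimal sketch, not PyTorch code: MiniExecutor/MiniMethod are hypothetical
// stand-ins for GraphExecutorImpl and script::Method, and ints stand in for
// tensors.
#include <cassert>
#include <cstddef>
#include <iostream>
#include <vector>

using Stack = std::vector<int>;  // stand-in for the JIT's Stack of IValues

struct MiniExecutor {
  std::size_t num_inputs;  // counts user inputs plus module parameters
  // Mirrors graphFor: it inspects the trailing num_inputs elements of the
  // stack, so the caller must have pushed all of them.
  void graph_for(const Stack& stack) const {
    assert(stack.size() >= num_inputs);  // the assertion this commit adds
    // ... would slice last(stack, num_inputs) and look up a cached graph ...
  }
};

struct MiniMethod {
  std::vector<int> member_inputs;  // stand-in for the parameter tensors
  MiniExecutor executor;

  // Before the fix: only the user inputs are forwarded, so the executor's
  // trailing slice can run past the start of the stack.
  void graph_for_before_fix(Stack inputs) const { executor.graph_for(inputs); }

  // After the fix: append the parameters first, exactly as run() already
  // does, so the stack the executor sees has the expected length.
  void graph_for_after_fix(Stack inputs) const {
    for (int p : member_inputs) inputs.push_back(p);
    executor.graph_for(inputs);
  }
};

int main() {
  MiniMethod m{{7, 8}, MiniExecutor{3}};  // 1 user input + 2 parameters
  m.graph_for_after_fix({1});      // OK: stack size 3 >= num_inputs (3)
  // m.graph_for_before_fix({1});  // would trip the assert: size 1 < 3
  std::cout << "lookup saw a correctly sized stack\n";
}
```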
1 change: 1 addition & 0 deletions torch/csrc/jit/graph_executor.cpp
@@ -350,6 +350,7 @@ struct GraphExecutorImpl {
   }
 
   std::shared_ptr<Graph> graphFor(const Stack& stack) const {
+    JIT_ASSERT(stack.size() >= num_inputs);
     auto inputs = last(stack, num_inputs);
     ArgumentSpec spec(autograd::GradMode::is_enabled(), inputs, num_flat_inputs);
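
A note on the assertion above: assuming last(stack, num_inputs) returns a view over the trailing num_inputs elements (as its use here suggests), a stack with fewer elements would make the offset arithmetic wrap around rather than fail cleanly. A hypothetical last_n helper sketches the failure mode the JIT_ASSERT now guards against:

```cpp
// Hypothetical last_n, sketching the trailing-slice semantics assumed for
// last(); not the real torch/csrc/jit helper.
#include <cassert>
#include <cstddef>
#include <vector>

template <typename T>
const T* last_n(const std::vector<T>& stack, std::size_t n) {
  // Without this check, stack.size() - n underflows (size_t wraps around)
  // whenever n exceeds the stack size, yielding a wild pointer.
  assert(stack.size() >= n);
  return stack.data() + (stack.size() - n);
}

int main() {
  std::vector<int> stack{1, 2, 3};
  const int* tail = last_n(stack, 2);  // OK: points at the {2, 3} suffix
  (void)tail;
  // last_n(stack, 5);  // would trip the assertion instead of wrapping
}
```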

7 changes: 5 additions & 2 deletions torch/csrc/jit/script/module.h
@@ -58,7 +58,7 @@ struct Method {

   void run(Stack & stack) {
     for(at::Tensor* tp : member_inputs) {
-      stack.push_back(*tp);
+      stack.emplace_back(*tp);
     }
     get_executor().run(stack);
   }
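
One small change in this hunk beyond the bug fix: push_back(*tp) builds a temporary stack element from the tensor and then moves it in, while emplace_back(*tp) constructs the element in place. A hedged micro-example with a stand-in Boxed type (IValue-like only in that it converts implicitly from something else):

```cpp
// Micro-example with a stand-in Boxed type; not IValue itself.
#include <vector>

struct Boxed {
  Boxed(int) {}  // implicit conversion, like IValue's tensor constructor
};

int main() {
  std::vector<Boxed> stack;
  stack.push_back(7);     // builds a Boxed temporary, then moves it in
  stack.emplace_back(7);  // constructs the Boxed in place: one fewer move
}
```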
@@ -72,7 +72,10 @@ struct Method {
     return stack.front();
   }
 
-  std::shared_ptr<Graph> graph_for(const Stack& inputs) {
+  std::shared_ptr<Graph> graph_for(Stack inputs) {
+    for(at::Tensor* tp : member_inputs) {
+      inputs.emplace_back(*tp);
+    }
     return get_executor().graphFor(inputs);
   }
   std::shared_ptr<Graph> graph() const {
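
Also worth noting in this hunk: the signature changed from const Stack& to Stack because graph_for now appends the parameters, and taking the stack by value keeps that append local instead of mutating the caller's vector. A minimal sketch of the design choice (Stack here is a stand-in alias, not the real torch::jit::Stack):

```cpp
// Sketch of the by-value choice; Stack is a stand-in alias, not torch::jit's.
#include <cassert>
#include <vector>

using Stack = std::vector<int>;

Stack with_params(Stack inputs, const std::vector<int>& params) {
  // Appending to the by-value copy leaves the caller's stack untouched,
  // which is why the signature moved away from const Stack&.
  for (int p : params) inputs.push_back(p);
  return inputs;
}

int main() {
  Stack user{1, 2};
  Stack full = with_params(user, {7, 8});
  assert(user.size() == 2 && full.size() == 4);  // caller's stack unchanged
}
```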
