Commit 338ef26

Remove explicit dereferencing for TensorPtr converted implicitly to EValue. (pytorch#5278)

Summary: Pull Request resolved: pytorch#5278

Reviewed By: kirklandsign

Differential Revision: D62512518

fbshipit-source-id: c1f32dd398cb58833ca3fa95b0cd1ab5c9984de9
shoumikhin authored and facebook-github-bot committed Sep 11, 2024
1 parent d689722 commit 338ef26
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion examples/qualcomm/oss_scripts/llama2/runner/runner.cpp
@@ -145,7 +145,7 @@ Result<torch::executor::Tensor> Runner::run_model_step(
   token->mutable_data_ptr<int32_t>()[0] = input_token;
 
   // inputs:[tokens, start_pos, atten_mask, k_cache, v_cache]
-  auto outputs_res = module_->forward({*token, *start_pos, *atten_mask});
+  auto outputs_res = module_->forward({token, start_pos, atten_mask});
   ET_CHECK_OK_OR_RETURN_ERROR(outputs_res.error());
 
   // TODO: need to handle batch size != 1
2 changes: 1 addition & 1 deletion extension/llm/runner/text_decoder_runner.cpp
@@ -42,7 +42,7 @@ ::executorch::runtime::Result<exec_aten::Tensor> TextDecoderRunner::step(
     TensorPtr& start_pos) {
   // ET_LOG(Info, "Input token %" PRIu64, input_token);
   if (use_kv_cache_) {
-    auto outputs_res = module_->forward({*tokens, *start_pos});
+    auto outputs_res = module_->forward({tokens, start_pos});
     ET_CHECK_OK_OR_RETURN_ERROR(outputs_res.error());
     ET_CHECK_MSG(
         outputs_res.get().size() == 1,
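For context, a minimal sketch of the call pattern this commit enables, assuming the ExecuTorch extension Module and TensorPtr APIs; the model path, tensor shape, and data below are hypothetical, and exact signatures may vary by version. Because TensorPtr now converts implicitly to EValue, callers pass the smart pointer to forward() directly rather than dereferencing it:

#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using executorch::extension::Module;
using executorch::extension::make_tensor_ptr;

int main() {
  Module module("model.pte"); // hypothetical .pte file path

  // A 1x3 float tensor owned by a TensorPtr (a smart pointer to a Tensor).
  auto input = make_tensor_ptr({1, 3}, {1.0f, 2.0f, 3.0f});

  // Before this commit: module.forward({*input}); // explicit dereference
  // After: the TensorPtr converts to EValue implicitly, as in the diffs above.
  auto result = module.forward({input});
  return result.ok() ? 0 : 1;
}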
