Error when running inference for nanoGPT LLM example #3465

Closed
@bryangarza

Description

Hi,

I am following the instructions at https://github.com/pytorch/executorch/blob/main/docs/source/llm/getting-started.md and got to the "Building and Running" section, where you compile the C++ code and run inference. However, when I enter a prompt, I get the error below. I also tried other prompts, but the result is the same.

I am on an Apple M1 Pro running macOS 13.6.6.

```
Enter model prompt: Hello world!
E 00:00:33.096084 executorch:op_split_with_sizes_copy.cpp:60] Check failed (tensor_is_broadcastable_to( {target_out_sizes, target_out_ndim}, out[i].sizes())):
E 00:00:33.096114 executorch:method.cpp:1034] KernelCall failed at instruction 0:11 in operator aten::split_with_sizes_copy.out: 0x12
E 00:00:33.096130 executorch:method.cpp:1040] arg 0 with type id 1
E 00:00:33.096132 executorch:method.cpp:1040] arg 1 with type id 8
E 00:00:33.096133 executorch:method.cpp:1040] arg 2 with type id 4
E 00:00:33.096134 executorch:method.cpp:1040] arg 3 with type id 9
E 00:00:33.096135 executorch:method.cpp:1040] arg 4 with type id 9
F 00:00:33.096137 executorch:result.h:165] In function CheckOk(), assert failed: hasValue_
Hello world!zsh: abort      ./cmake-out/nanogpt_runner
```

If you need any more details from me to help make this reproducible, just let me know. Thanks!
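For context, `aten::split_with_sizes_copy.out` splits a tensor along a dimension into chunks of the given sizes, and the failing check asserts that each preallocated output buffer's shape matches (is broadcastable to) the expected chunk shape. The following is a minimal, illustrative pure-Python sketch of the splitting semantics on a flat list (a stand-in only; the real kernel operates on tensors and dims, and the function name here is made up for illustration):

```python
# Illustrative stand-in for split_with_sizes semantics on a flat list.
# The real ExecuTorch kernel additionally checks that each output
# buffer's shape is broadcastable to the computed chunk shape, which
# is the check that fails in the log above.

def split_with_sizes(seq, sizes):
    # The split sizes must account for the whole length being split.
    if sum(sizes) != len(seq):
        raise ValueError(
            f"split sizes {sizes} must sum to input length {len(seq)}"
        )
    chunks, start = [], 0
    for size in sizes:
        chunks.append(seq[start:start + size])
        start += size
    return chunks

print(split_with_sizes([1, 2, 3, 4, 5, 6], [2, 1, 3]))
# [[1, 2], [3], [4, 5, 6]]
```

So the abort suggests the exported program's expected chunk shapes and the runner's preallocated output shapes disagree at runtime, rather than a problem with the prompt itself.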

Metadata

Labels

module: doc (Issues related to documentation, both in docs/ and inlined in code)
