
Kernels assume dim <= kTensorDimensionLimit #8237

Open
@dbort

Description


🐛 Describe the bug

While reviewing #7943, it surfaced that many kernels and kernel utilities assume that tensors will never have more than kTensorDimensionLimit dimensions, or in some cases define their own values for kTensorDimensionLimit without checking that the tensor's dimension count actually falls within that limit.

Nothing in the runtime today limits the number of dimensions of a tensor. It would be legal to construct a PTE file that contains a tensor with kTensorDimensionLimit + 1 dimensions, and it would be legal for a model to define an input with kTensorDimensionLimit + 1 dimensions.

And even if the model's tensors fit within the assumed size, a PTE file could become corrupted in a way that increases a tensor's dimensions.

If a kernel or backend encounters a tensor with an unexpectedly large number of dimensions, it may overrun stack-resident buffers, causing subtle or not-so-subtle memory corruption and crashes. This is also a potential security attack vector.
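To make the failure mode concrete, here is a sketch of the unsafe pattern and a guarded alternative. The limit value of 16, the buffer name, and the helper function are illustrative assumptions for this sketch, not the actual ExecuTorch API.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative stand-in; the real kTensorDimensionLimit is defined by the
// runtime and may differ.
constexpr size_t kTensorDimensionLimit = 16;

// Unsafe pattern: dim() is trusted, so a tensor with more than
// kTensorDimensionLimit dimensions writes past the end of the stack array.
//
//   int32_t sizes[kTensorDimensionLimit];
//   for (size_t i = 0; i < tensor.dim(); ++i) {
//     sizes[i] = tensor.size(i);  // overruns when dim() > limit
//   }

// Guarded version: validate dim before touching the fixed-size buffer,
// and surface an error to the caller instead of corrupting the stack.
bool copy_sizes_checked(const int32_t* sizes, size_t dim,
                        int32_t out[kTensorDimensionLimit]) {
  if (dim > kTensorDimensionLimit) {
    return false;  // caller should report a kernel error, not crash
  }
  for (size_t i = 0; i < dim; ++i) {
    out[i] = sizes[i];
  }
  return true;
}
```

The key point is that the check happens before any write: rejecting an oversized dim is a recoverable kernel error, while an unchecked copy is undefined behavior.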

Auditing existing code is one step, followed by adding regression tests to ensure that fixes stay fixed.

Fuzzing is another important tool, along with tools like @manuelcandales's kernel testing tool. These would help us uncover unknown assumptions and run regression tests against the code.
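As one illustration of the fuzzing direction, a minimal libFuzzer-style harness could feed arbitrary dimension counts and size data to a dimension-handling utility and rely on sanitizers to flag any overrun. The `parse_and_check` helper and the limit value of 16 below are hypothetical, chosen only to make the harness self-contained.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative stand-in for the runtime's kTensorDimensionLimit.
constexpr size_t kTensorDimensionLimit = 16;

// Hypothetical utility under test: treats the first input byte as the
// tensor's dim and the remaining bytes as size data. It must refuse dim
// values beyond the limit instead of reading past a fixed-size buffer.
bool parse_and_check(const uint8_t* data, size_t len) {
  if (len == 0) {
    return false;
  }
  size_t dim = data[0];
  if (dim > kTensorDimensionLimit || len - 1 < dim) {
    return false;  // malformed or oversized input: reject, don't crash
  }
  int32_t sizes[kTensorDimensionLimit];
  for (size_t i = 0; i < dim; ++i) {
    sizes[i] = data[1 + i];  // copy stays within the stack buffer
  }
  return sizes[0] >= 0;  // trivially true; keeps the buffer observable
}

// libFuzzer entry point: the fuzzer supplies arbitrary bytes; any crash
// or sanitizer report here indicates a bug in parse_and_check.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  parse_and_check(data, size);
  return 0;
}
```

Built with `-fsanitize=fuzzer,address`, a harness like this would quickly surface any code path that trusts an attacker-controlled dim.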

cc: @swolchok

Versions

ExecuTorch 0.5.0

cc @larryliu0820 @manuelcandales

Metadata


    Labels

    module: kernels (Issues related to kernel libraries and utilities, and code under kernels/)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
