Fix allocation-size-too-big crash in prepare_input_tensors #8233
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8233
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: if your PR is affected, please view it below.
❌ 1 New Failure, 2 Pending as of commit f71670a with merge base e7fd150; one job has failed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D68876117
@@ -40,10 +62,29 @@ Result<BufferCleanup> prepare_input_tensors(Method& method) {
  }
  Result<TensorInfo> tensor_meta = method_meta.input_tensor_meta(i);
  if (!tensor_meta.ok()) {
    BufferCleanup cleanup({inputs, num_allocated});
This PR also fixes this buffer leak
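For context, here is a minimal sketch of the error-path cleanup pattern the comment refers to, using a simplified, hypothetical stand-in for `BufferCleanup` (the real ExecuTorch type and signatures may differ): wrapping the already-allocated buffers before the early return ties their lifetime to the scope, so the error path no longer leaks them.

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical, simplified stand-in for ExecuTorch's BufferCleanup:
// frees every buffer it was handed when it goes out of scope.
struct BufferCleanupSketch {
  void** buffers;
  std::size_t count;
  ~BufferCleanupSketch() {
    for (std::size_t i = 0; i < count; ++i) {
      std::free(buffers[i]);
    }
  }
};

// Sketch of the fixed error path: if metadata lookup fails partway through,
// wrap the buffers allocated so far so the early return frees them.
bool prepare_inputs_sketch(void** inputs, std::size_t num_inputs, std::size_t fail_at) {
  std::size_t num_allocated = 0;
  for (std::size_t i = 0; i < num_inputs; ++i) {
    if (i == fail_at) {  // stands in for `!tensor_meta.ok()`
      BufferCleanupSketch cleanup{inputs, num_allocated};
      return false;  // cleanup's destructor frees the prior allocations here
    }
    inputs[i] = std::malloc(16);
    ++num_allocated;
  }
  return true;
}
```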
I was thinking that testing would require adding PTE files with a large number of inputs and large allocations, but I could use an existing PTE and just reduce the limits. I'll look into it.
Force-pushed from 5b572b5 to f71670a ("Fix allocation-size-too-big crash in prepare_input_tensors"); the commit message matches the Summary below.
This pull request was exported from Phabricator. Differential Revision: D68876117
Added tests, thanks for pushing me to do so :) So many of these fuzzer fixes aren't unit-testable without corrupt PTE files, but this fix is.
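As an illustration of the kind of check such a unit test can exercise with an ordinary PTE and reduced limits, here is a hedged sketch; `check_input_limits` and the limit values are illustrative stand-ins, not the PR's actual API or constants.

```cpp
#include <gtest/gtest.h>

#include <cstddef>

// Illustrative helper mirroring the described guard: reject the request
// up front if the input count or the summed byte size exceeds a cap.
bool check_input_limits(
    std::size_t num_inputs,
    std::size_t total_bytes,
    std::size_t max_inputs,
    std::size_t max_total_bytes) {
  return num_inputs <= max_inputs && total_bytes <= max_total_bytes;
}

TEST(PrepareInputLimitsSketch, PassesUnderLimits) {
  EXPECT_TRUE(check_input_limits(
      /*num_inputs=*/4, /*total_bytes=*/1024,
      /*max_inputs=*/16, /*max_total_bytes=*/4096));
}

TEST(PrepareInputLimitsSketch, FailsWhenLimitsAreReduced) {
  // Shrinking the limits below what an ordinary model needs, as suggested
  // in the review thread, should hit the failure path instead of allocating.
  EXPECT_FALSE(check_input_limits(
      /*num_inputs=*/4, /*total_bytes=*/1024,
      /*max_inputs=*/2, /*max_total_bytes=*/4096));
}
```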
Summary:
(Adapted from an LLM-suggested fix for a fuzzer-discovered crash)

The crash is an allocation-size-too-big error that occurs when `prepare_input_tensors` attempts to allocate an excessively large amount of memory for the `inputs` array. The root cause is the lack of bounds checking on the `num_inputs` value, which allows the function to attempt an arbitrarily large allocation. This is exacerbated by the fact that the function allocates memory for each input tensor separately, without checking the total size of all tensors before allocating memory for the `inputs` array.

The patch fixes the crash by adding bounds checking on `num_inputs` and by calculating the total size of all tensors before allocating memory for the `inputs` array.

Differential Revision: D68876117
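A hedged sketch of the approach the summary describes, with assumed names and limit values (not the PR's actual constants): validate `num_inputs` against a cap and sum the per-tensor sizes with overflow checking before allocating the `inputs` array.

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Illustrative caps; the PR's actual limits may differ.
constexpr std::size_t kMaxInputs = 1024;
constexpr std::size_t kMaxTotalInputBytes = std::size_t{1} << 30;  // 1 GiB

// Sketch: reject unreasonable requests before touching the allocator,
// instead of discovering the problem via an allocation-size-too-big abort.
void** allocate_inputs_checked(
    std::size_t num_inputs, const std::size_t* input_nbytes) {
  if (num_inputs == 0 || num_inputs > kMaxInputs) {
    std::fprintf(stderr, "Unreasonable number of inputs: %zu\n", num_inputs);
    return nullptr;
  }
  std::size_t total = 0;
  for (std::size_t i = 0; i < num_inputs; ++i) {
    // Guard against overflow while summing the per-tensor sizes.
    if (input_nbytes[i] > kMaxTotalInputBytes - total) {
      std::fprintf(stderr, "Total input size exceeds the limit\n");
      return nullptr;
    }
    total += input_nbytes[i];
  }
  // Only now allocate the pointer table (the per-tensor buffers follow).
  return static_cast<void**>(std::malloc(num_inputs * sizeof(void*)));
}
```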