[NPUW] L0 allocation improvements #27011
Just thinking... if you're using `allocTensor` only where we store `ITensor`s, why can't `allocTensor` return the `ITensor`, so you don't need to call `get_tensor_impl` everywhere?
Done
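A minimal sketch of what the suggested refactor could look like. The types here are simplified stand-ins, not the real OpenVINO classes (the actual plugin code deals in `ov::Tensor` wrappers and `ov::SoPtr<ov::ITensor>` implementations); the point is only the change of return type:

```cpp
#include <memory>
#include <string>

// Stand-ins for the OpenVINO types (assumption: the real code wraps an
// ITensor implementation inside a Tensor and unwraps it via get_tensor_impl).
struct ITensor {};
struct Tensor {
    std::shared_ptr<ITensor> impl;
};
std::shared_ptr<ITensor> get_tensor_impl(const Tensor& t) { return t.impl; }

// Before: callers receive a Tensor and must unwrap it at every call site.
Tensor allocTensorWrapped(const std::string& device = "NPU") {
    (void)device;  // device selection elided in this sketch
    return Tensor{std::make_shared<ITensor>()};
}

// After (the reviewer's suggestion): return the ITensor directly,
// so call sites no longer repeat get_tensor_impl().
std::shared_ptr<ITensor> allocTensor(const std::string& device = "NPU") {
    return get_tensor_impl(allocTensorWrapped(device));
}
```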
Maybe we need an overload for `allocTensor` which takes `ov::Input<ov::Node>` / `ov::Output<ov::Node>`.
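The idea behind such an overload is that a model port already carries the element type and shape, so call sites wouldn't need to spell them out. A hedged sketch with simplified stand-in types (the `Port` type here only mimics the `get_element_type()` / shape accessors of `ov::Output<ov::Node>`):

```cpp
#include <memory>
#include <string>

// Simplified tensor: just records what it was allocated with.
struct ITensor {
    std::string type;
    size_t size;
};

// Stand-in for ov::Output<ov::Node>: knows its element type and shape size.
struct Port {
    std::string element_type;
    size_t shape_size;
    const std::string& get_element_type() const { return element_type; }
    size_t get_shape_size() const { return shape_size; }
};

// Base allocator: explicit type/size, device defaulting to NPU.
std::shared_ptr<ITensor> allocTensor(const std::string& type, size_t size,
                                     const std::string& device = "NPU") {
    (void)device;  // device selection elided in this sketch
    return std::make_shared<ITensor>(ITensor{type, size});
}

// Suggested overload: derive the allocation parameters from the port.
std::shared_ptr<ITensor> allocTensor(const Port& port,
                                     const std::string& device = "NPU") {
    return allocTensor(port.get_element_type(), port.get_shape_size(), device);
}
```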
Note - you're not passing any device here. And you have `= "NPU"` as the default parameter.
About the default device - that was the idea
Done
Not sure if it's correct
Here we check global parameters. The idea is that global parameter allocation depends solely on `m_alloc_required` - if it's set, the parameters will be allocated on NPU.
haven't checked on this yet
I think `m_alloc_required` is overall misleading. It is always required.

Also, you've missed this in my past L0 PR: dmatveev#5

More precisely, this part: https://github.com/dmatveev/openvino/blob/e7d62f1a4412f639d0fb112e4f5647eeff9a1b8e/src/plugins/intel_npu/src/plugin/npuw/just_sync_infer_request.cpp#L117

And then this part: https://github.com/dmatveev/openvino/blob/e7d62f1a4412f639d0fb112e4f5647eeff9a1b8e/src/plugins/intel_npu/src/plugin/npuw/just_sync_infer_request.cpp#L370

The backstory here is that, even if you've allocated your model-global input tensors yourself, they may be overwritten. Even our scripts carelessly do this, unfortunately. So what you need to do is keep track of the tensors you allocate (maybe you can just memorize the pointers in your new `allocTensor` method) and check whether the tensors you're working with are still "known" to you. Once there's a `set_tensor` call, you lose it, and your `m_alloc_required` flag no longer tells the truth.
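The bookkeeping described above could be sketched as follows. This is a minimal illustration with a stand-in tensor type, not the plugin's actual implementation: `allocTensor` memorizes the raw pointers it hands out, and a `set_tensor`-style substitution removes a tensor from the "known" set, so later code can check before assuming the tensor still lives in NPU memory:

```cpp
#include <memory>
#include <set>

// Simplified stand-in for the real tensor type.
struct ITensor {};

class RequestAllocations {
    std::set<const ITensor*> m_known;  // pointers of tensors we allocated

public:
    // Allocate a tensor and memorize its pointer as "ours".
    std::shared_ptr<ITensor> allocTensor() {
        auto t = std::make_shared<ITensor>();
        m_known.insert(t.get());
        return t;
    }

    // Called when a user-provided tensor replaces one of ours (set_tensor):
    // the replaced tensor is no longer what the request will actually use.
    void on_set_tensor(const ITensor* replaced) { m_known.erase(replaced); }

    // Before relying on "this was allocated on NPU", check it is still ours.
    bool is_known(const ITensor* t) const { return m_known.count(t) != 0; }
};
```

With such tracking, a single flag like `m_alloc_required` is no longer the source of truth; the per-tensor check is.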
Good point, thanks