Changes to allow the export of functions with no user input. #8031


Merged

Conversation

@cptspacemanspiff (Contributor) commented Jan 29, 2025

Summary

This is a set of changes that I have been using to get executorch working with multiple entry points that share state.
This is related to #8030.

Issue & Changes

Currently, trying to export a function like:

```python
def init(self):
    self.shared_state.fill_(0)
```

will fail: in `to_edge`, exir's `program.py` assumes the lifted constants are placed before the first user input, which causes weird behavior when there are no user inputs. I changed it to place the lifted constants after the last buffer when there are no user inputs to place them before. However, this relies on my understanding of the implicit layout of the graph inputs.
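
To illustrate, here is a rough sketch of that insertion-point search (hypothetical code assuming `ep` is the ExportedProgram and using torch.export's `InputKind`; not the exact diff):

```python
from torch.export.graph_signature import InputKind

# Pair each placeholder with its input spec and remember the last buffer,
# so lifted constants can be inserted after it when there are no user inputs.
placeholders = [n for n in ep.graph_module.graph.nodes if n.op == "placeholder"]
last_buffer = None
for node, spec in zip(placeholders, ep.graph_signature.input_specs):
    if spec.kind == InputKind.BUFFER:
        last_buffer = node
# Lifted-constant placeholders would then be inserted after `last_buffer`
# (falling back to the start of the graph if there are no buffers either).
```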

Later, after the memory planning phase of to_executorch, it validates that memory planning was correct based on whether the `graph_input_allocated` flag is set. This only applies to user inputs, of which we have none, so it errors out. I added a check to bypass this error when there are no user inputs, but I honestly do not understand the validation check well enough to know whether that is appropriate.

Comments

In the current executorch, with no support for shared state, this case does not make sense, but #8030 is my attempt at adding that capability. Having initialization methods that init the buffers from constants is useful, especially since their initial state is undefined.

Currently this is not ready: I have no tests, my random notes are still in the commit as comments, and beyond validating that it worked while I was working on the shared-state export I have not done anything in depth.

But I would like feedback on whether my solution seems correct or I am missing something, particularly regarding my understanding of placeholder ordering and signature logic, and whether bypassing the `graph_input_allocated` validation is appropriate.

Out of all the changes I had to make for shared state, this is the one I am least sure about.

cc @JacobSzwejbka @angelayi

pytorch-bot commented Jan 29, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8031

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures

As of commit d7284e9 with merge base 19a3002:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jan 29, 2025
@JacobSzwejbka (Contributor)

@ydwu4 @angelayi can you guys take a look at this one?

@JacobSzwejbka (Contributor)

Hmm, I thought we already supported functions with no inputs, but I guess before we only tested with constant returns. Now there's no user return either.

cc @tarun292 who I think worked on the no inputs thing before

```python
# only check if inputs are allocated if there are user inputs:
user_inputs_exist = (
    len(
        list(
            filter(
                lambda input: input.kind == InputKind.USER_INPUT,
                self.graph_signature.input_specs,
            )
        )
    )
    > 0
)

if "placeholder" in check_list and user_inputs_exist:
```
@JacobSzwejbka (Contributor) commented Jan 29, 2025

This change looks fine. On our side we should probably take a pass over all the memory planning infra. A lot of it was written before export graphs were functionalized, so the concept of non-user inputs didn't exist.

Realistically we should only be verifying the memory planning of user inputs here.
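
The check quoted above could also be written more compactly with `any()` (a sketch only, not the merged diff):

```python
# any() short-circuits and avoids materializing an intermediate list.
user_inputs_exist = any(
    spec.kind == InputKind.USER_INPUT
    for spec in self.graph_signature.input_specs
)
```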

@manuelcandales added the module: exir and triaged labels Jan 29, 2025
```python
# this is a bit hacky, but I am not certain of what the contract is for
# node ordering. Is the first non-placeholder node guaranteed to be the
# first node after input parameters? What if there is no op, and it is
# just placeholders? Easier to just find the last buffer and insert after.
```
Contributor

I think it would be fine to leave it as None if there are no user inputs -- then it will insert it as the first placeholder. The official order of inputs is [parameters, buffers, constants, user inputs], but the order of the first 3 doesn't matter as much.

```diff
@@ -306,6 +335,9 @@ def lift_constant_tensor_pass(ep):
    new_input_specs.extend(lifted_constants)
```
Contributor

And if we just insert all the constants at the beginning, we can just directly prepend to input_specs.
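
A minimal sketch of that prepend (assuming `lifted_constants` is the list of lifted-constant input specs; names follow the surrounding discussion, not the exact diff):

```python
# Non-user inputs can go first in any relative order, so prepending is safe.
new_input_specs = lifted_constants + list(ep.graph_signature.input_specs)
```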

Contributor Author

Done, and removed all the logic I had previously. Now:

No user inputs -> inserting before None -> inserts at the beginning.
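
For reference, a small self-contained sketch of that torch.fx behavior (node names invented for illustration):

```python
import torch.fx as fx

g = fx.Graph()
x = g.placeholder("x")
g.output(x)

# inserting_before(None) behaves like inserting after the root sentinel,
# so the new placeholder lands at the very start of the node list.
with g.inserting_before(None):
    g.placeholder("lifted_const")

print([n.name for n in g.nodes])  # ['lifted_const', 'x', 'output']
```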

```diff
@@ -306,6 +335,9 @@ def lift_constant_tensor_pass(ep):
    new_input_specs.extend(lifted_constants)
    lifted_constants.clear()
    new_input_specs.append(s)
```
Contributor

Can you add a test case?

Contributor Author

I added a test case and shifted to the simpler implementation of just prepending to the signature, since the order of the non-user inputs does not matter.

Contributor Author

I have also validated that it works with my export process.

@cptspacemanspiff force-pushed the fix-lifted-constants-with-no-user-input branch from bb07bb0 to a563d8a on February 6, 2025
Simplified code. In the case of no user input, lifted constants are placed before the 'None' node, i.e. at the beginning. We update the graph nodes to place these at the beginning as well.

fixed formatting.
"b_a" # followed by the buffer input.
).run(ep.graph_module.code)
# the graph signature should also be the same:
assert ep.graph_signature.input_specs[0].arg.name == "_lifted_tensor_constant1"
Contributor

Suggested change:

```diff
-assert ep.graph_signature.input_specs[0].arg.name == "_lifted_tensor_constant1"
+self.assertEquals(ep.graph_signature.input_specs[0].arg.name, "_lifted_tensor_constant1")
```

Comment on lines 1086 to 1087:

```python
# executorch_program.graph_signature.input_specs[0].arg.name == "_lifted_tensor_constant1"
# executorch_program.graph_signature.input_specs[1].arg.name == "b_a"
```
Contributor

why are these commented out?

Contributor Author

I accidentally left those in. When I was initially writing the test, debugging, and validating changes, I copy/pasted those lines so I had something to drop a debugger breakpoint on.

I removed those and left in the to_executorch call; it is there to ensure that the validation steps that were previously failing are correctly bypassed (it does not throw an exception).
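
For context, the test exercises roughly this shape (a hedged sketch with invented module and buffer names, not the committed test):

```python
import torch
from torch.export import export
from executorch.exir import to_edge

class InitModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("shared_state", torch.zeros(4))

    def forward(self):
        # Mutates a buffer; the method takes no user inputs at all.
        return self.shared_state.fill_(1.0)

ep = export(InitModule(), ())
# Previously this raised during memory-planning validation because no
# user input existed to set the graph_input_allocated flag.
to_edge(ep).to_executorch()
```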

@cptspacemanspiff (Contributor Author)

@angelayi

@angelayi (Contributor) left a comment

thanks!!

@cptspacemanspiff (Contributor Author)

@angelayi So I am not sure what else needs to happen to merge this in. (I think I might have messed up the workflow by merging main in to fix some of the CI tests, because my branch was out of date.)

@JacobSzwejbka (Contributor)

I'm rerunning CI, and then assuming it passes I'll merge it.

@shoumikhin (Contributor)

@cptspacemanspiff any updates? Do we still want this change?
cc @JacobSzwejbka

@byjlw merged commit 8d4ba1d into pytorch:main Mar 17, 2025
45 of 50 checks passed
@swolchok (Contributor)

Looks like this broke unittest-release, build documentation, and perhaps other tests?

jathu added a commit that referenced this pull request Mar 17, 2025
### Summary

CI is broken due to #8031.
`graph_signature` is optional, and is None.

### Test plan

CI

cc @JacobSzwejbka @angelayi
DannyYuyang-quic pushed two commits to CodeLinaro/executorch that referenced this pull request Apr 2, 2025
Labels
CLA Signed · module: exir · topic: not user facing · triaged