
[ExecuTorch][to_backend] Enable passing Delegation Spec to to_backend #8165


Closed
wants to merge 7 commits

Conversation

@mcr229 mcr229 commented Feb 4, 2025

Stack from ghstack (oldest at bottom):

Support Entire Graph Delegation Flow through EdgeProgramManager's to_backend.

Motivation

A current use case for backend lowering is the `to_backend(backend_id, exported_program, compile_spec)` API, which lowers the entire exported program to the specified backend_id. However, lowering via the EdgeProgramManager only allows partitioner-based lowering. The EdgeProgramManager is the main component that enables support for multiple methods, so backends that rely on the old `to_backend(backend_id, ...)` API cannot export ExecuTorch models with multiple methods.
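
For context, the pre-existing whole-graph flow looks roughly like this; it handles a single method only. This is a sketch: Encoder and sample_inputs are placeholder names, and CompileSpec(...) is left as a placeholder in the same style as the example below.

```
import torch

from executorch.exir import to_edge
from executorch.exir.backend.backend_api import to_backend
from executorch.exir.backend.compile_spec_schema import CompileSpec

# Legacy path: lower one exported program, in its entirety, to one backend.
# Encoder, sample_inputs, and CompileSpec(...) are placeholders.
exported = torch.export.export(Encoder(), sample_inputs)
edge = to_edge(exported)
lowered = to_backend(
    "BackendWithCompilerDemo",
    edge.exported_program(),
    [CompileSpec(...)],
)
```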

Design

We extend EdgeProgramManager's to_backend so that a Partitioner can be replaced by a DelegationSpec. A DelegationSpec is essentially a wrapper around a backend_id and its compile specs, so anywhere a Partitioner is supplied to lower a graph, a DelegationSpec can be supplied instead to perform entire-graph lowering.
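
For reference, a DelegationSpec is just a named pair of a backend id and a list of compile specs; a minimal sketch, assuming the NamedTuple-style container used alongside the partitioner utilities:

```
from typing import List, NamedTuple

from executorch.exir.backend.compile_spec_schema import CompileSpec


# Sketch of the DelegationSpec container: a backend identifier plus the
# compile specs forwarded to that backend when it preprocesses the graph.
class DelegationSpec(NamedTuple):
    backend_id: str
    compile_specs: List[CompileSpec]
```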

Intended Flow

```
import torch

from executorch.exir import to_edge
from executorch.exir.backend.compile_spec_schema import CompileSpec
# DelegationSpec currently lives alongside the partitioner utilities.
from executorch.exir.backend.partitioner import DelegationSpec

# Encoder, Decoder, and sample_inputs are user-defined placeholders.
del_spec = DelegationSpec("BackendWithCompilerDemo", [CompileSpec(...)])
encode_graph = torch.export.export(Encoder(), sample_inputs)
decode_graph = torch.export.export(Decoder(), sample_inputs)
edge_manager = to_edge({
    "encode": encode_graph,
    "decode": decode_graph,
})
# Lower every method in the program with the same DelegationSpec...
lowered_edge_manager = edge_manager.to_backend(del_spec)
# ...or specify which methods to lower by mapping method name to del_spec.
lowered_edge_manager = edge_manager.to_backend({
    "encode": del_spec,
})
```
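
Under the hood, the overloaded to_backend only has to distinguish the accepted argument shapes: a bare DelegationSpec (or Partitioner) applies to every method, while a dict restricts lowering to the listed methods. A hypothetical normalization helper, purely to illustrate that dispatch (resolve_lowering_plan is an invented name, not the actual implementation):

```
from typing import Dict, Union

from executorch.exir.backend.partitioner import DelegationSpec, Partitioner
from torch.export import ExportedProgram

Spec = Union[Partitioner, DelegationSpec]


def resolve_lowering_plan(
    methods: Dict[str, ExportedProgram],
    spec: Union[Spec, Dict[str, Spec]],
) -> Dict[str, Spec]:
    """Hypothetical helper: decide which spec is used to lower which method."""
    if isinstance(spec, dict):
        # A dict restricts lowering to the listed methods only.
        return dict(spec)
    # A single spec applies to every method in the program.
    return {name: spec for name in methods}
```

As with partitioner-based lowering, the resulting manager still goes through the usual to_executorch() step, so both methods end up in a single program.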

Differential Revision: D69086565

cc @cccclai


pytorch-bot bot commented Feb 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8165

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit 5f76e91 with merge base 0beadcc:


This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed label Feb 4, 2025
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D69086565

mcr229 added a commit that referenced this pull request Feb 4, 2025
mcr229 added a commit that referenced this pull request Feb 13, 2025
This will be used for backend weight sharing, so backends that do entire-graph delegation can still share data across methods.


@mcr229 mcr229 added the module: backend and release notes: api labels Feb 13, 2025
mcr229 added a commit that referenced this pull request Feb 13, 2025


mcr229 added a commit that referenced this pull request Feb 13, 2025
mcr229 added a commit that referenced this pull request Feb 14, 2025

@mergennachin mergennachin requested a review from cccclai February 14, 2025 20:23

mcr229 added a commit that referenced this pull request Feb 14, 2025
@mcr229 mcr229 deleted the branch gh/mcr229/6/base March 24, 2025 17:13
@mcr229 mcr229 closed this Mar 24, 2025
@mcr229 mcr229 deleted the gh/mcr229/6/head branch March 24, 2025 17:13
@mcr229 mcr229 restored the gh/mcr229/6/head branch April 1, 2025 22:09
@mcr229 mcr229 deleted the gh/mcr229/6/head branch April 1, 2025 23:58
Labels
CLA Signed - This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
fb-exported
module: backend - Issues related to backend APIs or requests for new backends
release notes: api - Changes to public facing apis (any interfaces, pybinded runtime methods, etc.)

3 participants