Refactor binary op partitioner configs under binary op config class #9024

@mcr229

Description


Problem

There is a lot of duplicated code across our partitioner configs that take two inputs (mul, add, sub, etc.). I believe we can refactor these configs:

class AddConfig(GenericNodePartitionerConfig):
    target_name = "add.Tensor"

    def __init__(self, **kwargs):
        super().__init__(fused_act=["relu.default"], **kwargs)

    def supported_precision_types(self) -> List[ConfigPrecisionType]:
        return [ConfigPrecisionType.FP32, ConfigPrecisionType.STATIC_QUANT]

so that they all inherit from a parent BinaryConfig class. We could then enforce common constraints for these binary configs in one place. This is similar to what we already do with GEMMConfig (a rough sketch of the proposed hierarchy follows the snippet below):

class GEMMConfig(XNNPartitionerConfig):
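
A minimal sketch of how the binary configs might collapse under one parent, assuming the GenericNodePartitionerConfig base, the fused_act keyword, and ConfigPrecisionType shown in the AddConfig snippet above. The BinaryOpConfig name, the sub/mul target names, and the shared precision list are illustrative assumptions, not the final design:

from typing import List

# Hypothetical shared parent for binary-input ops (add, sub, mul, ...).
# GenericNodePartitionerConfig and ConfigPrecisionType are assumed to be
# the same classes used in the AddConfig snippet above.
class BinaryOpConfig(GenericNodePartitionerConfig):
    """Common behavior for partitioner configs whose target takes two tensor inputs."""

    def __init__(self, **kwargs):
        # Mirror AddConfig above: all binary configs fuse a trailing relu.
        super().__init__(fused_act=["relu.default"], **kwargs)

    def supported_precision_types(self) -> List[ConfigPrecisionType]:
        # Shared precision support; a subclass can override if an op differs.
        return [ConfigPrecisionType.FP32, ConfigPrecisionType.STATIC_QUANT]


class AddConfig(BinaryOpConfig):
    target_name = "add.Tensor"


class SubConfig(BinaryOpConfig):
    target_name = "sub.Tensor"


class MulConfig(BinaryOpConfig):
    target_name = "mul.Tensor"

Common constraints for binary ops (e.g. input-count or dtype checks) could then be enforced once on the parent, the same way GEMMConfig centralizes behavior for the GEMM-family configs.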

Verification

Make sure all existing CI passes: python -m unittest executorch.backends.xnnpack.test

Resources

https://discord.gg/a9r5KZDNfZ

cc @digantdesai @cbilgin

Labels

good first issue (Good for newcomers)
module: xnnpack (Issues related to xnnpack delegation and the code under backends/xnnpack/)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
