
Introducing NXP Neutron runtime #10563


Open · wants to merge 3 commits into main

Conversation


@jirioc jirioc commented Apr 29, 2025

Summary

Introducing NXP Neutron runtime.

cc @digantdesai @JakeStevens @robert-kalmar


pytorch-bot bot commented Apr 29, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10563

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 4 New Failures

As of commit 1207e08 with merge base 4559a61 (image):

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 29, 2025
@jirioc
Author

jirioc commented Apr 30, 2025

@pytorchbot label "module: nxp" "release notes: nxp"

@pytorch-bot pytorch-bot bot added module: nxp Issues related to NXP Neutron NPU delegation and code under backends/nxp/ release notes: nxp Changes to the NXP Neutron backend delegate labels Apr 30, 2025
@jirioc
Author

jirioc commented Apr 30, 2025

@tarun292
Contributor

tarun292 commented May 2, 2025

@digantdesai are you the POC for reviewing this on our end?

uint32_t numInputs = transpositionFlags[INPUT_TENSOR_FORMAT_LEN_POS];
uint32_t numOutputs = transpositionFlags[OUTPUT_TENSOR_FORMAT_LEN_POS];
cfg->inputTranspositionFlags =
    INPUT_TENSOR_FORMAT_ARRAY_ADDR(transpositionFlags);

Why does this transpose happen outside of the ET graph?

Can you explain the logic here as well?

I want to make sure I fully understand, as our application natively uses channel last (like Neutron).
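For readers following the channel-last discussion, here is a minimal sketch of what a channel-first to channel-last (NCHW → NHWC) input transpose involves. The function name and signature are hypothetical, for illustration only; they are not the PR's actual implementation:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration: copy an NCHW buffer into NHWC order,
// i.e. the channel-last layout that Neutron consumes natively.
template <typename T>
std::vector<T> toChannelLast(
    const std::vector<T>& src, size_t n, size_t c, size_t h, size_t w) {
  std::vector<T> dst(src.size());
  for (size_t in = 0; in < n; ++in)
    for (size_t ic = 0; ic < c; ++ic)
      for (size_t ih = 0; ih < h; ++ih)
        for (size_t iw = 0; iw < w; ++iw)
          // NCHW source index -> NHWC destination index.
          dst[((in * h + ih) * w + iw) * c + ic] =
              src[((in * c + ic) * h + ih) * w + iw];
  return dst;
}
```

For example, with n=1, c=2, h=1, w=2 the NCHW buffer {1, 2, 3, 4} becomes {1, 3, 2, 4} in NHWC order, interleaving the two channels per spatial position.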

return true;
}

Result<DelegateHandle*> init(

Isn't there a Neutron init function as well? Should it be called within this function?

}

// Transpose inputs.
for (int i = 0; i < cfg->numInputs; i++) {

@digantdesai is it the case that ExecuTorch input tensors are always NC(H)W? Natively, we use NWC in our applications because it makes more sense for our use case. Is t

@digantdesai digantdesai left a comment

Is this still in progress?

At a high level looks OK to me.

/// - Initialize the Neutron Driver library: set initial values, allocate
///   memory for internal data structures, and perform memory mapping.
NeutronError neutronInit();

Can these functions be in a neutron namespace?
Where are we calling this specific function to init the lib?
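As a sketch of the namespace suggestion above: the error type, constant, and stub bodies here are placeholders for illustration, not the real Neutron driver API:

```cpp
// Hypothetical sketch: scoping driver entry points in a `neutron` namespace.
// NeutronError, kSuccess, and the stub bodies are placeholders only.
namespace neutron {

using NeutronError = int;
constexpr NeutronError kSuccess = 0;

inline NeutronError neutronInit() {
  // The real driver would allocate internal structures and map memory here.
  return kSuccess;
}

inline NeutronError neutronDeinit() {
  // The real driver would release internal structures here.
  return kSuccess;
}

} // namespace neutron
```

Callers would then qualify the entry points as `neutron::neutronInit()`, which avoids polluting the global namespace from the runtime headers.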

using namespace std;

namespace torch {
namespace executor {

neutron namespace?


// Applied on outputs.
template <typename T>
void transposeToChannelFirst(

OK for now, but I wonder if there is another way to do this, i.e. set the dim_order on the output and let the portable kernels or someone else take care of this?
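For illustration, a minimal sketch of the output-side direction being discussed, NHWC → NCHW. As with the input sketch, the name and signature are hypothetical and not the PR's code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration: copy an NHWC (channel-last) buffer back into
// NCHW (channel-first) order, as would be applied on delegate outputs.
template <typename T>
std::vector<T> toChannelFirst(
    const std::vector<T>& src, size_t n, size_t c, size_t h, size_t w) {
  std::vector<T> dst(src.size());
  for (size_t in = 0; in < n; ++in)
    for (size_t ih = 0; ih < h; ++ih)
      for (size_t iw = 0; iw < w; ++iw)
        for (size_t ic = 0; ic < c; ++ic)
          // NHWC source index -> NCHW destination index.
          dst[((in * c + ic) * h + ih) * w + iw] =
              src[((in * h + ih) * w + iw) * c + ic];
  return dst;
}
```

This is the exact inverse of the input-side transpose: with n=1, c=2, h=1, w=2, the NHWC buffer {1, 3, 2, 4} maps back to {1, 2, 3, 4} in NCHW order.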


// Applied on inputs.
template <typename T>
void transposeToChannelLast(

Same here.


auto* cfg = allocator->allocateInstance<NeutronConfig>();

// The following data is read from the "processed" data blob.

Nit: make this a helper function?

Labels: CLA Signed · module: nxp · release notes: nxp

6 participants