
[Question] Proper way to run nn.Modules for testing #2896

Open
mitchelldehaven opened this issue Sep 11, 2024 · 0 comments
Labels
question Question about the usage

Comments

@mitchelldehaven

❓ General Questions

I'm trying to add a model that I'd like to run with MLC-LLM. Is there a method for testing the intermediate nn.Modules that make up the model? For example, if I have an attention class with a defined forward function, is there a way to validate the class's outputs as I'm building the model?

I was hoping I could run something like the snippet below, but from reading the code it seems that forward is only used for tracing, so actual outputs can't be generated.

my_module = Module(Config)     # an nn.Module under construction
test_data = np.zeros(shape)    # dummy input of the expected shape
outputs = my_module(test_data) # hoped-for eager call
@mitchelldehaven mitchelldehaven added the question Question about the usage label Sep 11, 2024
@mitchelldehaven mitchelldehaven changed the title [Question] Proper way to run sub-modules [Question] Proper way to run nn.Modules for testing Sep 11, 2024