CUDA device does not load the model #39

Closed
IamHussain503 opened this issue Dec 8, 2023 · 2 comments

IamHussain503 commented Dec 8, 2023

Hi,
I am reaching out about a problem I am facing: the model does not load onto the GPU during inference. The CPU is reasonably fast, each file takes about 1 second to complete, but this is not the behaviour I want; I would like it to be a bit faster. Can you please give me a hint? I have even set self.dev to cuda in NISQA_DIM, but the model still runs on the CPU.

Thanks in advance!

IamHussain503 (Author) commented:

    from nisqa.NISQA_model import nisqaModel  # import path used by the repo's run_predict.py

    nisqa = nisqaModel(args)
    # Print the device of the model parameters
    print("Device of the NISQA model parameters: ", next(nisqa.model.parameters()).device)
    # Execute the prediction directly
    nisqa.predict()

The print statement reports "cuda:0", but the model inference still seems to run on the CPU. Thanks for your guidance.
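
One way to double-check where the forward pass actually runs, beyond printing the parameter device, is to register a forward hook on a submodule and print the device of the tensors it receives during prediction. This is only a rough sketch and assumes nisqa.model is a plain torch.nn.Module with at least one child module:

    import torch

    # Hook the first submodule and report the device of the tensors
    # that actually flow through it during nisqa.predict().
    def report_device(module, inputs, output):
        devices = {t.device for t in inputs if torch.is_tensor(t)}
        print(f"{module.__class__.__name__} received inputs on: {devices}")

    first_submodule = next(nisqa.model.children())
    handle = first_submodule.register_forward_hook(report_device)

    nisqa.predict()  # run inference as before; the hook prints where the data lives
    handle.remove()

If the hook reports cuda:0, the forward pass itself is on the GPU and the CPU load is coming from somewhere else.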

gabrielmittag (Owner) commented:

It looks like it does go through the GPU; it's just that the model is relatively small, so the GPU utilization is quite low. The CPU usage you are seeing is probably from the data preprocessing, such as computing the Mel-spectrograms.
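
To see how the runtime actually splits between GPU kernels and CPU-side preprocessing, one option is to profile a single prediction run with the standard torch.profiler API. This is only a sketch and assumes the nisqa object from the snippet above:

    from torch.profiler import profile, ProfilerActivity

    # Profile one prediction run and break the time down into CPU ops
    # (data loading, Mel-spectrogram computation) vs. CUDA kernels (model forward).
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        nisqa.predict()

    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))

If most of the time is attributed to CPU ops rather than CUDA kernels, that confirms the bottleneck is the preprocessing rather than the model forward pass.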
