Hi,
I'm reaching out about a problem I'm facing: the model does not load onto the GPU during inference. Inference on the CPU is reasonably fast (each file takes about 1 second), but that is not the behavior I want; I'd like it to be a bit faster. Can you please give me a hint? I have even set self.dev to cuda in NISQ_DIM, but the model still runs on the CPU.
Thanks in advance!
nisqa = nisqaModel(args)
# Print the device on which the model parameters live
print("Device of the NISQA model parameters:", next(nisqa.model.parameters()).device)
# Run the prediction directly
nisqa.predict()
The print statement gives me "cuda:0", but inference still seems to run on the CPU. Thanks for your guidance.
It looks like the model does go through the GPU; it's just that the model is relatively small, so GPU utilization is quite low. The CPU usage you are seeing is probably from the data preprocessing, such as computing Mel-spectrograms.
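One way to convince yourself of this is to time the GPU forward pass in isolation. The sketch below uses a small stand-in CNN (an assumption; it is not the actual NISQA architecture) and falls back to CPU when no CUDA device is available. Note the torch.cuda.synchronize() call: CUDA kernels launch asynchronously, so without it the measured time would not reflect the actual GPU work.

```python
import time
import torch

# Hypothetical small CNN standing in for the NISQA model (assumption:
# the real model is similarly small, so GPU utilization stays low).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 1),
)
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(dev).eval()

# Fake Mel-spectrogram batch: 1 file, 1 channel, 48 Mel bands, 300 frames
x = torch.randn(1, 1, 48, 300)

with torch.no_grad():
    t0 = time.perf_counter()
    y = model(x.to(dev))
    if dev.type == "cuda":
        # Kernels are launched asynchronously; wait for them to finish
        # so the timing covers the actual compute.
        torch.cuda.synchronize()
    t1 = time.perf_counter()

print(f"forward pass on {dev}: {(t1 - t0) * 1000:.2f} ms")
print("output device:", y.device)
```

If the forward pass measured this way is only a small fraction of the ~1 second per file, the remaining time is spent in CPU-side preprocessing, and moving the Mel-spectrogram computation to the GPU (or batching files) would be the place to optimize.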