Bug Report
Right now, GPT4All only utilizes one GPU, so on machines with multiple GPUs it blocks users from running higher-parameter-count models.
Steps to Reproduce
2x RTX 3090 installed
Download llama-3-70b
Try to load the model: it loads onto one GPU, filling its 24 GB of VRAM, and then crashes; the second GPU is never utilized
Expected Behavior
On rigs with multiple GPUs, the app should be able to split a model across them, enabling users to run higher-parameter-count models.
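GPT4All's llama.cpp backend family supports this kind of splitting via a per-device tensor-split ratio. As a rough sketch of the idea (the `vram_split` helper is hypothetical, not an existing GPT4All API; the assumption is that the backend accepts llama.cpp-style proportional split fractions):

```python
def vram_split(vram_gb):
    """Compute proportional tensor-split fractions from per-GPU VRAM sizes.

    Hypothetical helper illustrating how a multi-GPU split ratio could be
    derived; GPT4All does not currently expose such an option.
    """
    total = sum(vram_gb)
    if total <= 0:
        raise ValueError("no VRAM reported")
    return [v / total for v in vram_gb]

# Two RTX 3090s with 24 GB each should yield an even split.
print(vram_split([24, 24]))  # [0.5, 0.5]
```

With fractions like these, each GPU would receive a share of the model's layers proportional to its memory, so a 70B model that overflows a single 24 GB card could fit across two.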
Your Environment
GPT4All version: v3.0.0
Operating System: Windows 11
Chat model used (if applicable): llama-3-70b