⚠️ Check for existing issues before proceeding. ⚠️
I have searched the existing issues, and there is no existing issue for my problem
Where are you using SuperAGI?
Linux
Which branch of SuperAGI are you using?
Main
Do you use OpenAI GPT-3.5 or GPT-4?
GPT-3.5
Which area covers your issue best?
Installation and setup
Describe your issue.
When I run docker compose -f local-llm-gpu up --build, I get this error:
1.295 RuntimeError:
1.295 The detected CUDA version (11.8) mismatches the version that was used to compile
1.295 PyTorch (12.1). Please make sure to use the same CUDA versions.
1.295
------
failed to solve: process "/bin/sh -c cd /app/repositories/GPTQ-for-LLaMa/ && python3 setup_cuda.py install" did not complete successfully: exit code: 1
But aren't both PyTorch and CUDA inside these docker images?
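For context, the failure is just a strict version comparison inside the GPTQ-for-LLaMa build step: the CUDA toolkit detected in the image is 11.8, while the torch wheel installed there was compiled against 12.1. A minimal sketch of that check (a hypothetical reimplementation for illustration, not PyTorch's actual source):

```python
# Sketch of the version check that PyTorch's extension build tooling
# performs before compiling CUDA code (illustrative only).

def cuda_major_minor(version: str) -> tuple:
    """Parse a CUDA version string like '11.8' into (major, minor)."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def check_cuda_match(detected: str, compiled: str) -> None:
    """Raise RuntimeError when the detected toolkit differs from the
    version PyTorch was compiled with, as in the failing build."""
    if cuda_major_minor(detected) != cuda_major_minor(compiled):
        raise RuntimeError(
            f"The detected CUDA version ({detected}) mismatches the version "
            f"that was used to compile PyTorch ({compiled})."
        )

# The combination from the log above fails this check:
# check_cuda_match("11.8", "12.1")  -> RuntimeError
```

So even though both live inside the image, they come from different layers: the base image provides the CUDA 11.8 toolkit, while pip pulls a torch wheel built for 12.1. Aligning the two (a base image with CUDA 12.1, or a cu118 torch wheel) should make the check pass.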
How to replicate your Issue?
docker compose -f local-llm-gpu up --build
I haven't done anything special other than creating the config file from the template.
Upload Error Log Content
https://gist.github.com/joshuacox/f9d4aa78b84ab614af5954a361cc6b2b