bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats #156
Comments
Facing the same issue when trying to resolve #134 with a downgrade to 0.35.
Any news about this issue?
Hit it as well.
I have the same problem, any ideas?
Same issue.
pip install -i https://test.pypi.org/simple/ bitsandbytes worked for me.
This works for me, thanks :)
Manual copy of the .so file worked. I have CUDA version 11.7, so the following command, run in the conda environment directory, ensured that it worked correctly again:
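A sketch of that command, assuming it is the copy quoted later in this thread (the package path and Python version below are illustrative; match the suffix to your CUDA version):

```bash
# Example path inside a conda env; adjust the env name and Python version.
cd "$CONDA_PREFIX/lib/python3.10/site-packages/bitsandbytes"
# Overwrite the CPU stub with the CUDA 11.7 build.
cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so
```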
This works without a conda env as well, fwiw.
Doesn't work for me.
Thanks, it works!!
It works!!! 😄
Yikes, this also appears in the official Anaconda image. Adding that cp as a final run command solves this though, yay.
It works, thank you!
conda install cudatoolkit -y worked for me.
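A brief, hedged note on why this can help: installing the toolkit puts libcudart.so into the conda env's library directory, which is the runtime bitsandbytes fails to find when it falls back to the CPU build.

```bash
# Install the CUDA toolkit (including libcudart.so) into the active conda env.
conda install cudatoolkit -y
```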
Thanks, it worked for me.
@boersmamarcel Thank you for the answer, it saved my day. Do you have any solution other than manually replacing the .so file? I am doing things like model fine-tuning automation.
I needed an additional step: since I am on CUDA 11.6, I had to delete the 11.7 .so file from the package; otherwise the cp command above would not fix it. In my venv (no conda) it worked as follows:
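A hedged reconstruction of those steps; the venv name, Python version, and paths are examples only:

```bash
# Go to the bitsandbytes package inside the venv (adjust the path to your setup).
cd .venv/lib/python3.8/site-packages/bitsandbytes
# Remove the CUDA 11.7 build first; otherwise the copy does not take effect.
rm libbitsandbytes_cuda117.so
# Copy the CUDA 11.6 build over the CPU stub.
cp libbitsandbytes_cuda116.so libbitsandbytes_cpu.so
```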
After this, my Python code ran fine.
Jesus... This finally worked. I've been looking for a solution all week!
That looks like an S3- and Arrow-specific error; can't help you there :/
I came across the same issue, and after running the bug report checks it turned out that my CUDA runtime installation was not properly detected by bitsandbytes, leading to a CPU fallback.
To fix this, one can simply add the path to the CUDA runtime library (in my configuration it is under a virtual env at …).
Once the CUDA runtime was properly detected, I was able to load the model and run an inference as expected (with a nice speedup too).
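One way to do that, as a sketch (the site-packages location is an example borrowed from a later comment, not necessarily this poster's exact path), is to put the venv's CUDA runtime directory on LD_LIBRARY_PATH before launching Python:

```bash
# Make the pip-installed CUDA runtime visible to bitsandbytes' CUDA setup.
export LD_LIBRARY_PATH="$VIRTUAL_ENV/lib/python3.8/site-packages/nvidia/cuda_runtime/lib:$LD_LIBRARY_PATH"
```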
My solution:
Hope this helps you.
Thanks, it works for me.
In my situation, the BUG REPORT says:
This is the right solution.
Well, this shouldn't even be the case.
It works for me, too. Thank you very much. I'm running it on WSL2 on Windows 11.
+1ing this. A better, system-level fix IMO.
Finally, this worked for me. Thanks man, you saved my day.
Thank you. It works on WSL as well. I am using CUDA for WSL: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_network
For people who are using the manual copy ("cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so") approach, a cleaner solution is as follows. This is how bitsandbytes tries to locate libcudart.so: https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/cuda_setup/env_vars.py#L47
So you basically need to export an environment variable whose value looks like a path (i.e. contains a /), and bitsandbytes will pick up your libcudart.so from there. For example:

$ locate libcudart.so
$ export CUDART_PATH="<local_path>/.env/lib/python3.8/site-packages/nvidia/cuda_runtime/lib"

@TimDettmers I'm wondering if we should modify the script to look for a libcudart.so file in the $VIRTUAL_ENV directory. This would solve the issue out of the box for people who use it via a virtual environment on popular cloud providers (GCP, LambdaLabs, etc.).
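A small sketch of that idea at the user level, deriving the location from $VIRTUAL_ENV instead of hard-coding it (assumes the pip-installed nvidia cuda_runtime layout shown above):

```bash
# Find the CUDA runtime bundled inside the active virtual environment and hand
# it to bitsandbytes via an environment variable whose value contains a path.
export CUDART_PATH="$(find "$VIRTUAL_ENV" -type d -path '*nvidia/cuda_runtime/lib' | head -n 1)"
```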
For me, it worked from Python:
Bitsandbytes did not support Windows before, but my method can support Windows. (yuhuang)

J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows
J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl

Replace the SD venv directory (the folder containing python.exe, here J:\StableDiffusion\sdwebui\py310) with your own.
I also encountered this issue. Later, I found out that the cloud server I rented had updated its system, and I needed to manually load CUDA each time.
I tried implementing this solution, but the error still remains the same. Any ideas? I have tried installing bitsandbytes both from source and through pip install. I am on WSL with CUDA 12.3, using a Python venv, and I ran the lib command by finding the bitsandbytes folder in the lib folder of env3.
Works for me also (I'm on CUDA 11.4)! For people seeing this, just make sure you go to the env path indicated in the error message (for me, it's …). @FarziBuilder, perhaps you could find something similar to …
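For a plain venv, a hedged way to locate that directory (and the bundled .so files) before applying the copy described earlier:

```bash
# List the bitsandbytes shared libraries inside the active virtual environment;
# the directory that shows up is the one the error message points at.
find "$VIRTUAL_ENV" -name 'libbitsandbytes*.so'
```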
Hi, I came across this problem when I tried to use bitsandbytes to load a big model from Hugging Face, and I cannot fix it. My CUDA version is 12.0 and my torch version is 1.13.1+cu116. I would like to know if there is any way to solve this problem. Thanks!