Unable to setup environment #2
---

Thank you for your valuable feedback. I have replicated the error and am actively seeking a resolution.
---

Okay, found a solution to this problem. Pull the DeepSpeed repo and roll it back to 0.9.5:

```shell
git clone https://github.com/microsoft/DeepSpeed.git
cd DeepSpeed
# roll back files to DeepSpeed 0.9.5, because the command-line log in the
# previous step shows that this is the version being installed
# (using a higher version may also be okay)
git reset --hard 8b7423d2
```

When compiling DeepSpeed, you may encounter type conversion errors; adding explicit `(unsigned)` casts at the following lines fixes them:
```cpp
// line 536, 537
{hidden_dim * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
k * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
// line 545, 546
{hidden_dim * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
k * (unsigned) InferenceContext::Instance().GetMaxTokenLength(),
// line 1570
at::from_blob(intermediate_ptr, {input.size(0), input.size(1), (unsigned) mlp_1_out_neurons}, options);
```
Compile and install DeepSpeed (using the same conda env as Chat-UniVi is fine):

```shell
conda activate chatunivi
# run the build script for Windows
build_win.bat
# a deepspeed-0.9.5-....whl file should then be in the dist folder;
# install it with pip
pip install dist/deepspeed-0.9.5-....whl
```

Here is the built wheel: deepspeed-0.9.5+8b7423d2-cp310-cp310-win_amd64.whl.zip

Then another problem arises: Windows is not well supported. I'm trying to solve this now.
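Before running `pip install` on the built wheel, it can help to confirm that the wheel's filename tags match your interpreter, since a `cp310` wheel will be rejected on any other Python version. A minimal stdlib-only sketch (the helper name `wheel_matches_interpreter` is my own, not part of any tool here):

```python
import re
import sys

def wheel_matches_interpreter(wheel_name: str) -> bool:
    """Check that a wheel's cpXY python tag matches the running interpreter.

    Wheel filenames follow PEP 427:
    {dist}-{version}-{python tag}-{abi tag}-{platform tag}.whl
    """
    m = re.match(r".+-(cp\d+)-(cp\d+)-(\w+)\.whl$", wheel_name)
    if not m:
        return False
    py_tag = m.group(1)  # e.g. "cp310"
    current = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return py_tag == current

# The wheel built above targets CPython 3.10 on 64-bit Windows,
# so this returns True only when run under Python 3.10.
name = "deepspeed-0.9.5+8b7423d2-cp310-cp310-win_amd64.whl"
print(wheel_matches_interpreter(name))
```

If the check fails, rebuild the wheel from the same conda env you intend to install it into.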
---

Thank you very much! Are you planning to train the model on Windows? If you only intend to perform inference, there's no need to install it.
---

Not yet. I'm just trying to set up the environment and evaluate inference performance. Thanks for that information! I will continue with testing. 😆😆
---

Error occurs when running. Logs:
According to this document, I tried manually setting the environment variables. Then come a lot of module-not-found errors when running. I will continue to test tomorrow. 🚲
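To triage module-not-found errors in bulk, a small stdlib-only sketch can report every missing import at once instead of failing on the first one. The module list below is illustrative only, not the project's actual requirements:

```python
import importlib.util

def find_missing(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Illustrative names only; substitute the imports your tracebacks complain about.
candidates = ["torch", "uvicorn", "bitsandbytes", "definitely_not_installed"]
for name in find_missing(candidates):
    print(f"missing: {name}")
```

Running this once and `pip install`-ing everything it lists is usually faster than chasing the errors one traceback at a time.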
---

Adding `offload_folder='offload'` to the `from_pretrained` calls:

```python
# line 75
model = AutoModelForCausalLM.from_pretrained(model_path, offload_folder='offload', low_cpu_mem_usage=True, **kwargs)
# line 82
model = AutoModelForCausalLM.from_pretrained(model_base, offload_folder='offload', torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
# line 92
model = AutoModelForCausalLM.from_pretrained(model_path, offload_folder='offload', low_cpu_mem_usage=True, **kwargs)
```

Then launching with:

```shell
set CUDA_VISIBLE_DEVICES=0 && set BNB_CUDA_VERSION=117 && set LD_LIBRARY_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin" && uvicorn main_demo_7B:app --host 0.0.0.0 --port 8888
```
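The `set ... && uvicorn` one-liner can also be expressed as a small Python launcher, which avoids `cmd.exe` quoting pitfalls around the `LD_LIBRARY_PATH` value. A sketch using the same values as the command line above; the actual `subprocess.run` call is commented out so the snippet stays side-effect free:

```python
import os
import subprocess

# Same variable values as the cmd.exe one-liner above.
env = dict(
    os.environ,
    CUDA_VISIBLE_DEVICES="0",
    BNB_CUDA_VERSION="117",
    LD_LIBRARY_PATH=r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin",
)

cmd = ["uvicorn", "main_demo_7B:app", "--host", "0.0.0.0", "--port", "8888"]

# subprocess.run(cmd, env=env)  # uncomment to actually launch the server
print(" ".join(cmd))
```

Passing `env=` to `subprocess.run` scopes the overrides to the child process, so your shell environment is left untouched.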
---

Hello, I've encountered the same problem. It seems that bitsandbytes doesn't support Windows very well.
Error messages:
How to reproduce:
System info: