OSError: [WinError -1073741795] Windows Error 0xc000001d #609

Open · 4 tasks done
thedevstone opened this issue Aug 14, 2023 · 1 comment
Labels: duplicate (This issue or pull request already exists)

thedevstone commented Aug 14, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I expect llama-cpp-python to load the model and run inference.
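For reference, a minimal sketch of the call that is expected to succeed, using the llama-cpp-python high-level API with the model path from this report (the prompt and max_tokens values are illustrative):

```python
# Minimal expected-usage sketch for llama-cpp-python 0.1.x.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b.ggmlv3.q2_K.bin")
output = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(output["choices"][0]["text"])
```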

Current Behavior

I followed the official install guide:

  • installed CMake
  • installed Git
  • installed Visual Studio with the Desktop development with C++ and Linux embedded development workloads
  • ran python setup.py install

When using llama-cpp-python, loading the model crashes:
Traceback (most recent call last):
  File "C:\Users\luca.giulianini2\Desktop\cuda-test\main.py", line 10, in <module>
    llm = Llama(model_path=r"llama-2-7b.ggmlv3.q2_K.bin")
  File "C:\Users\luca.giulianini2\AppData\Local\anaconda3\envs\ai\lib\site-packages\llama_cpp_python-0.1.77-py3.10-win-amd64.egg\llama_cpp\llama.py", line 320, in __init__
    self.model = llama_cpp.llama_load_model_from_file(
  File "C:\Users\luca.giulianini2\AppData\Local\anaconda3\envs\ai\lib\site-packages\llama_cpp_python-0.1.77-py3.10-win-amd64.egg\llama_cpp\llama_cpp.py", line 428, in llama_load_model_from_file
    return _lib.llama_load_model_from_file(path_model, params)
OSError: [WinError -1073741795] Windows Error 0xc000001d
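For context, 0xC000001D is the NTSTATUS code STATUS_ILLEGAL_INSTRUCTION: the process executed a CPU instruction the hardware does not support. The CPU reported below, an i7-3770 (Ivy Bridge), supports AVX but not AVX2, so a native build compiled with AVX2 enabled would fail in exactly this way. A quick way to check what the CPU actually reports, assuming the third-party py-cpuinfo package (an extra dependency, not something llama-cpp-python ships with):

```python
# CPU-capability check using the third-party py-cpuinfo package
# (pip install py-cpuinfo) -- this dependency is an assumption.
import cpuinfo

flags = cpuinfo.get_cpu_info()["flags"]
for ext in ("avx", "avx2", "f16c", "fma"):
    print(f"{ext}: {'present' if ext in flags else 'missing'}")
```

If avx2 is missing, rebuilding with AVX2 disabled (llama.cpp's LLAMA_AVX2 CMake option) is the usual workaround.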

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using, e.g. for Linux:

Intel Core i7-3770 (4 cores / 8 threads)

  • Operating System, e.g. for Linux:

Windows 10 LTSC

  • SDK version, e.g. for Linux:
$ python3 --version 3.10.12
$ cmake --version 3.27.2
$ g++ --version don't know

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python setup.py develop
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue (a sketch of this isolation check follows below). If you can, log an issue with llama.cpp.
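Step 7 is the key isolation test: the exception above is raised from inside the native shared library, so if llama.cpp's own main binary dies with the same code, the problem is in the native build rather than in the Python wrapper. A sketch of that check driven from Python, where the binary path, model path, and prompt are illustrative assumptions:

```python
# Run llama.cpp's compiled main binary (main.exe on Windows) against the same
# model and inspect the exit status.
import subprocess

result = subprocess.run(
    ["./vendor/llama.cpp/main", "-m", "llama-2-7b.ggmlv3.q2_K.bin", "-p", "Hello"],
    capture_output=True,
    text=True,
)
# Normalize signed/unsigned forms of the Windows exit code before comparing.
code = result.returncode & 0xFFFFFFFF
if code == 0xC000001D:  # STATUS_ILLEGAL_INSTRUCTION, same as the Python crash
    print("Illegal instruction in the native binary -> rebuild without AVX2.")
else:
    print(result.stdout)
```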
gjmulder (Contributor) commented:

#81 #562

gjmulder added the duplicate label (This issue or pull request already exists) on Aug 16, 2023
antoine-lizee pushed a commit to antoine-lizee/llama-cpp-python that referenced this issue Oct 30, 2023