# Description

# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid, so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior

The `server` example should start.
# Current Behavior

```
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model-f16.gguf'
{"timestamp":1698069462,"level":"ERROR","function":"load_model","line":558,"message":"unable to load model","model":"models/7B/ggml-model-f16.gguf"}
Loaded 'C:\Windows\SysWOW64\kernel.appcore.dll'.
Loaded 'C:\Windows\SysWOW64\msvcrt.dll'.
The program '[6600] server.exe' has exited with code 1 (0x1).
```
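This error often means the file at the path in the log is missing, or is not actually in GGUF format (for example, an older GGML `.bin` file that was only renamed). The following is a minimal diagnostic sketch, not part of llama.cpp; the `is_gguf` helper and the model path are illustrative:

```python
def is_gguf(path: str) -> bool:
    """Return True if the file starts with the 4-byte GGUF magic.

    llama.cpp refuses to load files without this header, which
    produces a "failed to load model" error like the one above.
    """
    try:
        with open(path, "rb") as f:
            return f.read(4) == b"GGUF"
    except OSError:
        # A missing file or bad path yields the same load failure.
        return False

# Hypothetical usage with the path from the log:
# is_gguf("models/7B/ggml-model-f16.gguf")
```

If this returns `False` for an existing file, the model likely needs to be converted with the conversion scripts shipped in the repository before the server can load it.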
# Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is only reproducible under specific conditions.
```
$ lscpu
-

$ uname -a
-
```

Operating System:

Windows

SDK versions:

```
$ python3 --version
Python 3.10.11

$ make --version
3.28
```
# Failure Information (for bugs)

Same failure log as shown under Current Behavior above.