Issues: Mozilla-Ocho/llamafile
#611 · Bug: Shared memory not working, results in Segfault · bug, critical severity · opened Nov 7, 2024 by abishekmuthian
#610 · Bug: segfault loading models with KV quantization and related problems · bug, high severity · opened Nov 5, 2024 by mseri
#609 · Bug: GPU Acceleration works for one but not the other users on same Linux machine · bug, medium severity · opened Nov 5, 2024 by lovenemesis
#601 · Bug: 'cmath' file not found on Windows with AMD Dedicated GPU · bug, medium severity · opened Oct 24, 2024 by DK013
#589 · Bug: llamafiler /v1/embeddings endpoint does not return model name · bug, low severity · opened Oct 14, 2024 by wirthual
#588 · Bug: --path Option Broken When Pointing to a Folder · bug, high severity · opened Oct 13, 2024 by gorkem
#587 · Bug: binary called ape in PATH breaks everything · bug, high severity · opened Oct 13, 2024 by step21
#584 · Bug: Phi3.5-mini-instruct Q4 K L gguf based llamafile CuDA error AMD iGPU · bug, high severity · opened Oct 10, 2024 by eddan168
#580 · Bug: install: cannot stat 'o/x86_64/stable-diffusion.cpp/main': No such file or directory · bug, high severity · opened Oct 6, 2024 by toby3d
#579 · Bug: APE is running on WIN32 inside WSL - whisperfile - zsh · bug, high severity · opened Oct 4, 2024 by baptistecs
#560 · Bug: Segmentation fault re-running after installing NVIDIA CUDA · bug, medium severity · opened Sep 5, 2024 by 4kbyte
#547 · Bug: ggml-rocm.so not found in llamafile 0.8.13 · bug, medium severity · opened Aug 20, 2024 by winstonma
#533 · Bug: The token generation speed is slower compared to the upstream llama.cpp project · bug, medium severity · opened Aug 13, 2024 by BIGPPWONG
#532 · Bug: unknown argument: --threads-batch-draft · bug, medium severity · opened Aug 9, 2024 by moisestohias
#516 · Bug: llama 3.1 and variants fail with error "wrong number of tensors; expected 292, got 291" · bug, high severity · opened Jul 30, 2024 by camAtGitHub
#512 · Bug: Unable to load Mixtral-8x7B-Instruct-v0.1-GGUF on Amazon Linux with AMD EPYC 7R13 · bug, critical severity · opened Jul 28, 2024 by rpchastain
#503 · Bug: low CPU usage on AWS Graviton4 compared to ollama · bug, low severity · opened Jul 24, 2024 by nlothian