Issues: ggerganov/llama.cpp

changelog : libllama API  [open]
#9289 opened Sep 3, 2024 by ggerganov

changelog : llama-server REST API  [open]
#9291 opened Sep 3, 2024 by ggerganov

Issues list

Label key:
- enhancement: New feature or request
- critical severity: critical severity bugs in llama.cpp (e.g. crashing, corruption, data loss)
- high severity: high severity bugs in llama.cpp (malfunctioning that hinders an important workflow)
- medium severity: medium severity bugs in llama.cpp (e.g. malfunctioning features, but still usable)
- low severity: low severity bugs in llama.cpp (e.g. cosmetic issues, non-critical UI glitches)

Feature Request: Nemotron chat templates  [enhancement, stale]
#9864 opened Oct 12, 2024 by freebiesoft

Bug: fatal error: too many errors emitted, stopping now [-ferror-limit=]  [bug-unconfirmed, high severity, stale]
#9858 opened Oct 12, 2024 by wuhongsheng

Bug: Unable to load GGUF models after update  [bug-unconfirmed, critical severity, stale]
#9852 opened Oct 11, 2024 by FitzWM

Bug: llama-cli exiting on Windows after loading everything when given an initial prompt  [bug-unconfirmed, critical severity, stale]
#9843 opened Oct 11, 2024 by Edw590

Bug: Llama.cpp with cuda support outputs garbage response when prompt is above 30-40ish Tokens  [bug-unconfirmed, medium severity, stale]
#9838 opened Oct 11, 2024 by bmahabirbu

Server UI bug: corrupted generation  [medium severity, server/webui, server, stale]
#9836 opened Oct 11, 2024 by ivanstepanovftw

android examples add top_p min_keep to new_context  [enhancement, stale]
#9828 opened Oct 10, 2024 by darrassi1

Bug: Load time on rpc server with multiple machines  [bug-unconfirmed, medium severity, stale]
#9820 opened Oct 10, 2024 by angelosathanasiadis

Bug: Unable to build the project with HIP fatal error: 'hipblas/hipblas.h' file not found  [bug-unconfirmed, low severity, stale]
#9815 opened Oct 10, 2024 by RandUser123sa

Bug: [vulkan] llama.cpp not work on Raspberry Pi 5  [bug-unconfirmed, medium severity, stale]
#9801 opened Oct 9, 2024 by FanShupei

Typo on build.md?  [bug-unconfirmed, low severity, stale]
#9793 opened Oct 8, 2024 by lisatwyw

Bug: After update, unable to load GGUF models  [bug-unconfirmed, critical severity, stale]
#9790 opened Oct 8, 2024 by FitzWM

Feature Request: Enable overallocation for ggml-vulkan  [enhancement, stale]
#9785 opened Oct 8, 2024 by theraininsky

Feature Request: Support for architecture MambaByte  [enhancement, stale]
#9780 opened Oct 8, 2024 by hg0428

Bug: Cannot edit input before the current line.  [bug-unconfirmed, medium severity, stale]
#9777 opened Oct 7, 2024 by SpaceHunterInf

Bug: No improvement for NEON?  [bug-unconfirmed, low severity, stale]
#9774 opened Oct 7, 2024 by Abhranta

Feature Request: ANE utilization on Apple Silicon  [enhancement, stale]
#9773 opened Oct 7, 2024 by hg0428

llama_model_load: error loading model: vk::PhysicalDevice::createDevice: ErrorDeviceLost  [bug-unconfirmed, high severity, stale]
#9767 opened Oct 6, 2024 by BreakShoot

Bug: Rocm extreme slow down on GFX1100 with release binary  [bug-unconfirmed, medium severity, stale]
#9765 opened Oct 6, 2024 by sorasoras

Bug: Row Split Mode - Segmentation fault after model load on ROCm multi-gpu  [bug-unconfirmed, critical severity, stale]
#9761 opened Oct 6, 2024 by thamwangjun

Problem with using llava_surgery_v2.py  [bug-unconfirmed, high severity, stale]
#9750 opened Oct 5, 2024 by ssykee

Bug: using kv cache quantitisation q4_0 seems to cause issues when a context shift is done  [bug-unconfirmed, medium severity, stale]
#9743 opened Oct 4, 2024 by blauzim