Issues: abetlen/llama-cpp-python

Roadmap for v0.2
#487 opened Jul 18, 2023 by abetlen · Open · 1

Add batched inference
#771 opened Sep 30, 2023 by abetlen · Open · 37

Improve installation process
#1178 opened Feb 12, 2024 by abetlen · Open · 8

Issues list

Failed to detect a default CUDA architecture [build] [hardware: Hardware specific issue]
#627 opened Aug 22, 2023 by arthurwolf · 4 tasks done

How to use GPU? [build] [hardware: Hardware specific issue]
#576 opened Aug 5, 2023 by imwide

LLama cpp problem (gpu support) [bug: Something isn't working] [hardware: Hardware specific issue] [llama.cpp: Problem with llama.cpp shared lib]
#509 opened Jul 20, 2023 by xajanix

Create plug-n-play nvidia docker image [enhancement: New feature or request] [hardware: Hardware specific issue]
#496 opened Jul 18, 2023 by abetlen

Could not find nvcc, please set CUDAToolkit_ROOT [build] [duplicate: This issue or pull request already exists] [hardware: Hardware specific issue] [llama.cpp: Problem with llama.cpp shared lib] [windows: A Windows-specific issue]
#409 opened Jun 20, 2023 by EugeoSynthesisThirtyTwo

GPU memory not cleaned up after off-loading layers to GPU using n_gpu_layers [bug: Something isn't working] [hardware: Hardware specific issue] [llama.cpp: Problem with llama.cpp shared lib]
#223 opened May 17, 2023 by nidhishs · 4 tasks done