Pull requests: abetlen/llama-cpp-python
#144 (WIP) Openapi client gen [enhancement], opened May 3, 2023 by Stonelinks
#146 added huggingface space implementation [enhancement], opened May 3, 2023 by abhishekmamdapure
#172 Add truncate to high level api [enhancement], opened May 8, 2023 by SagsMug
#176 WIP: Mechanism to retrieve all logprobs on completion [enhancement], opened May 9, 2023 by tristanvdb
#198 Allow relative paths at model initialization [enhancement], opened May 12, 2023 by andreakiro
#329 Added Mirostat Mode and related Params to Llama initialization, opened Jun 6, 2023 by CoffeeVampir3
#517 Implement a flake.nix that uses the upstream llama.cpp flake by reference, opened Jul 23, 2023 by charles-dyfis-net
#594 Add parameter to skip saving to cache when caching is enabled, opened Aug 10, 2023 by shaunabanana
#913 fix: get system message from messages for all prompt formats, opened Nov 15, 2023 by julianullrich99
#993 Multistage CUDA Dockerfile to reduce image size and allow local repository build, opened Dec 10, 2023 by peturparkur
#1106 Remove subsequences of cached tokens to match a longer prefix, opened Jan 19, 2024 by m42a