Bump llama-cpp-python from 0.2.20 to 0.2.27 (#293)
Bumps [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
from 0.2.20 to 0.2.27.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/abetlen/llama-cpp-python/blob/main/CHANGELOG.md">llama-cpp-python's
changelog</a>.</em></p>
<blockquote>
<h2>[0.2.27]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@b3a7c20</li>
<li>feat: Add <code>saiga</code> chat format by <a
href="https://github.com/femoiseev"><code>@​femoiseev</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1050">#1050</a></li>
<li>feat: Added <code>chatglm3</code> chat format by <a
href="https://github.com/xaviviro"><code>@​xaviviro</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1059">#1059</a></li>
<li>fix: Correct typo in README.md by <a
href="https://github.com/qeleb"><code>@​qeleb</code></a> in (<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1058">#1058</a>)</li>
</ul>
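<p>Both new formats register under the <code>chat_format</code> argument of <code>Llama</code>; a minimal sketch, with a placeholder model path:</p>

```python
from llama_cpp import Llama

# "saiga" (#1050) and "chatglm3" (#1059) are now registered chat formats.
llm = Llama(
    model_path="./models/chatglm3-6b.Q4_K_M.gguf",  # placeholder path
    chat_format="chatglm3",
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])
```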
<h2>[0.2.26]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@f679349</li>
</ul>
<h2>[0.2.25]</h2>
<ul>
<li>feat(server): Multi model support by <a
href="https://github.com/D4ve-R"><code>@​D4ve-R</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/931">#931</a></li>
<li>feat(server): Support none defaulting to infinity for completions by
<a href="https://github.com/swg"><code>@​swg</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/111">#111</a></li>
<li>feat(server): Implement openai api compatible authentication by <a
href="https://github.com/docmeth2"><code>@​docmeth2</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1010">#1010</a></li>
<li>fix: text_offset of multi-token characters by <a
href="https://github.com/twaka"><code>@​twaka</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1037">#1037</a></li>
<li>fix: ctypes bindings for kv override by <a
href="https://github.com/phiharri"><code>@​phiharri</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1011">#1011</a></li>
<li>fix: ctypes definitions of llama_kv_cache_view_update and
llama_kv_cache_view_free. by <a
href="https://github.com/e-c-d"><code>@​e-c-d</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1028">#1028</a></li>
</ul>
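<p>Taken together, the 0.2.25 server features are driven by a JSON config file plus an API-key setting; the sketch below assumes the <code>--config_file</code> flag and <code>models</code> list from #931 and the <code>api_key</code> option from #1010. Field names are illustrative, so verify them against the server docs:</p>

```python
import json

# Illustrative multi-model config (#931); paths and aliases are placeholders.
config = {
    "host": "0.0.0.0",
    "port": 8000,
    "models": [
        {"model": "./models/mistral-7b.Q4_K_M.gguf", "model_alias": "mistral"},
        {"model": "./models/llama-2-7b.Q4_K_M.gguf", "model_alias": "llama2"},
    ],
}
with open("server_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Launch with:   python -m llama_cpp.server --config_file server_config.json
# With an API key configured (#1010), clients authenticate via
#   Authorization: Bearer <key>
```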
<h2>[0.2.24]</h2>
<ul>
<li>feat: Update llama.cpp to
ggerganov/llama.cpp@0e18b2e</li>
<li>feat: Add offload_kqv option to llama and server by <a
href="https://github.com/abetlen"><code>@​abetlen</code></a> in
095c65000642a3cf73055d7428232fb18b73c6f3</li>
<li>feat: n_ctx=0 now uses the n_ctx_train of the model by <a
href="https://github.com/DanieleMorotti"><code>@​DanieleMorotti</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1015">#1015</a></li>
<li>feat: logits_to_logprobs supports both 2-D and 3-D logits arrays by
<a href="https://github.com/kddubey"><code>@​kddubey</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1002">#1002</a></li>
<li>fix: Remove f16_kv, add offload_kqv fields in low level and llama
apis by <a
href="https://github.com/brandonrobertz"><code>@​brandonrobertz</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1019">#1019</a></li>
<li>perf: Don't convert logprobs arrays to lists by <a
href="https://github.com/kddubey"><code>@​kddubey</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1021">#1021</a></li>
<li>docs: Fix README.md functionary demo typo by <a
href="https://github.com/evelynmitchell"><code>@​evelynmitchell</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/996">#996</a></li>
<li>examples: Update low_level_api_llama_cpp.py to match current API by
<a href="https://github.com/jsoma"><code>@​jsoma</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1023">#1023</a></li>
</ul>
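<p>Of the 0.2.24 items, the two new <code>Llama</code> options and the reshaped log-prob helper are the user-visible ones; a sketch, assuming <code>logits_to_logprobs</code> is the numpy-based static method described in #1002 and using a placeholder model path:</p>

```python
import numpy as np
from llama_cpp import Llama

# n_ctx=0 now falls back to the model's trained context length (#1015);
# offload_kqv keeps the KV cache on the GPU alongside offloaded layers.
llm = Llama(
    model_path="./models/mistral-7b.Q4_K_M.gguf",  # placeholder path
    n_ctx=0,
    offload_kqv=True,
)

# After #1002, logits_to_logprobs accepts 2-D (tokens, vocab) and
# 3-D (batch, tokens, vocab) arrays, log-softmaxing over the last axis.
logits_2d = np.random.randn(5, 32000).astype(np.float32)
logits_3d = logits_2d[None, ...]
print(Llama.logits_to_logprobs(logits_2d).shape)  # (5, 32000)
print(Llama.logits_to_logprobs(logits_3d).shape)  # (1, 5, 32000)
```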
<h2>[0.2.23]</h2>
<ul>
<li>Update llama.cpp to
ggerganov/llama.cpp@948ff13</li>
<li>Add qwen chat format by <a
href="https://github.com/yhfgyyf"><code>@​yhfgyyf</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1005">#1005</a></li>
<li>Add support for running the server with SSL by <a
href="https://github.com/rgerganov"><code>@​rgerganov</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/994">#994</a></li>
<li>Replace logits_to_logprobs implementation with numpy equivalent to
llama.cpp by <a
href="https://github.com/player1537"><code>@​player1537</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/991">#991</a></li>
<li>Fix UnsupportedOperation: fileno in suppress_stdout_stderr by <a
href="https://github.com/zocainViken"><code>@​zocainViken</code></a> in
<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/961">#961</a></li>
<li>Add Pygmalion chat format by <a
href="https://github.com/chiensen"><code>@​chiensen</code></a> in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/986">#986</a></li>
<li>README.md multimodal params fix by <a
href="https://github.com/zocainViken"><code>@​zocainViken</code></a> in
<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/967">#967</a></li>
<li>Fix minor typo in README by <a
href="https://github.com/aniketmaurya"><code>@​aniketmaurya</code></a>
in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/958">#958</a></li>
</ul>
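<p>The qwen and Pygmalion entries plug into the same <code>chat_format</code> switch shown earlier, and the SSL support from #994 is exposed as server flags mirroring uvicorn's; a brief sketch with placeholder paths:</p>

```python
from llama_cpp import Llama

# "qwen" (#1005) and "pygmalion" (#986) are selectable like any other format.
llm = Llama(
    model_path="./models/qwen-7b-chat.Q4_K_M.gguf",  # placeholder path
    chat_format="qwen",
)

# The OpenAI-compatible server gained SSL in #994; e.g. (flag names hedged):
#   python -m llama_cpp.server --model <model.gguf> \
#       --ssl_keyfile key.pem --ssl_certfile cert.pem
```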
<h2>[0.2.22]</h2>
<ul>
<li>Update llama.cpp to
ggerganov/llama.cpp@8a7b2fa</li>
<li>Fix conflict with transformers library by kddubey in <a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/952">#952</a></li>
</ul>
<h2>[0.2.21]</h2>
<ul>
<li>Update llama.cpp to
ggerganov/llama.cpp@64e64aa</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/75d0527fd782a792af8612e55b0a3f2dad469ae9"><code>75d0527</code></a>
Bump version</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/fffcd0181c2b58a084daebc6df659520d0c73337"><code>fffcd01</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/907b9e9d4281336072519fbf11e885768ad0ff0b"><code>907b9e9</code></a>
Add Saiga chat format. (<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1050">#1050</a>)</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/f766b70c9a63801f6f27dc92b4ab822f92055bc9"><code>f766b70</code></a>
Fix: Correct typo in README.md (<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1058">#1058</a>)</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/cf743ec5d32cc84e68295da8442ccf3a64e635f1"><code>cf743ec</code></a>
Added ChatGLM chat format (<a
href="https://redirect.github.com/abetlen/llama-cpp-python/issues/1059">#1059</a>)</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/eb9c7d4ed8984bdff6585e38d04e7d17bf14155e"><code>eb9c7d4</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/011c3630f5a130505458c29d58f1654d5efba3bf"><code>011c363</code></a>
Bump version</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/969ea6a2c029964175316dd71e4497f241fcc6a4"><code>969ea6a</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/f952d45c2cd0ccb63b117130c1b1bf4897987e4c"><code>f952d45</code></a>
Update llama.cpp</li>
<li><a
href="https://github.com/abetlen/llama-cpp-python/commit/f6f157c06dac24296ec990e912f80c4f8dbe1591"><code>f6f157c</code></a>
Update bug report instructions for new build process.</li>
<li>Additional commits viewable in <a
href="https://github.com/abetlen/llama-cpp-python/compare/v0.2.20...v0.2.27">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=llama-cpp-python&package-manager=pip&previous-version=0.2.20&new-version=0.2.27)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dependabot[bot] authored Jan 8, 2024
1 parent d5f321f commit e9b550e
Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions poetry.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -14,7 +14,7 @@ selfsign = "scripts.gen_certs:entrypoint"
 python = "^3.11"
 pydantic = "^2.5.3"
 fastapi = "^0.108.0"
-llama-cpp-python = "^0.2.20"
+llama-cpp-python = "^0.2.27"
 huggingface-hub = "0.20.1"
 duckdb = "^0.9.1"
 uvicorn = "^0.25.0"
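For context on the one-line change: Poetry's caret operator keeps the leftmost non-zero digit fixed, so `^0.2.27` resolves to `>=0.2.27,<0.3.0` and will not pull in a 0.3.x release. A quick way to confirm the resolved version after `poetry install`:

```python
import importlib.metadata

# Expect 0.2.27 with this lockfile; the caret range also admits later 0.2.x.
print(importlib.metadata.version("llama-cpp-python"))
```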
