[auto] Sync version 2312011812.0.0+llamacpp-release.b1595
== Relevant log messages from source repo:

commit 880f57973b8e0091d0f9f50eb5ab4cd4e31582ca
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Fri Dec 1 18:42:11 2023 +0200

    llama : fix integer overflow during quantization (#4284)

    happens with multi-threaded quantization of Qwen-72B

    ggml-ci
github-actions committed Dec 1, 2023
1 parent dbc19b9 commit a46cf66
Showing 4 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
 [package]
 name = "ggml-sys-bleedingedge"
-version = "2312011218.0.0+llamacpp-release.b1593"
+version = "2312011812.0.0+llamacpp-release.b1595"
 description = "Bleeding edge low-level bindings to GGML. "
 repository = "https://github.com/KerfuffleV2/ggml-sys-bleedingedge"
 keywords = ["deep-learning", "machine-learning", "tensors", "ggml", "ml"]
2 changes: 1 addition & 1 deletion VERSION.txt
@@ -1 +1 @@
-2312011218.0.0+llamacpp-release.b1593
+2312011812.0.0+llamacpp-release.b1595
2 changes: 1 addition & 1 deletion ggml-tag-current.txt
@@ -1 +1 @@
-b1593
+b1595
2 changes: 1 addition & 1 deletion ggml-tag-previous.txt
@@ -1 +1 @@
-b1592
+b1593
