Tags: EAddario/llama.cpp

b6096

llama : add gpt-oss (ggml-org#15091)

* oai moe

* compat with new checkpoint

* add attn sink impl

* add rope scaling yarn

* logits match with latest transformers code

* wip chat template

* rm trailing space

* use ggml_scale_bias

* rm redundant is_swa_all

* convert interleaved gate_up

* graph : fix activation function to match reference (ggml-org#7)

* vocab : handle o200k_harmony special tokens

* ggml : add attention sinks support (ggml-org#1)

* llama : add attn sinks

* ggml : add attn sinks

* cuda : add attn sinks

* vulkan : add support for sinks in softmax

remove unnecessary return
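
For context, a minimal sketch of how an attention sink enters the softmax (the idea behind the kernels above, not the ggml implementation itself):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative softmax with an attention sink: the sink contributes one
// extra logit to the normalizer, so the token weights can sum to < 1.
std::vector<float> softmax_with_sink(const std::vector<float> & scores, float sink) {
    float max_val = sink;
    for (float s : scores) max_val = std::max(max_val, s);

    double denom = std::exp(sink - max_val);   // the sink joins the denominator
    for (float s : scores) denom += std::exp(s - max_val);

    std::vector<float> probs(scores.size());
    for (size_t i = 0; i < scores.size(); ++i) {
        probs[i] = (float) (std::exp(scores[i] - max_val) / denom);
    }
    return probs; // mass exp(sink - max_val)/denom is "parked" on the sink
}
```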

* ggml : add fused swiglu_oai op (ggml-org#11)

* ggml : add fused swiglu_oai op

* Update ggml/src/ggml-cpu/ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* update CUDA impl

* cont : metal impl

* add vulkan impl

* test-backend-ops : more test cases, clean up

* llama : remove unfused impl

* remove extra lines

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
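
A per-element sketch of what the fused op computes, assuming the published gpt-oss reference formulation of clamped SwiGLU (the alpha/limit defaults below come from that reference, not from this commit):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the gpt-oss SwiGLU variant: clamp both halves, gate with a
// scaled sigmoid, and offset the linear half by 1.
float swiglu_oai(float x_glu, float x_linear,
                 float alpha = 1.702f, float limit = 7.0f) {
    x_glu    = std::min(x_glu, limit);              // gate half: clamp from above only
    x_linear = std::clamp(x_linear, -limit, limit); // linear half: clamp both ways
    const float gate = x_glu / (1.0f + std::exp(-alpha * x_glu)); // x * sigmoid(alpha*x)
    return gate * (x_linear + 1.0f);
}
```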

* repack mxfp4 upon conversion

* clean up a bit

* enable thinking

* add quick hack to render only some special tokens

* fix bf16 conversion

* remove vocab hack

* webui ok

* support chat parsing for gpt-oss

* fix webui

* direct mapping mxfp4, FINALLY

* force using mxfp4

* properly use lazy tensor

* ggml : add mxfp4

ggml : use e8m0 conversion instead of powf

Co-authored-by: Diego Devesa <slarengh@gmail.com>

change kvalues_mxfp4 table to match e2m1 (ggml-org#6)

metal : remove quantization for now (not used)

cuda : fix disabled CUDA graphs due to ffn moe bias

vulkan : add support for mxfp4

cont : add cm2 dequant
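
For reference, MXFP4 in the OCP microscaling layout stores blocks of 32 e2m1 (4-bit) values sharing one e8m0 scale, a pure power of two. A dequantization sketch under that assumption follows; the nibble order and names are illustrative, not the exact ggml layout:

```cpp
#include <cmath>
#include <cstdint>

// Nominal e2m1 values per the OCP MX spec: sign + 2 exponent + 1 mantissa bits.
static const float kvalues_e2m1[16] = {
    0.0f,  0.5f,  1.0f,  1.5f,  2.0f,  3.0f,  4.0f,  6.0f,
   -0.0f, -0.5f, -1.0f, -1.5f, -2.0f, -3.0f, -4.0f, -6.0f,
};

// e8m0 -> float without powf: build the power of two directly (cf. the
// "use e8m0 conversion instead of powf" change above).
inline float e8m0_to_float(uint8_t e) {
    return std::ldexp(1.0f, (int) e - 127);
}

void dequant_mxfp4_block(const uint8_t * q /* 16 bytes = 32 nibbles */,
                         uint8_t scale_e8m0, float * out /* 32 floats */) {
    const float d = e8m0_to_float(scale_e8m0);
    for (int i = 0; i < 16; ++i) {
        out[2*i + 0] = d * kvalues_e2m1[q[i] & 0x0F]; // nibble order is illustrative
        out[2*i + 1] = d * kvalues_e2m1[q[i] >> 4];
    }
}
```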

* ggml : add ggml_add_id (ggml-org#13)

* ggml : add ggml_add_id

* add cuda impl

* llama : add weight support check for add_id

* perf opt

* add vulkan impl

* rename cuda files

* add metal impl

* allow in-place ggml_add_id
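
A plausible reference semantics for the new op, inferred from its use for per-expert MoE biases here (shapes and names are illustrative, not the actual ggml signature):

```cpp
#include <cstddef>
#include <cstdint>

// "add_id" sketch: each activation row gets the bias row selected by its
// per-row expert id added to it.
void add_id_ref(const float * src, const float * bias, const int32_t * ids,
                float * dst, int n_rows, int n_cols) {
    for (int r = 0; r < n_rows; ++r) {
        const float * b = bias + (size_t) ids[r] * n_cols; // per-row bias selection
        for (int c = 0; c < n_cols; ++c) {
            dst[(size_t) r * n_cols + c] = src[(size_t) r * n_cols + c] + b[c];
        }
    }
}
```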

* llama : keep biases on CPU with --cpu-moe

* llama : fix compile error

ggml-ci

* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw

ggml-ci

* cleanup

ggml-ci

* sycl : fix supports_op for MXFP4

ggml-ci

* fix Unknown reasoning format

* ggml-cpu : fix AVX build

ggml-ci

* fix hip build

ggml-ci

* cuda : add mxfp4 dequantization support for cuBLAS

ggml-ci

* ggml-cpu : fix mxfp4 fallback definitions for some architectures

ggml-ci

* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: slaren <slarengh@gmail.com>

b6082

vulkan: fix build when using glslang that does not support coopmat2 (ggml-org#15062)

b6039

opencl: add `mul_mat_f32_f32_l4_lm` and `mul_mat_f16_f32_l4_lm` (ggml-org#14809)

b6037

server : add support for `embd_normalize` parameter (ggml-org#14964)

This commit adds support for the `embd_normalize` parameter in the
server code.

The motivation for this is that currently, if the server is started with
a pooling type other than `none`, Euclidean/L2 normalization is always
applied to embeddings. This is not always desirable; users may want a
different normalization method (or none at all), and this commit allows
that.

Example usage:
```console
curl --request POST \
    --url http://localhost:8080/embedding \
    --header "Content-Type: application/json" \
    --data '{"input": "Hello world today", "embd_normalize": -1}
```
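
A sketch of how the parameter plausibly maps to a normalization method, consistent with the description above; treating 0 as max-absolute and other positive values as a general p-norm is an assumption about the shared helper, not taken from this commit message:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// -1 leaves the vector untouched; 2 is the Euclidean/L2 default.
void embd_normalize(std::vector<float> & v, int embd_norm) {
    if (embd_norm < 0) return;                 // -1: no normalization
    double norm = 0.0;
    if (embd_norm == 0) {                      // assumed: max-absolute normalization
        for (float x : v) norm = std::max(norm, (double) std::fabs(x));
    } else {                                   // p-norm; p = 2 is Euclidean/L2
        for (float x : v) norm += std::pow(std::fabs(x), embd_norm);
        norm = std::pow(norm, 1.0 / embd_norm);
    }
    if (norm == 0.0) return;
    for (float & x : v) x = (float) (x / norm);
}
```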

b6020

CUDA: add roll (ggml-org#14919)

* CUDA: add roll

* Make everything const, use __restrict__
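
For reference, the 1-D semantics of a roll with wraparound, which the CUDA kernel parallelizes across tensor dimensions (illustrative only):

```cpp
#include <vector>

// Roll: shift elements by `shift` positions, wrapping around the ends.
std::vector<float> roll_1d(const std::vector<float> & src, int shift) {
    const int n = (int) src.size();
    std::vector<float> dst(n);
    for (int i = 0; i < n; ++i) {
        int j = ((i + shift) % n + n) % n; // destination index, handles negative shifts
        dst[j] = src[i];
    }
    return dst;
}
```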

b6005

vulkan: add ops docs (ggml-org#14900)

b5996

CANN: Implement GLU ops (ggml-org#14884)

Implement REGLU, GEGLU, SWIGLU ops according to ggml-org#14158
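
All three are gated linear units over a split input, out = act(a) * b, differing only in the activation. A per-element sketch (the tanh-based GELU approximation below is one common choice; the backend's exact variant may differ):

```cpp
#include <cmath>

inline float silu(float x) { return x / (1.0f + std::exp(-x)); }
inline float gelu(float x) { // tanh approximation; 0.79788456f ~= sqrt(2/pi)
    return 0.5f * x * (1.0f + std::tanh(0.79788456f * (x + 0.044715f * x * x * x)));
}

// a = gated half, b = linear half of the split input
inline float reglu (float a, float b) { return (a > 0.0f ? a : 0.0f) * b; }
inline float geglu (float a, float b) { return gelu(a) * b; }
inline float swiglu(float a, float b) { return silu(a) * b; }
```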

b5995

musa: fix build warnings (unused variable) (ggml-org#14869)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

b5994

ggml-cpu : disable GGML_NNPA by default due to instability (ggml-org#14880)

* docs: update s390x document for sentencepiece

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit e086c5e)

* docs: update huggingface links + reword

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 8410b08)

* ggml-cpu: disable ggml-nnpa compile flag by default

fixes ggml-org#14877

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit 412f4c7)

* docs: update s390x build docs to reflect nnpa disable

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
(cherry picked from commit c1eeae1)

---------

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

b5971

ci : correct label refactor->refactoring (ggml-org#14832)