llama : restore prefix space in llama tokenizer #4081

Merged
merged 2 commits into master from ceb/restore-prefix-space on Nov 15, 2023

Conversation

cebtenzzre
Collaborator

I noticed a regression, caused by #3538 (ref), in models that do not use special tokens:

Command: build/bin/main -t 16 -m /tmp/dolphin-llama2-7b.Q4_0.new.gguf -n 2048 --ignore-eos -p 'The quick brown fox' --seed 1699464025

Before this PR (with a leading space, GitHub doesn't show it):

The quick brown fox jumps over the lazy dog is a well-known English nursery rhyme or children's song.➖

This nursery rhyme has been popular since 1847, when it was first published in the Monthly Magazine of London. Since then, it has undergone various changes and adaptations, but its essence remains the same. The rhyme tells the story of a quick brown fox who jumps over the lazy dog to avoid being caught for stealing some cheese.

After this PR (no leading space):

The quick brown fox jumps over the wall (the last line of the poem) and is on the other side.nahmáo Often used as a verb, němäo is a term derived from the Sanskrit word nirvana, meaning "spirit" or "consciousness." The concept of the spirit or consciousness can be found in many cultures and religions throughout history. In Buddhism, for instance, the term "nirvana" refers to the state of perfect spiritual awakening that is the goal of practice. Similarly, in Christianity, the Holy Spirit is considered a divine spirit that guides and supports believers.

It's just a random LLaMA-2 model that I use for testing, but its output quality is clearly and significantly reduced by this change. I believe this applies to all other llama models that do not use special tokens to frame the prompt.

This is surprising behavior for downstream users - basic instruction-tuned (e.g. Vicuna, Alpaca) or writing-tuned models should just work, and I don't think anyone is adding the space that is currently necessary to match HF transformers. As mentioned here, there is some nuance in what transformers does that makes their tokenizer cooperate with both prompt formats, but I don't fully understand their implementation yet.

(Whether it really makes sense for HF transformers to put a leading space before the first ### Instruction: or USER: in the context of a multi-turn chat is another question, but that's how these models are being trained - it's done regardless of 'legacy' being True or False. AFAIK, there are models trained on Alpaca-style prompts that haven't seen multi-turn chats, and won't recognize ### Instruction: without the leading space.)

To be consistent, we should also honor add_prefix_space in the GPT2-style BPE tokenizer (cc @goerch). It appears to be true for Falcon and false for MPT. Right now, it seems that we don't store that in the GGUF at all.

@KerfuffleV2
Collaborator

To be consistent, we should also honor add_prefix_space in the GPT2-style BPE tokenizer

That's from tokenizer_config.json? It could be added to the same GGUF Python vocab.py logic in #4040 (note: I linked that PR because it also includes a gguf-py change to the logic for that section).

@cebtenzzre
Collaborator Author

I took shibe2's suggestion and made the space prefix conditional, but based on the position of the text relative to special tokens instead of based on whether special token processing is enabled. It seems to work fine without special tokens, but I haven't tested with special tokens.
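Roughly, the idea is to add the SPM prefix space only to raw text that appears at the very start of the input, before any special token. A simplified sketch of that logic (hypothetical fragment/field names, not the actual diff):

// Simplified sketch: the input is split into raw-text pieces and special-token
// pieces; only the first piece, if it is raw text, gets the SPM prefix space.
for (const auto & frag : fragments) {
    if (frag.is_raw_text) {
        std::string raw_text = frag.text;
        if (&frag == &fragments.front()) {
            raw_text = " " + raw_text;          // prefix space only at the very start
        }
        tokenize_raw_spm(raw_text, output);     // hypothetical SPM tokenization helper
    } else {
        output.push_back(frag.special_token_id);
    }
}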

@shibe2
Collaborator

shibe2 commented Nov 15, 2023

I will test it with a model that I already have. Someone else can also test it with a model that is known to perform better without extra spaces.

@shibe2
Collaborator

shibe2 left a comment

I tested the main executable. When the prompt begins with a special token, the tokenization result seems to be the same before and after the change. Inference works as expected.

@WeirdConstructor
Contributor

A question, since the space is always added implicitly: does that mean I can't tokenize partial prompts or tokenize in batches anymore? With the recent changes to the batch and sequence API, tokenizing the system prompt and the individual sequences separately is likely to become common. Sometimes you want to tokenize text that is added later and append it to an existing sequence.

@shibe2
Collaborator

shibe2 commented Nov 15, 2023

Does that mean I can't tokenize partial prompts or in batches anymore?

You can tokenize any string, whether the full prompt or a part of it, but since #2810, a space is prepended to each non-empty string. This means that concatenating the tokenized parts of a prompt is not equivalent to tokenizing the full prompt (a short illustration is at the end of this comment):

tokenize(x+y) != tokenize(x)+tokenize(y)

If you want them to be equivalent, you have to work around this behavior. For example:

// If escape processing is enabled and the suffix begins with a literal space,
// strip that space from the string up front and skip the token-level fix below.
bool suff_rm_leading_spc = params.escape;
if (suff_rm_leading_spc && params.input_suffix.find_first_of(" ") == 0 && params.input_suffix.size() > 1) {
    params.input_suffix.erase(0, 1);
    suff_rm_leading_spc = false;
}

std::vector<llama_token> embd_inp;
std::vector<llama_token> inp_pfx = ::llama_tokenize(ctx, params.input_prefix, false);
std::vector<llama_token> inp_sfx = ::llama_tokenize(ctx, params.input_suffix, false);

// Otherwise, drop the space token (29871 in the LLaMA vocab) that the SPM
// tokenizer prepends to the suffix.
const int space_token = 29871;
if (suff_rm_leading_spc && inp_sfx[0] == space_token) {
    inp_sfx.erase(inp_sfx.begin());
}
Or see #2810 (comment). Though this may not work for all models; I proposed a general solution in #3664.

This only affects models that use SentencePiece tokenizer (LLAMA_VOCAB_TYPE_SPM).
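To make the inequality concrete, here is a minimal sketch (assumptions: an initialized llama_context ctx, an SPM vocab, and the same ::llama_tokenize helper used in the workaround above; the strings are only examples):

// Minimal sketch of the non-equivalence, under the assumptions stated above.
const std::string x = "USER:";
const std::string y = " Do the thing";

std::vector<llama_token> whole = ::llama_tokenize(ctx, x + y, false); // tokenize(x+y)
std::vector<llama_token> parts = ::llama_tokenize(ctx, x,     false); // tokenize(x)
std::vector<llama_token> tail  = ::llama_tokenize(ctx, y,     false); // tokenize(y)
parts.insert(parts.end(), tail.begin(), tail.end());                  // tokenize(x)+tokenize(y)

// Because of the compulsory prefix space, `tail` starts with an extra space
// token (29871 in the LLaMA vocab), so in general parts != whole.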

@WeirdConstructor
Contributor

If you want them to be equivalent, you have to work around this behavior. For example:

Thanks, I suspected as much, so I will need something like your workaround for now. The problem is that I build prompts dynamically and attach them as sequences in the KV cache, which leads to odd completion differences depending on where and when I call llama_tokenize().
Given that you have to insert BOS tokens at the beginning of prompts for some models, I expected this to behave similarly. Maybe there should be a llama_tokenize_start_prompt() and a llama_tokenize_part() instead of yet another bool flag in the arguments.
Your #3664 issue describes that well.

@KerfuffleV2
Collaborator

@WeirdConstructor Sounds like you might want something like token healing: https://github.com/guidance-ai/guidance/blob/main/notebooks/token_healing.ipynb

@WeirdConstructor
Contributor

@WeirdConstructor Sounds like you might want something like token healing:

Ah yes, I am aware that tokens can span multiple characters and that splitting strings is not going to produce the same token vectors. Thanks for the heads-up, though.
It's just that llama_tokenize() inserting extra characters into the text does not help with staying in control of the tokens I feed to the model.

@shibe2
Collaborator

shibe2 commented Nov 15, 2023

It's not difficult to split text on token boundaries so that concatenation of the token sequences works (except for the issue with the compulsory space). For example, I checked that in the models I use, the newline (LF) character does not stick to any other tokens, so splitting the text into lines works. In general, conversational prompt formats don't allow tokens to contain pieces of different messages.
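A minimal sketch of that idea (assumptions: the common ::llama_tokenize helper used earlier in this thread, and a vocab in which LF is a standalone token; tokenize_by_line is a hypothetical helper name):

// Split on LF and tokenize line by line, so pieces can be appended to a
// sequence later without crossing token boundaries. The compulsory SPM prefix
// space still applies to each piece and must be handled separately (e.g. as in
// the workaround shown earlier in this thread).
static std::vector<llama_token> tokenize_by_line(llama_context * ctx, const std::string & text) {
    std::vector<llama_token> out;
    size_t start = 0;
    while (start < text.size()) {
        const size_t end = text.find('\n', start);
        // keep the newline with its line so no characters are lost
        const size_t len = (end == std::string::npos ? text.size() : end + 1) - start;
        const std::string line = text.substr(start, len);
        const std::vector<llama_token> toks = ::llama_tokenize(ctx, line, false);
        out.insert(out.end(), toks.begin(), toks.end());
        start += len;
    }
    return out;
}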

cebtenzzre merged commit a6fc554 into master on Nov 15, 2023
33 checks passed
cebtenzzre deleted the ceb/restore-prefix-space branch on November 15, 2023, 16:34
KerfuffleV2 pushed a commit to KerfuffleV2/llama.cpp that referenced this pull request Nov 17, 2023
@KerfuffleV2
Collaborator

KerfuffleV2 commented Nov 18, 2023

@cebtenzzre I hate to say it, but it seems this broke stuff.

Testing with this model: https://huggingface.co/NousResearch/Nous-Capybara-34B/tree/main

Quantized versions: https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF

Comparing with these two prompts: USER: Do the thing and \nUSER: Do the thing (an actual newline in the second one):

>>> import sys
>>> sys.path.insert(0, '/path/Nous-Capybara-34B')
>>> import tokenization_yi as yt
>>> a = yt.YiTokenizer('/path/Nous-Capybara-34B/tokenizer.model', add_bos_token=False)
>>> [(x, a._convert_token_to_id(x)) for x in a._tokenize('USER: Do the thing')]
[('USER', 27323), (':', 59601), ('▁Do', 2994), ('▁the', 567), ('▁thing', 1919)]
>>> [(x, a._convert_token_to_id(x)) for x in a._tokenize('\nUSER: Do the thing')]
[('\n', 144), ('USER', 27323), (':', 59601), ('▁Do', 2994), ('▁the', 567), ('▁thing', 1919)]

Now testing with the tokenize example - note I had to fix:

// const bool add_bos = true;
const bool add_bos = llama_should_add_bos_token(model);

Here's what tokenizing looked like without this change:

 27323 -> 'USER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

and

   144 -> '
'
 27323 -> 'USER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

So this is as expected and matches the Python Yi tokenizer.

Now with this PR applied:

  2134 -> ' US'
  1471 -> 'ER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

and

 59568 -> ' '
   144 -> '
'
 27323 -> 'USER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

The tokenize example sets special to true when tokenizing. With this PR, the results are identical regardless of what special is set to.

However, with the pre-PR version, it actually makes a difference. Tokenizing without this PR and special=false:

  2134 -> ' US'
  1471 -> 'ER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

and

 59568 -> ' '
   144 -> '
'
 27323 -> 'USER'
 59601 -> ':'
  2994 -> ' Do'
   567 -> ' the'
  1919 -> ' thing'

So basically the same as with this PR.

Anyway, it makes a huge difference in the quality of the response at least with the model I used as an example. The results are basically garbage with USER split into two tokens.
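For context, the tokenize example drives the common tokenizer helper roughly like this (a sketch rather than the exact source; special only controls whether special tokens in the prompt text are parsed):

// Rough sketch of the tokenize example's flow, not the exact source.
const bool add_bos = llama_should_add_bos_token(model);   // the fix mentioned above
std::vector<llama_token> tokens = ::llama_tokenize(ctx, prompt, add_bos, /*special=*/true);

for (const llama_token tok : tokens) {
    printf("%6d -> '%s'\n", tok, llama_token_to_piece(ctx, tok).c_str());
}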

@shibe2
Collaborator

shibe2 commented Nov 18, 2023

@KerfuffleV2 Did you use --escape? If so, try the older version without it. I think the space insertion here was a deliberate decision. Nowadays that decision does not look good, which I reported in #3664.

@KerfuffleV2
Collaborator

@shibe2

Did you use --escape?

What do you mean? The tokenize example doesn't support any option other than --ids (to print only token ids).

When actually calling tokenize to produce this output, I just used the shell's input stuff to insert a literal newline in the prompt string. No need for escaping or whatever. (In zsh and bash you can hit ^V^J to do that.)

@shibe2
Collaborator

shibe2 commented Nov 18, 2023

@KerfuffleV2 Sorry, I didn't read your comment carefully. What I meant is that main without --escape might have given the same poor response even before this change.

Anyway, it makes a huge difference in the quality of the response at least with the model I used as an example. The results are basically garbage with USER split into two tokens.

@KerfuffleV2
Collaborator

What I meant is that main without --escape

I don't really understand. Are you saying --escape will have an effect other than interpreting escape codes in this case? Or are you thinking I did something like -p '\nUSER: Do the thing'? (I didn't; I inserted a literal newline in the prompt text via the shell.)

@shibe2
Collaborator

shibe2 commented Nov 18, 2023

It just so happened that between #3538 and #4081, --escape could be used to control the insertion of the space in the LLAMA_VOCAB_TYPE_SPM path.

@KerfuffleV2
Collaborator

I still don't really know what you mean, since #3538 doesn't appear to add any behavior based on --escape, #4081 doesn't remove any such behavior, and it doesn't seem to exist in the current version either. I might have missed something looking at the code. Can you link to the exact commit you're saying did that?

@shibe2
Collaborator

shibe2 commented Nov 18, 2023

Now, looking at the code, I don't see it either. Perhaps I confabulated it. I assumed that --escape was needed to enable processing of special tokens and was adding it to the command when testing prompt formats with special tokens. But it appears that special-token processing is always on, which is not good, IMHO.

@KerfuffleV2
Collaborator

@cebtenzzre Just wanted to make sure you saw this since there was a bunch of other discussion in the comments: #4081 (comment)

olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023
@cebtenzzre
Collaborator Author

cebtenzzre commented Jan 8, 2024

It seems like this PR was also significant for Mistral-Instruct. [INST] and [/INST] do not appear to be special tokens, so this PR restored the expected behavior of that model.

Without this change (using the Nomic Vulkan backend):

[INST] Write a brief, two-paragraph summary of the history of Unix. [/INST] Unix was first developed as a means of teaching computer science at the Bell Laboratories in 1968 by William H. Gates, who's contributions to Unix were significant and instrumental for its creation. Gates made his contributions free, that Laboratories will also be make for its history instrumental: Thomas H. Gates did a lot of work on the creation of the operating system. He developed an innovative version of Unix called: Unix, by William H. Gates himself. Gates' contributions did not create the first development of Unix until that his multi-user development of Unix was a major milestone in his career. The operating system he did not develop and named Unix was created from scratch by Garth' S. Thompson in 1968, at the urging of William H. Gates III, who was working on creating it at the time. In 1973, the creator of the Unix operating system, William H. Ritchie, made his contributions free, to the development of this, the early version of the Linux kernel was created by H. He was then instrumental in the development of Unix's early version of the same, William H. Bell at the time of its creation (1968). In 1973, at the time of the kernel's development, that is the most important Unix milestone, Ritchie, its contributions did a lot of work on creating a new Linux version that was made by his contributions. He released the first version of Unix, the Creation of the operating system, in 1973, in an early-development stage.

With this change:

[INST] Write a brief, two-paragraph summary of the history of Unix. [/INST] Unix was first developed in the late 1960s and early 1970s by Ken Thompson and Dennis Ritchie at Bell Laboratories, where Ken worked on developing the operating system for research purposes, while Dennis created the multuser and network support systems with the intention of providing a secure, reliable method to share filesystem resources among multiple users. Originally, Unix was just used as an experimental version of time-sharing by multiple developers in the 1980s, but its popularity led to it being used as a basis for multiuser and network development efforts for much of its history. The system was first introduced into the operating system in the early 1990s when it became clear that Unix was not just in the 1970s, but also in much of its workings through the 1990s. In the 2000s, Dennis was able to access Unix’s source code in his late 1970s was able to help him and Dennis with their multiuser and network development efforts. Since then, Unix has helped numerous developers by creating it as a multuser system in much of its workings and sharing history among them, making it possible to use in the 2010s when Ken Thompson and Dennis were able to support the production of Unix. When the original operating system was given their network development efforts in much of its file-sharing legacy, the team was able to develop the operating system for 3960-2470, but it was also able to use this new version during the 2010s when Dennis was working on developing a network of computer systems and file sharing networks at Bell Laboratories, the team was not just in much of their workings in Unix, but also in the operating system that was being given to Ken

hodlen added a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
llama : restore prefix space in llama tokenizer (ggerganov#4081)