Min P sampler implementation [alternative to Top P/Top K] #3841
Conversation
The current implementation:
This is of course suboptimal in a lot of ways, but when drafting sampler ideas, I wanted to avoid touching the existing sampler stack order until I found a solution. What would be the best way to integrate this if the objective is to avoid Top P's and Top K's flaws via a single improved sampler, one that is not intended to be used in tandem with them? (Maybe they should be disabled when this is enabled, the way Mirostat disables other samplers?)
+ fixed 0.0 default for min_p
My small contribution to this great project. Ref: ggerganov/llama.cpp#3841 Closes: abetlen#911
* Added support for min_p My small contribution to this great project. Ref: ggerganov/llama.cpp#3841 Closes: #911 * Fix for negative temp (sample_softmax)
* Introduce the new Min-P sampler by @kalomaze (ggerganov#3841). The Min-P sampling method was designed as an alternative to Top-P, and aims to ensure a balance of quality and variety. The parameter *p* represents the minimum probability for a token to be considered, relative to the probability of the most likely token.
* Min-P enabled and set to 0.05 default
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
* Update server.cpp with min_p after it was introduced in ggerganov#3841
* Use spaces instead of tabs
* Update index.html.hpp after running deps.sh
* Fix test - fix line ending
Some languages do not have full-word tokens, so you will end up penalizing subwords or individual characters.
Having experimented with using strictly
My impression after brief testing: the current default order certainly provides more deterministic output in most cases, including with uninformed tweaking of the sampler settings, but it probably limits the user's control. Relevant: #4091
NMS (Non-Maximum Suppression) from the image object detection task also uses a probability threshold, similar to Min P.
Your quote above may very well produce better results with your CURRENT settings, but the author of Min P clearly states the correct position for Temperature in his recommended "General Purpose" parameters. It includes a note that says (paraphrased) "Temperature should be applied LAST, otherwise you will break Min P". And that claim makes sense, because Temperature artificially boosts the "probability scores" of all tokens by a non-linear amount relative to the top token. This would totally undermine Min P's purpose of EXCLUDING tokens that are NOT "likely enough". By putting Temperature BEFORE Min P, you essentially BYPASS/SKIP/BREAK Min P's algorithm! Therefore, Temperature MUST come last with Min P. I suspect that your overall settings were simply bad. Try applying the Min P author's settings from the image below, which is taken from the author's presentation.
You may be right, but I don't know what you mean by "your overall settings were simply bad". The only settings used were as given: only the min-p and temperature samplers enabled, with various values tried in the ranges [0.005..0.05] for min-p and [0.1..4.0] for temperature, with the subjective results described.
@ZoomRmc A quick glance at the Min P author's settings above shows that the active samplers were Repetition Penalty, Min P and Temperature, in that order. His original presentation (see link above, under the image of his settings) explains that Temperature performs a non-linear boost of the probability values of every token. So let's say a token was 5% likely to be the next token, and another token was 95% likely. Temperature would scale those non-linearly, so suddenly 5% might become 25%, while the 95% token becomes 97%. This completely changes the probability values of every token.

Imagine what that does to Min P: it COMPLETELY breaks it. The purpose of Min P is to look at the most probable token and then set a dynamic limit at X% of that probability, to cut out all the random noise at the bottom of the list. So it's unfortunately not even a question: Temperature MUST come after Min P, otherwise Min P no longer works.

For what it's worth, I've tried those exact "General Purpose" settings that he recommended above (with Min P before Temperature), and my results are incredibly good. It was like upgrading the brain of my Llama 2 model. I really, really like it! Thanks to everyone who implemented it in llama.cpp!
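The ordering claim discussed above can be checked numerically. The sketch below (illustrative logits and helper names, not llama.cpp's actual code) applies softmax with a temperature divisor and then counts how many tokens survive a Min P cutoff: a high temperature applied before Min P flattens the distribution enough that tail tokens slip past the threshold.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Softmax over logits with a temperature divisor (illustrative helper).
std::vector<float> softmax_temp(const std::vector<float>& logits, float temp) {
    float max_l = logits[0];
    for (float l : logits) if (l > max_l) max_l = l;
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp((logits[i] - max_l) / temp);  // shift for stability
        sum += probs[i];
    }
    for (float& p : probs) p /= sum;
    return probs;
}

// Count Min-P survivors: tokens with p >= min_p * max_prob.
int min_p_count(const std::vector<float>& probs, float min_p) {
    float max_p = probs[0];
    for (float p : probs) if (p > max_p) max_p = p;
    int n = 0;
    for (float p : probs) if (p >= min_p * max_p) ++n;
    return n;
}
```

With toy logits {4, 3, 0, -2} and min_p = 0.05, temperature 1.0 leaves two survivors, while temperature 2.5 applied first lets all four tokens through, illustrating why the recommended order places Temperature after Min P.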
The way that this sampler works is:
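The original description of the mechanism is not fully preserved in this thread, but based on the summary in the merge note above (the parameter *p* is "the minimum probability for a token to be considered, relative to the probability of the most likely token"), a minimal standalone sketch of the filtering step might look like this. Function and variable names are illustrative, not llama.cpp's actual API:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical illustration of Min-P filtering (not llama.cpp's real code).
// Keeps only tokens whose probability is at least `min_p` times the
// probability of the single most likely token.
std::vector<float> min_p_filter(const std::vector<float>& probs, float min_p) {
    float max_prob = *std::max_element(probs.begin(), probs.end());
    float threshold = min_p * max_prob;  // dynamic cutoff scales with confidence
    std::vector<float> kept;
    for (float p : probs) {
        if (p >= threshold) kept.push_back(p);
    }
    return kept;
}
```

With min_p = 0.05 (the default adopted in the merge above) and a top token at 0.60, every token below 0.03 probability is discarded; when the model is less confident, the cutoff drops proportionally.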
Top P has a design flaw: numerous tail-end tokens can be considered if the top tokens' scores are not concentrated enough to reach the specified Top P value, while TFS and other novel sampler approaches are not as easily interpretable or consistent as Top P. The primary purpose of the Min P sampler is to address both of these design flaws.
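To make that flaw concrete, here is a small self-contained comparison on a hypothetical toy distribution (the counting helpers are illustrative, not llama.cpp code): when the two strongest tokens hold only 75% of the mass, Top P at 0.9 must pull in ten individually tiny tail tokens to reach its cumulative target, while a Min P cutoff keeps just the two strong candidates.

```cpp
#include <vector>

// Count Top-P (nucleus) survivors: probs are assumed sorted descending;
// keep the smallest prefix whose cumulative probability reaches top_p.
int top_p_count(const std::vector<float>& sorted_probs, float top_p) {
    float cum = 0.0f;
    int n = 0;
    for (float p : sorted_probs) {
        cum += p;
        ++n;
        if (cum >= top_p) break;
    }
    return n;
}

// Count Min-P survivors: keep tokens with p >= min_p * max_prob.
int min_p_count(const std::vector<float>& sorted_probs, float min_p) {
    float threshold = min_p * sorted_probs.front();
    int n = 0;
    for (float p : sorted_probs) {
        if (p >= threshold) ++n;
    }
    return n;
}
```

The toy distribution below uses exact binary fractions (0.5, 0.25, and sixteen tokens of 1/64) so the cumulative sums are exact in float arithmetic.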
The current implementation is very rough around the edges code-wise, as I am not very experienced with C++, but I hope to polish it properly so it can be considered for merging. I have personally gotten improved results, and positive feedback from other users, especially regarding increased coherent creativity.
Mathematically, it is not as complex as TFS or other tail-search algorithms, but importantly, it is easy to understand, both in itself and in how it impacts the probabilities as a result. It is essentially a streamlined, linear version of Top A in design. However, it consistently outperforms Top P and Top K at removing tail-end tokens.