
llama : use std::abs in llama_sample_tail_free #2800

Merged — 1 commit merged into ggerganov:master on Aug 26, 2023

Conversation

@cebtenzzre (Collaborator) commented Aug 25, 2023:

Fixes this clang warning when building without '-march=native':

clang++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -DGGML_USE_K_QUANTS -c llama.cpp -o llama.o
llama.cpp:3911:33: warning: using integer absolute value function 'abs' when argument is of floating point type [-Wabsolute-value]
        second_derivatives[i] = abs(second_derivatives[i]);
                                ^
llama.cpp:3911:33: note: use function 'std::abs' instead
        second_derivatives[i] = abs(second_derivatives[i]);
                                ^~~
                                std::abs
1 warning generated.

Plain abs converts its argument to int, truncating the fractional part, which I don't think was intended here.
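For illustration, a minimal standalone sketch (not code from the PR) of the difference; note that whether an unqualified abs call resolves to the C integer version is implementation-dependent, which is why the warning only shows up on some builds:

#include <cmath>   // float/double overloads of std::abs
#include <cstdio>
#include <cstdlib> // C abs(int)

int main() {
    float x = -0.75f;

    // What the warning is about: when unqualified abs resolves to the C
    // integer version (as it did in the build log above), the float
    // argument is first converted to int, truncating toward zero.
    int   as_int = (int) x;           // -0.75f -> 0
    int   bad    = std::abs(as_int);  // 0 -- the magnitude is gone

    // The fix: std::abs from <cmath> has a float overload.
    float good   = std::abs(x);       // 0.75f

    printf("truncated: %d  std::abs: %f\n", bad, good);
    return 0;
}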

(review thread on llama.cpp: outdated, resolved)

Commit message: Plain 'abs' casts the input to int.
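For context, a simplified sketch of the tail-free sampling step the warning points at (a paraphrase, not the actual llama.cpp code; the real function operates on sorted candidate probabilities). Since the second derivatives of a sorted probability curve are small fractions, an int conversion would truncate essentially all of them to zero:

#include <cmath>
#include <vector>

// Simplified second-derivative step from tail-free sampling: probs is
// assumed sorted in descending order with at least 3 entries.
std::vector<float> tfs_abs_second_derivatives(const std::vector<float> & probs) {
    std::vector<float> first(probs.size() - 1);
    for (size_t i = 0; i < first.size(); ++i) {
        first[i] = probs[i] - probs[i + 1];
    }
    std::vector<float> second(first.size() - 1);
    for (size_t i = 0; i < second.size(); ++i) {
        // std::abs keeps the float magnitude; plain abs would convert a
        // value like -0.0021f to int, yielding 0.
        second[i] = std::abs(first[i] - first[i + 1]);
    }
    return second;
}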
@cebtenzzre changed the title from "llama : use fabsf in llama_sample_tail_free" to "llama : use std::abs in llama_sample_tail_free" on Aug 26, 2023.
@ivanstepanovftw (Collaborator) left a comment:

Thank you for noticing.

@ggerganov ggerganov merged commit 50526f3 into ggerganov:master Aug 26, 2023
25 checks passed
mattgauf added a commit to mattgauf/llama.cpp that referenced this pull request on Aug 26, 2023:
* master: (773 commits)
  server : add `/detokenize` endpoint (ggerganov#2802)
  convert.py : advanced option (ggerganov#2753)
  llama : use Unicode Escape Sequence to replace encoded characters (ggerganov#2814)
  flake.nix : add rocm support and cleanup (ggerganov#2808)
  llama : move #includes out of _GNU_SOURCE conditional (ggerganov#2817)
  main : fix bug (penalize_nl=false doesn't work) + suppress warning on mingw (ggerganov#1528)
  llama : use std::abs in llama_sample_tail_free (ggerganov#2800)
  k-quants : remove unnecessary tensor shape restrictions (ggerganov#2811)
  Better perplexity for 2- and 3-bit quantization for LLaMA-v2-70B (ggerganov#2807)
  Fix HellaSwag (ggerganov#2805)
  flake : build llama.cpp on Intel with nix (ggerganov#2795)
  Handle null rope scaling value (ggerganov#2793)
  Fix spm whitespaces (ggerganov#2806)
  examples : skip unnecessary external lib in server README.md how-to (ggerganov#2804)
  llama : fix struct decl (ggerganov#2790)
  Faster perplexity computation (ggerganov#2786)
  llama : add llama_beam_search() (ggerganov#2267)
  convert.py : Get rope scale from HuggingFace models (ggerganov#2772)
  llama-bench : add model sizes (ggerganov#2771)
  convert.py : export rope freq_base when converting CodeLlama from an HF model (ggerganov#2773)
  ...
akawrykow pushed a commit to akawrykow/llama.cpp that referenced this pull request on Aug 29, 2023.