
build : enable more non-default compiler warnings #3200

Merged Sep 28, 2023 (21 commits)

Commits:
- b52ce44 cmake : make -Wmissing-prototypes etc. match the Makefile (cebtenzzre, Sep 15, 2023)
- 5457b0c make : add some missing build targets (cebtenzzre, Sep 15, 2023)
- e632547 fix more missing 'static' specifiers (-Wmissing-declarations) (cebtenzzre, Sep 15, 2023)
- 724a0c2 build : remove -Wno-multichar as it is no longer needed (cebtenzzre, Sep 14, 2023)
- a80cb4c build : separate common warning flags (cebtenzzre, Sep 14, 2023)
- 8092657 quantize : fix missing 'noreturn' (-Wmissing-noreturn) (cebtenzzre, Sep 14, 2023)
- 86170e0 make : remove redundant -Wno-pedantic (cebtenzzre, Sep 18, 2023)
- 141c645 make : do not pass compiler-specific options to nvcc (cebtenzzre, Sep 15, 2023)
- 1191cc3 fix unreachable 'break' and 'return' (-Wunreachable-code-*) (cebtenzzre, Sep 14, 2023)
- 90eb665 examples : fix extra ';' after function definitions (-Wextra-semi) (cebtenzzre, Sep 14, 2023)
- df080fe ggml : do not put ';' after GGML_*_LOCALS (-Wextra-semi-stmt) (cebtenzzre, Sep 14, 2023)
- 54e28be fix more -Wextra-semi-stmt warnings (cebtenzzre, Sep 14, 2023)
- 0465daa baby-llama : fix -Wmaybe-uninitialized warning from gcc (cebtenzzre, Sep 18, 2023)
- 05adde4 build : use -Werror=implicit-function-declaration (cebtenzzre, Sep 20, 2023)
- a6b7476 compiler version detection (cebtenzzre, Sep 19, 2023)
- d38d59c Merge branch 'master' of https://github.com/ggerganov/llama.cpp into … (cebtenzzre, Sep 20, 2023)
- a7d13ac Merge branch 'master' of https://github.com/ggerganov/llama.cpp into … (cebtenzzre, Sep 21, 2023)
- 4b90878 Merge branch 'master' of https://github.com/ggerganov/llama.cpp into … (cebtenzzre, Sep 28, 2023)
- 39b5663 fix new warnings after merge (cebtenzzre, Sep 28, 2023)
- 7b15e8a make : fix clang version detection (cebtenzzre, Sep 28, 2023)
- b2130e6 build : re-enable some warnings for train-text-from-scratch (cebtenzzre, Sep 28, 2023)
Commit 8092657: quantize : fix missing 'noreturn' (-Wmissing-noreturn)
cebtenzzre committed Sep 18, 2023 (full hash 80926572f723323c588a5445a3548fc2389d0629)
CMakeLists.txt (1 addition, 0 deletions):

@@ -431,6 +431,7 @@ if (LLAMA_ALL_WARNINGS)
     set(cxx_flags
         ${warning_flags}
         -Wmissing-declarations
+        -Wmissing-noreturn
     )
     if (CMAKE_CXX_COMPILER_ID MATCHES "Clang") # clang++ only
         set(cxx_flags ${cxx_flags} -Wmissing-prototypes)
Makefile (1 addition, 1 deletion):

@@ -176,7 +176,7 @@ endif # LLAMA_DISABLE_LOGS
 WARN_FLAGS = -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function
 MK_CFLAGS += $(WARN_FLAGS) -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes \
              -Werror=implicit-int
-MK_CXXFLAGS += $(WARN_FLAGS) -Wmissing-declarations
+MK_CXXFLAGS += $(WARN_FLAGS) -Wmissing-declarations -Wmissing-noreturn

 # TODO(cebtenzzre): remove this once PR #2632 gets merged
 TTFS_CXXFLAGS = $(CXXFLAGS) -Wno-missing-declarations
examples/quantize/quantize.cpp (1 addition, 0 deletions):

@@ -71,6 +71,7 @@ static bool try_parse_ftype(const std::string & ftype_str_in, llama_ftype & ftyp
 // usage:
 //  ./quantize [--allow-requantize] [--leave-output-tensor] models/llama/ggml-model.gguf [models/llama/ggml-model-quant.gguf] type [nthreads]
 //
+[[noreturn]]
 static void usage(const char * executable) {
     printf("usage: %s [--help] [--allow-requantize] [--leave-output-tensor] model-f32.gguf [model-quant.gguf] type [nthreads]\n\n", executable);
     printf("  --allow-requantize: Allows requantizing tensors that have already been quantized. Warning: This can severely reduce quality compared to quantizing from 16bit or 32bit\n");