
fix: CMake arch auto-detection SIGILL, typos, and improved error messages#137

Open
redbasecap-buiss wants to merge 5 commits into alibaba:main from redbasecap-buiss:fix/bugfixes

Conversation

@redbasecap-buiss

Summary

This PR fixes several bugs and code quality issues:

🐛 Bug Fix: SIGILL on x86_64 CPUs without AVX-512 (Fixes #128, Fixes #92)

Root cause: _detect_x86_best() in cmake/option.cmake iterated from the highest microarchitecture (graniterapids) downward and picked the first one the compiler accepted. Since compilers accept all -march= flags regardless of the host CPU, building on a machine with a modern compiler but an AVX2-only CPU would emit AVX-512BW instructions, causing SIGILL at runtime.

Fix: Replace the flawed probe loop with -march=native, which targets the actual build machine's instruction set.
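A minimal sketch of what the corrected logic in cmake/option.cmake might look like (variable names here are illustrative, not necessarily the ones used in the PR), assuming the standard CheckCXXCompilerFlag module:

```cmake
# Sketch of the fix: instead of probing a list of -march= values from
# newest (graniterapids) downward -- which every modern compiler accepts
# regardless of the host CPU -- target the build machine itself.
include(CheckCXXCompilerFlag)

check_cxx_compiler_flag("-march=native" COMPILER_SUPPORTS_MARCH_NATIVE)
if(COMPILER_SUPPORTS_MARCH_NATIVE)
  # -march=native resolves to the host CPU's actual ISA, so AVX-512
  # instructions are only emitted when the build machine supports them.
  add_compile_options(-march=native)
endif()
```

Note that this trades portability of the resulting binary for correctness on the build host: a binary built this way should not be copied to older machines.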

📝 Typo Fixes

  • `check_need_adjuct_ctx` → `check_need_adjust_ctx` — misspelled method name across 6 files in both dense and sparse HNSW implementations
  • `wiht` → `with` in index_reducer.h, index.cc, mixed_streamer_reducer.*
  • `spase` → `sparse` in hnsw_builder_entity.h, hnsw_sparse_builder_entity.h
  • `seperate` → `separate` in vector_column_indexer.h
  • `Orignal` → `Original` in cosine_converter.cc error message
  • Full-width `！` (U+FF01) → `!` in cpu_features.h/.cc Doxygen comments
  • Extra space in README.md: `align=" center"` → `align="center"`

💬 Improved Error Message (Related to #78)

When using cosine metric with INT8 data without a quantizer, the error was a bare "Unsupported data type: " with no actionable detail. Now explains that cosine without quantizer only supports FP32/FP16 and suggests using QuantizerType::kInt8.

Changes

| File(s) | Change |
| --- | --- |
| cmake/option.cmake | Fix `-march=` auto-detection to use `-march=native` |
| 6 HNSW files | Rename `adjuct` → `adjust` |
| 8 source files | Fix various typos |
| src/core/interface/index.cc | Improve cosine+INT8 error message |
| src/ailego/internal/cpu_features.* | Fix full-width comment character |
| README.md | Fix HTML attribute spacing |

Ocean added 5 commits February 16, 2026 22:40
…ported arch

The auto-detection in _detect_x86_best() iterated from the highest
microarchitecture (graniterapids) down and picked the first one the
*compiler* accepted. Since compilers accept all -march= flags regardless
of the host CPU, this would emit e.g. AVX-512BW instructions on machines
that only support AVX2, causing SIGILL at runtime.

Replace with -march=native which targets the actual build machine's ISA.

Fixes alibaba#128
Fixes alibaba#92

The method check_need_adjuct_ctx() was consistently misspelled across
both dense and sparse HNSW implementations. Renamed to
check_need_adjust_ctx() for correctness.

- 'wiht' → 'with' in index_reducer.h, index.cc, mixed_streamer_reducer
- 'spase' → 'sparse' in hnsw_builder_entity.h, hnsw_sparse_builder_entity.h
- 'seperate' → 'separate' in vector_column_indexer.h
- Fix extra space in README.md HTML align attribute

When using cosine metric without a quantizer on INT8 data, the error
message was a bare 'Unsupported data type: ' with no detail. Now it
explains that cosine without quantizer only supports FP32/FP16 and
suggests using QuantizerType::kInt8 as a workaround.

Related to alibaba#78

- Replace full-width '！' (U+FF01) with ASCII '!' in cpu_features.h/.cc comments
- Fix 'Orignal' → 'Original' in cosine_converter.cc error message
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


Ocean seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
