Add Fast Fourier Transform (FFT) implementation with examples and tests; implement FFT library with 2-SAT solver and supporting infrastructure (#44)
Conversation
Adds an iterative radix-2 Cooley-Tukey FFT implementation for complex and real-valued sequences, including methods for forward/inverse transforms and linear convolution. Also includes comprehensive tests and documentation updates.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (8)
🚧 Files skipped from review as they are similar to previous changes (2)
Walkthrough

Adds FFT and NTT math modules (with Montgomery support), a policy-driven Link-Cut Tree (vertex and edge variants), a Two-Sat solver, many examples (demos/benchmarks/probes), extensive tests and CI presets, Graphviz tree export, new CMake options/presets, release/perf scripts, and comprehensive documentation/tutorials.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant CI as "CI"
  participant Bench as "fft_benchmark"
  participant Probe as "fft_reference_probe"
  participant Audit as "fft_release_audit.rb"
  participant Py as "Python+NumPy"
  CI->>Bench: build & run (--validate / --json)
  Bench-->>CI: upload JSON artifact
  CI->>Probe: build & run (emit JSON)
  Probe-->>CI: upload reference JSON
  CI->>Audit: invoke fft_release_audit.rb (paths)
  Audit->>Bench: run benchmark (--validate) and collect JSON
  Audit->>Probe: run probe (or use provided)
  Audit->>Py: call Python checker (via fft_reference_check.rb)
  Py-->>Audit: per-metric error report (JSON)
  Audit-->>CI: exit code + report (pass/fail)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks
❌ Failed checks (2 inconclusive)
✅ Passed checks (1 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: cff5ad8b63
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Pull request overview
Adds a new FFT module to Aleph-w, including documentation, a runnable example, and a comprehensive GoogleTest suite.
Changes:
- Introduces `fft.H` with an iterative radix-2 Cooley–Tukey FFT plus real/complex convolution helpers.
- Adds `fft_example.cc` and integrates it into the Examples build.
- Adds `fft_test.cc` and integrates it into the Tests build; updates docs to list FFT support.
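The summary above describes an iterative radix-2 Cooley–Tukey FFT. As a minimal self-contained sketch of that technique (illustrative only; `fft_inplace` is not the `fft.H` API, whose entry points per this PR include `FFTD::transformed` and convolution helpers):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Iterative radix-2 Cooley-Tukey FFT, in place. Illustrative sketch only;
// the size must be a power of two. invert=true computes the inverse
// transform (including the 1/n normalization).
void fft_inplace(std::vector<std::complex<double>> &a, bool invert)
{
  const std::size_t n = a.size();
  const double pi = std::acos(-1.0);

  // Bit-reversal permutation so butterflies can run in place
  for (std::size_t i = 1, j = 0; i < n; ++i)
    {
      std::size_t bit = n >> 1;
      for (; j & bit; bit >>= 1)
        j ^= bit;
      j ^= bit;
      if (i < j)
        std::swap(a[i], a[j]);
    }

  // Butterfly passes over doubling block lengths
  for (std::size_t len = 2; len <= n; len <<= 1)
    {
      const double ang = 2 * pi / len * (invert ? 1 : -1);
      const std::complex<double> wlen(std::cos(ang), std::sin(ang));
      for (std::size_t i = 0; i < n; i += len)
        {
          std::complex<double> w(1);
          for (std::size_t k = 0; k < len / 2; ++k)
            {
              const auto u = a[i + k];
              const auto v = a[i + k + len / 2] * w;
              a[i + k] = u + v;
              a[i + k + len / 2] = u - v;
              w *= wlen;
            }
        }
    }

  if (invert)
    for (auto &x : a)
      x /= static_cast<double>(n);
}
```

A forward transform followed by an inverse transform recovers the input up to floating-point error, which is the round-trip property the test suite exercises.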
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| fft.H | New FFT API: in-place transform, functional wrappers, container-friendly overloads, and convolution/multiply helpers. |
| Tests/fft_test.cc | Adds broad unit tests covering correctness vs naive DFT/convolution, round-trips, and generic containers. |
| Tests/CMakeLists.txt | Registers fft_test in the test target list. |
| Examples/fft_example.cc | Adds a detailed FFT usage demonstration (spectrum, convolution, polynomial multiplication, generic containers). |
| Examples/CMakeLists.txt | Registers fft_example in the example target list. |
| README.md | Documents the new fft.H header and example. |
| README.es.md | Spanish documentation entry for the new FFT header and example. |
| doxygen_groups.H | Updates algorithm group description to mention FFT/NTT under numerical algorithms. |
Actionable comments posted: 2
🧹 Nitpick comments (2)
doxygen_groups.H (1)
52-65: Nice to have: add an explicit `@see fft.H` cross-reference in the Algorithms group. Line 60 introduces FFT/NTT in the group text; adding a direct `@see fft.H` improves API discoverability from generated docs.

💡 Suggested doc tweak

```diff
 /**
  * @defgroup Algorithms Algorithms
  * @brief General-purpose algorithm implementations
+ * @see fft.H
  *
  * This module contains a wide variety of algorithms including:
```

As per coding guidelines: "Cross-reference related classes/functions with `@see` in Doxygen."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doxygen_groups.H` around lines 52 - 65, Update the Algorithms Doxygen group documentation (the block with `@defgroup` Algorithms) to add an explicit cross-reference to the FFT header; insert a `@see fft.H` line inside that comment block so generated docs link directly to fft.H and improve discoverability of the FFT/NTT mention.Tests/fft_test.cc (1)
353-388: Consider adding a performance baseline test for regression detection.The coding guidelines recommend including performance regression tests for critical algorithms. FFT is a performance-critical algorithm where O(n log n) complexity is essential.
💡 Optional: Add a simple performance sanity check
```cpp
TEST(FFT, PerformanceSanityCheck)
{
  // Verify FFT scales reasonably for larger inputs
  constexpr size_t n = 1024;
  Array<Complex> signal;
  signal.reserve(n);
  for (size_t i = 0; i < n; ++i)
    signal.append(Complex(static_cast<double>(i), 0.0));

  auto start = std::chrono::high_resolution_clock::now();
  const auto spectrum = FFTD::transformed(signal, false);
  auto end = std::chrono::high_resolution_clock::now();

  // Basic sanity: 1024-point FFT should complete in < 10ms on any modern hardware
  auto duration =
    std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
  EXPECT_LT(duration.count(), 10) << "FFT(1024) took unexpectedly long";
  EXPECT_EQ(spectrum.size(), n);
}
```

Based on learnings: "Include performance regression tests for critical algorithms"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/fft_test.cc` around lines 353 - 388, Add a lightweight performance regression test to catch large regressions in the FFT implementation: create a new test (e.g. TEST(FFT, PerformanceSanityCheck)) that builds a 1024-point Array<Complex> signal, times FFTD::transformed(signal, false) with std::chrono high_resolution_clock, and asserts the call completes within a reasonable threshold (and that the returned spectrum size equals n) using EXPECT_LT and EXPECT_EQ; include the necessary <chrono> header and choose a conservative threshold value (or read it from an environment variable) to avoid flaky CI, and place the test alongside the existing FFT randomized tests so it runs with the unit suite.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@fft.H`:
- Around line 65-82: Add class-level documentation for FFT<Real> that states
thread-safety and exception-safety guarantees: declare that FFT<Real> objects
are safe for concurrent read-only use (const methods can be called concurrently)
but are not safe for concurrent mutation (non-const methods such as transform,
convolve, and the in-place bit-reversal/transform routines must be externally
synchronized); specify exception-safety: document that public API functions
(transform, convolve and helpers) provide the basic exception safety guarantee
(if an exception is thrown—e.g. from memory allocation or user-provided iterator
operations—the container/state remains valid but may be partially modified),
note any strong or no-throw guarantees if you make methods noexcept (otherwise
explicitly state no methods are noexcept), and call out specific exceptions
callers may expect (std::bad_alloc, exceptions from element
conversions/iterators) so users know what to handle.
- Around line 331-557: Several public FFT overloads (transformed, transform,
inverse_transform, inverse_transform_real, and the template multiply overloads)
are missing required Doxygen tags; for each of these functions add `@brief`,
`@param`, `@return`, and `@throws` entries matching the style of nearby documented
methods (e.g., mirror the documented Array-based variants). Specifically: add a
concise `@brief` describing the overload behavior, `@param` describing the generic
Container parameter (and the invert flag where present), `@return` describing the
returned Array type, and `@throws` entries for the same exceptions thrown by the
non-template overloads (e.g., std::invalid_argument for non-power-of-two sizes,
std::domain_error where projection may fail). For template overloads also note
in the `@param` that generic iterables are accepted and reference helper builders
(build_complex_input / build_real_input) where appropriate.
---
Nitpick comments:
In `@doxygen_groups.H`:
- Around line 52-65: Update the Algorithms Doxygen group documentation (the
block with `@defgroup` Algorithms) to add an explicit cross-reference to the FFT
header; insert a `@see fft.H` line inside that comment block so generated docs
link directly to fft.H and improve discoverability of the FFT/NTT mention.
In `@Tests/fft_test.cc`:
- Around line 353-388: Add a lightweight performance regression test to catch
large regressions in the FFT implementation: create a new test (e.g. TEST(FFT,
PerformanceSanityCheck)) that builds a 1024-point Array<Complex> signal, times
FFTD::transformed(signal, false) with std::chrono high_resolution_clock, and
asserts the call completes within a reasonable threshold (and that the returned
spectrum size equals n) using EXPECT_LT and EXPECT_EQ; include the necessary
<chrono> header and choose a conservative threshold value (or read it from an
environment variable) to avoid flaky CI, and place the test alongside the
existing FFT randomized tests so it runs with the unit suite.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6bd6c7c2-e7d3-477e-8f4d-ea3ef934672f
📒 Files selected for processing (8)
- Examples/CMakeLists.txt
- Examples/fft_example.cc
- README.es.md
- README.md
- Tests/CMakeLists.txt
- Tests/fft_test.cc
- doxygen_groups.H
- fft.H
Refactor FFT implementation to use mixed radix-2/radix-4 algorithm for performance. Introduce a `Plan` class for precomputing twiddle factors and bit-reversal tables, and add parallel `p` APIs utilizing a `ThreadPool`. Add various DSP utilities, including STFT/ISTFT, windowing functions (Hann, Hamming, Blackman), and IIR/FIR filtering (lfilter, sosfilt, filtfilt, OverlapAdd convolver). Introduce methods for frequency response analysis (freqz), polynomial roots (zeros, poles), stability analysis, and filter design (Butterworth, Chebyshev). Update documentation and examples to reflect new APIs and optimized paths.
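Among the DSP utilities this commit describes are Hann/Hamming/Blackman windowing functions. As a hedged sketch of the underlying math only (the function name below is illustrative and not the PR's API), a periodic Hann window of length n is w[k] = 0.5 * (1 - cos(2*pi*k / n)), the form typically used for STFT framing because overlapping frames sum to a constant:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Periodic Hann window of length n (illustrative sketch; the actual
// windowing API added in this PR may differ in name and signature).
// w[k] = 0.5 * (1 - cos(2*pi*k / n)); w[0] = 0 and w[n/2] = 1.
std::vector<double> hann_window(std::size_t n)
{
  std::vector<double> w(n);
  const double pi = std::acos(-1.0);
  for (std::size_t k = 0; k < n; ++k)
    w[k] = 0.5 * (1.0 - std::cos(2.0 * pi * static_cast<double>(k) / n));
  return w;
}
```

Multiplying each STFT frame by such a window before the forward transform reduces spectral leakage at frame boundaries.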
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tpl_arrayQueue.H (1)
231-259: ⚠️ Potential issue | 🟠 Major

Handle `sz == 0` before returning a reference.

When `putn(0)` is called, the loop is skipped and the function returns `rear_item()`. On an empty queue that is a reference to unconstructed storage; on a non-empty queue it returns an unrelated older element even though nothing was inserted. Reject zero or special-case it before computing the return value.

Possible fix

```diff
 T & putn(size_t sz) requires std::is_default_constructible_v<T>
 {
+  ah_invalid_argument_if(sz == 0)
+    << "ArrayQueue::putn(): sz must be greater than 0";
   const size_t max_sz = 2*this->dim - this->n;
   ah_overflow_error_if(sz > max_sz) << "Maximum putn size reached";
```

As per coding guidelines, "Use `ah_invalid_argument_if(condition)` macro for invalid arguments."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayQueue.H` around lines 231 - 259, The putn function must reject sz == 0 before constructing/returning any reference: at the top of putn(size_t sz) (the function named putn) add an ah_invalid_argument_if(sz == 0) << "putn requires sz > 0" to enforce non-zero size, then proceed with the existing allocation loop and return rear_item(); this prevents returning a reference to unconstructed or unrelated storage when no elements are inserted.tpl_memArray.H (1)
438-444: ⚠️ Potential issue | 🟠 Major

Deallocate the raw buffer when element copy construction throws.

`copy_construct()` cleans up the partially built elements, but it does not free the raw storage. If a value copy throws here, the copy constructor leaks `ptr`, and copy assignment leaks `newptr`. Wrap both allocations in local cleanup or catch/deallocate before rethrowing. As per coding guidelines, "Ensure exception safety (basic, strong, or no-throw guarantee)."

Also applies to: 463-468
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_memArray.H` around lines 438 - 444, The copy constructor (MemArray(const MemArray&)) and the copy-assignment path that allocates into newptr must free the raw buffer if copy_construct(...) throws: after calling allocate() (or allocating newptr) wrap the copy_construct call in RAII or a try/catch that calls deallocate()/free buffer (ptr or newptr) before rethrowing so the raw storage is not leaked; e.g., use a small local scope guard that calls deallocate(ptr) on failure in the copy constructor and deallocate(newptr) in the copy-assignment flow to ensure exception safety.
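The catch-and-deallocate pattern this prompt describes can be sketched generically as follows (`copy_buffer` is an illustrative standalone helper, not one of MemArray's actual members):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Sketch of an exception-safe copy into freshly allocated raw storage:
// if any element copy throws, destroy the partially built prefix AND
// release the raw buffer before rethrowing, so nothing leaks.
template <typename T>
T *copy_buffer(const T *src, std::size_t n)
{
  T *raw = static_cast<T *>(::operator new(n * sizeof(T)));
  std::size_t built = 0;
  try
    {
      for (; built < n; ++built)
        new (raw + built) T(src[built]); // copy ctor may throw
    }
  catch (...)
    {
      while (built > 0)       // destroy the elements built so far...
        raw[--built].~T();
      ::operator delete(raw); // ...then free the raw storage
      throw;                  // rethrow: basic guarantee, no leak
    }
  return raw;
}
```

An RAII scope guard that is dismissed on success achieves the same effect without the explicit try/catch.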
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Examples/CMakeLists.txt`:
- Around line 142-143: The two entries fft_benchmark and fft_reference_probe in
Examples/CMakeLists.txt currently resolve to empty sources via get_source_file()
so no CMake targets are created; update the build to either (a) add the missing
source files for fft_benchmark and fft_reference_probe in the Examples tree and
ensure get_source_file() points to them, or (b) explicitly create targets (e.g.,
add_executable(fft_benchmark <correct_source.cxx>) and
add_executable(fft_reference_probe <correct_source.cxx>)) in
Examples/CMakeLists.txt so the targets exist; make sure these target names match
the names used to gate FFTRelease.* tests in Tests/CMakeLists.txt so the tests
register correctly.
In `@Examples/fft_example.cc`:
- Around line 42-47: The include block at the top of Examples/fft_example.cc is
missing <algorithm>, causing uses of std::min (seen at the spots around lines
495, 498, 501, 628, 631) to be undeclared on some platforms; fix by adding
`#include` <algorithm> to the existing include list (near the other standard
headers) so std::min is properly declared and then rebuild to verify
compilation.
In `@Tests/CMakeLists.txt`:
- Around line 508-514: The test registration for FFTRelease.BenchmarkValidate is
incorrectly gated by ALEPH_RUBY_EXECUTABLE; remove the Ruby check and only
require the fft_benchmark target to exist. Change the conditional to test only
TARGET fft_benchmark (leave find_program(ALEPH_RUBY_EXECUTABLE ruby) in place if
used elsewhere) and keep the add_test(NAME FFTRelease.BenchmarkValidate COMMAND
fft_benchmark --validate) inside that TARGET-only branch so the test runs
whenever the fft_benchmark target is available.
In `@Tests/two_sat_test.cc`:
- Around line 434-465: The test LargerSatisfiable currently only checks ok;
store the generated clause literals while creating them (e.g. push_back pairs of
(v1,target(v1)) and (v2,target(v2)) when calling sat.add_clause) and after auto
[ok, assignment] = sat.solve() iterate the stored clauses and assert each clause
is satisfied by the returned assignment (for a stored clause check
(assignment[v1] == expected1) || (assignment[v2] == expected2)). Use the
existing symbols target, sat.add_clause, sat.solve(), assignment and the stored
clause vector to validate the model.
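The model check described above, asserting that every recorded clause is satisfied by the solver's assignment, can be sketched like this (`Clause` and `model_satisfies` are hypothetical helper names, not symbols from `two_sat_test.cc`):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// A recorded 2-SAT clause: (variable v1 must equal b1) OR
// (variable v2 must equal b2).
using Clause = std::pair<std::pair<std::size_t, bool>,
                         std::pair<std::size_t, bool>>;

// True iff every recorded clause is satisfied by the assignment
// returned from the solver.
bool model_satisfies(const std::vector<bool> &assignment,
                     const std::vector<Clause> &clauses)
{
  for (const Clause &c : clauses)
    {
      const bool first = assignment[c.first.first] == c.first.second;
      const bool second = assignment[c.second.first] == c.second.second;
      if (not (first or second))
        return false; // this clause is violated by the model
    }
  return true;
}
```

Recording clauses while building the instance and replaying them through such a checker turns "solver said ok" into a genuine model-validation test.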
In `@tpl_arrayStack.H`:
- Around line 454-459: The popn(const int & n) noexcept implementation allows
negative n to underflow due to mixed signed/unsigned arithmetic and thus
increases head; fix by explicitly validating the input before mutating head:
ensure n is non-negative and no larger than current size (e.g., assert(n >= 0 &&
n <= static_cast<int>(head)) or convert head to an int for the comparison), then
subtract n from head and return the moved element; update the assertion in
popn() to reference n >= 0 and n <= static_cast<int>(head) so negative n cannot
grow the stack.
In `@tpl_memArray.H`:
- Around line 543-563: The push() overloads currently placement-new ptr[n]
before shifting and incrementing n, which can leave a live tail object if a
subsequent assignment throws; modify both push(const T&) and push(T&&) (the
functions using expand(), ptr and n) to only increment n after all
moves/assignments succeed and wrap the shifting/assignment block in a try/catch
that calls the destructor on the tail slot (ptr[n].~T()) before rethrowing;
ensure the tail is created/destroyed symmetrically so no live object remains on
exception and n remains unchanged unless the operation fully completes.
- Around line 244-267: relocate_construct currently unconditionally uses move
for move-only types which may throw; add a compile-time check to prevent unsafe
use by inserting a static_assert inside the template (e.g., within
relocate_construct) that requires std::is_nothrow_move_constructible_v<T> when
std::is_copy_constructible_v<T> is false (or equivalently:
static_assert(std::is_copy_constructible_v<T> ||
std::is_nothrow_move_constructible_v<T>, "relocate_construct requires T to be
copy-constructible or nothrow-move-constructible");), so that move-only but
throwing-move types are rejected at compile time and avoid leaving partially
moved objects in src (reference symbols: relocate_construct, T, Index_Fn,
destroy_elems).
- Around line 164-173: raw_alloc and raw_dealloc currently call the non-aligned
::operator new/delete which can return misaligned storage for over-aligned
types; modify raw_alloc(const size_t count) to check alignof(T) against
__STDCPP_DEFAULT_NEW_ALIGNMENT__ and call the aligned allocation overload
::operator new(count * sizeof(T), std::align_val_t{alignof(T)}) when
over-aligned (otherwise keep the existing call), and modify raw_dealloc(T* p)
noexcept to call ::operator delete(p, std::align_val_t{alignof(T)}) for
over-aligned T (falling back to ::operator delete(p) otherwise); ensure <new> is
included so std::align_val_t is available and preserve noexcept semantics.
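The over-alignment-aware allocation this prompt describes can be sketched as free functions (illustrative standalone versions; the real `raw_alloc`/`raw_dealloc` are MemArray members):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <new>

// Allocate raw storage for count objects of T, using the aligned
// operator new overload when T is over-aligned (C++17).
template <typename T>
T *raw_alloc(std::size_t count)
{
  if constexpr (alignof(T) > __STDCPP_DEFAULT_NEW_ALIGNMENT__)
    return static_cast<T *>(
      ::operator new(count * sizeof(T), std::align_val_t{alignof(T)}));
  else
    return static_cast<T *>(::operator new(count * sizeof(T)));
}

// Matching deallocation: the aligned delete overload must pair with
// the aligned new overload.
template <typename T>
void raw_dealloc(T *p) noexcept
{
  if constexpr (alignof(T) > __STDCPP_DEFAULT_NEW_ALIGNMENT__)
    ::operator delete(p, std::align_val_t{alignof(T)});
  else
    ::operator delete(p);
}
```

Mismatching the pair (aligned new with plain delete, or vice versa) is undefined behavior, which is why both sides branch on the same condition.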
In `@Two_Sat.H`:
- Around line 137-142: signed_to_lit() currently computes -v - 1 which overflows
for v == std::numeric_limits<long>::min(); update signed_to_lit to compute the
magnitude for negative v without invoking UB (e.g., compute magnitude as -(v +
1) + 1 or otherwise convert safely to size_t before subtracting) while still
calling neg_lit/pos_lit, and also update add_clause_signed() to explicitly
reject the sentinel value std::numeric_limits<long>::min() (in addition to zero)
so LONG_MIN cannot reach signed_to_lit; refer to the functions signed_to_lit,
add_clause_signed, pos_lit and neg_lit when making these changes.
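The overflow-safe conversion the prompt asks for can be sketched as follows (`signed_to_var` is a hypothetical standalone analogue of the `signed_to_lit` helper; the key point is that `-(v + 1)` never overflows for negative `v` once `LONG_MIN` is rejected):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <stdexcept>

// DIMACS-style literal to variable index, without signed overflow:
// positive v maps to variable v-1, negative v to variable |v|-1.
// Computing -v directly is UB for v == LONG_MIN, so that sentinel is
// rejected up front and the negative branch uses -(v + 1) instead.
std::size_t signed_to_var(long v)
{
  if (v == 0 or v == std::numeric_limits<long>::min())
    throw std::invalid_argument("invalid DIMACS literal");
  if (v > 0)
    return static_cast<std::size_t>(v) - 1;
  // For v in (LONG_MIN, 0): v + 1 is in (LONG_MIN, 1], so the
  // negation stays within long's range. -(v + 1) == |v| - 1.
  return static_cast<std::size_t>(-(v + 1));
}
```

The sign of `v` then selects between the positive and negated literal for that variable.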
---
Outside diff comments:
In `@tpl_arrayQueue.H`:
- Around line 231-259: The putn function must reject sz == 0 before
constructing/returning any reference: at the top of putn(size_t sz) (the
function named putn) add an ah_invalid_argument_if(sz == 0) << "putn requires sz
> 0" to enforce non-zero size, then proceed with the existing allocation loop
and return rear_item(); this prevents returning a reference to unconstructed or
unrelated storage when no elements are inserted.
In `@tpl_memArray.H`:
- Around line 438-444: The copy constructor (MemArray(const MemArray&)) and the
copy-assignment path that allocates into newptr must free the raw buffer if
copy_construct(...) throws: after calling allocate() (or allocating newptr) wrap
the copy_construct call in RAII or a try/catch that calls deallocate()/free
buffer (ptr or newptr) before rethrowing so the raw storage is not leaked; e.g.,
use a small local scope guard that calls deallocate(ptr) on failure in the copy
constructor and deallocate(newptr) in the copy-assignment flow to ensure
exception safety.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c2e0ac7a-65d1-438c-8b4a-2648ced10500
📒 Files selected for processing (16)
- CMakeLists.txt
- Examples/CMakeLists.txt
- Examples/fft_example.cc
- README.es.md
- README.md
- Tests/CMakeLists.txt
- Tests/array.cc
- Tests/arrayqueue.cc
- Tests/fft_test.cc
- Tests/two_sat_test.cc
- Two_Sat.H
- fft.H
- tpl_arrayQueue.H
- tpl_arrayStack.H
- tpl_memArray.H
- tpl_segment_tree.H
🚧 Files skipped from review as they are similar to previous changes (1)
- README.es.md
Rework MemArray to simplify constructor and assignment operators, reducing template constraints and improving move semantics. Update ArrayQueue to align with MemArray changes, especially in copy/move constructors and assignment. Update tests for both modules to reflect these internal changes and improve test coverage.
Pull request overview
Copilot reviewed 37 out of 41 changed files in this pull request and generated 4 comments.
Comments suppressed due to low confidence (1)
tpl_memArray.H:647
- In `traverse()`, the loop counter is `int` while `n` is `size_t`. For large arrays (n > INT_MAX), this can overflow/terminate early and it also triggers signed/unsigned warnings. Use `size_t` for the index (and prefer `++i`).
Actionable comments posted: 19
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tpl_arrayStack.H (1)
210-237: ⚠️ Potential issue | 🟠 Major

Must fix — const methods expose mutable access to stack elements.

The `const` methods `top()`, `top(i)`, `get_last()`, and `base()` in both `ArrayStack` (lines 210–237) and `FixedStack` (lines 465–535) return mutable references, allowing modification of elements through const-qualified stacks. Additionally, `FixedStack::traverse(...) const` uses `const_cast` to bypass const semantics.

Add overloaded `const` variants returning `const T &`:

```cpp
T & top() noexcept { return array[head - 1]; }
const T & top() const noexcept { return array[head - 1]; }
```

Replace `traverse(...) const` to avoid `const_cast`:

```cpp
template <class Operation>
bool traverse(Operation & operation) const
{
  for (size_t i = 0; i < head; ++i)
    if (not operation(static_cast<const T &>(array[i])))
      return false;
  return true;
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 210 - 237, The const-qualified accessors (ArrayStack::top(), ArrayStack::top(size_t), ArrayStack::get_last(), ArrayStack::base() and the equivalents in FixedStack) currently return non-const T& allowing mutation through const objects; add const overloads that return const T& (e.g. provide both T& top() noexcept and const T& top() const noexcept, and similarly for top(size_t), base(), get_last()) and adjust their implementations to call the underlying array accessors as const; also remove the const_cast from FixedStack::traverse(...) const by changing the traversal to pass elements as const T& to the operation (e.g. call operation(static_cast<const T&>(array[i])) or simply operation(array[i]) when using a const reference) so the const method no longer exposes mutable references.
♻️ Duplicate comments (1)
tpl_arrayStack.H (1)
175-179:⚠️ Potential issue | 🔴 CriticalMust fix — validate count arguments before touching stack state.
pushn(0)returnslast()/array[head - 1]even though no element was pushed, andFixedStack::popn()still acceptsn <= 0: negative values growhead, while zero returnsarray[head]. Please reject invalid counts up front.Suggested fix
T &pushn(const size_t & n = 1) { + ah_invalid_argument_unless(n > 0); array.putn(n); return array.last(); }T &pushn(const size_t & n = 1) noexcept { - assert(head + n <= dim); + assert(n > 0); + assert(n <= dim - head); head += n; return array[head - 1]; }T popn(const int & n) noexcept { - assert((int (head - n)) >= 0); + assert(n > 0); + const auto count = static_cast<size_t>(n); + assert(count <= head); - head -= n; + head -= count; return std::move(array[head]); }As per coding guidelines, "Check boundary conditions and off-by-one errors" and "Use assertions (
assert, customah-errors.Hmacros) for preconditions."Also applies to: 435-440, 456-461
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 175 - 179, pushn and the FixedStack::popn variants currently accept non-positive counts and can mutate or read stack state incorrectly; add precondition checks that n > 0 (and for popn also n <= current_size if applicable) before touching array or head. In pushn(const size_t & n = 1) validate n>0 and return/raise an assertion (use existing assert or ah-errors.H macros) without calling array.putn when invalid; in FixedStack::popn and any popn overloads check n>0 (and bounds) up front and reject/assert rather than allowing negative/zero to change head or return invalid elements. Apply the same checks to the other popn/pushn occurrences referenced (around the 435-440 and 456-461 blocks).
🟡 Minor comments (10)
.gitignore-343-345 (1)
343-345: ⚠️ Potential issue | 🟡 Minor

Anchor the Doxygen ignores to the actual output path.

Line 344 and Line 345 are broader than the comment suggests: `html/` and `latex/` ignore any directory with those names anywhere in the repo. If the intent is only generated docs output, scope them to the repo root (`/html/`, `/latex/`) or the explicit docs path (`docs/html/`, `docs/latex/`) to avoid hiding legitimate source/assets.

Proposed fix

```diff
-# Doxygen generated output
-html/
-latex/
+# Doxygen generated output
+/html/
+/latex/
```

If Doxygen writes under `docs/` instead, use:

```diff
-# Doxygen generated output
-html/
-latex/
+# Doxygen generated output
+docs/html/
+docs/latex/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 343 - 345, The .gitignore currently ignores any directory named html/ or latex/ anywhere in the repo; change those entries to anchored paths so only the generated Doxygen output is ignored (for example replace "html/" and "latex/" with "/html/" and "/latex/" or with the explicit docs path such as "docs/html/" and "docs/latex/") so legitimate source/assets named html or latex in subfolders are not accidentally excluded.tpl_arrayStack.H-499-514 (1)
499-514: ⚠️ Potential issue | 🟡 Minor

Should fix — `FixedStack::traverse()` runs in the opposite order from its contract.

The doc says youngest-to-oldest, but Line 509 starts at `array[0]`, which is the oldest pushed element. Reverse the loop, or update the contract and tests if the order is intentional.

Suggested fix

```diff
 bool traverse(Operation & operation)
 {
-  for (size_t i = 0; i < head; i++)
-    if (not operation(array[i]))
+  for (size_t i = head; i > 0; --i)
+    if (not operation(array[i - 1]))
       return false;
   return true;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 499 - 514, The traverse implementation in FixedStack::traverse currently iterates from array[0] to array[head-1], which yields oldest-to-youngest contrary to the doc comment; change the loop in traverse (template<class Operation> bool traverse(Operation & operation)) to iterate from head-1 down to 0 (youngest to oldest), calling operation(array[i]) in that reversed order and preserving the early-return behavior when operation returns false; ensure you handle the empty-stack case correctly so no underflow occurs when head is zero.Examples/tree_node_graphviz_example.cc-1-6 (1)
1-6: ⚠️ Potential issue | 🟡 Minor

Missing MIT license header.

This example file lacks the standard MIT license header that other example files in this PR include (see `ntt_example.cc` lines 1-29). Per coding guidelines, all `.cc` files should include the license header.

Add license header

```diff
+/*
+  Aleph_w
+
+  Data structures & Algorithms
+  version 2.0.0b
+  https://github.com/lrleon/Aleph-w
+
+  This file is part of Aleph-w library
+
+  Copyright (c) 2002-2026 Leandro Rabindranath Leon
+
+  Permission is hereby granted, free of charge, to any person obtaining a copy
+  of this software and associated documentation files (the "Software"), to deal
+  in the Software without restriction, including without limitation the rights
+  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+  copies of the Software, and to permit persons to whom the Software is
+  furnished to do so, subject to the following conditions:
+
+  The above copyright notice and this permission notice shall be included in all
+  copies or substantial portions of the Software.
+
+  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+  SOFTWARE.
+*/
+
 #include <iostream>
 #include <fstream>
```

As per coding guidelines: "All `.H` header files must include the MIT license header at the top" and this applies to implementation files as well based on the project convention shown in `ntt_example.cc`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/tree_node_graphviz_example.cc` around lines 1 - 6, Add the standard MIT license header to the top of the Examples/tree_node_graphviz_example.cc file (matching the header used in ntt_example.cc) so the file begins with the project license before any includes; update the header block above the existing includes (e.g., before `#include` <iostream>, `#include` "tpl_tree_node.H", and `#include` "generate_tree.H") to ensure the file follows the same licensing convention as other examples.Tests/ntt_test.cc-1349-1375 (1)
1349-1375: ⚠️ Potential issue | 🟡 Minor

Should fix: make this perf regression check explicitly opt-in and less hardware-sensitive.

`ENABLE_PERF_TESTS=0` still runs this test because the guard only checks presence, and the fixed 2500 ms wall-clock budget is brittle on slower or noisier machines. Gate it on `== "1"` like the million-digit test, and only enforce a budget when the caller explicitly provides one.

Suggested change

```diff
-  if (std::getenv("ENABLE_PERF_TESTS") == nullptr)
+  if (const char * env = std::getenv("ENABLE_PERF_TESTS");
+      env == nullptr or std::string(env) != "1")
     GTEST_SKIP() << "Set ENABLE_PERF_TESTS=1 to run NTT performance checks";
@@
-  EXPECT_LE(elapsed, 2500)
-    << "NTT performance regression detected";
+  if (const char * budget = std::getenv("NTT_PERF_BUDGET_MS");
+      budget != nullptr)
+    EXPECT_LE(elapsed, std::strtoull(budget, nullptr, 10))
+      << "NTT performance regression detected";
```

Based on learnings: Tests must be deterministic and repeatable; Include performance regression tests for critical algorithms.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/ntt_test.cc` around lines 1349 - 1375, The test PerformanceRegressionIsGuardedByEnvironment currently runs when ENABLE_PERF_TESTS is merely present and always asserts elapsed <= 2500ms; change the enable check to require ENABLE_PERF_TESTS == "1" (use std::getenv and strcmp or string compare) like the million-digit test, and stop using the hard-coded 2500ms budget: instead read an explicit budget environment variable (e.g., PERF_TEST_BUDGET_MS) and only perform the EXPECT_LE(elapsed, budget) assertion when that env var is set and successfully parsed to an integer; otherwise skip the budget assertion (or skip the test entirely if you prefer) while leaving the setup and call to DefaultNTT::pmultiply and the size assertion intact.Tests/ntt_test.cc-842-860 (1)
842-860: ⚠️ Potential issue | 🟡 Minor

Should fix: test should verify the square-root property, not the specific root value.
The `poly_sqrt()` contract specifies that the returned polynomial `g` satisfies `g² ≡ coeffs (mod x^n, MOD)`, not that it matches a pre-computed root. Both `r` and `-r` are valid square roots. The test should verify `recovered * recovered == square` rather than `recovered == root`, ensuring algorithmic correctness instead of a sign-choice detail.

Suggested change
```diff
   const Array<uint64_t> recovered = DefaultNTT::poly_sqrt(square, n);
-  expect_equal_arrays(recovered, root, "poly_sqrt non-zero constant");
+  expect_equal_arrays(
+      naive_multiply_trunc<DefaultNTT>(recovered, recovered, n),
+      square,
+      "poly_sqrt non-zero constant");
   const Array<uint64_t> shifted_root = series_prefix_mod<DefaultNTT>(
       Array<uint64_t>({0, 5, 4, 1}), n);
   const Array<uint64_t> shifted_square =
       naive_multiply_trunc<DefaultNTT>(shifted_root, shifted_root, n);
   const Array<uint64_t> shifted_recovered =
       DefaultNTT::poly_sqrt(shifted_square, n);
-  expect_equal_arrays(shifted_recovered, shifted_root,
-                      "poly_sqrt shifted root");
+  expect_equal_arrays(
+      naive_multiply_trunc<DefaultNTT>(shifted_recovered, shifted_recovered, n),
+      shifted_square,
+      "poly_sqrt shifted root");
```
Verify each finding against the current code and only fix it if needed. In `@Tests/ntt_test.cc` around lines 842 - 860, The test currently asserts the returned polynomial equals a specific precomputed root; instead it should verify the square-root property. For both the unshifted and shifted cases, replace the expect_equal_arrays(recovered, root, ...) and expect_equal_arrays(shifted_recovered, shifted_root, ...) checks with computation of product = naive_multiply_trunc<DefaultNTT>(recovered, recovered, n) (and similarly for shifted_recovered) and assert expect_equal_arrays(product, square, "poly_sqrt squares to original") (and for shifted case use shifted_square), i.e. use DefaultNTT::poly_sqrt result squared and compared to the original square rather than comparing to a specific root.

docs/fft-tutorial.en.md (1)
213-227: ⚠️ Potential issue | 🟡 Minor

Define the constructor argument in the streaming STFT snippet.
`ISTFTProcessor synthesizer(fft_size, window, iopts);` references an identifier that is never introduced in this example. Readers copying the block get a compile error; declare the value explicitly here or replace it with the already-defined option/source that the constructor expects.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/fft-tutorial.en.md` around lines 213 - 227, The snippet uses an undefined identifier iopts when constructing FFTD::ISTFTProcessor; update the example to pass the correct options variable or explicitly declare it (e.g., define FFTD::ISTFTOptions iopts = {/* ... */}; or reuse the previously shown opts if that is the intended value) so the line FFTD::ISTFTProcessor synthesizer(fft_size, window, iopts); compiles; ensure the example shows the declaration/initialization of iopts (or replace iopts with opts) and keep references to FFTD::STFTProcessor analyzer, fft_size, and window consistent.

docs/fft-tutorial.en.md (1)
44-54: ⚠️ Potential issue | 🟡 Minor

Normalize the code block style to the repo lint rule.
This fenced-block pattern is repeated through the tutorial, and markdownlint is already flagging MD046 on each sample. Converting the blocks to the repository’s expected style will clear the current warnings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/fft-tutorial.en.md` around lines 44 - 54, The markdown code fences in the FFT tutorial use a ```cpp fenced block; change these to the repo's canonical code-block style (replace the current triple-backtick fenced blocks that contain the FFT example including "# include <fft.H>", "template <std::floating_point Real = double> class FFT", and the aliases "FFTD" and "FFTF") to match the repository lint rule (normalize all similar fenced blocks throughout the tutorial) so MD046 warnings are cleared.

README.md (1)
1115-1119: ⚠️ Potential issue | 🟡 Minor

Should fix — the lazy-update snippet reuses nodes from the previous tree instance.
`u` and `v` come from the `Gen_Link_Cut_Tree<int, ...>` example above, so this block does not type-check against `lazy_lct`. Create vertices from `lazy_lct` before calling `path_apply()`.

Suggested fix
```diff
 // Lazy path updates (add delta to every node on a path)
 Gen_Link_Cut_Tree<long long, SumMonoid<long long>, AddLazyTag<long long>> lazy_lct;
-// ... link nodes, then:
-lazy_lct.path_apply(u, v, +10LL);  // add 10 to all path vertices
+auto *p = lazy_lct.make_vertex(10LL);
+auto *q = lazy_lct.make_vertex(20LL);
+lazy_lct.link(p, q);
+lazy_lct.path_apply(p, q, +10LL);  // add 10 to all path vertices
```
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 1115 - 1119, The snippet reuses vertices u and v from the earlier Gen_Link_Cut_Tree<int,...> example which don't belong to lazy_lct; instead construct vertices from lazy_lct and use those when calling lazy_lct.path_apply(...). Locate the Gen_Link_Cut_Tree<long long, SumMonoid<long long>, AddLazyTag<long long>> lazy_lct declaration and replace the reused u,v with new vertices created via lazy_lct.make_vertex() (or the tree's vertex-construction method used elsewhere) and then call lazy_lct.path_apply(new_u, new_v, +10LL).

Tests/link_cut_tree_test.cc (1)
683-707: ⚠️ Potential issue | 🟡 Minor

Should fix — add oracle-backed stress for lazy path updates.
These cases only cover one static chain and a couple of hand-picked update patterns. The lazy-tag machinery is the most bug-prone part of the aggregate LCT surface, so it needs the same kind of seeded oracle stress you already added for connectivity and path queries, especially with overlapping updates plus reroot/cut/relink operations.
Based on learnings, "Applies to Tests/**/*_test.cc : Use property-based testing or randomized tests for complex algorithms" and "Applies to Tests/**/*_test.cc : Tests must verify algorithmic correctness, not just absence of crashes".
Also applies to: 720-747
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/link_cut_tree_test.cc` around lines 683 - 707, The test only covers a fixed chain and a few manual updates; add an oracle-backed randomized stress test that repeatedly performs mixed operations (create nodes via lct.make_vertex, lct.link, lct.cut, lct.reroot if available, and path_apply) with a fixed RNG seed and verifies results against a naive array model using the same SumMonoid<long long>/AddLazyTag<long long> semantics by comparing lct.path_query to the oracle after every operation (including overlapping updates and after reroot/cut/link sequences); ensure the harness uses NodePtr references from the LCT (nd vector), exercises overlapping path_apply ranges, seeds the RNG for reproducibility, and fails the test with diagnostic info (seed + sequence) when any mismatch is detected.

Two_Sat.H (1)
140-149: ⚠️ Potential issue | 🟡 Minor

Should fix — reject invalid sentinels in the public signed-literal helper.
`add_clause_signed()` now rejects `0` and `LONG_MIN`, but `signed_to_lit()` is still public and accepts both, returning an invalid literal value that only fails much later in unrelated APIs. Please enforce the same precondition at the conversion boundary, or make the helper non-public. As per coding guidelines, "Validate external inputs before processing" and "Use assertions (`assert`, custom `ah-errors.H` macros) for preconditions".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Two_Sat.H` around lines 140 - 149, signed_to_lit currently accepts invalid sentinel values (0 and LONG_MIN) that add_clause_signed already rejects; update the public conversion boundary by validating inputs and asserting preconditions: in signed_to_lit(const long v) noexcept add checks that v != 0 and v != LONG_MIN (use the project's assert/error macro from ah-errors.H or assert) before converting, returning/handling as appropriate, and keep the existing safe magnitude handling; alternatively, if you prefer to hide the function, make signed_to_lit non-public and document that callers must validate inputs (ensure references to pos_lit and neg_lit remain intact).
🧹 Nitpick comments (11)
tpl_segment_tree.H (2)
1530-1530: Nice to have: centralize the `4 * n` allocation rule in `Segment_Tree_Beats`.

These constructors now repeat the same size expression in five places. A tiny `alloc_size(size_t)` helper or a delegating constructor would keep future sizing changes in one spot and reduce drift between overloads. As per coding guidelines, "Avoid magic numbers; use named constants or enums" and "Refactor duplicated code into reusable functions or templates".
Also applies to: 1539-1540, 1552-1553, 1562-1563, 1571-1572
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_segment_tree.H` at line 1530, The constructors in Segment_Tree_Beats repeat the 4 * n allocation expression; add a single helper (e.g., static size_t alloc_size(size_t m)) or a delegating constructor inside class Segment_Tree_Beats and replace all occurrences like the initializer "tree(Array<Node>::create(num == 0 ? 1 : 4 * num)), n(num)" with a call to that helper (or the delegating ctor) so the tree allocation uses alloc_size(n) (and handle the num==0 case inside the helper), then update the other overloaded constructors that initialize tree and n to use the same helper to centralize the sizing rule.
487-490: Should fix: avoid adding more `std::vector` entry points without an explicit interop reason.

`Array<T>` and `DynList<T>` constructors already cover the native Aleph container story, so these new `std::vector` overloads widen the public API against the project's container policy. If they are required purely for external-library interop, I'd document that exception; otherwise I'd keep the wrappers on Aleph containers only. As per coding guidelines, "MUST use Aleph containers when an equivalent exists. Standard library containers are ONLY permitted when no Aleph equivalent is available."
Also applies to: 544-547, 602-605
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_segment_tree.H` around lines 487 - 490, Remove the public std::vector<T> constructors (e.g., Sum_Segment_Tree(const std::vector<T>&) and the similar overloads at the other occurrences) so the API only exposes Aleph container entry points (Array<T>/DynList<T>); if external-library interop is truly required, do not add raw std::vector overloads—either document a justified exception in the header and wrap the conversion in a named interop factory function (e.g., FromStdVector) or make the overloads internal/hidden and clearly annotated as interop-only. Ensure changes are applied to the Sum_Segment_Tree constructor and the two other matching constructors referenced in the comment.

Examples/ntt_example.cc (1)
65-76: Consider handling empty string edge case.

The `decimal_digits_from_string` function doesn't explicitly handle empty string input. While line 75 has a fallback returning `{0}`, it relies on the `is_empty()` check after the loop processes zero characters.

Optional: Add explicit empty check
```diff
 Array<uint64_t> decimal_digits_from_string(const string & value)
 {
+  if (value.empty())
+    return Array<uint64_t>({0});
+
   Array<uint64_t> digits;
   digits.reserve(value.size());
```
Verify each finding against the current code and only fix it if needed. In `@Examples/ntt_example.cc` around lines 65 - 76, The function decimal_digits_from_string currently relies on post-loop logic to handle an empty input; add an explicit check at the start of decimal_digits_from_string (e.g., if value.empty()) and immediately return Array<uint64_t>({0}) to avoid iterating or relying on digits.is_empty(); keep the rest of the logic for non-empty inputs (preserve digits.reserve, append, and trimming via digits.get_last()/digits.remove_last()) so callers always get a non-empty digits array.

CMakeLists.txt (1)
59-66: Consider moving `-fno-omit-frame-pointer` to compile options.

The flag `-fno-omit-frame-pointer` is a compile-time option that affects code generation, not linking. While some compilers may tolerate it in link options, placing it in `add_compile_options` is more semantically correct.

Suggested adjustment
```diff
 if(ALEPH_USE_SANITIZERS)
   if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
-    add_compile_options(-fsanitize=address,undefined)
-    add_link_options(-fsanitize=address,undefined -fno-omit-frame-pointer -g)
+    add_compile_options(-fsanitize=address,undefined -fno-omit-frame-pointer -g)
+    add_link_options(-fsanitize=address,undefined)
   else()
     message(WARNING "ALEPH_USE_SANITIZERS is only supported with GCC or Clang")
   endif()
 endif()
```
Verify each finding against the current code and only fix it if needed. In `@CMakeLists.txt` around lines 59 - 66, Move the compile-only flag to the compile options: when ALEPH_USE_SANITIZERS is set and CMAKE_CXX_COMPILER_ID matches "GNU|Clang", ensure -fno-omit-frame-pointer is added via add_compile_options (alongside -fsanitize=address,undefined) rather than in add_link_options; leave sanitizer runtime/link flags (-fsanitize=address,undefined -g) in add_link_options and keep the existing warning branch for non-GCC/Clang compilers.

CMakePresets.json (1)
61-90: Hardcoded paths reduce portability of Clang presets.

The `clang-debug` and `clang-release` presets contain hardcoded paths to GCC 13 headers and libraries:
- `/usr/include/c++/13`
- `/usr/include/x86_64-linux-gnu/c++/13`
- `/usr/lib/x86_64-linux-gnu/libstdc++.so.6`

These paths are specific to Debian/Ubuntu with GCC 13 on x86_64. They will fail on:
- Systems with different GCC versions
- Non-Debian distributions
- ARM64 or other architectures
Consider documenting that these presets are CI-specific, or use environment variables to make paths configurable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CMakePresets.json` around lines 61 - 90, The clang-debug and clang-release presets embed Debian/x86_64 GCC-13-specific paths in CMAKE_CXX_FLAGS_INIT and CMAKE_EXE_LINKER_FLAGS_INIT which breaks portability; update these presets (names: "clang-debug", "clang-release") to remove hardcoded paths and instead read configurable values (e.g., via environment variables or CMake cache variables) or provide fallback logic, and add a comment/documentation string indicating these presets are CI-specific if you choose to keep CI-only defaults; ensure references in CMAKE_CXX_FLAGS_INIT and CMAKE_EXE_LINKER_FLAGS_INIT use the configurable variables (or are omitted) so builds work across GCC versions, distributions, and architectures.

.github/workflows/ci.yml (1)
236-256: Nice to have: centralize the GoogleTest bootstrap step.

This is now a third copy of the same `/usr/src/googletest` / `/usr/src/gtest` install logic, and this area has already drifted once. Pulling it into a shared script or composite action would make future runner fixes land everywhere at once. As per coding guidelines "Refactor duplicated code into reusable functions or templates".
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 236 - 256, The CI repeats the same GoogleTest bootstrap logic in the "Build and install GoogleTest" step (checking pkg-config for gtest/gtest_main and building from /usr/src/googletest or /usr/src/gtest), causing duplication and drift; extract that logic into a single reusable unit (either a shell script checked into the repo, e.g. scripts/bootstrap_gtest.sh, or a composite GitHub Action) and replace each repeated block with a single invocation of that script/action (preserving the pkg-config checks and the cmake build/install behavior for /usr/src/googletest and /usr/src/gtest).

tpl_link_cut_tree_with_edges.H (1)
70-75: Add a class-level Doxygen block for the new public type.
`Gen_Link_Cut_Tree_WithEdges` is a new exported class, but the declaration has no `@brief`/detailed Doxygen comment of its own. The file header is not enough for the repository's public API documentation rule. As per coding guidelines, "All public classes must have `@brief` and detailed description in Doxygen".
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree_with_edges.H` around lines 70 - 75, Add a class-level Doxygen block immediately above the template declaration for Gen_Link_Cut_Tree_WithEdges that provides an `@brief` short description and a more detailed paragraph describing the class purpose, responsibilities, and intended usage; include `@tparam` entries for VT, ET, EdgeMonoid, and LazyTag explaining each template parameter, and document any important invariants or thread-safety/ownership notes relevant to this exported public type so the public API doc generator picks up the class-level documentation.

tpl_link_cut_tree.H (1)
105-115: Require equality on `LazyTag::tag_type` in the concept.
`push()` compares `x->lazy == LazyTag::tag_identity()`, but `LCTLazyTag` never requires `tag_type` to be equality-comparable. A custom tag can satisfy the concept and then fail later with a much less readable template error.

🔧 Suggested fix
```diff
 template <typename L, typename T>
 concept LCTLazyTag =
+    std::equality_comparable<typename L::tag_type> and
     requires(const typename L::tag_type & t1,
              const typename L::tag_type & t2,
              const T & val, size_t cnt)
```

As per coding guidelines, "Use concepts (C++20) to constrain template parameters; avoid SFINAE when possible".
Also applies to: 395-397
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree.H` around lines 105 - 115, The concept LCTLazyTag must require that L::tag_type is equality-comparable because push() compares x->lazy == L::tag_identity(); update the LCTLazyTag concept to add a constraint like std::equality_comparable<typename L::tag_type> (or an equivalent equality requirement) alongside the existing requires-expression so custom tags must support operator==; reference the concept name LCTLazyTag and the type symbol L::tag_type and keep the existing requirements (tag_identity, apply, compose) intact.

scripts/fft_reference_check.rb (1)
37-37: Nice to have — mark the ignored NumPy probe outputs as intentionally unused.
`stdout` and `stderr` are never read here. Renaming them to `_stdout`/`_stderr` keeps the lint clean without changing behavior.

Suggested fix
```diff
-stdout, stderr, status = Open3.capture3('python3', '-c', 'import numpy')
+_stdout, _stderr, status = Open3.capture3('python3', '-c', 'import numpy')
```
Verify each finding against the current code and only fix it if needed. In `@scripts/fft_reference_check.rb` at line 37, The assignment from Open3.capture3 currently binds unused variables stdout and stderr; rename them to _stdout and _stderr in the Open3.capture3 call (the tuple assignment on the line with Open3.capture3('python3', '-c', 'import numpy')) so they are explicitly marked as intentionally unused while leaving status unchanged.

Tests/link_cut_tree_test.cc (1)
1369-1393: Nice to have — use RAII for exported tree ownership in these tests.

Each case manually calls `destroy_tree(...)`, but any earlier `ASSERT_*` returns before cleanup. Wrapping the exported pointer in a small RAII handle keeps these tests leak-safe under failure paths too. As per coding guidelines, "All resources (memory, file handles, locks) must follow RAII principles" and "Prefer `std::unique_ptr`/`std::shared_ptr` over raw pointers for ownership".

Also applies to: 1411-1431, 1450-1475, 1493-1507
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/link_cut_tree_test.cc` around lines 1369 - 1393, The test leaks if an ASSERT_* returns before destroy_tree is called; wrap the pointer returned by lct.export_to_tree_node (symbol: export_to_tree_node) in a std::unique_ptr with a custom deleter that calls destroy_tree so ownership is RAII-managed; replace raw uses of the returned TreeNode* and the manual destroy_tree(tree) call with the unique_ptr variable in these test blocks (also apply the same change to the other occurrences noted around lines 1411-1431, 1450-1475, 1493-1507).

Two_Sat.H (1)
80-82: Should fix — constrain `GT` at the template boundary.
`Two_Sat` assumes `GT` exposes `Node`, `Arc`, `insert_node()`, `insert_arc()`, and works with `Tarjan_Connected_Components`/`NODE_COUNTER`, but none of that is expressed in the signature. A small C++20 concept or `requires` clause would make unsupported graph types fail at the API boundary instead of deep inside the implementation. As per coding guidelines, "Use concepts (C++20) to constrain template parameters; avoid SFINAE when possible" and "Use `requires` clauses for complex constraints".
Verify each finding against the current code and only fix it if needed. In `@Two_Sat.H` around lines 80 - 82, The Two_Sat template parameter GT is unconstrained even though the implementation requires GT::Node, GT::Arc, insert_node(), insert_arc(), compatibility with Tarjan_Connected_Components, and NODE_COUNTER; add a C++20 concept or a requires clause on the template header of class Two_Sat (constraining GT) that asserts the presence of the nested types (Node, Arc), the member functions (insert_node(), insert_arc()), and that GT works with Tarjan_Connected_Components / NODE_COUNTER (e.g., by requiring a valid instantiation or expression usage like Tarjan_Connected_Components<GT> or usage of NODE_COUNTER on GT), so unsupported graph types fail at the template boundary rather than deep inside methods.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Examples/fft_benchmark.cc`:
- Around line 864-935: print_validation currently only prints metrics and main()
always returns 0, so validation never fails the release gate; change
print_validation(const bool json, const unsigned seed) to return a bool (or int)
indicating overall pass/fail by evaluating the ValidationMetrics returned from
compute_validation_metrics<float/double/long double> (e.g., compare
max_forward_error/max_roundtrip_error/etc. against the validation
envelope/thresholds used elsewhere or implement a small is_within_envelope check
against the expected limits), update JSON output to include a top-level "valid":
true/false field, and in main() call that new return value (if options.validate
&& !print_validation(...)) return non-zero to signal failure to the caller.
- Around line 159-181: The validation currently checks options.sizes.is_empty()
but not that any individual size is zero; add a check after the existing empty
check to iterate options.sizes and reject any entry equal to 0 (print a clear
error like "Benchmark sizes must be positive" and std::exit(1)), so code paths
that later access output[0] / output[0][0] cannot run with a zero-length
dimension; reference options.sizes (and the existing is_empty() check) when
adding the loop/validation.
In `@Examples/fft_example.cc`:
- Around line 669-695: The main() function uses unqualified cout but the file
places using std::cout; inside an unnamed namespace so cout is not visible at
global scope; update main() to use std::cout for both output statements (the
header line and the completion line) so symbols in main() (main(),
demo_real_signal_spectrum(), demo_real_convolution(),
demo_complex_convolution(), demo_generic_containers(), demo_concurrent_fft(),
demo_stft_and_zero_phase_filtering(), demo_production_dsp_utilities()) compile
correctly.
In `@Examples/fft_reference_probe.cc`:
- Around line 16-18: Pin the C++ "C" numeric locale to ensure deterministic JSON
number formatting by calling std::locale::global(std::locale::classic()) (or
imbue std::cout and any std::ostringstream with std::locale::classic()) before
emitting the reference JSON; add `#include <locale>` and place the call at program
startup (e.g., at the top of main or immediately before the JSON emission code
that uses std::cout or any stream) so that functions outputting the FFT
reference (the JSON emitter using std::cout/ostringstream) always use '.' as the
decimal separator.
In `@modular_arithmetic.H`:
- Around line 205-212: MontgomeryCtx must not be default-constructible because
callers can create an invalid context and trigger divides by zero in to_mont;
delete the implicit default constructor and make the precomputed fields
immutable/opaque by making them private (or const) and providing a single
explicit constructor or static factory (e.g., MontgomeryCtx::Create or
make_montgomery_ctx(uint64_t p)) that validates p (nonzero, odd), computes mod,
mod2, r, r2, mod_inv_neg and returns a valid MontgomeryCtx (or an error/optional
on failure); update any callsites that relied on MontgomeryCtx{} to use the
factory/constructor and add public accessors if needed so other code can read
the fields without mutating them.
- Around line 273-297: The mont_reduce function currently only does a single
conditional subtraction and only produces a canonical result when x <
(static_cast<__uint128_t>(ctx.mod) << 64); fix by enforcing the precondition or
fully reducing the output: either document a `@pre` that x < (uint128_t(ctx.mod)
<< 64) in mont_reduce and keep the current fast path, or change mont_reduce to
guarantee [0, p) by performing full reduction on t (e.g., after computing t =
x_hi + qmod_hi + carry, loop or perform repeated conditional subtractions
against ctx.mod until t < ctx.mod, or at minimum perform two conditional
subtractions to cover the possible double-sized case); reference mont_reduce,
the parameter x, and ctx.mod when making the change.
In `@scripts/fft_perf_regression.rb`:
- Around line 57-60: The current validate_payload only checks mode and rows but
we must fail fast if baseline and current benchmarks are incomparable; add
metadata checks for thread count and SIMD backend before diffing rows: either
extend validate_payload or add a new compare_metadata(baseline_payload,
current_payload, baseline_label, current_label) that asserts both payloads
contain the thread and SIMD keys (e.g., 'threads' and 'simd' or 'simd_backend')
and aborts with a clear message if the values differ, and call this check
immediately after loading/validating both payloads and before any row-level
diffs.
In `@Tests/CMakeLists.txt`:
- Around line 526-543: The FFTRelease.ReferenceCrossCheck CTest entry is passing
${CMAKE_BINARY_DIR} which breaks on multi-config generators; change the test to
invoke fft_reference_check.rb with the existing --probe parameter instead of
--build-dir so the script resolves the actual target path (reference
FFTRelease.ReferenceCrossCheck -> fft_reference_check.rb, use --probe). For
FFTRelease.Audit either extend fft_release_audit.rb to accept --benchmark and
--probe options mirroring fft_reference_check.rb and update the CTest to pass
those, or keep the script unchanged and pass a config-aware build path for the
benchmark/probe using a generator expression like
$<TARGET_FILE_DIR:fft_benchmark> so the test locates the correct
per-configuration output (FFTRelease.Audit -> fft_release_audit.rb or
$<TARGET_FILE_DIR:fft_benchmark>).
In `@Tests/montgomery_test.cc`:
- Around line 75-160: Add a compile-time instantiation of the new consteval
builder by invoking montgomery_ctx_for_mod<> for small odd moduli (e.g. 3 and an
odd composite like 9) so the consteval path is exercised: create inline
constexpr MontgomeryCtx values via montgomery_ctx_for_mod<3>() and
montgomery_ctx_for_mod<9>() (or similar names) in the test file
(Tests/montgomery_test.cc) alongside the existing TESTs to force compile-time
construction; optionally add simple constexpr/static_assert checks (e.g. that
the returned ctx is well-formed) to ensure the consteval invocation is actually
validated by the compiler.
- Around line 41-160: The Montgomery tests reference MontgomeryCtx,
montgomery_ctx, to_mont, from_mont, mont_mul, and mont_exp which are only
available when __SIZEOF_INT128__ is defined; wrap the entire block of Montgomery
tests (all TEST(MontgomeryTest, ...) cases in this file) with a preprocessor
guard `#if` defined(__SIZEOF_INT128__) before the first Montgomery TEST and a
matching `#endif` after the last one so the tests are compiled only when the
production API is present.
In `@tpl_arrayStack.H`:
- Around line 345-386: The copy operations are not exception-safe: change
FixedStack(const FixedStack & s) to perform allocation and element-copy into an
RAII temporary (e.g., std::unique_ptr<T[]> tmp(new T[s.dim]) and copy into tmp)
and only on success assign array = tmp.release(), head = s.head, dim = s.dim;
then refactor FixedStack &operator=(const FixedStack & s) to use the
strong-guarantee copy-and-swap idiom by constructing a local FixedStack tmp(s)
and calling swap(tmp) (swap(FixedStack & s) remains noexcept), so failed copies
won't delete or corrupt the original storage.
- Around line 399-415: The push, pop, popn, append, and insert methods (e.g.,
push(const T&), push(T&&), pop(), popn(), and their FixedStack counterparts) are
incorrectly marked noexcept unconditionally; change their noexcept specifiers to
conditional noexcept expressions that depend on T's exception guarantees (use
std::is_nothrow_copy_constructible<T>, std::is_nothrow_move_constructible<T>,
and/or std::is_nothrow_move_assignable<T> as appropriate) so the functions are
noexcept only when the underlying copy/move operations they perform cannot
throw; update each delegating method (append/insert) to use the same conditional
noexcept as the operation they forward to.
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 70-75: The API declares a VT payload and provides
make_vertex(const VT&), but the implementation discards it (nodes initialized
with EdgeMonoid::identity() and ExportData lacks VT), so either preserve VT per
vertex or remove VT/make_vertex from the public API; fix by adding storage for
VT in the internal node/export types (e.g., extend ExportData and node struct
used by Gen_Link_Cut_Tree_WithEdges to carry a VT member and propagate it in
make_vertex/vertex creation and exports) or alternatively remove the VT template
parameter and make_vertex overloads so the public API cannot accept a payload it
won’t store; update all related points (including the usages around make_vertex,
ExportData, and places noted at lines ~126-140 and ~280-288) to match the chosen
approach.
In `@tpl_link_cut_tree.H`:
- Around line 786-790: The get_val(Node *x) accessor can return stale data
because it doesn't propagate deferred tags; modify get_val to push/expose
pending updates on x (e.g., call the tree's expose(x) or push(x)/path_apply(x)
helper before returning) so any lazy tags above the node are applied; keep the
null check (ah_invalid_argument_if) and then invoke the existing
push/expose/path_apply routine used elsewhere in the class before returning
x->val.
- Around line 756-829: Each path operation (path_query, path_size, path_apply,
and for_each_on_path) must reject disconnected handles before exposing v; add a
connectivity check using the existing connected(u, v) (or equivalent find_root
comparison) right after the null-argument check and before calling access(v),
and if not connected throw std::domain_error("vertices not connected"); apply
the same change to the other occurrences around the for_each_on_path region (the
referenced block at ~904-910) so no path operation exposes or mutates the wrong
auxiliary tree when u and v are in different components.
- Around line 627-637: In find_root(Node *x) the loop checks x->left before
calling push(x), which can descend the wrong side if x has a pending rev; to
fix, ensure you call push(x) before testing or following x->left (i.e., call
push(x) at the start of each loop iteration), keeping the existing calls to
access(x) and splay(x) intact so traversal correctly finds the root even when
lazy reversal flags are present.
- Around line 229-288: The AddLazyTag and AssignLazyTag are only correct for
sum-like monoids but are unconstrained; constrain them so they cannot be used
with MinMonoid/MaxMonoid/XorMonoid etc. Update the templates (e.g., AddLazyTag
and AssignLazyTag) to accept or be parameterized by the Monoid (or require a
trait), and add a compile-time check (static_assert or SFINAE/enable_if with a
trait like is_sum_monoid or supports_counted_apply) that enforces these tags are
only instantiable/usable with SumMonoid (or monoids that document support for
count-scaled apply). Ensure the check references the unique symbols AddLazyTag,
AssignLazyTag, and the monoid trait name you introduce so misuse fails to
compile with a clear message.
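The compile-time guard suggested in the last point can be sketched with a small trait; every name below is a stand-in for the header's real symbols:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in monoids carrying the proposed trait.
template <typename T> struct SumMonoid
{
  static constexpr bool supports_counted_apply = true;  // sums scale with count
  static T combine(T a, T b) { return a + b; }
};

template <typename T> struct MinMonoid
{
  static constexpr bool supports_counted_apply = false; // minima do not
  static T combine(T a, T b) { return a < b ? a : b; }
};

// AddLazyTag-style tag: folding an update over cnt elements as tag * cnt is
// only sound when the monoid documents count-scaled apply, so misuse with
// MinMonoid fails to compile with a clear message.
template <typename T, class Monoid>
struct AddLazyTag
{
  static_assert(Monoid::supports_counted_apply,
                "AddLazyTag requires a monoid with count-scaled apply "
                "(e.g. SumMonoid)");

  static T apply(T agg, T tag, std::size_t cnt)
  {
    return agg + tag * static_cast<T>(cnt);
  }
};
```

Instantiating `AddLazyTag<long, MinMonoid<long>>` then triggers the static_assert instead of silently corrupting aggregates.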
In `@tpl_segment_tree.H`:
- Around line 477-495: The four public constructors
Sum_Segment_Tree(std::initializer_list<T>), Sum_Segment_Tree(const Array<T>&),
Sum_Segment_Tree(const std::vector<T>&) and Sum_Segment_Tree(const DynList<T>&)
only have `@brief` tags; expand each Doxygen comment to match the style of the
other constructors by adding `@param` describing the input container (e.g. values
or il), `@throws` describing possible exceptions (e.g. std::invalid_argument or
std::bad_alloc if applicable), and `@return` (if your style requires for
constructors, a note that the object is constructed) so the API docs
consistently document parameters and exception contracts—mirror the existing
constructor docs elsewhere in the file for wording and exception details.
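The requested documentation shape, shown on a made-up constructor rather than the real overloads (the class and wording are illustrative):

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

class Sketch
{
  std::vector<int> data;

public:
  /** @brief Builds the structure from an existing container.
      @param values Input elements copied into the internal array.
      @throws std::bad_alloc if the internal buffer cannot be allocated.
      @throws std::invalid_argument if values is empty. */
  explicit Sketch(const std::vector<int> & values) : data(values)
  {
    if (values.empty())
      throw std::invalid_argument("empty input");
  }

  std::size_t size() const noexcept { return data.size(); }
};
```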
In `@Two_Sat.H`:
- Around line 307-329: The assignment uses SCC ids from
Tarjan_Connected_Components::connected_components and assumes they are in
reverse topological order when comparing NODE_COUNTER(lit_nodes(pos_lit(i))) and
NODE_COUNTER(lit_nodes(neg_lit(i))); add a guard: either assert/verify that the
returned scc ordering is reverse-topological (e.g., call a check after sccs are
produced or expose a boolean flag from Tarjan_Connected_Components) or, if you
prefer order-independence, compute a topological rank for each SCC (or use
component graph reachability) and base assignment(i) on those ranks; also add a
short comment on the dependency near the use in Two_Sat.H referencing
connected_components and NODE_COUNTER so future changes to Tarjan do not
silently break model construction.
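The ordering dependency can be seen in a compact, generic 2-SAT sketch (a textbook implementation, not the Two_Sat.H code): Tarjan numbers SCCs in reverse topological order, so a variable is set true exactly when its positive literal's component id is smaller, i.e. closer to a sink.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Literal encoding: variable i -> node 2*i (i true) and 2*i+1 (i false).
struct TwoSat
{
  int n;
  std::vector<std::vector<int>> g;
  std::vector<int> comp, low, num, stk;
  int timer = 0, scc_count = 0;

  explicit TwoSat(int vars) : n(vars), g(2 * vars),
    comp(2 * vars, -1), low(2 * vars, 0), num(2 * vars, -1) {}

  // (a or b): add implications ~a -> b and ~b -> a.
  void add_clause(int a, bool av, int b, bool bv)
  {
    int na = 2 * a + (av ? 0 : 1), nb = 2 * b + (bv ? 0 : 1);
    g[na ^ 1].push_back(nb);
    g[nb ^ 1].push_back(na);
  }

  void dfs(int u)
  {
    num[u] = low[u] = timer++;
    stk.push_back(u);
    for (int v : g[u])
      if (num[v] == -1) { dfs(v); low[u] = std::min(low[u], low[v]); }
      else if (comp[v] == -1) low[u] = std::min(low[u], num[v]);
    if (low[u] == num[u])        // SCC root: pop the whole component.
      {                          // Tarjan assigns ids sinks-first, i.e. in
        for (;;)                 // reverse topological order.
          {
            int v = stk.back(); stk.pop_back();
            comp[v] = scc_count;
            if (v == u) break;
          }
        ++scc_count;
      }
  }

  // Returns false if UNSAT; otherwise fills model.
  bool solve(std::vector<bool> & model)
  {
    for (int u = 0; u < 2 * n; ++u)
      if (num[u] == -1) dfs(u);
    model.assign(n, false);
    for (int i = 0; i < n; ++i)
      {
        if (comp[2 * i] == comp[2 * i + 1]) return false;
        model[i] = comp[2 * i] < comp[2 * i + 1]; // sink side wins
      }
    return true;
  }
};
```

If the SCC routine ever stopped numbering components in reverse topological order, the `comp[2*i] < comp[2*i+1]` rule would silently produce wrong models, which is exactly the dependency the review asks to guard.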
---
Outside diff comments:
In `@tpl_arrayStack.H`:
- Around line 210-237: The const-qualified accessors (ArrayStack::top(),
ArrayStack::top(size_t), ArrayStack::get_last(), ArrayStack::base() and the
equivalents in FixedStack) currently return non-const T& allowing mutation
through const objects; add const overloads that return const T& (e.g. provide
both T& top() noexcept and const T& top() const noexcept, and similarly for
top(size_t), base(), get_last()) and adjust their implementations to call the
underlying array accessors as const; also remove the const_cast from
FixedStack::traverse(...) const by changing the traversal to pass elements as
const T& to the operation (e.g. call operation(static_cast<const T&>(array[i]))
or simply operation(array[i]) when using a const reference) so the const method
no longer exposes mutable references.
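A toy version of the const-overload pattern (the class is a stand-in, not the actual ArrayStack/FixedStack):

```cpp
#include <cassert>
#include <cstddef>

template <typename T, std::size_t N>
class MiniStack
{
  T array[N];
  std::size_t head = 0;

public:
  void push(const T & x) { array[head++] = x; }

  // Mutable access for non-const objects...
  T & top() noexcept { return array[head - 1]; }

  // ...and a read-only overload so const objects cannot be mutated.
  const T & top() const noexcept { return array[head - 1]; }

  // Const traversal passes elements as const T&, so no const_cast is needed.
  template <class Operation>
  bool traverse(Operation operation) const
  {
    for (std::size_t i = 0; i < head; ++i)
      if (not operation(static_cast<const T &>(array[i])))
        return false;
    return true;
  }
};
```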
---
Minor comments:
In @.gitignore:
- Around line 343-345: The .gitignore currently ignores any directory named
html/ or latex/ anywhere in the repo; change those entries to anchored paths so
only the generated Doxygen output is ignored (for example replace "html/" and
"latex/" with "/html/" and "/latex/" or with the explicit docs path such as
"docs/html/" and "docs/latex/") so legitimate source/assets named html or latex
in subfolders are not accidentally excluded.
In `@docs/fft-tutorial.en.md`:
- Around line 213-227: The snippet uses an undefined identifier iopts when
constructing FFTD::ISTFTProcessor; update the example to pass the correct
options variable or explicitly declare it (e.g., define FFTD::ISTFTOptions iopts
= {/* ... */}; or reuse the previously shown opts if that is the intended value)
so the line FFTD::ISTFTProcessor synthesizer(fft_size, window, iopts); compiles;
ensure the example shows the declaration/initialization of iopts (or replace
iopts with opts) and keep references to FFTD::STFTProcessor analyzer, fft_size,
and window consistent.
- Around line 44-54: The markdown code fences in the FFT tutorial use a ```cpp
fenced block; change these to the repo's canonical code-block style (replace the
current triple-backtick fenced blocks that contain the FFT example including "#
include <fft.H>", "template <std::floating_point Real = double> class FFT", and
the aliases "FFTD" and "FFTF") to match the repository lint rule (normalize all
similar fenced blocks throughout the tutorial) so MD046 warnings are cleared.
In `@Examples/tree_node_graphviz_example.cc`:
- Around line 1-6: Add the standard MIT license header to the top of the
Examples/tree_node_graphviz_example.cc file (matching the header used in
ntt_example.cc) so the file begins with the project license before any includes;
update the header block above the existing includes (e.g., before `#include`
<iostream>, `#include` "tpl_tree_node.H", and `#include` "generate_tree.H") to
ensure the file follows the same licensing convention as other examples.
In `@README.md`:
- Around line 1115-1119: The snippet reuses vertices u and v from the earlier
Gen_Link_Cut_Tree<int,...> example which don't belong to lazy_lct; instead
construct vertices from lazy_lct and use those when calling
lazy_lct.path_apply(...). Locate the Gen_Link_Cut_Tree<long long, SumMonoid<long
long>, AddLazyTag<long long>> lazy_lct declaration and replace the reused u,v
with new vertices created via lazy_lct.make_vertex() (or the tree's
vertex-construction method used elsewhere) and then call
lazy_lct.path_apply(new_u, new_v, +10LL).
In `@Tests/link_cut_tree_test.cc`:
- Around line 683-707: The test only covers a fixed chain and a few manual
updates; add an oracle-backed randomized stress test that repeatedly performs
mixed operations (create nodes via lct.make_vertex, lct.link, lct.cut,
lct.reroot if available, and path_apply) with a fixed RNG seed and verifies
results against a naive array model using the same SumMonoid<long
long>/AddLazyTag<long long> semantics by comparing lct.path_query to the oracle
after every operation (including overlapping updates and after reroot/cut/link
sequences); ensure the harness uses NodePtr references from the LCT (nd vector),
exercises overlapping path_apply ranges, seeds the RNG for reproducibility, and
fails the test with diagnostic info (seed + sequence) when any mismatch is
detected.
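The oracle-backed pattern being requested can be sketched on a stand-in structure (here a Fenwick tree with range add/range sum instead of the LCT, since only the harness shape matters):

```cpp
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// "Fast" structure under test: Fenwick tree with range add / range sum.
struct RangeBIT
{
  int n; std::vector<long long> b1, b2;
  explicit RangeBIT(int n) : n(n), b1(n + 1, 0), b2(n + 1, 0) {}
  void upd(std::vector<long long> & b, int i, long long v)
  { for (; i <= n; i += i & -i) b[i] += v; }
  long long qry(const std::vector<long long> & b, int i) const
  { long long s = 0; for (; i > 0; i -= i & -i) s += b[i]; return s; }
  void add(int l, int r, long long v)             // 1-indexed, inclusive
  {
    upd(b1, l, v); upd(b1, r + 1, -v);
    upd(b2, l, v * (l - 1)); upd(b2, r + 1, -v * r);
  }
  long long prefix(int i) const { return qry(b1, i) * i - qry(b2, i); }
  long long sum(int l, int r) const { return prefix(r) - prefix(l - 1); }
};

// Fixed seed, random overlapping updates, and a comparison against a naive
// array model after every operation; a real test would log seed + op index.
bool stress(unsigned seed, int n, int ops)
{
  std::mt19937 rng(seed);
  RangeBIT fast(n);
  std::vector<long long> naive(n + 1, 0);
  std::uniform_int_distribution<int> idx(1, n);
  for (int op = 0; op < ops; ++op)
    {
      int l = idx(rng), r = idx(rng);
      if (l > r) std::swap(l, r);
      long long v = static_cast<long long>(rng() % 21) - 10;
      fast.add(l, r, v);
      for (int i = l; i <= r; ++i) naive[i] += v;
      int ql = idx(rng), qr = idx(rng);
      if (ql > qr) std::swap(ql, qr);
      long long expect = 0;
      for (int i = ql; i <= qr; ++i) expect += naive[i];
      if (fast.sum(ql, qr) != expect)
        return false;
    }
  return true;
}
```

The LCT version would follow the same loop, with `link`/`cut`/`reroot`/`path_apply` as the operation mix and `path_query` checked against the naive model.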
In `@Tests/ntt_test.cc`:
- Around line 1349-1375: The test PerformanceRegressionIsGuardedByEnvironment
currently runs when ENABLE_PERF_TESTS is merely present and always asserts
elapsed <= 2500ms; change the enable check to require ENABLE_PERF_TESTS == "1"
(use std::getenv and strcmp or string compare) like the million-digit test, and
stop using the hard-coded 2500ms budget: instead read an explicit budget
environment variable (e.g., PERF_TEST_BUDGET_MS) and only perform the
EXPECT_LE(elapsed, budget) assertion when that env var is set and successfully
parsed to an integer; otherwise skip the budget assertion (or skip the test
entirely if you prefer) while leaving the setup and call to
DefaultNTT::pmultiply and the size assertion intact.
- Around line 842-860: The test currently asserts the returned polynomial equals
a specific precomputed root; instead it should verify the square-root property.
For both the unshifted and shifted cases, replace the
expect_equal_arrays(recovered, root, ...) and
expect_equal_arrays(shifted_recovered, shifted_root, ...) checks with
computation of product = naive_multiply_trunc<DefaultNTT>(recovered, recovered,
n) (and similarly for shifted_recovered) and assert expect_equal_arrays(product,
square, "poly_sqrt squares to original") (and for shifted case use
shifted_square), i.e. use DefaultNTT::poly_sqrt result squared and compared to
the original square rather than comparing to a specific root.
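The gating and budget-parsing logic from the first point above can be isolated into two small helpers (the environment-variable names follow the review comment; everything else is illustrative):

```cpp
#include <cassert>
#include <cstdlib>
#include <optional>
#include <string>

// Parses an optional millisecond budget from a raw env value: the budget
// assertion should run only when the variable is set AND parses as a
// positive integer.
std::optional<long> parse_budget_ms(const char * raw)
{
  if (raw == nullptr) return std::nullopt;
  char * end = nullptr;
  long v = std::strtol(raw, &end, 10);
  if (end == raw or *end != '\0' or v <= 0) return std::nullopt;
  return v;
}

// Gate check mirroring the == "1" convention of the million-digit test.
bool perf_tests_enabled()
{
  const char * flag = std::getenv("ENABLE_PERF_TESTS");
  return flag != nullptr and std::string(flag) == "1";
}
```

The test body would then `GTEST_SKIP()` when `perf_tests_enabled()` is false and only `EXPECT_LE` against a parsed `PERF_TEST_BUDGET_MS` value.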
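The property-based check from the second point only needs a naive truncated product. A self-contained sketch over a small prime (these helpers are my own, not the test's real ones) also shows why comparing against one specific root is fragile: both a root and its negation square back to the same polynomial.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

constexpr std::uint64_t MOD = 998244353;

// Truncated polynomial product mod MOD: c = a * b, keeping n coefficients.
std::vector<std::uint64_t>
naive_multiply_trunc(const std::vector<std::uint64_t> & a,
                     const std::vector<std::uint64_t> & b, std::size_t n)
{
  std::vector<std::uint64_t> c(n, 0);
  for (std::size_t i = 0; i < a.size() and i < n; ++i)
    for (std::size_t j = 0; j < b.size() and i + j < n; ++j)
      c[i + j] = (c[i + j] + a[i] % MOD * (b[j] % MOD)) % MOD;
  return c;
}
```

The assertion becomes: square the recovered root and compare that product with the original square, instead of comparing the root coefficient-by-coefficient.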
In `@tpl_arrayStack.H`:
- Around line 499-514: The traverse implementation in FixedStack::traverse
currently iterates from array[0] to array[head-1], which yields
oldest-to-youngest contrary to the doc comment; change the loop in traverse
(template<class Operation> bool traverse(Operation & operation)) to iterate from
head-1 down to 0 (youngest to oldest), calling operation(array[i]) in that
reversed order and preserving the early-return behavior when operation returns
false; ensure you handle the empty-stack case correctly so no underflow occurs
when head is zero.
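A sketch of the reversed loop; the `i-- > 0` idiom handles the empty case without size_t underflow (free function on a vector stand-in rather than the real FixedStack):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Visits slots youngest-to-oldest (head-1 down to 0) with the usual
// early-exit contract.
template <typename T, class Operation>
bool traverse_young_to_old(const std::vector<T> & array, std::size_t head,
                           Operation operation)
{
  for (std::size_t i = head; i-- > 0; )  // no underflow when head == 0
    if (not operation(array[i]))
      return false;                      // early return preserved
  return true;
}
```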
In `@Two_Sat.H`:
- Around line 140-149: signed_to_lit currently accepts invalid sentinel values
(0 and LONG_MIN) that add_clause_signed already rejects; update the public
conversion boundary by validating inputs and asserting preconditions: in
signed_to_lit(const long v) noexcept add checks that v != 0 and v != LONG_MIN
(use the project's assert/error macro from ah-errors.H or assert) before
converting, returning/handling as appropriate, and keep the existing safe
magnitude handling; alternatively, if you prefer to hide the function, make
signed_to_lit non-public and document that callers must validate inputs (ensure
references to pos_lit and neg_lit remain intact).
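One possible validated conversion (the index encoding shown is an illustrative convention, not necessarily the one Two_Sat.H actually uses):

```cpp
#include <cassert>
#include <climits>

// DIMACS-style signed literal -> literal index, with the two invalid
// sentinel values rejected up front instead of deep inside add_clause_signed.
unsigned long signed_to_lit(long v)
{
  assert(v != 0 and v != LONG_MIN);   // reject sentinels before converting
  const unsigned long mag = static_cast<unsigned long>(v < 0 ? -v : v);
  return 2 * (mag - 1) + (v < 0 ? 1UL : 0UL);  // +v -> even, -v -> odd
}
```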
---
Duplicate comments:
In `@tpl_arrayStack.H`:
- Around line 175-179: pushn and the FixedStack::popn variants currently accept
non-positive counts and can mutate or read stack state incorrectly; add
precondition checks that n > 0 (and for popn also n <= current_size if
applicable) before touching array or head. In pushn(const size_t & n = 1)
validate n>0 and return/raise an assertion (use existing assert or ah-errors.H
macros) without calling array.putn when invalid; in FixedStack::popn and any
popn overloads check n>0 (and bounds) up front and reject/assert rather than
allowing negative/zero to change head or return invalid elements. Apply the same
checks to the other popn/pushn occurrences referenced (around the 435-440 and
456-461 blocks).
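The validate-before-mutate shape, on a stand-in stack (std::vector replaces the raw array; a real fix would use the project's assert or ah-errors.H macros):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

class CheckedStack
{
  std::vector<int> array;

public:
  void pushn(std::size_t n, int fill = 0)
  {
    if (n == 0)
      throw std::invalid_argument("pushn: n must be positive");
    array.insert(array.end(), n, fill);   // state touched only after check
  }

  int popn(std::size_t n)
  {
    if (n == 0 or n > array.size())       // bounds checked up front
      throw std::out_of_range("popn: bad count");
    int first_popped = array[array.size() - n];
    array.resize(array.size() - n);
    return first_popped;
  }

  std::size_t size() const noexcept { return array.size(); }
};
```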
---
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 236-256: The CI repeats the same GoogleTest bootstrap logic in the
"Build and install GoogleTest" step (checking pkg-config for gtest/gtest_main
and building from /usr/src/googletest or /usr/src/gtest), causing duplication
and drift; extract that logic into a single reusable unit (either a shell script
checked into the repo, e.g. scripts/bootstrap_gtest.sh, or a composite GitHub
Action) and replace each repeated block with a single invocation of that
script/action (preserving the pkg-config checks and the cmake build/install
behavior for /usr/src/googletest and /usr/src/gtest).
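A sketch of what such a `scripts/bootstrap_gtest.sh` could look like (the script name and layout are assumptions; check mode deliberately has no side effects, and the cmake build/install path only runs when `--install` is passed):

```shell
#!/usr/bin/env bash
# Hypothetical scripts/bootstrap_gtest.sh: one home for the GoogleTest
# bootstrap logic the CI currently repeats across steps.
set -u

have_gtest() {
  command -v pkg-config >/dev/null 2>&1 && pkg-config --exists gtest gtest_main
}

gtest_src_dir() {
  for d in /usr/src/googletest /usr/src/gtest; do
    if [ -d "$d" ]; then echo "$d"; return 0; fi
  done
  return 1
}

bootstrap_gtest() {   # $1 = "--install" to actually build and install
  if have_gtest; then
    echo "gtest: present"
    return 0
  fi
  local src
  if ! src="$(gtest_src_dir)"; then
    echo "gtest: missing (no sources; install libgtest-dev)"
    return 0
  fi
  echo "gtest: missing (sources at $src)"
  if [ "${1:-check}" = "--install" ]; then
    cmake -S "$src" -B /tmp/gtest-build &&
      cmake --build /tmp/gtest-build &&
      cmake --install /tmp/gtest-build
  fi
}

status_line="$(bootstrap_gtest check)"
echo "$status_line"
```

Each CI step would then reduce to a single `bash scripts/bootstrap_gtest.sh --install` invocation.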
In `@CMakeLists.txt`:
- Around line 59-66: Move the compile-only flag to the compile options: when
ALEPH_USE_SANITIZERS is set and CMAKE_CXX_COMPILER_ID matches "GNU|Clang",
ensure -fno-omit-frame-pointer is added via add_compile_options (alongside
-fsanitize=address,undefined) rather than in add_link_options; leave sanitizer
runtime/link flags (-fsanitize=address,undefined -g) in add_link_options and
keep the existing warning branch for non-GCC/Clang compilers.
In `@CMakePresets.json`:
- Around line 61-90: The clang-debug and clang-release presets embed
Debian/x86_64 GCC-13-specific paths in CMAKE_CXX_FLAGS_INIT and
CMAKE_EXE_LINKER_FLAGS_INIT which breaks portability; update these presets
(names: "clang-debug", "clang-release") to remove hardcoded paths and instead
read configurable values (e.g., via environment variables or CMake cache
variables) or provide fallback logic, and add a comment/documentation string
indicating these presets are CI-specific if you choose to keep CI-only defaults;
ensure references in CMAKE_CXX_FLAGS_INIT and CMAKE_EXE_LINKER_FLAGS_INIT use
the configurable variables (or are omitted) so builds work across GCC versions,
distributions, and architectures.
In `@Examples/ntt_example.cc`:
- Around line 65-76: The function decimal_digits_from_string currently relies on
post-loop logic to handle an empty input; add an explicit check at the start of
decimal_digits_from_string (e.g., if value.empty()) and immediately return
Array<uint64_t>({0}) to avoid iterating or relying on digits.is_empty(); keep
the rest of the logic for non-empty inputs (preserve digits.reserve, append, and
trimming via digits.get_last()/digits.remove_last()) so callers always get a
non-empty digits array.
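A sketch with the explicit guard, using std::vector as a stand-in for the project's Array<uint64_t> (digit order and trimming follow the description above):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Least-significant digit first; the early return guarantees every caller
// gets a non-empty digit array.
std::vector<std::uint64_t> decimal_digits_from_string(const std::string & value)
{
  if (value.empty())
    return {0};                        // explicit empty-input check up front
  std::vector<std::uint64_t> digits;
  digits.reserve(value.size());
  for (auto it = value.rbegin(); it != value.rend(); ++it)
    digits.push_back(static_cast<std::uint64_t>(*it - '0'));
  while (digits.size() > 1 and digits.back() == 0)   // trim leading zeros
    digits.pop_back();
  return digits;
}
```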
In `@scripts/fft_reference_check.rb`:
- Line 37: The assignment from Open3.capture3 currently binds unused variables
stdout and stderr; rename them to _stdout and _stderr in the Open3.capture3 call
(the tuple assignment on the line with Open3.capture3('python3', '-c', 'import
numpy')) so they are explicitly marked as intentionally unused while leaving
status unchanged.
In `@Tests/link_cut_tree_test.cc`:
- Around line 1369-1393: The test leaks if an ASSERT_* returns before
destroy_tree is called; wrap the pointer returned by lct.export_to_tree_node
(symbol: export_to_tree_node) in a std::unique_ptr with a custom deleter that
calls destroy_tree so ownership is RAII-managed; replace raw uses of the
returned TreeNode* and the manual destroy_tree(tree) call with the unique_ptr
variable in these test blocks (also apply the same change to the other
occurrences noted around lines 1411-1431, 1450-1475, 1493-1507).
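The RAII wrapper being suggested, with stand-in types for the exported tree (the deleter fires even when an `ASSERT_*` returns early from the test body):

```cpp
#include <cassert>
#include <memory>

// Stand-ins for the exported tree and its required cleanup routine.
struct TreeNode { int key; };

static int destroyed = 0;
void destroy_tree(TreeNode * root) { ++destroyed; delete root; }

// unique_ptr with a custom deleter that calls destroy_tree.
using TreePtr = std::unique_ptr<TreeNode, void (*)(TreeNode *)>;

TreePtr make_exported_tree(int key)
{
  // In the real test this would wrap lct.export_to_tree_node(...).
  return TreePtr(new TreeNode{key}, &destroy_tree);
}
```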
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 70-75: Add a class-level Doxygen block immediately above the
template declaration for Gen_Link_Cut_Tree_WithEdges that provides an `@brief`
short description and a more detailed paragraph describing the class purpose,
responsibilities, and intended usage; include `@tparam` entries for VT, ET,
EdgeMonoid, and LazyTag explaining each template parameter, and document any
important invariants or thread-safety/ownership notes relevant to this exported
public type so the public API doc generator picks up the class-level
documentation.
In `@tpl_link_cut_tree.H`:
- Around line 105-115: The concept LCTLazyTag must require that L::tag_type is
equality-comparable because push() compares x->lazy == L::tag_identity(); update
the LCTLazyTag concept to add a constraint like
std::equality_comparable<typename L::tag_type> (or an equivalent equality
requirement) alongside the existing requires-expression so custom tags must
support operator==; reference the concept name LCTLazyTag and the type symbol
L::tag_type and keep the existing requirements (tag_identity, apply, compose)
intact.
In `@tpl_segment_tree.H`:
- Line 1530: The constructors in Segment_Tree_Beats repeat the 4 * n allocation
expression; add a single helper (e.g., static size_t alloc_size(size_t m)) or a
delegating constructor inside class Segment_Tree_Beats and replace all
occurrences like the initializer "tree(Array<Node>::create(num == 0 ? 1 : 4 *
num)), n(num)" with a call to that helper (or the delegating ctor) so the tree
allocation uses alloc_size(n) (and handle the num==0 case inside the helper),
then update the other overloaded constructors that initialize tree and n to use
the same helper to centralize the sizing rule.
- Around line 487-490: Remove the public std::vector<T> constructors (e.g.,
Sum_Segment_Tree(const std::vector<T>&) and the similar overloads at the other
occurrences) so the API only exposes Aleph container entry points
(Array<T>/DynList<T>); if external-library interop is truly required, do not add
raw std::vector overloads—either document a justified exception in the header
and wrap the conversion in a named interop factory function (e.g.,
FromStdVector) or make the overloads internal/hidden and clearly annotated as
interop-only. Ensure changes are applied to the Sum_Segment_Tree constructor and
the two other matching constructors referenced in the comment.
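The sizing helper suggested earlier in this group can be sketched as follows (names are illustrative; the real change would live inside `Segment_Tree_Beats`):

```cpp
#include <cassert>
#include <cstddef>

// Centralizes the segment-tree sizing rule, including the num == 0 corner.
static constexpr std::size_t alloc_size(std::size_t m)
{
  return m == 0 ? 1 : 4 * m;
}

// Every constructor overload then delegates to the same rule.
struct BeatsSketch
{
  std::size_t n, cap;
  explicit BeatsSketch(std::size_t num) : n(num), cap(alloc_size(num)) {}
};
```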
In `@Two_Sat.H`:
- Around line 80-82: The Two_Sat template parameter GT is unconstrained even
though the implementation requires GT::Node, GT::Arc, insert_node(),
insert_arc(), compatibility with Tarjan_Connected_Components, and NODE_COUNTER;
add a C++20 concept or a requires clause on the template header of class Two_Sat
(constraining GT) that asserts the presence of the nested types (Node, Arc), the
member functions (insert_node(), insert_arc()), and that GT works with
Tarjan_Connected_Components / NODE_COUNTER (e.g., by requiring a valid
instantiation or expression usage like Tarjan_Connected_Components<GT> or usage
of NODE_COUNTER on GT), so unsupported graph types fail at the template boundary
rather than deep inside methods.
🧹 Nitpick comments (2)
fft-contracts.md (1)
15-15: Make the empty-input contract explicit.

The contract states "empty-input handling follows the behavior exercised by
Tests/fft_test.cc" but doesn't explicitly document what that behavior is. Based on the code snippet from fft.H:437-440, empty inputs are rejected with an invalid_argument exception. For clarity and to make this contract self-contained, consider explicitly stating the expected behavior.

📝 Suggested clarification
```diff
-- empty-input handling follows the behavior exercised by `Tests/fft_test.cc`
+- empty inputs must be rejected with an `invalid_argument` exception (as exercised by `Tests/fft_test.cc`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fft-contracts.md` at line 15, The contract line "empty-input handling follows the behavior exercised by `Tests/fft_test.cc`" is ambiguous; make it explicit by stating that functions in this API reject empty inputs and throw std::invalid_argument (as shown in fft.H around lines 437-440 and exercised by Tests/fft_test.cc). Update the sentence to explicitly document that an empty input causes an std::invalid_argument exception to be thrown, referencing the relevant behavior in fft.H and Tests/fft_test.cc so callers know to validate inputs before calling.

fft-changelog.md (1)
1-5: Consider adding version or release identifier.

The changelog introduction states "This file records notable FFT-facing release changes for the current stabilization cycle" but doesn't identify which version or release this applies to. Adding a version number or release identifier would make this document more useful for tracking changes over time.
📝 Suggested enhancement
```diff
 # FFT Changelog
+## Version [TBD]
+
 This file records notable FFT-facing release changes for the current stabilization cycle.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@fft-changelog.md` around lines 1 - 5, The changelog header "FFT Changelog" lacks a version or release identifier; add a clear release line directly under the "# FFT Changelog" heading (for example "Release: vX.Y.Z — YYYY-MM-DD" or "Stabilization cycle: <name/identifier>") so readers immediately know which version the notes cover; ensure the introductory sentence ("This file records notable FFT-facing release changes for the current stabilization cycle.") remains and update it to reference the newly added release identifier format for future entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@fft-changelog.md`:
- Around line 1-5: The changelog header "FFT Changelog" lacks a version or
release identifier; add a clear release line directly under the "# FFT
Changelog" heading (for example "Release: vX.Y.Z — YYYY-MM-DD" or "Stabilization
cycle: <name/identifier>") so readers immediately know which version the notes
cover; ensure the introductory sentence ("This file records notable FFT-facing
release changes for the current stabilization cycle.") remains and update it to
reference the newly added release identifier format for future entries.
In `@fft-contracts.md`:
- Line 15: The contract line "empty-input handling follows the behavior
exercised by `Tests/fft_test.cc`" is ambiguous; make it explicit by stating that
functions in this API reject empty inputs and throw std::invalid_argument (as
shown in fft.H around lines 437-440 and exercised by Tests/fft_test.cc). Update
the sentence to explicitly document that an empty input causes an
std::invalid_argument exception to be thrown, referencing the relevant behavior
in fft.H and Tests/fft_test.cc so callers know to validate inputs before
calling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c126c67a-ef54-40a7-a98c-f527240eba05
📒 Files selected for processing (5)
- fft-changelog.md
- fft-contracts.md
- fft-plan.md
- fft-support-matrix.md
- fft-validation.md
✅ Files skipped from review due to trivial changes (3)
- fft-support-matrix.md
- fft-plan.md
- fft-validation.md
Refactor Link-Cut Tree tests to use clearer structures and add error handling. Enhance graph visualization tests by escaping special characters in labels and improving tree/forest generation. Update Montgomery arithmetic for compile-time context and better error handling. Minor documentation and benchmark script updates.
Pull request overview
Copilot reviewed 43 out of 47 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (2)
generate_tree.H:1
- This function uses `std::endl` repeatedly, which forces a stream flush each time and can significantly slow down large exports. Prefer writing `'\n'` (or a single final flush) instead of `std::endl` for performance.
tpl_memArray.H:1
- Grammar: `con capacity` should be `with capacity` in the constructor documentation comment.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tpl_segment_tree.H (1)
1544-1598: ⚠️ Potential issue | 🟠 Major

Must fix: document the changed
`Segment_Tree_Beats` constructors like the rest of the public API. The modified public constructors here still have brief-only blocks, and the
`DynList<T>` overload at Line 1586 has no Doxygen block at all. Please add the missing `@param`/`@throws` contract details so this header stays Doxygen-complete. Based on learnings, "Missing Doxygen documentation for new or modified code blocks" blocks merge in code review. As per coding guidelines, "If a new public method or class is added, it MUST be fully documented with Doxygen" and "All public member functions must have
`@brief`, `@param`, `@return`, and `@throws` Doxygen tags".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_segment_tree.H` around lines 1544 - 1598, The public Segment_Tree_Beats constructors (Segment_Tree_Beats(const size_t num, const T & init_val), Segment_Tree_Beats(std::initializer_list<T>), Segment_Tree_Beats(const Array<T>&), Segment_Tree_Beats(const std::vector<T>&) and the missing Doxygen for Segment_Tree_Beats(const DynList<T>&)) need full Doxygen blocks: add `@brief`, `@param` for each input (e.g., num, init_val, il, values), and `@throws/`@return as per project conventions (e.g., document allocation failures or preconditions) so each public constructor matches the other API docs; place the comments directly above the respective constructor declarations (referencing the constructor signatures above) and mirror the style and tags used elsewhere in the header.
♻️ Duplicate comments (7)
tpl_segment_tree.H (1)
468-510: ⚠️ Potential issue | 🟠 Major

Must fix: finish the Doxygen on the wrapper constructors.
The new `Sum_Segment_Tree`, `Min_Segment_Tree`, and `Max_Segment_Tree` overloads are still documented inconsistently. In particular, the min/max container constructors on Lines 550-567 and Lines 608-625 still only have `@brief`, and the size constructors on Lines 468-472, 540-544, and 598-602 still omit exception-contract details. That leaves the generated API docs incomplete for public header APIs. As per coding guidelines, "If a new public method or class is added, it MUST be fully documented with Doxygen" and "All public member functions must have
`@brief`, `@param`, `@return`, and `@throws` Doxygen tags".

Also applies to: 540-568, 598-626
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_segment_tree.H` around lines 468 - 510, The Doxygen for the public wrapper constructors (e.g., Sum_Segment_Tree(const size_t, const T&), Sum_Segment_Tree(std::initializer_list<T>), Sum_Segment_Tree(const Array<T>&), Sum_Segment_Tree(const std::vector<T>&), Sum_Segment_Tree(const DynList<T>&) and the equivalent Min_Segment_Tree/Max_Segment_Tree overloads) is incomplete—add missing `@param` and `@throws` tags (and ensure `@brief` is present) so each public ctor documents parameters and possible std::bad_alloc exceptions; update the size-based ctors to include `@param` num and `@param` init_val and an `@throws` std::bad_alloc, and update each container-based ctor to include `@param` values (or il) and an `@throws` std::bad_alloc to satisfy the project's Doxygen requirements.

tpl_arrayStack.H (3)
479-484: ⚠️ Potential issue | 🔴 Critical

Must fix — negative
`n` in `popn()` still grows the stack.

Line 481 subtracts signed
`n` from unsigned `head` before the check. For `n < 0`, the expression wraps, the assert passes, and Line 483 increases `head` instead of decreasing it.

Suggested fix
```diff
 T popn(const int & n) noexcept
 {
-  assert((int (head - n)) >= 0);
+  assert(n > 0);
+  const auto count = static_cast<size_t>(n);
+  assert(count <= head);
-  head -= n;
+  head -= count;
   return std::move(array[head]);
 }
```

As per coding guidelines, "Check boundary conditions and off-by-one errors" and "Use assertions (
`assert`, custom `ah-errors.H` macros) for preconditions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 479 - 484, In popn(const int & n) noexcept the subtraction head -= n executes with unsigned head causing wrap when n < 0; change the precondition to assert(n >= 0 && head >= static_cast<size_t>(n)) (or equivalent) and perform the boundary check before any arithmetic, then subtract safely (e.g., compute newHead = head - n after the assert) so negative n cannot grow the stack and head underflow is prevented; refer to popn, head and array in the fix.
368-374: ⚠️ Potential issue | 🟠 Major

Must fix —
`FixedStack` copy operations are still not exception-safe.

If copying
`T` throws on Line 373 or Line 404, the copy constructor leaks its newly allocated buffer, and copy assignment has already destroyed the old storage before the new state is fully built. That breaks the strong guarantee and can leave `*this` with mismatched `array`/`head`/`dim` after an exception.

Suggested fix
```diff
 FixedStack(const FixedStack & s)
-  : array(new T [s.dim]), head(s.head), dim(s.dim)
+  : array(nullptr), head(0), dim(0)
 {
-  for (size_t i = 0; i < head; ++i)
-    array[i] = s.array[i];
+  auto tmp = std::make_unique<T[]>(s.dim);
+  for (size_t i = 0; i < s.head; ++i)
+    tmp[i] = s.array[i];
+  array = tmp.release();
+  head = s.head;
+  dim = s.dim;
 }
@@
 FixedStack &operator =(const FixedStack & s)
 {
   if (this == &s)
     return *this;
-
-  T *ptr = new T [s.dim];
-
-  delete [] array;
-  array = ptr;
-  for (size_t i = 0; i < s.head; ++i)
-    array[i] = s.array[i];
-  head = s.head;
-  dim = s.dim;
+  FixedStack tmp(s);
+  swap(tmp);
   return *this;
 }
```

As per coding guidelines, "All resources (memory, file handles, locks) must follow RAII principles" and "Ensure exception safety (basic, strong, or no-throw guarantee)."
Also applies to: 393-409
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 368 - 374, The copy constructor and copy-assignment for FixedStack are not exception-safe because they assign the raw array or destroy old storage before element copies can complete; fix both by performing element copies into a temporary RAII-managed buffer (e.g., std::unique_ptr<T[]> or std::vector<T>) and only swap/assign the new buffer, head, and dim into *this after all element copies succeed (or implement copy-and-swap for operator= using a FixedStack copy constructor that builds the new state safely); ensure no raw delete/assign happens before successful construction so exceptions won't leak memory or leave inconsistent array/head/dim.
422-451: ⚠️ Potential issue | 🟠 Major

Must fix — these generic operations cannot be unconditionally
`noexcept`.

Lines 425, 437, 471, and 484 depend on
`T` copy/move assignment or move construction. For a throwing `T`, the current signatures turn an ordinary exception into a call to `std::terminate`.

Suggested fix
```diff
-  T &push(const T & data) noexcept
+  T &push(const T & data) noexcept(std::is_nothrow_copy_assignable_v<T>)
@@
-  T &push(T && data) noexcept
+  T &push(T && data) noexcept(std::is_nothrow_move_assignable_v<T>)
@@
-  T &append(const T & data) noexcept { return push(data); }
+  T &append(const T & data) noexcept(std::is_nothrow_copy_assignable_v<T>)
+  { return push(data); }
@@
-  T &append(T && data) noexcept { return push(std::forward<T>(data)); }
+  T &append(T && data) noexcept(std::is_nothrow_move_assignable_v<T>)
+  { return push(std::forward<T>(data)); }
@@
-  T &insert(const T & data) noexcept { return push(data); }
+  T &insert(const T & data) noexcept(std::is_nothrow_copy_assignable_v<T>)
+  { return push(data); }
@@
-  T &insert(T && data) noexcept { return push(std::forward<T>(data)); }
+  T &insert(T && data) noexcept(std::is_nothrow_move_assignable_v<T>)
+  { return push(std::forward<T>(data)); }
@@
-  T pop() noexcept
+  T pop() noexcept(std::is_nothrow_move_constructible_v<T>)
@@
-  T popn(const int & n) noexcept
+  T popn(const int & n) noexcept(std::is_nothrow_move_constructible_v<T>)
```

As per coding guidelines, "Mark functions as
`noexcept` when they guarantee no exceptions."

Also applies to: 467-479
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_arrayStack.H` around lines 422 - 451, The push/append/insert overloads are incorrectly marked noexcept unconditionally; change their noexcept to conditional expressions that reflect whether T's copy/move operations can throw (e.g., for T &push(const T& data) use noexcept(std::is_nothrow_copy_assignable<T>::value) or noexcept(std::is_nothrow_copy_constructible<T>::value) as appropriate, and for T &push(T&& data) use noexcept(std::is_nothrow_move_assignable<T>::value) or noexcept(std::is_nothrow_move_constructible<T>::value); update the corresponding append/insert overloads to use the same conditional noexcepts and add `#include` <type_traits> if missing.

tpl_link_cut_tree_with_edges.H (1)
153-167: ⚠️ Potential issue | 🟠 Major

Must fix —
`VT` is still accepted but never stored.
`make_vertex(const VT&)` discards the caller payload, and `ExportData` still has nowhere to keep it, so any vertex data passed through this API is lost immediately. Either persist `VT` per vertex or drop the payload-bearing API surface.

Also applies to: 327-335
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree_with_edges.H` around lines 153 - 167, make_vertex(const VT& = VT{}) currently ignores the VT payload (it always calls lct_.make_vertex(EdgeMonoid::identity())), so VT is never stored; either remove the unused parameter or persist VT per-node: add a VT field to the per-node storage (e.g. ExportData or Node metadata) and ensure make_vertex forwards the VT into the underlying lct_.make_vertex or into the Node/ExportData after creation, updating any constructors/initializers (and export serialization) accordingly; reference symbols: make_vertex, VT, ExportData, Node, and lct_.make_vertex.

Tests/CMakeLists.txt (1)
536-543: ⚠️ Potential issue | 🟠 Major

Must fix —
`FFTRelease.Audit` is still multi-config fragile.
`$<TARGET_FILE_DIR:fft_benchmark>/..` resolves to `<build>/Examples` under Visual Studio/Xcode, but the audit script still expects a build root it can resolve the FFT executables from. That keeps this CTest entry single-config-only unless the script is taught to accept explicit executable paths.

Verify whether
scripts/fft_release_audit.rbappendsExamples/...under--build-dir; if it does, the current argument is still wrong on multi-config generators.#!/bin/bash set -e echo "=== Current CTest registration ===" sed -n '536,543p' Tests/CMakeLists.txt echo echo "=== fft_release_audit.rb build-dir handling ===" if [ -f scripts/fft_release_audit.rb ]; then rg -n -C2 'build-dir|fft_benchmark|fft_reference_probe|Examples' scripts/fft_release_audit.rb else echo "scripts/fft_release_audit.rb not found" exit 1 fi🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/CMakeLists.txt` around lines 536 - 543, The FFTRelease.Audit test is fragile on multi-config generators because passing $<TARGET_FILE_DIR:fft_benchmark>/.. yields an Examples subdir instead of the real build root; fix by changing the CTest registration to pass explicit executable paths via generator expressions (e.g. add --fft-benchmark $<TARGET_FILE:fft_benchmark> and --fft-reference $<TARGET_FILE:fft_reference_probe> to the COMMAND) and update scripts/fft_release_audit.rb to accept and use these new flags (falling back to --build-dir if not provided) so the script no longer relies on deriving executable locations from a single build-dir.

tpl_link_cut_tree.H (1)
229-288: ⚠️ Potential issue | 🟠 Major

Must fix — the built-in lazy tags still silently corrupt non-sum monoids.

`AddLazyTag::apply()` and `AssignLazyTag::apply()` fold updates as `tag * cnt`, so `path_apply()` returns wrong aggregates for `MinMonoid`, `MaxMonoid`, `XorMonoid`, and other non-sum monoids while compiling cleanly.

Minimal guard to stop the silent misuse

```diff
 template <typename T = int, class Monoid = DefaultMonoid<T>,
           class LazyTag = NoLazyTag<T>>
   requires LCTMonoid<Monoid, T> and LCTLazyTag<LazyTag, T>
 class Gen_Link_Cut_Tree
 {
+  static_assert(not std::is_same_v<LazyTag, AddLazyTag<T>>
+                  or std::is_same_v<Monoid, SumMonoid<T>>,
+                "AddLazyTag currently preserves aggregates only with SumMonoid");
+  static_assert(not std::is_same_v<LazyTag, AssignLazyTag<T>>
+                  or std::is_same_v<Monoid, SumMonoid<T>>,
+                "AssignLazyTag currently preserves aggregates only with SumMonoid");
```
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree.H` around lines 229 - 288, The built-in AddLazyTag::apply and AssignLazyTag::apply multiply by cnt and thus corrupt non-sum monoids; add a compile-time guard so these lazy tags can only be instantiated/used with monoids that explicitly opt into count-aware lazy updates (e.g. a trait Monoid::supports_counted_lazy or a std::bool_constant-like trait). Concretely, add a static_assert inside AddLazyTag and AssignLazyTag (e.g. in their apply() or at class scope) that checks the monoid trait and fails compilation with a clear message directing authors to provide a custom tag for Min/Max/Xor; call out the symbols AddLazyTag::apply, AssignLazyTag::apply, and the new trait name so reviewers can locate and verify the change.
🧹 Nitpick comments (2)
Tests/link_cut_tree_test.cc (1)
1054-1080: Consider adding timing bounds for performance regression.

The test verifies correctness at scale (N=50,000) but doesn't enforce timing bounds. For critical algorithms like Link-Cut Trees, consider adding a timing assertion to catch performance regressions (e.g., ensuring operations complete within expected O(log N) amortized time).

Example timing check

```cpp
// After the operations, optionally verify timing:
const auto start = std::chrono::steady_clock::now();
for (int i = 0; i < 1000; ++i)
  {
    lct.connected(nd[0], nd[N - 1]);
    lct.find_root(nd[i % N]);
  }
const auto elapsed = std::chrono::steady_clock::now() - start;
// Expect 1000 operations to complete in reasonable time (e.g., < 100ms)
EXPECT_LT(std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count(),
          100);
```

Based on learnings: "Include performance regression tests for critical algorithms."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/link_cut_tree_test.cc` around lines 1054 - 1080, Add a lightweight timing assertion to the LargePathStaysReasonable test to catch performance regressions: measure a short sequence of representative operations (e.g., repeated calls to lct.connected, lct.find_root and/or lct.path_size) inside TEST(LinkCutTreePerf, LargePathStaysReasonable) after the path build/relink steps, compute elapsed milliseconds, and ASSERT/EXPECT that the elapsed time is below a reasonable bound (choose a conservative threshold for CI). Ensure sampling uses the existing nd vector and Link_Cut_Tree instance (lct) and that the timing block is small and non-flaky (e.g., 500–1000 iterations) so the test remains deterministic and fast.

Examples/fft_benchmark.cc (1)
125-139: Consider handling parse exceptions for `--sizes`.

`std::stoull` can throw `std::invalid_argument` or `std::out_of_range` on malformed input (e.g., `--sizes abc,def`). Currently this would produce an unhelpful exception message rather than a user-friendly error.

Suggested defensive parsing

```diff
 [[nodiscard]] Array<size_t> parse_sizes(const std::string & csv)
 {
   Array<size_t> sizes;
   std::stringstream ss(csv);
   std::string item;
   while (std::getline(ss, item, ','))
     {
       if (item.empty())
         continue;
-      const unsigned long long parsed = std::stoull(item);
-      sizes.append(static_cast<size_t>(parsed));
+      try
+        {
+          const unsigned long long parsed = std::stoull(item);
+          sizes.append(static_cast<size_t>(parsed));
+        }
+      catch (const std::exception &)
+        {
+          std::cerr << "Invalid size value: " << item << "\n";
+          std::exit(1);
+        }
     }
   return sizes;
 }
```
Verify each finding against the current code and only fix it if needed. In `@Examples/fft_benchmark.cc` around lines 125 - 139, parse_sizes currently calls std::stoull on each token (item) which can throw std::invalid_argument or std::out_of_range for malformed or too-large inputs; wrap the parse in a try/catch inside parse_sizes, catch those exceptions for each token, and produce a clear, user-friendly error (including the offending token and that it came from the --sizes argument) — either throw a new std::runtime_error with that message or log and exit as the rest of the program expects; ensure you still skip empty tokens and preserve existing behavior for valid tokens.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modular_arithmetic.H`:
- Around line 305-329: The mont_reduce function's repeated subtraction loop
(while (t >= ctx.mod()) t -= ctx.mod();) can loop many times for small moduli;
replace that tail with a single modular reduction to bound runtime: compute t =
static_cast<__uint128_t>(static_cast<uint64_t>(t % ctx.mod())); then perform a
single conditional subtraction if necessary to ensure result < ctx.mod(); update
the return to use that canonicalized value. Locate mont_reduce and use ctx.mod()
and ctx.mod_inv_neg() as shown; alternatively, if you prefer the original
precondition, enforce/ document x < (static_cast<__uint128_t>(ctx.mod()) << 64)
and keep the single-subtraction fast path instead of the loop.
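The single-reduction tail suggested above can be sketched in isolation. This is a minimal sketch assuming only a context that exposes `mod()`; `Ctx` and `canonicalize` are illustrative names, not the library's API:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the Montgomery context: only mod() is needed here.
struct Ctx
{
  std::uint64_t m;
  std::uint64_t mod() const { return m; }
};

// Replaces the unbounded tail `while (t >= ctx.mod()) t -= ctx.mod();`.
// The `%` already yields a value in [0, mod), so one operation suffices.
inline std::uint64_t canonicalize(__uint128_t t, const Ctx & ctx)
{
  return static_cast<std::uint64_t>(t % ctx.mod());
}
```

The alternative noted in the prompt (documenting the precondition and keeping a single conditional subtraction) avoids the 128-bit division entirely; which is preferable depends on how hot `mont_reduce` is.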
In `@Tests/montgomery_test.cc`:
- Around line 47-51: Add runtime tests exercising the arithmetic code paths for
an odd composite modulus by reusing the existing test suites but invoking them
with the composite context kCtx9 (constructed via montgomery_ctx_for_mod<9> and
type MontgomeryCtx) instead of only relying on the static_assert; specifically,
call the round-trip, multiplication, and exponentiation test helpers (the same
functions used for kCtx3/kCtx primes) with kCtx9 to verify runtime behavior and
catch regressions that assume primality.
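The composite-modulus cases can be anchored by a plain-arithmetic oracle like the sketch below. It deliberately avoids the library's `MontgomeryCtx` API and only fixes the expected values mod 9 that the runtime tests would compare against:

```cpp
#include <cassert>
#include <cstdint>

// Expected-value oracle for an odd composite modulus (9 = 3 * 3).
constexpr std::uint64_t kMod = 9;

constexpr std::uint64_t mul_mod(std::uint64_t a, std::uint64_t b)
{
  // Operands are reduced first, so the product cannot overflow 64 bits.
  return (a % kMod) * (b % kMod) % kMod;
}

constexpr std::uint64_t pow_mod(std::uint64_t b, std::uint64_t e)
{
  std::uint64_t r = 1;
  for (b %= kMod; e > 0; e >>= 1)
    {
      if (e & 1)
        r = mul_mod(r, b);
      b = mul_mod(b, b);
    }
  return r;
}
```

Note that Fermat-style shortcuts do not hold mod 9 (e.g. `pow_mod(3, 2)` is 0), which is exactly the class of regression that tests assuming primality would miss.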
In `@tpl_arrayStack.H`:
- Around line 458-463: The pushn function can underflow when n == 0 and the
capacity check can overflow; update pushn(const size_t & n = 1) to assert
preconditions safely: assert(n > 0) and assert(n <= dim - head) (use dim - head
to avoid head + n wraparound) before incrementing head, then increment head and
return array[head - 1]; reference the function name pushn, and the members head,
dim, and array when applying these checks.
- Around line 488-510: The const-qualified accessors currently return mutable
references, allowing modification through a const FixedStack; change the
signatures of the const overloads top() const noexcept, get_last() const
noexcept, base() const noexcept, and top(const size_t i) const noexcept to
return const T& instead of T& so callers receive immutable references; keep the
existing non-const overloads returning T& intact to preserve modifiable access.
Ensure return types are updated everywhere these const methods are
declared/defined (names: top(), get_last(), base(), top(size_t)) and rebuild to
catch any const-correctness compile errors to adjust call sites if needed.
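The overload pairing being requested can be shown with a cut-down container. `MiniStack` below is a placeholder with none of FixedStack's real layout; only the const/non-const return types are the point:

```cpp
#include <cassert>
#include <cstddef>

template <typename T, std::size_t N>
struct MiniStack
{
  T data[N];
  std::size_t head = 0;

  void push(const T & v) noexcept { data[head++] = v; }

  // Non-const overload: callers may modify the element in place.
  T & top() noexcept { return data[head - 1]; }

  // Const overload: a const MiniStack hands out an immutable view only.
  const T & top() const noexcept { return data[head - 1]; }
};
```

With this pairing, `s.top() = 9` compiles on a mutable stack while the same expression through a `const MiniStack &` is rejected at compile time.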
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 200-221: The hidden edge vertex created by
lct_.make_vertex(weight) in link(...) is never released, causing the LCT node
pool to grow; modify both link(...) and cut(...) to free that vertex: after
creating e with lct_.make_vertex(weight) ensure any subsequent failure rolls
back by calling lct_.destroy_vertex(e) (or the LCT's corresponding destroy/free
method) before rethrowing, and in cut(...) after detaching the hidden edge_node
and erasing edge_recs_/edge_idx_ call
lct_.destroy_vertex(edge_recs_(idx).edge_node) (or the appropriate free API) so
the node is actually removed; apply the same cleanup in the other cut-related
block referenced (lines ~242-252) to avoid leaks.
- Around line 113-116: The member edge_idx_ currently uses
std::unordered_map<EdgeKey, size_t, EdgeKeyHash>, which violates the policy;
replace it with the Aleph library's equivalent hash-table container (e.g.,
aleph::unordered_map or the project's canonical aleph hash type) using the same
key type EdgeKey and hash EdgeKeyHash so semantics remain identical; update any
construction/lookup/emplace uses to the Aleph API and include a short comment
above edge_idx_ explaining the change; if no Aleph equivalent actually exists
for EdgeKey, add a brief documented exception comment by edge_idx_ explaining
why std::unordered_map is required.
---
Outside diff comments:
In `@tpl_segment_tree.H`:
- Around line 1544-1598: The public Segment_Tree_Beats constructors
(Segment_Tree_Beats(const size_t num, const T & init_val),
Segment_Tree_Beats(std::initializer_list<T>), Segment_Tree_Beats(const
Array<T>&), Segment_Tree_Beats(const std::vector<T>&) and the missing Doxygen
for Segment_Tree_Beats(const DynList<T>&)) need full Doxygen blocks: add `@brief`,
`@param` for each input (e.g., num, init_val, il, values), and `@throws`/`@return` as
per project conventions (e.g., document allocation failures or preconditions) so
each public constructor matches the other API docs; place the comments directly
above the respective constructor declarations (referencing the constructor
signatures above) and mirror the style and tags used elsewhere in the header.
---
Duplicate comments:
In `@Tests/CMakeLists.txt`:
- Around line 536-543: The FFTRelease.Audit test is fragile on multi-config
generators because passing $<TARGET_FILE_DIR:fft_benchmark>/.. yields an
Examples subdir instead of the real build root; fix by changing the CTest
registration to pass explicit executable paths via generator expressions (e.g.
add --fft-benchmark $<TARGET_FILE:fft_benchmark> and --fft-reference
$<TARGET_FILE:fft_reference_probe> to the COMMAND) and update
scripts/fft_release_audit.rb to accept and use these new flags (falling back to
--build-dir if not provided) so the script no longer relies on deriving
executable locations from a single build-dir.
In `@tpl_arrayStack.H`:
- Around line 479-484: In popn(const int & n) noexcept the subtraction head -= n
executes with unsigned head causing wrap when n < 0; change the precondition to
assert(n >= 0 && head >= static_cast<size_t>(n)) (or equivalent) and perform the
boundary check before any arithmetic, then subtract safely (e.g., compute
newHead = head - n after the assert) so negative n cannot grow the stack and
head underflow is prevented; refer to popn, head and array in the fix.
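The guarded arithmetic can be sketched as below; `PopGuard` and its members only mirror the names in the prompt, not the actual header:

```cpp
#include <cassert>
#include <cstddef>

struct PopGuard
{
  std::size_t head = 0;

  void popn(int n) noexcept
  {
    // Validate before any unsigned arithmetic so a negative n cannot wrap head.
    assert(n >= 0 and head >= static_cast<std::size_t>(n));
    head -= static_cast<std::size_t>(n);
  }
};
```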
- Around line 368-374: The copy constructor and copy-assignment for FixedStack
are not exception-safe because they assign the raw array or destroy old storage
before element copies can complete; fix both by performing element copies into a
temporary RAII-managed buffer (e.g., std::unique_ptr<T[]> or std::vector<T>) and
only swap/assign the new buffer, head, and dim into *this after all element
copies succeed (or implement copy-and-swap for operator= using a FixedStack copy
constructor that builds the new state safely); ensure no raw delete/assign
happens before successful construction so exceptions won't leak memory or leave
inconsistent array/head/dim.
- Around line 422-451: The push/append/insert overloads are incorrectly marked
noexcept unconditionally; change their noexcept to conditional expressions that
reflect whether T's copy/move operations can throw (e.g., for T &push(const T&
data) use noexcept(std::is_nothrow_copy_assignable<T>::value) or
noexcept(std::is_nothrow_copy_constructible<T>::value) as appropriate, and for T
&push(T&& data) use noexcept(std::is_nothrow_move_assignable<T>::value) or
noexcept(std::is_nothrow_move_constructible<T>::value); update the corresponding
append/insert overloads to use the same conditional noexcepts and add `#include`
<type_traits> if missing.
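The conditional noexcept pattern can be illustrated with a single-slot placeholder. `Slot` is an assumption (push assigns into preallocated storage, as the comment suggests for the stack overloads), not the real container:

```cpp
#include <cassert>
#include <string>       // std::string demos a potentially-throwing copy assignment
#include <type_traits>

template <typename T>
struct Slot
{
  T value{};

  // noexcept exactly when T's copy assignment cannot throw.
  T & push(const T & v) noexcept(std::is_nothrow_copy_assignable<T>::value)
  { return value = v; }

  // noexcept exactly when T's move assignment cannot throw.
  T & push(T && v) noexcept(std::is_nothrow_move_assignable<T>::value)
  { return value = static_cast<T &&>(v); }
};
```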
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 153-167: make_vertex(const VT& = VT{}) currently ignores the VT
payload (it always calls lct_.make_vertex(EdgeMonoid::identity())), so VT is
never stored; either remove the unused parameter or persist VT per-node: add a
VT field to the per-node storage (e.g. ExportData or Node metadata) and ensure
make_vertex forwards the VT into the underlying lct_.make_vertex or into the
Node/ExportData after creation, updating any constructors/initializers (and
export serialization) accordingly; reference symbols: make_vertex, VT,
ExportData, Node, and lct_.make_vertex.
In `@tpl_link_cut_tree.H`:
- Around line 229-288: The built-in AddLazyTag::apply and AssignLazyTag::apply
multiply by cnt and thus corrupt non-sum monoids; add a compile-time guard so
these lazy tags can only be instantiated/used with monoids that explicitly opt
into count-aware lazy updates (e.g. a trait Monoid::supports_counted_lazy or a
std::bool_constant-like trait). Concretely, add a static_assert inside
AddLazyTag and AssignLazyTag (e.g. in their apply() or at class scope) that
checks the monoid trait and fails compilation with a clear message directing
authors to provide a custom tag for Min/Max/Xor; call out the symbols
AddLazyTag::apply, AssignLazyTag::apply, and the new trait name so reviewers can
locate and verify the change.
In `@tpl_segment_tree.H`:
- Around line 468-510: The Doxygen for the public wrapper constructors (e.g.,
Sum_Segment_Tree(const size_t, const T&),
Sum_Segment_Tree(std::initializer_list<T>), Sum_Segment_Tree(const Array<T>&),
Sum_Segment_Tree(const std::vector<T>&), Sum_Segment_Tree(const DynList<T>&) and
the equivalent Min_Segment_Tree/Max_Segment_Tree overloads) is incomplete—add
missing `@param` and `@throws` tags (and ensure `@brief` is present) so each public
ctor documents parameters and possible std::bad_alloc exceptions; update the
size-based ctors to include `@param` num and `@param` init_val and an `@throws`
std::bad_alloc, and update each container-based ctor to include `@param` values
(or il) and an `@throws` std::bad_alloc to satisfy the project's Doxygen
requirements.
---
Nitpick comments:
In `@Examples/fft_benchmark.cc`:
- Around line 125-139: parse_sizes currently calls std::stoull on each token
(item) which can throw std::invalid_argument or std::out_of_range for malformed
or too-large inputs; wrap the parse in a try/catch inside parse_sizes, catch
those exceptions for each token, and produce a clear, user-friendly error
(including the offending token and that it came from the --sizes argument) —
either throw a new std::runtime_error with that message or log and exit as the
rest of the program expects; ensure you still skip empty tokens and preserve
existing behavior for valid tokens.
In `@Tests/link_cut_tree_test.cc`:
- Around line 1054-1080: Add a lightweight timing assertion to the
LargePathStaysReasonable test to catch performance regressions: measure a short
sequence of representative operations (e.g., repeated calls to lct.connected,
lct.find_root and/or lct.path_size) inside TEST(LinkCutTreePerf,
LargePathStaysReasonable) after the path build/relink steps, compute elapsed
milliseconds, and ASSERT/EXPECT that the elapsed time is below a reasonable
bound (choose a conservative threshold for CI). Ensure sampling uses the
existing nd vector and Link_Cut_Tree instance (lct) and that the timing block is
small and non-flaky (e.g., 500–1000 iterations) so the test remains
deterministic and fast.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4dafce3a-89de-4fec-87ac-dc97a4d805a9
📒 Files selected for processing (19)
- .gitignore
- Examples/fft_benchmark.cc
- Examples/fft_example.cc
- Examples/fft_reference_probe.cc
- Tests/CMakeLists.txt
- Tests/graph_visualization_test.cc
- Tests/link_cut_tree_test.cc
- Tests/montgomery_test.cc
- Two_Sat.H
- fft-changelog.md
- fft-contracts.md
- generate_tree.H
- modular_arithmetic.H
- scripts/fft_perf_regression.rb
- scripts/fft_reference_check.rb
- tpl_arrayStack.H
- tpl_link_cut_tree.H
- tpl_link_cut_tree_with_edges.H
- tpl_segment_tree.H
🚧 Files skipped from review as they are similar to previous changes (4)
- fft-changelog.md
- fft-contracts.md
- Examples/fft_reference_probe.cc
- scripts/fft_reference_check.rb
Refactor internal data structures for `ArrayStack`, `AVL_Tree`, and `RB_Tree` to improve correctness and robustness, particularly concerning move semantics and stack operations. This includes introducing safer stack pop operations and better initialization patterns. Add new monoid-based data structures to the link-cut tree for improved tree traversal. Update Montgomery arithmetic module to use `__uint128_t` for large modulus calculations, improving precision and correctness for edge cases. Introduce new FFT benchmarks and update existing ones to cover a wider range of scenarios, including parallel execution and validation checks. Enhance `fft_release_audit.rb` to support explicit paths for benchmark executables.
Actionable comments posted: 4
♻️ Duplicate comments (1)
tpl_link_cut_tree.H (1)
839-844: ⚠️ Potential issue | 🟡 Minor

`get_val` marked `const` but calls non-const `access()`.

The method is declared `const` but internally calls `access(x)`, which modifies the tree structure via splaying. This is a const-correctness issue.

🔧 Suggested fix: Remove const qualifier

```diff
-  [[nodiscard]] const T &get_val(Node *x) const
+  [[nodiscard]] const T &get_val(Node *x)
   {
     ah_invalid_argument_if(x == nullptr) << "null handle";
     access(x);
     return x->val;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree.H` around lines 839 - 844, The get_val method is declared const but calls the non-const splaying helper access(Node*), violating const-correctness; update the declaration of get_val (the function named get_val(Node *x) const) to remove the const qualifier so it matches the side-effecting call to access(Node*), and ensure any callers expecting a const overload are adjusted or use a const-safe alternative if needed.
🧹 Nitpick comments (4)
scripts/fft_release_audit.rb (1)
131-147: Nice to have — Add trailing comma after the last array element for consistency.

RuboCop flags the missing trailing comma. Adding it improves diff readability when appending new arguments.
♻️ Suggested fix
```diff
 command = [
   'ruby', perf_script.to_s,
   '--baseline', options[:perf_baseline],
   '--benchmark', benchmark.to_s,
   '--metric', 'median',
   '--warn-slowdown', options[:warn_slowdown].to_s,
   '--fail-slowdown', options[:fail_slowdown].to_s,
   '--warmup', options[:warmup].to_s,
   '--samples', options[:samples].to_s,
-  '--batch-count', options[:batch_count].to_s
+  '--batch-count', options[:batch_count].to_s,
 ]
```
Verify each finding against the current code and only fix it if needed. In `@scripts/fft_release_audit.rb` around lines 131 - 147, The command array built when options[:perf_baseline] is true is missing a trailing comma on the final literal element which triggers RuboCop; update the array literal in the block that constructs command (the variable named command used with run_step('performance-regression', command)) so that the last listed argument '--batch-count', options[:batch_count].to_s ends with a trailing comma, and ensure any conditional concat calls remain unchanged.

modular_arithmetic.H (1)
326-331: Remove dead code after modulo operation.

After `t % ctx.mod()`, the result is already in `[0, mod)`, making the subsequent check `if (reduced >= ctx.mod())` always false.

♻️ Proposed fix

```diff
   const __uint128_t t = static_cast<__uint128_t>(x_hi) + qmod_hi + carry;
-  uint64_t reduced = static_cast<uint64_t>(t % ctx.mod());
-  if (reduced >= ctx.mod())
-    reduced -= ctx.mod();
-  return reduced;
+  return static_cast<uint64_t>(t % ctx.mod());
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modular_arithmetic.H` around lines 326 - 331, The modulo result is guaranteed in [0, ctx.mod()), so remove the dead branch that checks and subtracts ctx.mod(); inside the function that computes t and reduced (using __uint128_t t = ... and uint64_t reduced = static_cast<uint64_t>(t % ctx.mod())), delete the lines "if (reduced >= ctx.mod()) reduced -= ctx.mod();" and directly return the reduced value (e.g., return static_cast<uint64_t>(t % ctx.mod());) to eliminate unreachable code.

tpl_link_cut_tree_with_edges.H (1)
455-530: Note: O(n²) export complexity is documented but could be improved.

The `export_to_tree_node` method has O(n²) complexity due to the linear `find_idx` lambda (lines 472-478) being called repeatedly. The complexity is documented in the Doxygen comment (line 439), which is appropriate. Consider using a hash map for index lookup if this becomes a bottleneck in practice.

♻️ Optional optimization using DynMapHash for O(n) lookup

```diff
 [[nodiscard]] Tree_Node<ExportData> * export_to_tree_node(Node * root)
 {
   ah_invalid_argument_if(root == nullptr) << "null handle";
   make_root(root);

   using TN = Tree_Node<ExportData>;

   // Build adjacency from edge records (only edges in root's component)
   // adj[i] = list of (neighbor_vertex_index, edge_weight)
   struct AdjEntry { size_t neighbor; ET weight; };

   // Collect vertices in root's component
   Array<Node *> verts;
+  DynMapHash<Node *, size_t, NodePtrEq> vert_idx{
+    Primes::DefaultPrime, node_ptr_hash, NodePtrEq()
+  };
   verts.append(root);
+  vert_idx.insert(root, 0);
   for (size_t i = 0; i < edge_recs_.size(); ++i)
     {
       const auto & er = edge_recs_(i);
       if (lct_.connected(er.u, root))
         {
-          // Ensure both endpoints are collected (avoid duplicates)
-          bool found_u = false, found_v = false;
-          for (size_t j = 0; j < verts.size(); ++j)
-            {
-              if (verts(j) == er.u) found_u = true;
-              if (verts(j) == er.v) found_v = true;
-            }
-          if (not found_u) verts.append(er.u);
-          if (not found_v) verts.append(er.v);
+          if (not vert_idx.contains(er.u))
+            {
+              vert_idx.insert(er.u, verts.size());
+              verts.append(er.u);
+            }
+          if (not vert_idx.contains(er.v))
+            {
+              vert_idx.insert(er.v, verts.size());
+              verts.append(er.v);
+            }
         }
     }

-  auto find_idx = [&](Node * nd) -> size_t
-  {
-    for (size_t i = 0; i < verts.size(); ++i)
-      if (verts(i) == nd)
-        return i;
-    return verts.size();
-  };
+  auto find_idx = [&](Node * nd) -> size_t
+  {
+    auto * entry = vert_idx.search(nd);
+    return entry ? entry->second : verts.size();
+  };
```
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree_with_edges.H` around lines 455 - 530, The export_to_tree_node function currently uses the find_idx lambda (captures verts and scans linearly) causing O(n²) behavior; replace this with a direct hash map from Node* to index (e.g., std::unordered_map<Node*, size_t> or DynMapHash) populated immediately after verts is complete (or as verts is built) and then use map lookups for ui/vi when iterating edge_recs_; update any place using find_idx (including root_idx lookup) to use the map and remove the linear-scan lambda to achieve O(n) index lookups.

Tests/link_cut_tree_test.cc (1)
1100-1100: Consider moving include to the top of the file.

The `#include <tpl_link_cut_tree_with_edges.H>` at line 1100 is placed mid-file before the edge-weighted tests. While functional, placing includes at the top of the file is the conventional style for better readability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/link_cut_tree_test.cc` at line 1100, Move the mid-file include of '#include <tpl_link_cut_tree_with_edges.H>' into the header section with the other includes at the top of the test file so it appears before the edge-weighted tests that depend on it; locate the existing '#include <tpl_link_cut_tree_with_edges.H>' and cut/paste it into the file's include block (above the test cases) to follow conventional style and avoid scattering includes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tpl_avl.H`:
- Around line 172-176: Add a unit test to cover the zero-pop duplicate branch
where candidate_pos == avl_stack.size(): generate or craft a sequence that
inserts keys up to a current maximum, then attempts to insert a duplicate of
that maximum so the calculation in avl_stack.size() > candidate_pos yields false
and no popn() is called; after the operation call the AVL verification method
(verify()) to assert tree invariants hold. Place this test alongside the other
AVL tests (Tests/*_test.cc), and consider adding a randomized/property-based
variant that repeats many sequences including duplicate-max inserts to reliably
exercise the branch in both AVL variants.
- Around line 120-128: The avl_stack sentinel push is currently hidden inside an
assert in avl_stack_empty(), which removes the side effect in release builds and
breaks code that relies on candidate_pos = avl_stack.size(); fix by
unconditionally seeding the stack with head_ptr before any assertions: ensure
avl_stack.push(head_ptr) is called (for example at the start of
avl_stack_empty() or in the calling search methods) so that
avl_stack.size()/candidate_pos is correct, then keep any assert checks separate
(e.g., assert(avl_stack.top() == head_ptr)) to avoid removing the initialization
under NDEBUG; reference avl_stack_empty(), avl_stack, head_ptr and candidate_pos
to locate and adjust the logic.
In `@tpl_avlRk.H`:
- Around line 124-132: avl_stack_empty currently pushes head_ptr onto avl_stack
only when avl_stack.is_empty() and then returns true, but callers call
avl_stack_empty through assert so that repair is optimized away under NDEBUG;
make the repair unconditional: if avl_stack.is_empty() always push(head_ptr)
before any assert usage, then keep the existing boolean return (or subsequent
assert checks) unchanged. Update the implementation of avl_stack_empty (and any
call sites that relied on assert side-effects) so the stack is repaired
unconditionally via avl_stack.push(head_ptr) when empty, then perform the assert
or comparison avl_stack.top() == head_ptr as before.
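The fix pattern being asked for (repair unconditionally, assert separately) can be modeled with a toy stack; `ensure_seeded` stands in for the sentinel push that currently hides inside the assert, and none of these names are the header's:

```cpp
#include <cassert>
#include <cstddef>

struct ToyStack
{
  int buf[8];
  std::size_t n = 0;

  bool empty() const { return n == 0; }
  void push(int v) { buf[n++] = v; }
  int top() const { return buf[n - 1]; }
};

// Seeding runs even under -DNDEBUG; only the invariant check is assert-guarded.
inline void ensure_seeded(ToyStack & s, int sentinel)
{
  if (s.empty())
    s.push(sentinel);
  assert(s.top() == sentinel);
}
```

Had the push lived inside the assert expression itself, compiling with `NDEBUG` would silently drop it and leave the stack unseeded.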
In `@tpl_rb_tree.H`:
- Around line 171-175: Add a regression test that exercises the candidate_pos ==
rb_stack.size() branch so we don't regress to calling rb_stack.popn(0); create a
focused unit/property-based test that constructs a tree with duplicates or runs
a small randomized duplicate workload targeting the current maximum key and
verifies both RB variants do not call popn(0) (or that stack size remains
unchanged). Reference rb_stack, candidate_pos and popn in the test and assert
the stack size/behavior after the operation to ensure the guard path is covered.
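One way to pin this down in a test is to model the guard directly. The sketch below is a toy model of the `if (stack.size() > candidate_pos) popn(...)` pattern, not the RB-tree code itself:

```cpp
#include <cassert>
#include <cstddef>

struct PopnModel
{
  std::size_t size = 0;
  std::size_t pops = 0;   // counts how many popn calls actually happened

  void popn(std::size_t n)
  {
    assert(n > 0);        // popn(0) is exactly the regression being guarded
    size -= n;
    ++pops;
  }

  // Mirrors the guarded call site: skip popn entirely in the zero-pop case.
  void maybe_popn(std::size_t candidate_pos)
  {
    if (size > candidate_pos)
      popn(size - candidate_pos);
  }
};
```

Inserting a duplicate of the current maximum corresponds to `candidate_pos == size`, where the guard must leave the stack untouched.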
---
Duplicate comments:
In `@tpl_link_cut_tree.H`:
- Around line 839-844: The get_val method is declared const but calls the
non-const splaying helper access(Node*), violating const-correctness; update the
declaration of get_val (the function named get_val(Node *x) const) to remove the
const qualifier so it matches the side-effecting call to access(Node*), and
ensure any callers expecting a const overload are adjusted or use a const-safe
alternative if needed.
---
Nitpick comments:
In `@modular_arithmetic.H`:
- Around line 326-331: The modulo result is guaranteed in [0, ctx.mod()), so
remove the dead branch that checks and subtracts ctx.mod(); inside the function
that computes t and reduced (using __uint128_t t = ... and uint64_t reduced =
static_cast<uint64_t>(t % ctx.mod())), delete the lines "if (reduced >=
ctx.mod()) reduced -= ctx.mod();" and directly return the reduced value (e.g.,
return static_cast<uint64_t>(t % ctx.mod());) to eliminate unreachable code.
In `@scripts/fft_release_audit.rb`:
- Around line 131-147: The command array built when options[:perf_baseline] is
true is missing a trailing comma on the final literal element which triggers
RuboCop; update the array literal in the block that constructs command (the
variable named command used with run_step('performance-regression', command)) so
that the last listed argument '--batch-count', options[:batch_count].to_s ends
with a trailing comma, and ensure any conditional concat calls remain unchanged.
In `@Tests/link_cut_tree_test.cc`:
- Line 1100: Move the mid-file include of '#include
<tpl_link_cut_tree_with_edges.H>' into the header section with the other
includes at the top of the test file so it appears before the edge-weighted
tests that depend on it; locate the existing '#include
<tpl_link_cut_tree_with_edges.H>' and cut/paste it into the file's include block
(above the test cases) to follow conventional style and avoid scattering
includes.
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 455-530: The export_to_tree_node function currently uses the
find_idx lambda (captures verts and scans linearly) causing O(n²) behavior;
replace this with a direct hash map from Node* to index (e.g.,
std::unordered_map<Node*, size_t> or DynMapHash) populated immediately after
verts is complete (or as verts is built) and then use map lookups for ui/vi when
iterating edge_recs_; update any place using find_idx (including root_idx
lookup) to use the map and remove the linear-scan lambda to achieve O(n) index
lookups.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b4c84ab2-94f0-4f96-87d2-a69cc0f7b363
📒 Files selected for processing (15)
- Examples/fft_benchmark.cc
- Tests/CMakeLists.txt
- Tests/link_cut_tree_test.cc
- Tests/montgomery_test.cc
- Two_Sat.H
- modular_arithmetic.H
- scripts/fft_release_audit.rb
- tpl_arrayStack.H
- tpl_avl.H
- tpl_avlRk.H
- tpl_link_cut_tree.H
- tpl_link_cut_tree_with_edges.H
- tpl_rbRk.H
- tpl_rb_tree.H
- tpl_segment_tree.H
🚧 Files skipped from review as they are similar to previous changes (1)
- Two_Sat.H
Add regression tests for AVL and Red-Black trees to specifically target the "zero-pop" branch when inserting a duplicate of the maximum key, which can cause issues with stack management during rebalancing. This includes both a direct test case and a randomized stress test. Refactor AVL tree stack handling in tpl_avl.H and tpl_avlRk.H for improved clarity and correctness. Update modular_arithmetic.H to simplify modulo reduction. Fix minor syntax in fft_release_audit.rb. Adjust includes in link_cut_tree_test.cc. Clean up comments in tpl_link_cut_tree_with_edges.H. Update path_query return type in tpl_link_cut_tree.H.
Actionable comments posted: 9
🧹 Nitpick comments (2)
scripts/fft_release_audit.rb (2)
123-146: Use the current Ruby interpreter for nested scripts.

Nice to have: Lines 123 and 133 shell out via a literal `ruby`. In rbenv/RVM/Bundler setups, that can select a different interpreter than the one running this audit; `RbConfig.ruby` is safer here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/fft_release_audit.rb` around lines 123 - 146, The scripts call the Ruby executable with the literal string 'ruby' when building subprocess command arrays (see reference_command and the perf baseline command array assigned to command before run_step), which can pick the wrong interpreter; require RbConfig and replace the literal 'ruby' entries with RbConfig.ruby (use that value for the first element of reference_command and the perf command array) so nested scripts run under the same Ruby interpreter as the current process.
24-152: Add regression coverage for this new audit workflow. Should fix: this CLI adds several observable branches—missing docs, missing binaries, successful audit, skipped perf gate, and child-step failure—but no tests are included here to lock down exit codes and messaging. Please add script-level coverage, or note in the PR why tests do not apply. As per coding guidelines, "If the PR changes behavior, require tests or explain why they don't apply."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/fft_release_audit.rb` around lines 24 - 152, The PR adds many observable branches in scripts/fft_release_audit.rb but lacks tests; add script-level tests (RSpec or minitest) that exercise parser-driven branches and run_step behavior: stub ARGV/options to cover missing docs (simulate required_docs exist? false), missing binaries (benchmark.exist? and probe.exist? false), successful audit with and without options[:perf_baseline] (cover the skipped perf gate path), and a failing child step by stubbing Open3.capture3 to return non-success and assert abort; target symbols to stub/mutate are parser/options, Pathname#exist? for required_docs/benchmark/probe, and Open3.capture3 used by run_step (labels: 'benchmark-validate', 'reference-cross-check', 'performance-regression'); alternatively, if adding tests is impossible, add a short PR note explaining why tests don't apply and how you manually validated each branch.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modular_arithmetic.H`:
- Around line 314-327: mont_reduce currently always computes t % ctx.mod(),
causing a slow 128/64 division in hot paths; change the design to provide a fast
bounded reducer for internal callers and a separate canonicalizing path for
unbounded inputs. Implement mont_reduce as the fast path that assumes REDC-bound
inputs (t < 2*ctx.mod()) and returns t or t - ctx.mod() via a single conditional
subtraction (use MontgomeryCtx::mod() for the compare), and add a new function
(e.g., mont_reduce_unbounded or mont_reduce_canonical) that performs the full
128->64 modulo using t % ctx.mod() for public/unbounded callers. Update internal
callers mont_mul, to_mont, and from_mont to call the fast mont_reduce; leave
public-facing conversion/verification code to call the canonical reducer when
needed. Ensure mod_inv_neg() usage and existing q computation remain unchanged.
- Around line 218-225: Add Doxygen documentation blocks for each new public
accessor in the header—mod(), mod2(), r(), r2(), and mod_inv_neg()—so the public
API is documented and Doxygen emits no warnings; for each function add a `@brief`
describing the value returned (e.g., modulus, squared modulus, Montgomery R,
R^2, and negative modular inverse), include a `@return` describing the returned
uint64_t value, and include an empty/none `@throws` noting that these accessors do
not throw (or omit if project convention prefers), ensuring each accessor has
its own Doxygen block above the corresponding constexpr method.
- Around line 198-202: The public accessor methods of MontgomeryCtx (mod(),
mod2(), r(), r2(), mod_inv_neg()) lack Doxygen comments; add concise Doxygen for
each getter including a brief description using `@brief` and an `@return` tag
describing the returned value (e.g., for mod(): "@brief Returns the odd modulus
p." and "@return The odd modulus."). Update the declarations of [[nodiscard]]
constexpr uint64_t mod() const noexcept, mod2(), r(), r2(), and mod_inv_neg() to
include these comments above each method to complete the API documentation.
In `@scripts/fft_release_audit.rb`:
- Around line 101-119: The preflight currently only checks existence for the
benchmark and probe paths (benchmark, probe) which allows directories or
non-executable files to slip through; update those checks to require both file?
and executable? (e.g., change the unless benchmark.exist? / probe.exist? guards
to unless benchmark.file? && benchmark.executable? and similarly for probe) and
inside run_step(label, command) wrap the Open3.capture3 call (and any child
process spawning) in a rescue for SystemCallError to abort with a controlled
"[audit] {label} failed: <error>" message instead of letting
Errno::ENOENT/Errno::EACCES bubble up.
In `@Tests/link_cut_tree_test.cc`:
- Around line 1056-1095: The test LinkCutTreePerf::LargePathStaysReasonable
currently mixes functional checks with a wall-clock timing gate (start/elapsed,
ITERS and EXPECT_LT) which makes the unit test non-deterministic; remove the
timing measurement and the EXPECT_LT assertion (references: ITERS, start,
elapsed, and the EXPECT_LT line) and keep only the functional assertions
(connected, find_root, path_size) so the test is deterministic; if performance
regression coverage is required, move the timing loop and EXPECT_LT into a
separate benchmark/perf job or non-blocking test harness instead of the unit
test.
- Around line 1101-1579: Tests for Gen_Link_Cut_Tree_WithEdges are missing
assertions for wrapper-specific contracts: add new unit cases that call
num_edges(), num_components(), path_size(), make_root(), and find_root() on
Gen_Link_Cut_Tree_WithEdges instances to verify pre/post conditions and return
values; specifically, create tests that (1) after linking N vertices assert
num_edges() == expected and num_components() == expected, (2) after cutting
assert num_edges()/num_components() updated, (3) verify path_size(u,v) returns
correct edge count for simple paths, and (4) exercise make_root(node) and
find_root(node) to assert root changes and that find_root returns the canonical
root before/after make_root, using the existing test fixtures (e.g.,
LinkCutTreeEdgesTest and helper nodes created via lct.make_vertex) and methods
lct.link, lct.cut, lct.path_query to locate where to add these assertions.
In `@tpl_avlRk.H`:
- Around line 566-567: The docstring for getRoot() is misleading: the constexpr
Node * getRoot() const noexcept returns a pointer by value, not a modifiable
reference. Update the comment for getRoot() to something like "Return the tree's
root pointer" or "Return a pointer to the tree's root" (or otherwise remove
"modifiable reference") so it accurately describes the returned Node * from
getRoot().
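The corrected wording can be illustrated with a stripped-down stand-in (not the real `tpl_avlRk.H` class):

```cpp
#include <cassert>

struct Node {};  // stand-in for the tree's node type

class AvlRkSketch
{
  Node *root_ = nullptr;
public:
  /// Return a pointer to the tree's root (a value, not a modifiable reference).
  constexpr Node *getRoot() const noexcept { return root_; }

  void set_root(Node *r) noexcept { root_ = r; }
};
```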
In `@tpl_link_cut_tree_with_edges.H`:
- Around line 443-525: export_to_tree_node() allocates tn(root_idx) early and
then performs further allocations/insertions that can throw, leaking the
partially-built Tree_Node tree; wrap the root Tree_Node* in a local RAII guard
(e.g. a small unique_ptr-like owner or ScopedTreeOwner) that calls
destroy_tree(Tree_Node<ExportData>*) in its destructor, create tn(root_idx) into
that guard before the BFS, and only release the guard (transfer ownership) right
before returning tn(root_idx) at the end; ensure the guard is constructed/used
around the tn array and child insertions so any exception triggers destroy_tree,
and keep references to symbols tn, tn(root_idx), Tree_Node<ExportData>,
export_to_tree_node, and destroy_tree to locate the changes.
- Around line 248-252: Multiple public methods (set_vertex_val, connected, link,
path_query, make_root, find_root, path_size, export_to_tree_node) accept raw
Node* handles without verifying ownership, allowing foreign handles to reach the
internal lct_. Add a shared guard function require_vertex(Node* x) that checks x
!= nullptr and that vertex_vals_.contains(x) (or equivalent lookup) and call it
at the start of every public vertex-taking entry point including set_vertex_val
(mirror get_vertex_val's check). Update these methods to return/throw
consistently on failure and add a two-instance regression test in
Tests/link_cut_tree_test.cc that creates two Gen_Link_Cut_Tree_WithEdges
instances and verifies passing a node from the other instance fails the guard.
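The shared guard described above can be sketched roughly like this; the class and member names are stand-ins for the real wrapper, which stores much more than a value map:

```cpp
#include <cassert>
#include <stdexcept>
#include <unordered_map>

struct Node {};  // stand-in for the wrapper's opaque vertex handle

// Hypothetical wrapper fragment: every public vertex-taking entry point
// calls require_vertex() first, mirroring get_vertex_val's ownership check.
class EdgeTreeSketch
{
  std::unordered_map<Node *, long> vertex_vals_;
public:
  ~EdgeTreeSketch() { for (auto &p : vertex_vals_) delete p.first; }

  Node *make_vertex()
  {
    Node *n = new Node;
    vertex_vals_[n] = 0;
    return n;
  }

  void require_vertex(Node *x) const
  {
    if (x == nullptr)
      throw std::invalid_argument("null vertex handle");
    if (vertex_vals_.find(x) == vertex_vals_.end())
      throw std::domain_error("vertex does not belong to this tree");
  }

  void set_vertex_val(Node *x, long v)
  {
    require_vertex(x);  // reject null and foreign handles up front
    vertex_vals_[x] = v;
  }

  long get_vertex_val(Node *x) const
  {
    require_vertex(x);
    return vertex_vals_.at(x);
  }
};
```

The two-instance regression test then just creates two sketches and asserts that a handle from one is rejected by the other.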
---
Nitpick comments:
In `@scripts/fft_release_audit.rb`:
- Around line 123-146: The scripts call the Ruby executable with the literal
string 'ruby' when building subprocess command arrays (see reference_command and
the perf baseline command array assigned to command before run_step), which can
pick the wrong interpreter; require RbConfig and replace the literal 'ruby'
entries with RbConfig.ruby (use that value for the first element of
reference_command and the perf command array) so nested scripts run under the
same Ruby interpreter as the current process.
- Around line 24-152: The PR adds many observable branches in
scripts/fft_release_audit.rb but lacks tests; add script-level tests (RSpec or
minitest) that exercise parser-driven branches and run_step behavior: stub
ARGV/options to cover missing docs (simulate required_docs exist? false),
missing binaries (benchmark.exist? and probe.exist? false), successful audit
with and without options[:perf_baseline] (cover the skipped perf gate path), and
a failing child step by stubbing Open3.capture3 to return non-success and assert
abort; target symbols to stub/mutate are parser/options, Pathname#exist? for
required_docs/benchmark/probe, and Open3.capture3 used by run_step (labels:
'benchmark-validate', 'reference-cross-check', 'performance-regression');
alternatively, if adding tests is impossible, add a short PR note explaining why
tests don't apply and how you manually validated each branch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f0645bea-865b-46cf-9a31-ab4a4e16079a
📒 Files selected for processing (9)
- Tests/avl.cc
- Tests/link_cut_tree_test.cc
- Tests/rb_tree_test.cc
- modular_arithmetic.H
- scripts/fft_release_audit.rb
- tpl_avl.H
- tpl_avlRk.H
- tpl_link_cut_tree.H
- tpl_link_cut_tree_with_edges.H
🚧 Files skipped from review as they are similar to previous changes (1)
- tpl_link_cut_tree.H
…g and new tests
Refactor the Link-Cut Tree with edges to enhance error handling for null and foreign handles, add new tests for `num_edges`, `num_components`, `path_size`, and `make_root/find_root` functionality. Also updates the `export_to_tree_node` complexity comment. Additional changes: - Refine Montgomery REDC comments and implementation for large primes. - Update `fft_release_audit.rb` for improved executable checks and ruby path. - Clarify NTT convolution length in `ntt.H`. - Correct `tpl_avlRk.H` root return type description.
…ailed constructor descriptions
refactor: Improve FFT reference check script and config
Refactor the fft_reference_check.rb script for better error handling and readability. Update CMakeLists.txt to use C++20 and add full address sanitizer flags.
Actionable comments posted: 4
♻️ Duplicate comments (2)
Tests/CMakeLists.txt (1)
518-546: ⚠️ Potential issue | 🟡 Minor. Should fix — standalone `Tests/` builds still skip every `FFTRelease.*` check. When `Tests/CMakeLists.txt` is the top-level project, lines 38-39 force `BUILD_EXAMPLES=OFF` before the parent tree is added. That makes the `TARGET fft_benchmark` / `TARGET fft_reference_probe` guards here permanently false in standalone test builds, so the new FFT release coverage disappears silently. Either opt that mode into the required example targets or emit a clear status/warning that these checks are unavailable there. As per coding guidelines, "If the PR changes behavior, require tests or explain why they don't apply."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/CMakeLists.txt` around lines 518 - 546, Standalone test-only builds currently set BUILD_EXAMPLES=OFF upstream which makes the TARGET checks (TARGET fft_benchmark and TARGET fft_reference_probe) always false and silently skips the FFTRelease.* tests; to fix, in Tests/CMakeLists.txt change the logic so when this list is the top-level project (CMAKE_SOURCE_DIR == CMAKE_CURRENT_SOURCE_DIR) you either set BUILD_EXAMPLES to ON (e.g., set(BUILD_EXAMPLES ON CACHE BOOL "..." FORCE)) so the example targets are available, or emit a clear message/warning (message(WARNING "... FFTRelease checks unavailable without examples")) and explicitly opt the required example targets into the build; update the guards around add_test() for FFTRelease.Audit / FFTRelease.ReferenceCrossCheck / FFTRelease.BenchmarkValidate to rely on that new setting rather than silently failing.
tpl_link_cut_tree.H (1)
367-372: ⚠️ Potential issue | 🟠 Major. Don’t expose stale payloads through `Node::get_val()`. `Node` is public, so callers can still bypass `Gen_Link_Cut_Tree::get_val(Node *)` and read `val` directly from the handle after `path_apply()`. That accessor can return stale data until the node is exposed, so it should not be public on an “opaque handle” unless it can synchronize pending tags first.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree.H` around lines 367 - 372, Node::get_val() currently exposes the raw stored value from the public Node handle and can return stale data after path_apply(); change the API so callers cannot bypass synchronization: make Node::get_val() non-public (e.g., private/protected) or remove it entirely from Node, and instead require callers to use Gen_Link_Cut_Tree::get_val(Node*) which must perform the necessary push/propagate/sync before returning the value; update or add comments/tests to ensure only the synchronized accessor (Gen_Link_Cut_Tree::get_val) is used and that any direct access from Node is prevented.
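One way the synchronized-accessor design could look, in a deliberately simplified stand-in — the real tree would splay/`access()` the node and push lazy tags, whereas here a toy `pending_` field plays that role:

```cpp
#include <cassert>

class LctSketch;

class NodeV
{
  friend class LctSketch;
  long val_ = 0;      // no public get_val(): the handle stays opaque
  long pending_ = 0;  // stand-in for un-pushed lazy tags
};

class LctSketch
{
public:
  // Queues a lazy update; the raw val_ is now stale until synchronized.
  void path_apply(NodeV *x, long delta) { x->pending_ += delta; }

  // The only way to read a value: synchronize first, then return.
  long get_val(NodeV *x)
  {
    x->val_ += x->pending_;
    x->pending_ = 0;
    return x->val_;
  }
};
```

Because `NodeV::val_` is private and only the tree is a friend, callers cannot observe the stale payload between `path_apply()` and the next synchronized read.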
🧹 Nitpick comments (3)
tpl_avlRk.H (1)
124-129: Consider renaming for clarity: the function doesn't check for emptiness.The name
avl_stack_empty()implies checking if the stack has zero elements, but it actually returnstruewhen the stack is in its "initial/base" state (exactly one element:head_ptr). This could confuse future maintainers.A name like
avl_stack_at_base()oris_avl_stack_reset()would better convey the intent.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_avlRk.H` around lines 124 - 129, The function avl_stack_empty() is misnamed because it returns true when avl_stack.is_empty() is false and the stack has exactly one element equal to head_ptr; rename it to something that conveys the "at base" state (e.g., avl_stack_at_base or is_avl_stack_reset) and update all references accordingly; keep the implementation using avl_stack.is_empty(), avl_stack.size() == 1 and avl_stack.base() == head_ptr but change the function name and any callers to the new name to avoid confusion.
Tests/CMakeLists.txt (1)
24-29: Nice to have — expose the same sanitizer toggle as the root build. This block still keys off `USE_SANITIZERS`, while the main `CMakeLists.txt` exposes `ALEPH_USE_SANITIZERS`. That makes the standalone `Tests/` entry point easy to misconfigure compared with the top-level build.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/CMakeLists.txt` around lines 24 - 29, The Tests CMake block uses the local variable USE_SANITIZERS which is inconsistent with the root build; update it to expose and honor the same ALEPH_USE_SANITIZERS toggle: replace the option name and conditional (USE_SANITIZERS -> ALEPH_USE_SANITIZERS) or, if you want to preserve a local alias, define option(ALEPH_USE_SANITIZERS ...) and set USE_SANITIZERS to ALEPH_USE_SANITIZERS so the sanitizer flags are enabled when ALEPH_USE_SANITIZERS is set; ensure the conditional that sets CMAKE_CXX_FLAGS and CMAKE_EXE_LINKER_FLAGS checks ALEPH_USE_SANITIZERS (or the alias) and not USE_SANITIZERS.
tpl_link_cut_tree.H (1)
72-78: Make the header self-contained. This public header uses `std::move`, `std::swap`, `std::is_same_v`, `std::void_t`, `std::true_type`/`std::false_type`, and `std::initializer_list` without including the corresponding standard headers directly. It works only through transitive includes today, which is brittle for downstream users. Direct includes to add:

```diff
 # include <concepts>
+# include <initializer_list>
 # include <limits>
+# include <type_traits>
+# include <utility>
 # include <tpl_array.H>
 # include <tpl_arrayStack.H>
 # include <tpl_tree_node.H>
 # include <ah-errors.H>
```
# include <concepts> +# include <initializer_list> # include <limits> +# include <type_traits> +# include <utility> # include <tpl_array.H> # include <tpl_arrayStack.H> # include <tpl_tree_node.H> # include <ah-errors.H>🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_link_cut_tree.H` around lines 72 - 78, The header tpl_link_cut_tree.H uses std::move, std::swap, std::is_same_v, std::void_t, std::true_type/false_type, and std::initializer_list but doesn't include their defining headers; make the header self-contained by adding the appropriate standard includes (e.g., include <utility> for std::move/std::swap, <type_traits> for std::is_same_v/std::void_t/std::true_type/std::false_type, and <initializer_list> for std::initializer_list) near the top of tpl_link_cut_tree.H so the symbols used in classes/functions like the link-cut tree templates compile without relying on transitive includes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Tests/CMakeLists.txt`:
- Around line 518-546: The new FFT tests register runtime checks but do not
ensure the helper executables are built before aleph_check runs; add explicit
add_dependencies on aleph_build_tests so those binaries are built first: where
you check TARGET fft_benchmark and TARGET fft_reference_probe (and/or
ALEPH_RUBY_EXECUTABLE) add corresponding add_dependencies(aleph_build_tests
fft_benchmark) and add_dependencies(aleph_build_tests fft_reference_probe) (or a
single call with both) guarded by the same conditions so the targets are only
added when present.
In `@tpl_link_cut_tree.H`:
- Around line 634-645: In destroy_vertex, validate that the Node* x belongs to
this tree before calling access(x): first check ah_invalid_argument_if(x ==
nullptr) then scan the nodes container (the existing loop comparing nodes(idx)
to x) and raise the ah_domain_error_if(idx == nodes.size()) << "vertex does not
belong to this link-cut tree" if not found, and only after that call access(x)
and the subsequent isolation checks; i.e., move the ownership/membership check
to occur before invoking access(x) to avoid mutating foreign handles.
- Around line 105-115: The concept LCTLazyTag is missing an equality comparable
constraint for typename L::tag_type which causes comparisons in push()
(specifically x->lazy == LazyTag::tag_identity()) to fail for some tag
implementations; update the concept LCTLazyTag to also require
std::equality_comparable<typename L::tag_type> so any L::tag_type (e.g., in
AddLazyTag<T>) supports operator==, keeping the existing requirements (tag_type,
tag_identity(), apply(), compose()) intact.
---
Duplicate comments:
In `@Tests/CMakeLists.txt`:
- Around line 518-546: Standalone test-only builds currently set
BUILD_EXAMPLES=OFF upstream which makes the TARGET checks (TARGET fft_benchmark
and TARGET fft_reference_probe) always false and silently skips the FFTRelease.*
tests; to fix, in Tests/CMakeLists.txt change the logic so when this list is the
top-level project (CMAKE_SOURCE_DIR == CMAKE_CURRENT_SOURCE_DIR) you either set
BUILD_EXAMPLES to ON (e.g., set(BUILD_EXAMPLES ON CACHE BOOL "..." FORCE)) so
the example targets are available, or emit a clear message/warning
(message(WARNING "... FFTRelease checks unavailable without examples")) and
explicitly opt the required example targets into the build; update the guards
around add_test() for FFTRelease.Audit / FFTRelease.ReferenceCrossCheck /
FFTRelease.BenchmarkValidate to rely on that new setting rather than silently
failing.
In `@tpl_link_cut_tree.H`:
- Around line 367-372: Node::get_val() currently exposes the raw stored value
from the public Node handle and can return stale data after path_apply(); change
the API so callers cannot bypass synchronization: make Node::get_val()
non-public (e.g., private/protected) or remove it entirely from Node, and
instead require callers to use Gen_Link_Cut_Tree::get_val(Node*) which must
perform the necessary push/propagate/sync before returning the value; update or
add comments/tests to ensure only the synchronized accessor
(Gen_Link_Cut_Tree::get_val) is used and that any direct access from Node is
prevented.
---
Nitpick comments:
In `@Tests/CMakeLists.txt`:
- Around line 24-29: The Tests CMake block uses the local variable
USE_SANITIZERS which is inconsistent with the root build; update it to expose
and honor the same ALEPH_USE_SANITIZERS toggle: replace the option name and
conditional (USE_SANITIZERS -> ALEPH_USE_SANITIZERS) or, if you want to preserve
a local alias, define option(ALEPH_USE_SANITIZERS ...) and set USE_SANITIZERS to
ALEPH_USE_SANITIZERS so the sanitizer flags are enabled when
ALEPH_USE_SANITIZERS is set; ensure the conditional that sets CMAKE_CXX_FLAGS
and CMAKE_EXE_LINKER_FLAGS checks ALEPH_USE_SANITIZERS (or the alias) and not
USE_SANITIZERS.
In `@tpl_avlRk.H`:
- Around line 124-129: The function avl_stack_empty() is misnamed because it
returns true when avl_stack.is_empty() is false and the stack has exactly one
element equal to head_ptr; rename it to something that conveys the "at base"
state (e.g., avl_stack_at_base or is_avl_stack_reset) and update all references
accordingly; keep the implementation using avl_stack.is_empty(),
avl_stack.size() == 1 and avl_stack.base() == head_ptr but change the function
name and any callers to the new name to avoid confusion.
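The rename might read like this stand-in (the real implementation uses the library's own stack type and sentinel, not `std::vector`):

```cpp
#include <cassert>
#include <vector>

struct Node {};  // stand-in node type

// Sketch of the renamed predicate: true when the stack is back at its
// base state (holding exactly the sentinel head_ptr), not when empty.
struct AvlStackSketch
{
  Node *head_ptr;
  std::vector<Node *> stack;

  bool avl_stack_at_base() const
  {
    return stack.size() == 1 && stack.front() == head_ptr;
  }
};
```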
In `@tpl_link_cut_tree.H`:
- Around line 72-78: The header tpl_link_cut_tree.H uses std::move, std::swap,
std::is_same_v, std::void_t, std::true_type/false_type, and
std::initializer_list but doesn't include their defining headers; make the
header self-contained by adding the appropriate standard includes (e.g., include
<utility> for std::move/std::swap, <type_traits> for
std::is_same_v/std::void_t/std::true_type/std::false_type, and
<initializer_list> for std::initializer_list) near the top of
tpl_link_cut_tree.H so the symbols used in classes/functions like the link-cut
tree templates compile without relying on transitive includes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c6a3ebcd-95e7-46fc-bffe-ec9b058db3b7
📒 Files selected for processing (11)
- CMakeLists.txt
- Tests/CMakeLists.txt
- Tests/link_cut_tree_test.cc
- fft.H
- modular_arithmetic.H
- ntt.H
- scripts/fft_reference_check.rb
- scripts/fft_release_audit.rb
- tpl_avlRk.H
- tpl_link_cut_tree.H
- tpl_link_cut_tree_with_edges.H
🚧 Files skipped from review as they are similar to previous changes (5)
- scripts/fft_release_audit.rb
- CMakeLists.txt
- Tests/link_cut_tree_test.cc
- modular_arithmetic.H
- scripts/fft_reference_check.rb
Refactored existing Link-Cut Tree tests to use common setup, and added a new test case for rejecting foreign handles. Introduced path aggregate tests with various monoids (sum, min, max, xor, gcd, product) and lazy tags (add, assign). Expanded error handling tests for link, cut, and LCA operations. Added performance and stress tests for both vertex-only and edge-weighted LCTs. Updated CMakeLists to include new test dependencies.
Pull request overview
Copilot reviewed 49 out of 53 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (4)
tpl_memArray.H:1
`n` is a `size_t`, but the loop index is `int`. For large arrays this can overflow or prematurely terminate, and it also mixes signed/unsigned comparisons. Use `size_t i = 0; i < n; ++i` to match the container’s size type.
generate_tree.H:1 — `generate_tree_graphviz` default-constructs `Write` for every visited node (`Write()(p)`), which (1) unnecessarily constructs objects repeatedly and (2) prevents using writers that must carry state (and/or are not default-constructible). Consider either accepting a `Write` instance as a function parameter (with a default value), or constructing a single `Write writer{}` once and reusing it throughout traversal.
tpl_link_cut_tree_with_edges.H:1 — This new edge-weighted Link-Cut Tree wrapper introduces non-trivial behavior (edge-as-node representation, edge index bookkeeping, rollback on exceptions, and `path_size()` translation). There are no corresponding tests shown in this PR for `Gen_Link_Cut_Tree_WithEdges` (e.g., link/cut correctness, `path_query` over edge weights, `set_edge_val`/`get_edge_val`, duplicate-edge rejection, and component counting). Adding a focused test file would help prevent regressions in the edge bookkeeping and wrapper invariants.
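The writer-reuse suggestion for `generate_tree_graphviz` can be sketched like this; the function and type names are illustrative, not `generate_tree.H`'s actual signature:

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <string>

struct TreeNode { int key; };  // stand-in for the visited node type

// Stateless default writer, as the current code assumes.
struct DefaultWrite
{
  std::string operator()(const TreeNode *p) const
  { return std::to_string(p->key); }
};

// Accepting a writer instance (with a default) constructs it once and
// also admits stateful or non-default-constructible writers.
template <class Write = DefaultWrite>
void emit_labels(const TreeNode *nodes, int n, std::ostream &out,
                 Write writer = Write{})
{
  for (int i = 0; i < n; ++i)
    out << writer(&nodes[i]) << '\n';
}
```

Callers that need state (say, a writer carrying a counter or formatting options) can now pass their own instance, while existing call sites keep working unchanged via the default argument.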
Refactor AVL tree stack naming to `avl_stack_at_base`. Improve Link-Cut Tree node value access. Add `std::domain_error` checks for Two-Sat `signed_to_lit`. Update CI workflow for FFT benchmark target checks and sanitizers option name.
Summary by CodeRabbit
New Features
Examples
Documentation
Tests
Chores