
Add comprehensive dynamic programming toolkit with knapsack, LIS, matrix chain, subset sum, tree DP, and optimizations #39

Merged
lrleon merged 20 commits into master from dp on Feb 26, 2026

Conversation

@lrleon
Owner

@lrleon lrleon commented Feb 26, 2026

Summary by CodeRabbit

  • New Features

    • Adds a comprehensive Dynamic Programming toolkit: knapsack (variants with reconstruction), LIS/LNDS, matrix‑chain ordering, subset‑sum (including MITM), tree DP & rerooting, DP optimizations (divide‑and‑conquer, Knuth, CHT, Li Chao, monotone queue) and related utilities.
  • Documentation

    • Adds a Dynamic Programming usage section (duplicated in places) and removes several legacy guides.
  • Examples

    • Adds multiple runnable DP example programs demonstrating the new APIs.
  • Tests

    • Adds extensive unit tests covering algorithms, optimizations, edge cases, and stress scenarios.
  • Chores

    • Expands the set of publicly installed headers to include the new DP headers.

lrleon added 14 commits January 28, 2026 09:44
This commit introduces a new suite of dynamic programming algorithms:
- Knapsack: 0/1, unbounded, and bounded variants with reconstruction.
- LIS: Longest Increasing and Non-decreasing Subsequence (O(n log n)) with reconstruction.
- Matrix Chain: Optimal parenthesization for matrix multiplication.
- Subset Sum: Classical DP and meet-in-the-middle for existence, count, and reconstruction.
- Tree DP: Generic bottom-up and rerooting dynamic programming on trees, including convenience functions for subtree sizes, max distance, and sum of distances.

Extensive tests and examples are included for each new algorithm. READMEs and CMakeLists.txt are updated to reflect the new additions.
Introduce `DP_Optimizations.H` with D&C DP, Knuth, CHT, Li Chao, and monotone queue.
Add `dp_optimizations_example.cc` and comprehensive tests.
Update documentation to reflect new DP features.
Copilot AI review requested due to automatic review settings February 26, 2026 17:44
@coderabbitai
Contributor

coderabbitai bot commented Feb 26, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a Dynamic Programming module: six new DP headers, six examples, extensive tests, README/doxygen updates, and CMake export/include changes. No build targets or core build logic altered beyond exporting headers and registering examples.

Changes

Cohort / File(s) Summary
Build Configuration
CMakeLists.txt, Examples/CMakeLists.txt
Added DP headers to exported HLIST, registered six example programs, and added include_directories(${CMAKE_CURRENT_SOURCE_DIR}) for examples.
Classic DP Headers
Knapsack.H, LIS.H, Matrix_Chain.H, Subset_Sum.H
New public APIs and implementations: multiple knapsack variants with reconstruction, LIS/LNDS (length + subsequence), matrix-chain ordering/min-cost (with parenthesization), and subset-sum (DP, existence/count, MITM).
Advanced DP & Utilities
Tree_DP.H, DP_Optimizations.H
Added generic tree DP and rerooting framework (Tree_Topology, Gen_Tree_DP, Gen_Reroot_DP, subtree/max/sum helpers) and a DP-optimizations toolkit (divide-and-conquer partition DP, Knuth optimization, CHT, Li Chao tree, monotone-queue DP, weighted squared-distance).
Examples
Examples/dp_optimizations_example.cc, Examples/knapsack_example.cc, Examples/lis_example.cc, Examples/matrix_chain_example.cc, Examples/subset_sum_example.cc, Examples/tree_dp_example.cc
Six new example programs demonstrating the new headers’ APIs and sample scenarios; added dp_optimizations_example entry.
Tests
Tests/dp_optimizations_test.cc, Tests/knapsack_test.cc, Tests/lis_test.cc, Tests/matrix_chain_test.cc, Tests/subset_sum_test.cc, Tests/tree_dp_test.cc
Extensive GoogleTest suites per header covering correctness, edge cases, randomized stress tests, brute-force references, and error-condition validation.
Documentation & Doxygen
README.md, README.es.md, doxygen_groups.H
Inserted Dynamic Programming sections and example listings into READMEs (content duplicated in places) and updated Algorithms doxygen group to include Dynamic programming.
Removed docs / Housekeeping
docs/ERROR_HANDLING.md, docs/GEOMETRY_MODULE_GUIDE.md, docs/GITHUB_PAGES_DOXYGEN.md, docs/RANGES_GUIDE.md, docs/TIKZGEOM_GUIDE.md, docs/doxygen_warnings.md, docs/header_audit.md
Deleted multiple documentation files (guides, audits, warnings); no code/public API changes tied to these deletions.
Minor example tweak
Examples/k_shortest_paths_example.cc
Small reordering of namespace closing and comment; no functional change.


Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately and specifically summarizes the main change: adding a comprehensive dynamic programming toolkit with six named algorithm implementations (knapsack, LIS, matrix chain, subset sum, tree DP, and optimizations).
Header Docstring Coverage ✅ Passed Documentation coverage for library headers is 83.8% (62 of 74 declarations), exceeding the required 80% threshold.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Contributor

Copilot AI left a comment


Pull request overview

This PR introduces a comprehensive Dynamic Programming (DP) toolkit to Aleph-w, providing classical DP algorithms and advanced optimization techniques.

Changes:

  • Adds six new header files implementing common DP problems (Knapsack, LIS, Matrix-Chain, Subset-Sum, Tree DP) and optimization patterns (Divide & Conquer DP, Knuth, CHT, Li Chao, monotone queue)
  • Includes extensive test coverage with randomized testing against brute-force implementations
  • Provides documented example programs for each DP category
  • Updates README files in English and Spanish with API documentation and usage examples

Reviewed changes

Copilot reviewed 23 out of 23 changed files in this pull request and generated no comments.

Show a summary per file
File Description
doxygen_groups.H Adds DP algorithms to the Algorithms module documentation
Tree_DP.H Generic tree DP and rerooting DP implementation for computing node metrics
Tests/tree_dp_test.cc Comprehensive tests for tree DP including subtree sizes, max distance, and rerooting
Tests/subset_sum_test.cc Tests for classical DP subset-sum and meet-in-the-middle variants
Tests/matrix_chain_test.cc Tests for optimal matrix-chain multiplication ordering
Tests/lis_test.cc Tests for longest increasing/non-decreasing subsequence with custom comparators
Tests/knapsack_test.cc Tests for 0/1, unbounded, and bounded knapsack variants
Tests/dp_optimizations_test.cc Tests for D&C DP, Knuth, CHT, Li Chao, and monotone queue optimizations
Subset_Sum.H Classical DP and MITM algorithms for subset-sum problems
README.md Adds DP algorithms section with API reference and usage examples
README.es.md Spanish translation of DP toolkit documentation
Matrix_Chain.H Optimal matrix-chain multiplication via interval DP
LIS.H O(n log n) longest increasing subsequence using patience sorting
Knapsack.H Three knapsack variants with reconstruction support
Examples/tree_dp_example.cc Example demonstrating tree DP metrics computation
Examples/subset_sum_example.cc Example showing DP and MITM subset-sum approaches
Examples/matrix_chain_example.cc Example illustrating matrix-chain optimization scenarios
Examples/lis_example.cc Example demonstrating LIS/LNDS with various input patterns
Examples/knapsack_example.cc Example showcasing all three knapsack variants
Examples/dp_optimizations_example.cc Example illustrating advanced DP optimization techniques
Examples/CMakeLists.txt Registers new DP example executables
DP_Optimizations.H Advanced DP optimization patterns including D&C, Knuth, CHT, Li Chao, monotone queue
CMakeLists.txt Adds new DP headers to the build configuration


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3abcdadd14

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (3)
Examples/tree_dp_example.cc (1)

46-52: Consider extracting shared print_rule() utility.

The print_rule() function is duplicated across multiple example files (knapsack_example.cc, lis_example.cc, matrix_chain_example.cc, subset_sum_example.cc). While acceptable for self-contained examples, consider extracting to a shared header if more examples are added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/tree_dp_example.cc` around lines 46 - 52, The print_rule() helper is
duplicated across examples; extract it into a shared header (e.g.,
examples/print_rule.h) as a compact inline/static function (keeping the
namespace or anonymous/internal linkage to avoid ODR issues), then replace the
local definitions in tree_dp_example.cc and the other example files
(knapsack_example.cc, lis_example.cc, matrix_chain_example.cc,
subset_sum_example.cc) with `#include` "examples/print_rule.h" and remove their
duplicate print_rule implementations; ensure the header is lightweight and uses
the same symbol name print_rule() so callers need no change.
README.md (1)

2800-2802: Should fix: Add the new DP section to the Table of Contents.

This section is now first-class documentation, but navigation won’t expose it unless the TOC includes #readme-dp-algorithms.

Suggested TOC patch
   - [Matching](`#readme-matching`)
   - [String Algorithms](`#readme-string-algorithms`)
+  - [Dynamic Programming Algorithms](`#readme-dp-algorithms`)
   - [Sorting Algorithms](`#readme-sorting-algorithms`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 2800 - 2802, Add a Table of Contents entry that links
to the new "Dynamic Programming Algorithms" section by inserting a bullet that
references the anchor id "#readme-dp-algorithms" (or the heading text "Dynamic
Programming Algorithms") into the README.md TOC; update the TOC list near other
top-level sections so the new entry appears in navigation and ensure the link
text matches the heading exactly to jump to the <a
id="readme-dp-algorithms"></a> anchor.
Tests/dp_optimizations_test.cc (1)

159-432: Should fix: add a deterministic performance-regression check for at least one core optimization path.

This file already validates correctness deeply; adding one stable perf guard (size/operation-budget based) would protect against accidental complexity regressions in critical DP optimizers.

Based on learnings: “Include performance regression tests for critical algorithms.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/dp_optimizations_test.cc` around lines 159 - 432, Add a deterministic
performance-regression test that times a core optimizer (e.g., call
divide_and_conquer_partition_dp with a fixed RNG seed, deterministic large n and
groups and the same cost lambda used in DivideConquerPartitionMatchesBruteForce)
using std::chrono::high_resolution_clock, and assert the elapsed wall-clock time
is below a conservative threshold (choose a large-but-stable value to avoid
flakiness); place it as a new TEST (e.g.,
DPOptimizations.DivideConquerPartitionPerf) alongside existing tests, ensure the
input generation is deterministic (fixed seed) and reference
divide_and_conquer_partition_dp and INF64 so reviewers can locate the code to
tweak the threshold if CI performance changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CMakeLists.txt`:
- Around line 119-123: The header list in CMakeLists.txt is missing
DP_Optimizations.H; update the HLIST (the header export/list that currently
contains Knapsack.H, LIS.H, Matrix_Chain.H, Subset_Sum.H, Tree_DP.H) by
inserting DP_Optimizations.H immediately after Tree_DP.H so the header is
exported/installed along with the other DP headers.

In `@DP_Optimizations.H`:
- Around line 365-368: The arithmetic in Convex_Hull_Trick::value_at,
Li_Chao_Tree<T>::value_at / query and min_weighted_squared_distance_1d<T> does
unchecked signed integer math and can overflow; change these operations to
compute in a wider signed integer promotion type to avoid UB: introduce a
PromotedT (e.g. std::conditional_t<sizeof(T)<8, std::int64_t, __int128> or
std::common_type_t<T,long long>), static_assert T is integral and signed where
needed, perform all intermediate multiplications/additions using PromotedT (use
it for -2*xs[j], xs[j]*xs[j], xs[i]*xs[i]+lc.query(xs[i]), and
slope*x+intercept), then cast/convert the final result back to T safely (or
return PromotedT if API allows); update value_at, Li_Chao_Tree::query and
min_weighted_squared_distance_1d to use PromotedT for computations to eliminate
signed overflow UB while preserving semantics.
- Around line 169-170: The DP transitions in divide_and_conquer_partition_dp(),
knuth_optimize_interval(), and monotone_queue_min_dp() perform additions (e.g.,
const Cost cand = prev[k] + transition_cost(k, mid); and expressions like
dp[i][k] + dp[k][j] + w) without overflow checks; update each place to mirror
the safe pattern used in optimal_merge_knuth(): compute a safe limit (e.g.,
const Cost limit = std::numeric_limits<Cost>::max()/4), check before every
addition using the form if (a > limit - b) treat as overflow/sentinel, only
perform a + b when safe, and propagate the sentinel otherwise so comparisons
with best remain correct; apply this to all additive steps that produce cand or
best so no unchecked wraparound or UB occurs.

In `@Matrix_Chain.H`:
- Around line 132-134: The loop in matrix_chain_order iterates over dims (an
Array<size_t>) but binds each element as unsigned long, causing a narrowing/ABI
mismatch on LLP64; change the loop to iterate using size_t (e.g., for (const
size_t dim : dims)) so the element type matches dims, and keep the existing
ah_domain_error_if(dim == 0) check and message intact.

In `@README.md`:
- Line 2821: Update the API name that currently uses a hyphen to use
underscores: replace the incorrect symbol min-weighted_squared_distance_1d with
min_weighted_squared_distance_1d everywhere it appears (e.g., in the DP
optimizations list and any subsequent references) so it matches the later API
naming; ensure exact spelling and underscores are used to maintain consistency
with the documented function name.

In `@Subset_Sum.H`:
- Around line 119-125: The MITM variant can overflow signed arithmetic at the
per-subset accumulator and when computing need; change the local accumulator and
difference calculations to use a wider, safe arithmetic type rather than T
directly: inside subset_sum_mitm (the mask loop where s is declared and where
result entries are formed) use a wider integer type (e.g. std::common_type_t<T,
std::intmax_t> or a conditional __int128 when supported) to perform s +=
arr[start + j] and store sums1/sums2 values in that wider type for matching;
likewise compute const need using the same wider signed type (instead of T) and
perform range checks before casting back to T when forming result pairs or
comparing keys. Ensure these changes reference the local accumulator variable s,
the sums1/sums2 storage (pairs of .first), and the computation of need so all
intermediate arithmetic is done in the widened type and validated to avoid
signed overflow/underflow.
- Around line 283-284: The accumulation dp(s) += dp[s - vi] can overflow and
must be guarded like other patterns in DP_Optimizations.H; before updating dp at
index s (inside the for loop that iterates s from tgt down to vi), check that
adding dp[s - vi] to dp(s) won’t exceed the container's maximum representable
value (use the same overflow-detection approach/type and max constant used in
DP_Optimizations.H for the dp element type) and handle the overflow case (e.g.,
saturate, cap, or return an error) instead of performing the unchecked addition.

In `@Tests/subset_sum_test.cc`:
- Around line 65-77: The helper valid_selection(vals, selected, target)
currently allows duplicate indices so a multiset like [2,2,2] can incorrectly
pass; modify valid_selection to enforce true subset semantics by rejecting
repeated indices: after the existing bounds check, track seen indices (e.g.,
with a std::vector<bool> seen(vals.size()) or an std::unordered_set<size_t>) and
return false if an index is encountered twice, then sum as before and compare to
target; keep the existing out-of-range guard and final sum == target check.


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f364851 and 3abcdad.

📒 Files selected for processing (23)
  • CMakeLists.txt
  • DP_Optimizations.H
  • Examples/CMakeLists.txt
  • Examples/dp_optimizations_example.cc
  • Examples/knapsack_example.cc
  • Examples/lis_example.cc
  • Examples/matrix_chain_example.cc
  • Examples/subset_sum_example.cc
  • Examples/tree_dp_example.cc
  • Knapsack.H
  • LIS.H
  • Matrix_Chain.H
  • README.es.md
  • README.md
  • Subset_Sum.H
  • Tests/dp_optimizations_test.cc
  • Tests/knapsack_test.cc
  • Tests/lis_test.cc
  • Tests/matrix_chain_test.cc
  • Tests/subset_sum_test.cc
  • Tests/tree_dp_test.cc
  • Tree_DP.H
  • doxygen_groups.H

coderabbitai[bot]
coderabbitai bot previously approved these changes Feb 26, 2026
Refactor DP optimization helpers to use promoted integral types for intermediate calculations in Li Chao and Convex Hull Trick, preventing potential overflow issues.
Adjust `min_weighted_squared_distance_1d` to explicitly use `promoted_t` for intermediate calculations, enhancing numerical stability.
Update related test cases in `dp_optimizations_test.cc` to include `chrono` and `numeric` headers for the newly added performance test and updated brute-force logic.
Clean up example files (`knapsack_example.cc`, `lis_example.cc`, `matrix_chain_example.cc`, `subset_sum_example.cc`, `tree_dp_example.cc`) by moving `print_rule()` into a common header.
Update `Matrix_Chain.H` to use `size_t` instead of `unsigned long` for dimensions.
Adjust `subset_sum_test.cc`'s `valid_selection` helper to track seen indices to correctly handle duplicate values.
Update CMakeLists files and README to reflect the addition of `DP_Optimizations.H`.
@lrleon lrleon changed the title from "Dp" to "Add comprehensive dynamic programming toolkit with knapsack, LIS, matrix chain, subset sum, tree DP, and optimizations" on Feb 26, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
Subset_Sum.H (1)

119-125: ⚠️ Potential issue | 🔴 Critical

Must fix: eliminate UB in MITM accumulator and complement arithmetic.

Line 124 and Line 339 do unchecked long long arithmetic after narrowing from T. This can overflow signed integers (undefined behavior) and produce incorrect matches.

🛠️ Proposed fix
+    template <std::integral T>
+    [[nodiscard]] inline long long
+    to_long_long_checked(const T value, const char *fn_name, const char *field_name)
+    {
+      if constexpr (std::is_signed_v<T>)
+        ah_out_of_range_error_if(value < static_cast<T>(std::numeric_limits<long long>::min()) or
+                                 value > static_cast<T>(std::numeric_limits<long long>::max()))
+          << fn_name << ": " << field_name << " is out of long long range";
+      else
+        ah_out_of_range_error_if(value > static_cast<T>(std::numeric_limits<long long>::max()))
+          << fn_name << ": " << field_name << " is out of long long range";
+      return static_cast<long long>(value);
+    }
+
+    [[nodiscard]] inline long long
+    checked_add_ll(const long long a, const long long b, const char *ctx)
+    {
+      ah_runtime_error_if((b > 0 and a > std::numeric_limits<long long>::max() - b) or
+                          (b < 0 and a < std::numeric_limits<long long>::min() - b))
+        << "subset_sum_mitm: overflow while computing " << ctx;
+      return a + b;
+    }
@@
-              s += static_cast<long long>(arr[start + j]);
+              s = checked_add_ll(
+                s,
+                to_long_long_checked(arr[start + j], "subset_sum_mitm", "value"),
+                "subset accumulation");
@@
-    const long long target_ll = static_cast<long long>(target);
+    const long long target_ll =
+      subset_sum_detail::to_long_long_checked(target, "subset_sum_mitm", "target");
@@
-        const long long need = target_ll - sums1[i].first;
+        const long long need =
+          subset_sum_detail::checked_add_ll(target_ll, -sums1[i].first, "target - partial sum");
#!/bin/bash
# Read-only verification: locate unchecked MITM long long arithmetic.
rg -nP 'long long s = 0|s \+= static_cast<long long>\(arr\[start \+ j\]\)|target_ll = static_cast<long long>\(target\)|need = target_ll - sums1\[i\]\.first' Subset_Sum.H

Based on learnings, "Memory leaks or undefined behavior block merge in code review".

Also applies to: 336-340

🧹 Nitpick comments (1)
README.md (1)

2806-2862: Nice to have: reduce duplicated DP inventories to avoid doc drift.

These three sections repeat overlapping API/header/example lists. Consider one canonical list with links from the other sections.

Also applies to: 3415-3423, 3640-3646

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 2806 - 2862, The README duplicates DP
API/header/example inventories across multiple sections (the table listing
Header/Scope, the "Covered DP APIs" list, and the "DP Examples" list shown here
and also at the other two locations), so consolidate them into a single
canonical section containing the table, the Covered DP APIs list and the DP
Examples list, then replace the other occurrences (the blocks at the other two
locations referenced in the comment) with short links or cross-references to
that canonical section; update labels/titles (e.g., "DP Usage Example", the
table header entries like `Knapsack.H`, `LIS.H`, and API names such as
`knapsack_01`, `longest_increasing_subsequence`, `matrix_chain_order`,
`subset_sum`, `min_weighted_squared_distance_1d`) so the canonical list is
authoritative and the other sections only point to it to avoid drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Tests/dp_optimizations_test.cc`:
- Around line 436-460: Replace the flaky wall-clock timing/assertion with a
CPU-time based measurement: capture std::clock() before and after calling
divide_and_conquer_partition_dp<long long>(groups, n, cost), compute
cpu_elapsed_ms = (end_clock - start_clock) * 1000 / CLOCKS_PER_SEC, and use
EXPECT_LT(cpu_elapsed_ms, 500) (or a slightly lower threshold) instead of
elapsed_ms; update references to start/end/elapsed_ms accordingly so the test
measures process CPU time rather than wall-clock time and is resilient to CI
load.

In `@Tests/subset_sum_test.cc`:
- Around line 39-41: The test uses std::vector<bool> but does not directly
include <vector>, relying on a transitive include; add a direct `#include`
<vector> near the other includes at the top of the test file so
std::vector<bool> is explicitly provided (ensure the include appears alongside
the existing `#include` <cstdint> and `#include` <random> lines).


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3abcdad and 115fba7.

📒 Files selected for processing (13)
  • CMakeLists.txt
  • DP_Optimizations.H
  • Examples/CMakeLists.txt
  • Examples/knapsack_example.cc
  • Examples/lis_example.cc
  • Examples/matrix_chain_example.cc
  • Examples/subset_sum_example.cc
  • Examples/tree_dp_example.cc
  • Matrix_Chain.H
  • README.md
  • Subset_Sum.H
  • Tests/dp_optimizations_test.cc
  • Tests/subset_sum_test.cc
🚧 Files skipped from review as they are similar to previous changes (5)
  • Examples/CMakeLists.txt
  • Examples/matrix_chain_example.cc
  • DP_Optimizations.H
  • Examples/subset_sum_example.cc
  • Examples/tree_dp_example.cc

coderabbitai[bot]
coderabbitai bot previously approved these changes Feb 26, 2026
@coderabbitai
Contributor

coderabbitai bot commented Feb 26, 2026

Note

Docstrings generation - SUCCESS
Generated docstrings and committed to branch dp (commit: 0c4ff1f7f84f08088858458694823c85924a345f)

Docstrings generation was requested by @lrleon.

The following files were modified:

* `Examples/dp_optimizations_example.cc`
* `Examples/knapsack_example.cc`
* `Examples/lis_example.cc`
* `Examples/matrix_chain_example.cc`
* `Examples/subset_sum_example.cc`
* `Examples/tree_dp_example.cc`

These files were ignored:
* `Tests/dp_optimizations_test.cc`
* `Tests/knapsack_test.cc`
* `Tests/lis_test.cc`
* `Tests/matrix_chain_test.cc`
* `Tests/subset_sum_test.cc`
* `Tests/tree_dp_test.cc`

These file types are not supported:
* `CMakeLists.txt`
* `DP_Optimizations.H`
* `Examples/CMakeLists.txt`
* `Knapsack.H`
* `LIS.H`
* `Matrix_Chain.H`
* `README.es.md`
* `README.md`
* `Subset_Sum.H`
* `Tree_DP.H`
* `doxygen_groups.H`
Copilot AI review requested due to automatic review settings February 26, 2026 21:34
@lrleon lrleon review requested due to automatic review settings February 26, 2026 21:34
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (6)
Examples/tree_dp_example.cc (3)

115-125: Avoid magic number in loop; use std::size or a named constant.

The literal 6 is repeated implicitly (array size) and explicitly (loop bound). Using std::size(nodes) (or nodes.size() with std::array) improves maintainability if the tree structure changes.

Suggested fix (assuming std::array adoption)
-  for (int i = 0; i < 6; ++i)
+  for (size_t i = 0; i < nodes.size(); ++i)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/tree_dp_example.cc` around lines 115 - 125, The for-loop uses a
magic literal 6 when iterating over nodes; replace the hard-coded bound with a
size expression like std::size(nodes) (or nodes.size() if nodes is a
std::array/std::vector) to tie the loop to the actual container length. Update
the loop header that references nodes and keep the body using
id_map.id_of(nodes[i]), subtree_sizes[id], max_dist[id], sum_dist[id], and
value_sum_dp.value(nodes[i]) unchanged so iteration adapts if the nodes
container size changes.

79-79: Prefer std::array over C-style array.

Per coding guidelines, use std::array when size is known at compile-time.

Suggested fix
-  G::Node * nodes[] = {n0, n1, n2, n3, n4, n5};
+  std::array<G::Node *, 6> nodes = {n0, n1, n2, n3, n4, n5};

Add the include at the top:

 # include <iostream>
 # include <iomanip>
+# include <array>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/tree_dp_example.cc` at line 79, Replace the C-style array
declaration G::Node * nodes[] = {n0, n1, n2, n3, n4, n5}; with a std::array to
follow guidelines: add the <array> include and declare a std::array<G::Node*, 6>
named nodes initialized with the same elements; update any subsequent uses if
they rely on decay-to-pointer behavior (use nodes.data() where a raw pointer is
required) and keep the name and element order unchanged.

34-34: Documentation mentions rerooting DP but example doesn't demonstrate it.

The @brief states "rerooting DP" walkthrough, but the code only uses Gen_Tree_DP. If rerooting is intended to be showcased, consider adding a Gen_Reroot_DP example; otherwise, update the documentation to reflect what's actually demonstrated.

Also applies to: 48-48

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/tree_dp_example.cc` at line 34, Update the docstring or the example
to match the actual content: either change the `@brief` line in
Examples/tree_dp_example.cc to remove "rerooting" (e.g., only mention
Gen_Tree_DP) or add a short demonstration using Gen_Reroot_DP alongside
Gen_Tree_DP so the file actually shows rerooting; locate the comment with `@brief`
and the example code that constructs/uses Gen_Tree_DP and modify the text or add
a Gen_Reroot_DP usage snippet accordingly so the documentation and examples stay
consistent.
Examples/knapsack_example.cc (1)

114-123: Formatting issue: namespace closing brace and Doxygen comment are concatenated.

The closing brace of the anonymous namespace and the Doxygen block for main() are joined on the same line, which hurts readability.

🔧 Suggested formatting fix
-} /**
+}
+
+/**
  * `@brief` Runs a console demonstration of multiple knapsack problem variants.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/knapsack_example.cc` around lines 114 - 123, The anonymous namespace
closing brace is concatenated with the Doxygen comment for main(); separate them
so the namespace closing brace (`}` for the anonymous namespace) stands alone on
its own line followed by a blank line (optional) and then the Doxygen comment
block for `main()` begins on the next line; ensure the `main()` Doxygen comment
is not inline with the `}` and that the `main()` function signature (`int
main(...)`) remains immediately after its Doxygen block.
Examples/lis_example.cc (1)

81-155: Consider extracting a small scenario runner to remove repetition.

The five blocks share near-identical print/setup/report scaffolding. A helper/lambda would reduce duplication and make adding new DP scenarios easier.

As per coding guidelines "Refactor duplicated code into reusable functions or templates".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/lis_example.cc` around lines 81 - 155, The repeated example blocks
(scenarios A–E) duplicate setup/printing logic; extract a small helper (free
function or lambda) — e.g., run_scenario(label, seq, compute_fn,
print_one_label) — that encapsulates the calls to print_rule(),
print_array("Input", ...), invoking compute_fn (which will call
longest_increasing_subsequence, longest_nondecreasing_subsequence or lis_length
as needed), printing the returned .subsequence and .length (using print_array
and std::setw output), and final print_rule(); then replace each block with a
call to run_scenario passing the appropriate label, Array<int> sequence, and
function object (or std::function) for the specific LIS/LNDS variant (including
the comparator overloads used in Scenario E such as Aleph::greater<int>).
Examples/dp_optimizations_example.cc (1)

102-107: Consider safer unsigned reverse-loop pattern for maintainability.

The loop for (size_t g = groups; g >= 2; --g) is correct but fragile: it's easy to break accidentally during future edits (e.g., changing >= 2 to >= 0 makes the condition always true for an unsigned counter, introducing a silent infinite loop). Use the post-decrement-in-condition idiom to eliminate this boundary pitfall:

Refactor suggestion
-    for (size_t g = groups; g >= 2; --g)
+    for (auto g = groups; g-- > 1;)
       {
         const size_t k = res.split[g][i];
         bounds.append(k);
         i = k;
       }

This pattern is the standard guard for unsigned reverse loops and has been used elsewhere in the codebase (e.g., tpl_arrayHeap.H). Also note that similar patterns in grid.H:225 exhibit this same fragility—consider applying the same refactoring there as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/dp_optimizations_example.cc` around lines 102 - 107, Replace the
fragile unsigned reverse loop "for (size_t g = groups; g >= 2; --g)" with the
post-decrement-in-condition idiom used elsewhere: "for (size_t g = groups; g-- >
1; )" and adjust the body to account for the fact that g is decremented before
use (e.g. replace uses of res.split[g] with res.split[g+1] or otherwise shift
indexing so the loop visits the same g values), keeping the same logic in
bounds.append(k) and updates to i; apply the same pattern to similar loops
(e.g., in grid.H at the noted site) for maintainability.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Examples/lis_example.cc`:
- Around line 67-76: The Doxygen comment for main is currently appended to a
closing brace ("} /**"), which can confuse parsers; move the docblock so the
closing brace stands alone and the "/**" of the main function comment starts on
the next line (i.e., separate the namespace/closing '}' and the Doxygen block
for main), ensuring the /**...*/ block immediately precedes the main function
definition (symbol: main) on its own line so Doxygen emits no warnings.


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 115fba7 and 0c4ff1f.

📒 Files selected for processing (6)
  • Examples/dp_optimizations_example.cc
  • Examples/knapsack_example.cc
  • Examples/lis_example.cc
  • Examples/matrix_chain_example.cc
  • Examples/subset_sum_example.cc
  • Examples/tree_dp_example.cc
🚧 Files skipped from review as they are similar to previous changes (2)
  • Examples/subset_sum_example.cc
  • Examples/matrix_chain_example.cc

coderabbitai[bot]
coderabbitai bot previously approved these changes Feb 26, 2026
Refactor C++ test code for clarity and maintainability.
Add missing Doxygen tags in DP_Optimizations.H.
Remove outdated or redundant markdown documentation.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
Tests/dp_optimizations_test.cc (1)

437-460: ⚠️ Potential issue | 🟠 Major

Must fix: make this perf assertion opt-in to avoid non-deterministic CI failures.

Even with CPU time, Line 458’s fixed threshold is machine/profile dependent and can still flap across CI runners.

🔧 Proposed hardening
 # include <limits>
 # include <random>
 # include <chrono>
 # include <numeric>
 # include <ctime>
+# include <cstdlib>
@@
 TEST(DPOptimizations, DivideConquerPartitionPerf)
 {
+  if (std::getenv("ALEPH_RUN_PERF_TESTS") == nullptr)
+    GTEST_SKIP() << "Set ALEPH_RUN_PERF_TESTS=1 to run performance-sensitive checks.";
+
   // Deterministic performance check for D&C DP optimizer.

As per coding guidelines, “Tests must be deterministic and repeatable.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/dp_optimizations_test.cc` around lines 437 - 460, The perf test
DivideConquerPartitionPerf uses a fixed CPU-time threshold (cpu_elapsed_ms /
EXPECT_LT) which can flake on CI; make the test opt-in by skipping it unless a
runtime flag/env var is present (e.g., check getenv("RUN_PERF_TESTS") or a GTest
flag) and call GTEST_SKIP() early when not enabled so timing and the EXPECT_LT
on cpu_elapsed_ms only run when opted-in; keep the existing timing around
divide_and_conquer_partition_dp<long long>(groups, n, cost) and the EXPECT_LT
assertion unchanged when the opt-in is true.
🧹 Nitpick comments (1)
Tests/dp_optimizations_test.cc (1)

461-463: Nice to have: tighten the explanatory comment for expected optimum.

The note mentions “split into n intervals,” but this test splits into groups intervals. Clarifying that avoids confusion for future maintenance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Tests/dp_optimizations_test.cc` around lines 461 - 463, The explanatory
comment after the EXPECT_EQ should be tightened to reference the actual
variables used: clarify that we split the range into "groups" intervals (not
"n"), so change the comment to mention groups and that the expected optimal cost
is roughly n^2 / groups given this cost model; update the comment next to
EXPECT_EQ(res.optimal_cost, cost(0, n) / groups) and reference the variables
res.optimal_cost, cost(0, n), n, and groups for clarity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Tests/subset_sum_test.cc`:
- Around line 71-77: In valid_selection, replace the std::vector<bool> seen
initialization with the Aleph equivalent (use the repository's Aleph bit/vector
container that provides boolean indexing) and update any includes to pull in the
appropriate Aleph header; keep the same allocation size and boolean access
semantics (the variable seen and its uses in valid_selection should remain
identical), ensuring the code uses Aleph::... (or aleph::...) container instead
of std::vector<bool>.


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0c4ff1f and 9bb750d.

📒 Files selected for processing (12)
  • DP_Optimizations.H
  • Examples/k_shortest_paths_example.cc
  • README.md
  • Tests/dp_optimizations_test.cc
  • Tests/subset_sum_test.cc
  • docs/ERROR_HANDLING.md
  • docs/GEOMETRY_MODULE_GUIDE.md
  • docs/GITHUB_PAGES_DOXYGEN.md
  • docs/RANGES_GUIDE.md
  • docs/TIKZGEOM_GUIDE.md
  • docs/doxygen_warnings.md
  • docs/header_audit.md
💤 Files with no reviewable changes (7)
  • docs/RANGES_GUIDE.md
  • docs/TIKZGEOM_GUIDE.md
  • docs/GEOMETRY_MODULE_GUIDE.md
  • docs/header_audit.md
  • docs/doxygen_warnings.md
  • docs/ERROR_HANDLING.md
  • docs/GITHUB_PAGES_DOXYGEN.md
✅ Files skipped from review due to trivial changes (1)
  • Examples/k_shortest_paths_example.cc
🚧 Files skipped from review as they are similar to previous changes (1)
  • DP_Optimizations.H

coderabbitai[bot]
coderabbitai bot previously approved these changes Feb 26, 2026
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 31 out of 31 changed files in this pull request and generated 11 comments.

Refactor Tree_Topology for clarity and add extensive Doxygen comments to Tree_DP classes.
Also, replace std::vector<bool> with BitArray in subset_sum_test.cc.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
DP_Optimizations.H (1)

388-393: ⚠️ Potential issue | 🔴 Critical

Must fix: overflow-sensitive arithmetic is still unsafe in CHT/Li Chao paths

Line 388 and Line 531 compute with PromotedT but then narrow back to T without range checks; out-of-range narrowing can corrupt ordering/results. Also, Line 404 performs subtraction in T before widening, so overflow can happen before conversion. The same narrowing risk appears at Line 784, Line 785, and Line 793.

#!/bin/bash
set -euo pipefail

file="$(fd -a '^DP_Optimizations\.H$' | head -n1)"
echo "Inspecting: ${file}"

echo "=== Risky narrowing and pre-widen arithmetic sites ==="
rg -n 'value_at\(|intersection_x|static_cast<T>\(|px \* px|-2\)\s*\*\s*px' "$file"

echo
echo "Expected verification outcome:"
echo "- value_at() returns narrowed T from promoted intermediates"
echo "- intersection_x subtracts in T before cast to long double"
echo "- weighted squared-distance helper narrows promoted expressions to T"

Also applies to: 404-405, 531-536, 784-786, 793-793

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@DP_Optimizations.H` around lines 388 - 393, All arithmetic that currently
promotes to PromotedT then narrows back to T (e.g., value_at) or that performs
subtraction in T before widening (e.g., intersection_x) must be done entirely in
the wider type and only narrowed with explicit bounds checking or clamping if a
T result is required; update value_at to compute the expression in PromotedT and
either return PromotedT or, if API must return T, clamp the final PromotedT
result to numeric_limits<T>::min/max before static_cast<T>; likewise change
intersection_x to cast operands to long double/PromotedT before subtracting to
avoid pre-widen overflow; apply the same pattern to the
weighted-squared-distance helper (the px*px, (px-2)*px sites) and the other
occurrences noted so that no arithmetic is performed in T that can overflow and
all narrowings are guarded by explicit range checks or API-level type changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Tree_DP.H`:
- Around line 94-95: The public accessor children() currently returns the
undirected adjacency stored in adj_, causing a mismatch with the API contract;
either rename the accessor to neighbors() and update docs/usages to reflect
undirected adjacency, or change the stored topology so adj_ contains directed
children after rooting (filter out parent_ when building/storing adj_ or when
returning children via children()); update the symbol references adj_, parent_,
and children() accordingly and adjust any docs/comments mentioning "children" to
match the chosen behavior.


ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9bb750d and affce97.

📒 Files selected for processing (4)
  • DP_Optimizations.H
  • Tests/dp_optimizations_test.cc
  • Tests/subset_sum_test.cc
  • Tree_DP.H
✅ Files skipped from review due to trivial changes (1)
  • Tests/subset_sum_test.cc

coderabbitai[bot]
coderabbitai bot previously approved these changes Feb 26, 2026
Refactor DP optimization and tree DP headers, enhancing readability and consistency.
Update examples for knapsack, LIS, matrix chain, subset sum, and tree DP to align with new structures and use more modern C++ practices (e.g., range-based for loops, explicit loop bounds).
Address C++ casting best practices with clamped_cast.
Normalize main function signatures across examples.
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
Examples/lis_example.cc (1)

75-88: Consider constraining the template parameter with a concept.

The Fn parameter is unconstrained. Per coding guidelines, template parameters should use concepts for clearer constraints and better error messages at instantiation time.

♻️ Proposed fix using std::invocable
-  template <typename Fn>
+  template <std::invocable<const Array<int>&> Fn>
   void run_scenario(const char * label,
                     const Array<int> & seq,
                     Fn compute_fn,
                     const char * res_label,
                     const char * len_label)

This requires adding #include <concepts> at the top of the file.

As per coding guidelines: "Use concepts to constrain template parameters" and "Use requires clauses for complex constraints".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Examples/lis_example.cc` around lines 75 - 88, The template parameter Fn in
run_scenario is unconstrained; constrain it with a concept such as
std::invocable to ensure compute_fn is callable with (const Array<int>&) and to
produce clearer error messages: add `#include <concepts>` and change the template
signature to require Fn to be invocable with const Array<int>& (or use a
requires clause expressing the same) so the compiler enforces that
compute_fn(seq) is valid; update references to Fn/compute_fn accordingly.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between affce97 and b1ea26d.

📒 Files selected for processing (10)
  • DP_Optimizations.H
  • Examples/dp_optimizations_example.cc
  • Examples/knapsack_example.cc
  • Examples/lis_example.cc
  • Examples/matrix_chain_example.cc
  • Examples/subset_sum_example.cc
  • Examples/tree_dp_example.cc
  • Tests/subset_sum_test.cc
  • Tree_DP.H
  • grid.H
🚧 Files skipped from review as they are similar to previous changes (2)
  • Examples/tree_dp_example.cc
  • Examples/knapsack_example.cc

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 32 out of 32 changed files in this pull request and generated 3 comments.

Comment on lines +225 to 227
for (size_t k = i; k-- > 0; )
delete [] map[k];
delete [] map;

Copilot AI Feb 26, 2026


In this exception cleanup block, map[i] is not deleted when the exception happens after map[i] = new Node*[width] succeeds (e.g., insert_node/insert_arc throws inside the inner loop). The current loop only deletes rows [0, i) and leaks the partially built map[i] row. Consider initializing map entries to nullptr and deleting map[i] as well (when allocated), or switching the whole allocation to RAII containers to make this exception-safe.

Comment on lines +327 to +339
auto sums1 = subset_sum_detail::enumerate_sums(values, 0, half1);
auto sums2 = subset_sum_detail::enumerate_sums(values, half1, half2);

// sort sums2 by sum value
introsort(sums2, [](const auto & a, const auto & b)
{
return a.first < b.first;
});

const long long target_ll = static_cast<long long>(target);
for (size_t i = 0; i < sums1.size(); ++i)
{
const long long need = target_ll - sums1[i].first;

Copilot AI Feb 26, 2026


subset_sum_mitm() (and enumerate_sums()) accumulates subset sums into long long and also casts target to long long. Since the template accepts any std::integral T, passing a 64-bit unsigned type (or wider integral types) can overflow these casts and silently produce incorrect results. Either constrain MITM to a bounded signed range (e.g., std::signed_integral with sizeof(T) <= sizeof(long long)), or switch the internal sum type to a safe promoted type (e.g., __int128 when available) and validate conversions.

