Add adversarial game search framework with Negamax and Alpha-Beta engines#49
… and adversarial search support
Adds the Iterative Deepening A* (IDA*) search algorithm, including its domain contract and result structure. Introduces generic move ordering support with killer and history heuristics. Updates backtracking and branch-and-bound to optionally use visited state tracking. Includes an IDA* grid pathfinding example.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough
Adds a complete implicit state-space search framework: core contracts and stats, move-ordering and history/killer heuristics, a generic transposition table, backtracking DFS, IDA*, branch-and-bound optimizer, adversarial engines (Negamax, Alpha-Beta) with iterative deepening/aspiration and tracing, many examples/tests, CMake integration, and documentation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant AlphaBeta as Alpha_Beta<Domain>
    participant Domain
    participant TT as Transposition_Table
    participant Tracer as AdversarialSearchTracer
    Client->>AlphaBeta: search(root_state, policy, limits)
    AlphaBeta->>TT: probe(state_key, depth)
    TT-->>AlphaBeta: hit (Exact/Lower/Upper) or miss
    AlphaBeta->>Tracer: Event(NodeEntry)
    alt transposition exact hit
        AlphaBeta->>Tracer: Event(TT_Hit)
        AlphaBeta-->>Client: return cached result
    else expand node
        AlphaBeta->>Domain: for_each_successor(state)
        Domain-->>AlphaBeta: successor move(s)
        AlphaBeta->>Domain: apply(move)
        AlphaBeta->>AlphaBeta: search_node(child, -beta, -alpha)
        AlphaBeta->>Domain: undo(move)
        alt cutoff
            AlphaBeta->>Tracer: Event(AlphaBetaCutoff)
            AlphaBeta->>TT: store(key, entry with Bound=Lower/Upper)
        else no cutoff
            AlphaBeta->>TT: store(key, entry with Bound=Exact)
        end
    end
    AlphaBeta->>Tracer: Event(NodeExit)
    AlphaBeta-->>Client: result (value + PV)
```
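The probe/store flow in the diagram can be sketched as a self-contained toy. The real `Alpha_Beta<Domain>` API in this PR also carries stats, tracing, and PV extraction; `Node`, `TTEntry`, the global table, and the explicit game tree below are illustrative stand-ins only.

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Bound classification matches the diagram: Exact inside the window,
// Lower after a beta cutoff, Upper when the node failed low.
enum class Bound { Exact, Lower, Upper };

struct TTEntry { int value; int depth; Bound bound; };

// Toy explicit game tree; the PR's engines expand states implicitly.
struct Node { std::uint64_t key; int value; std::vector<Node> children; };

std::unordered_map<std::uint64_t, TTEntry> tt;

int alpha_beta(const Node &n, int depth, int alpha, int beta)
{
  // Probe: a cached entry is usable only if it was searched at least
  // as deep as we need now.
  if (auto it = tt.find(n.key); it != tt.end() && it->second.depth >= depth)
    {
      const TTEntry &e = it->second;
      if (e.bound == Bound::Exact) return e.value;
      if (e.bound == Bound::Lower && e.value >= beta) return e.value;
      if (e.bound == Bound::Upper && e.value <= alpha) return e.value;
    }

  if (depth == 0 || n.children.empty())
    return n.value; // leaf evaluation

  const int alpha_orig = alpha;
  int best = -1000000; // toy sentinel; the real engine uses a safe minimum
  for (const Node &child : n.children)
    {
      best = std::max(best, -alpha_beta(child, depth - 1, -beta, -alpha));
      alpha = std::max(alpha, best);
      if (alpha >= beta) break; // cutoff
    }

  // Store: classify the result relative to the original window.
  Bound b = best <= alpha_orig ? Bound::Upper
          : best >= beta       ? Bound::Lower
                               : Bound::Exact;
  tt[n.key] = TTEntry{best, depth, b};
  return best;
}
```

A second search of the same position is then answered from the table, which is exactly the "transposition exact hit" branch of the diagram.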
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a7c465e9f9
Pull request overview
This PR introduces a reusable implicit state-space search framework (DFS/backtracking, branch-and-bound, adversarial Negamax/Alpha-Beta), plus shared move ordering and generic transposition table support, complemented by docs, examples, and tests.
Changes:
- Added common search vocabulary (`SearchResult`, limits/stats/policy, solution/incumbent helpers, domain concepts) and shared move-ordering utilities (killer/history tables + batch sorting).
- Added generic `Transposition_Table` plus framework docs/examples and new gtest coverage; wired the new headers into `State_Search.H` and install/CMake plumbing.
- Added an IDA* engine and a grid example, and added CTest-based validation for reference examples.
Reviewed changes
Copilot reviewed 35 out of 35 changed files in this pull request and generated 7 comments.
Summary per file:
| File | Description |
|---|---|
| state_search_common.H | Defines shared search types (limits/stats/policy/result), solution tracking, and core domain/move/state concepts. |
| search_move_ordering.H | Adds shared move-ordering vocabulary, stats, killer/history tables, and sort helper. |
| Transposition_Table.H | Implements a generic hash-backed transposition/memo table with replacement policy + stats. |
| Backtracking.H | Adds a reusable depth-first backtracking engine built on the common vocabulary. |
| Branch_And_Bound.H | Adds reusable DFS/best-first branch-and-bound with incumbent and optional bound-based ordering. |
| State_Search_IDA_Star.H | Adds an IDA* engine layered on the shared backtracking-style domain contract. |
| State_Search.H | Umbrella include + Aleph::Search facade wiring together the framework modules. |
| docs/state_search_framework.md | Documents module layout and the core DFS/backtracking domain contract and usage. |
| docs/move_ordering_framework.md | Documents move-ordering modes and heuristics integration points. |
| docs/branch_and_bound_framework.md | Documents branch-and-bound domain contract, policies, and usage. |
| docs/adversarial_search_framework.md | Documents adversarial domain contract, move ordering, TT integration, and iterative deepening. |
| docs/transposition_table_framework.md | Documents TT key validity, entry semantics, replacement, and tradeoffs. |
| README.md | Adds a top-level section describing the state-space search framework and how to run examples/tests. |
| README.es.md | Spanish version of the new README section. |
| Tests/state_search_framework_test.cc | Adds tests for the common types and the DFS/backtracking engine behavior. |
| Tests/branch_and_bound_test.cc | Adds tests covering branch-and-bound strategies and ordering behavior. |
| Tests/adversarial_search_test.cc | Adds tests for Negamax/Alpha-Beta, ordering, TT integration, iter-deepening, aspiration, and tracing. |
| Tests/transposition_table_test.cc | Adds unit tests for TT replacement behavior, stats, and collision tolerance. |
| Examples/state_search_framework_example.cc | Adds a minimal example for DFS/backtracking + solution enumeration. |
| Examples/backtracking_nqueens_example.cc | Adds N-Queens example using DFS/backtracking. |
| Examples/backtracking_subset_sum_example.cc | Adds subset-sum example demonstrating domain-side pruning and enumeration. |
| Examples/branch_and_bound_artificial_example.cc | Adds a small artificial B&B example comparing strategies. |
| Examples/branch_and_bound_knapsack_example.cc | Adds 0/1 knapsack example with B&B (best-first). |
| Examples/branch_and_bound_assignment_example.cc | Adds assignment problem example with B&B (best-first). |
| Examples/adversarial_artificial_example.cc | Adds adversarial example comparing Negamax vs Alpha-Beta. |
| Examples/alpha_beta_connect3_example.cc | Adds Connect-3 example showing Alpha-Beta ordering effects. |
| Examples/negamax_tictactoe_example.cc | Adds Tic-Tac-Toe example with TT and iterative Alpha-Beta + tracing. |
| Examples/ida_star_grid_example.cc | Adds an IDA* grid pathfinding reference example. |
| Examples/verify_reference_example.cmake | Adds a helper script to validate example output in CTest. |
| Examples/CMakeLists.txt | Registers new CTest validations for selected reference examples and benchmark validate run. |
| CMakeLists.txt | Installs the new public headers and adjusts testing enablement during configuration. |
Actionable comments posted: 10
🧹 Nitpick comments (14)
README.md (1)
3630-3633: Nice to have: align the roadmap sentence with what this PR already ships.
`Negamax.H` in this PR already exposes `iterative_deepening_negamax_search(...)`, so keeping iterative deepening in the "remaining roadmap" list undersells the current API.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 3630-3633, update the roadmap sentence to reflect that iterative deepening is already shipped by this PR: remove "iterative deepening" from the remaining roadmap list and, if useful, add a parenthetical noting that Negamax.H exposes iterative_deepening_negamax_search(...) so readers know it is included in the current release; keep the rest of the listed items unchanged.

Negamax.H (1)
54-58: Should fix: add the missing `<optional>` include.
`Negamax.H` uses `std::optional` at lines 239, 591, and 611, but the header does not include `<optional>` itself. As a public header, it should not rely on transitive includes for that.

Patch

```diff
 #include <concepts>
 #include <limits>
+#include <optional>
 #include <type_traits>
 #include <utility>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Negamax.H` around lines 54-58, Negamax.H is missing the direct include for <optional> even though it uses std::optional in several places (occurrences of std::optional around the file); add `#include <optional>` alongside the other headers (e.g., with <concepts>, <limits>, <type_traits>, <utility>) so the public header doesn't rely on transitive includes and all uses of std::optional in this header compile correctly.

Examples/CMakeLists.txt (1)
60-95: Should fix: don't silently skip these reference-example tests.
Lines 60-95 turn a missing or renamed fixture target into a green configure, because the whole `add_test()` block is skipped. Since these examples are the regression surface for the new search framework, missing targets should fail fast instead of quietly dropping coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/CMakeLists.txt` around lines 60-95, the configure currently silently skips reference-example tests by wrapping each add_test() in if(TARGET ...), so missing or renamed targets don't fail CI; change each guarded block to explicitly error when the expected targets are absent (e.g., check for TARGET and call a fatal/configure error) so that missing targets like backtracking_nqueens_example, branch_and_bound_knapsack_example, negamax_tictactoe_example and state_search_benchmark cause an immediate configure failure rather than silently omitting the add_test() calls; update the checks around add_test() to validate target presence and fail fast.

docs/state_search_framework.md (1)
204-207: Nice to have — Reword one bullet to avoid repetitive sentence starts.
The three consecutive bullets starting with "Add …" read a bit repetitively; a small rephrase would improve flow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/state_search_framework.md` around lines 204-207, reword one of the consecutive bullets that start with "Add …" to avoid repetitive sentence openings; for example change the third bullet "Add branch-and-bound memoization once incumbent-aware reuse rules are explicit." to a variant like "Introduce branch-and-bound memoization once incumbent-aware reuse rules are explicit." or "Enable branch-and-bound memoization once incumbent-aware reuse rules are explicit." Update the corresponding bullet in the list so the three items no longer all begin with "Add".

Examples/backtracking_subset_sum_example.cc (1)
64-71: Should fix — Make the pruning precondition explicit (values >= 0).
The pruning in `SubsetSumDomain::should_prune` is correct only when all `values_` are non-negative. Please either validate this in the constructor or document the precondition next to the domain to avoid silent false negatives when negatives are used.

As per coding guidelines: "Validate input preconditions (e.g., non-negative weights for Dijkstra)".
Also applies to: 83-86
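A minimal sketch of the suggested constructor validation. The class layout and member names here follow the comment (`SubsetSumDomain`, `values_`, `target_`), not necessarily the example file, and `remaining_from` is an illustrative helper.

```cpp
#include <cstddef>
#include <numeric>
#include <stdexcept>
#include <vector>

class SubsetSumDomain
{
  std::vector<long> values_;
  long target_;

public:
  // Precondition: every value must be >= 0; otherwise suffix-sum
  // pruning can wrongly discard feasible branches.
  SubsetSumDomain(std::vector<long> values, long target)
    : values_(std::move(values)), target_(target)
  {
    for (long v : values_)
      if (v < 0)
        throw std::invalid_argument("SubsetSumDomain requires values >= 0");
  }

  long target() const { return target_; }

  // Sum of the values still available from `index` onward; this is the
  // quantity that the precondition keeps meaningful for pruning.
  long remaining_from(std::size_t index) const
  {
    return std::accumulate(values_.begin() + index, values_.end(), 0L);
  }
};
```

Throwing in the constructor keeps the invariant checked once, so `should_prune` can stay branch-free on the hot path.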
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/backtracking_subset_sum_example.cc` around lines 64-71, the pruning logic in SubsetSumDomain::should_prune assumes all values_ are non-negative but the constructor (SubsetSumDomain) does not enforce or document this; update the constructor to validate the precondition by checking that every element of values_ is >= 0 (or alternatively add a clear comment on the class/domain and function should_prune documenting "requires values >= 0"), and if you choose validation, throw or assert (consistent with project error handling) when a negative value is found; reference SubsetSumDomain, values_, suffix_remaining_, target_, and should_prune when making the change so the check/documentation is colocated with the existing suffix_remaining_ computation and the pruning implementation.

docs/adversarial_search_framework.md (1)
249-251: Minor style: Consider rewording "very large".
Static analysis flagged "very large" as an overused intensifier. Consider alternatives like "extensive" or "deep" searches, though this is purely stylistic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/adversarial_search_framework.md` around lines 249-251, the phrase "very large searches" in the documentation line should be reworded to avoid the overused intensifier; update the sentence that currently reads "It is not meant to be enabled indiscriminately on very large searches." (search for the exact phrase "very large searches") and replace with a clearer alternative such as "extensive searches" or "deep searches" so the intent remains the same but the wording is stylistically improved.

Examples/branch_and_bound_knapsack_example.cc (1)
106-125: The fractional bound is valid but not optimal without sorting by value density.
The `bound()` function computes a fractional relaxation by greedily taking items in their original order. For the tightest upper bound (which maximizes pruning), items should be processed in decreasing value/weight ratio order. With the current unsorted items, the bound is still valid (it never underestimates the optimum) but may be looser than necessary.
For this small example it works fine, but a comment noting this, or sorting items in the constructor, would improve the bound quality for larger instances.
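A sketch of the tighter bound under the stated assumption: items are sorted by decreasing value/weight ratio before the greedy pass, so only the last touched item is taken fractionally. `Item` and `fractional_bound` are illustrative names, not the example's actual API.

```cpp
#include <algorithm>
#include <vector>

struct Item { double value; double weight; };

// Greedy fractional relaxation of 0/1 knapsack. Sorting by density
// makes this the tightest bound a greedy pass can produce; it still
// never underestimates the true 0/1 optimum.
double fractional_bound(std::vector<Item> items, double capacity,
                        double value_so_far)
{
  std::sort(items.begin(), items.end(),
            [](const Item &a, const Item &b)
            { return a.value / a.weight > b.value / b.weight; });

  double bound = value_so_far;
  for (const Item &it : items)
    {
      if (capacity <= 0) break;
      double take = std::min(it.weight, capacity); // last item may be fractional
      bound += it.value * (take / it.weight);
      capacity -= take;
    }
  return bound;
}
```

Sorting once in the constructor (rather than on every `bound()` call) keeps the per-node cost linear.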
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/branch_and_bound_knapsack_example.cc` around lines 106-125, the fractional bound computed in bound(const State &state) uses items_ in original order, producing a valid but potentially loose upper bound; modify the code so items_ are sorted once by decreasing value/weight ratio (value-density) before branch-and-bound runs (e.g., in the class constructor or an init method) so bound() iterates over items_ in density order, or alternatively add a clear comment in bound() noting the current greedy order and the benefit of sorting; reference the bound(), items_, capacity_, State and Objective symbols when making the change.

Examples/backtracking_nqueens_example.cc (1)
147-153: Minor: `signature()` assumes single-digit column indices.
The signature function casts queen positions directly to characters via `'0' + state.queens[row]`. For board sizes n > 9, this produces non-digit characters. Since this is a reference example with n=4, this is acceptable, but a comment noting the assumption would help readers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/backtracking_nqueens_example.cc` around lines 147-153, the signature(const NQueensState &state) function assumes single-digit column indices by using '0' + state.queens[row], which breaks for n > 9; either replace the char arithmetic with a safe conversion (e.g., append std::to_string(state.queens[row]) for each row) to support multi-digit columns, or add a concise comment above signature noting the single-digit (n <= 9) assumption; reference NQueensState, its members n and queens, and the signature function when making the change.

Tests/state_search_framework_test.cc (1)
276-283: Template function `path_signature` assumes `Move` has a `label` member.
This template works for `ArtificialDecisionTreeDomain::Move`, which has `label`, but won't compile for other move types. Consider either:
- Keeping it non-templated since it's only used with one type, or
- Adding a concept constraint
This is minor since it's in an anonymous namespace for tests only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/state_search_framework_test.cc` around lines 276-283, the template path_signature assumes Move has a member label and thus fails for other Move types; to fix, either (A) make it non-templated by changing its signature to accept SearchPath<ArtificialDecisionTreeDomain::Move> (since tests only use that type) and remove the template parameter, or (B) add a C++20 concept/constraint to the template (e.g., require that Move has a char-like label member) and apply it to template<typename Move> std::string path_signature(const SearchPath<Move>&) so compilation errors are clear; update references to path_signature and keep it in the anonymous namespace.

Examples/state_search_benchmark.cc (2)
533-533: Intentional non-standard move ordering for Tic-Tac-Toe.
The move ordering `{1, 3, 5, 7, 0, 2, 6, 8, 4}` prioritizes edges over corners and center, which is intentionally suboptimal. This is fine for benchmarking move-ordering improvements, but a brief comment explaining this design choice would help future readers understand why center (cell 4) is explored last.

📝 Suggested documentation

```diff
 template <typename Visitor>
 bool for_each_successor(const State &state, Visitor visit) const {
+  // Intentionally suboptimal order (edges first, center last) to highlight
+  // move-ordering improvements in benchmark comparisons.
   static constexpr size_t order[] = {1, 3, 5, 7, 0, 2, 6, 8, 4};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/state_search_benchmark.cc` at line 533, Add a brief explanatory comment above the static constexpr size_t order[] definition (symbol: order) stating that the move ordering {1,3,5,7,0,2,6,8,4} intentionally prioritizes edges over corners and center to create a suboptimal ordering for benchmarking move-ordering improvements, and explicitly note that cell 4 (center) is explored last for that reason so future readers understand this design choice.
127-132: Consider validating numeric argument parsing.
The `std::stoull` calls can throw `std::invalid_argument` or `std::out_of_range` if the user provides malformed input. While this is an example/benchmark tool, adding basic error handling would improve robustness.

🛡️ Suggested improvement

```diff
 else if (arg == "--repetitions" and i + 1 < argc)
-  options.repetitions = static_cast<size_t>(std::stoull(argv[++i]));
+  {
+    try {
+      options.repetitions = static_cast<size_t>(std::stoull(argv[++i]));
+    } catch (const std::exception &) {
+      std::cerr << "Invalid value for --repetitions\n";
+      std::exit(1);
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Examples/state_search_benchmark.cc` around lines 127-132, the numeric parsing using std::stoull for options.repetitions, options.warmup, and options.samples can throw exceptions for malformed input; wrap each std::stoull call in a try-catch that catches std::invalid_argument and std::out_of_range, log a clear error referencing the offending flag (e.g., "--repetitions", "--warmup", "--samples") and the provided value, and then either exit with a non-zero status or set a safe default; alternatively factor parsing into a helper like ParseSizeTOrFail(const char* str, const char* flag) that does the try/catch and returns the parsed size_t, and replace the direct std::stoull calls with that helper to centralize validation.

Tests/branch_and_bound_test.cc (1)
711-711: Trivially true assertion.
`EXPECT_GE(best_result.stats.pruned_by_bound, 0u)` is always true for an unsigned type. If best-first is expected to prune, consider `EXPECT_GT` instead. If pruning is not guaranteed, remove this assertion or add a comment explaining the intent.

🔧 Suggested fix

```diff
-  EXPECT_GE(best_result.stats.pruned_by_bound, 0u);
+  // Best-first may or may not prune depending on traversal order.
+  // Just verify the search completes correctly.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Tests/branch_and_bound_test.cc` at line 711, the assertion uses an unsigned counter best_result.stats.pruned_by_bound so EXPECT_GE(..., 0u) is vacuous; either replace EXPECT_GE with EXPECT_GT(best_result.stats.pruned_by_bound, 0u) if you intend to assert that best-first prunes at least one node, or remove the assertion (or add a clarifying comment) if pruning is not guaranteed; update the check in Tests::branch_and_bound_test (the EXPECT_* assertion referencing best_result.stats.pruned_by_bound) accordingly to reflect the true test intent.

Alpha_Beta.H (2)
918-1072: Consider extracting common iterative deepening logic.
The iterative deepening implementations (lines 918-1072 and 1074-1242) share substantial code. While the current approach works correctly, extracting the common aspiration-window retry logic into a helper could reduce duplication.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Alpha_Beta.H` around lines 918 - 1072, The two iterative deepening functions duplicate the aspiration-window retry loop; extract that common logic into a helper (e.g., a function like perform_aspiration_retries or handle_aspiration_window) that takes the engine, initial_state, current depth, center/half_window/options.aspiration, full_alpha/full_beta, tracer and returns the final iteration.result, aspiration_researches, aspiration_alpha and aspiration_beta (and accumulates stats via adversarial_search_detail::accumulate_adversarial_stats when appropriate); replace the inline retry loop in iterative_deepening_alpha_beta_search with a call to this helper and do the same for the other iterative deepening implementation to remove duplication while preserving uses of grow_aspiration_half_window, emit_trace, first_move_of, saturating_add/saturating_subtract and iteration/result field assignments.
1264-1277: Verify domain ownership in iterative deepening helper.
This overload moves `domain` into a new `Alpha_Beta` engine, then calls another overload passing `engine.domain()`. The recursive call receives a reference to the engine's domain, which is correct. However, the lambda captures `&engine` and calls `engine.domain().state_key(state)`, which works but could be clearer by capturing the domain reference directly.
Verify each finding against the current code and only fix it if needed. In `@Alpha_Beta.H` around lines 1264 - 1277, The lambda currently captures &engine to call engine.domain().state_key(state); make this clearer and safer by taking a reference to the engine's domain once (e.g. auto &domain_ref = engine.domain()) and use domain_ref both when calling iterative_deepening_alpha_beta_search and in the lambda capture (e.g. [&domain_ref](const typename Domain::State &state){ return domain_ref.state_key(state); }); keep the Alpha_Beta<Domain> engine creation as-is so the domain_ref lifetime matches the lambda.
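Both capture comments boil down to the same lifetime rule, which the toy sketch below illustrates: a keyer lambda that captures a domain by reference dangles if the domain is later moved elsewhere, while a move-capture (or a reference to the engine's own copy) stays valid. `Domain`, `state_key`, and `run_with_keyer` are stand-ins, not the PR's types.

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <utility>

struct Domain
{
  std::string name;
  std::uint64_t state_key(int state) const
  {
    return std::hash<std::string>{}(name) ^ static_cast<std::uint64_t>(state);
  }
};

template <typename Keyer>
std::uint64_t run_with_keyer(Keyer keyer, int state) { return keyer(state); }

std::uint64_t safe_keyer(Domain domain, int state)
{
  // Move-capture: the lambda owns its Domain copy, so later moves of
  // `domain` (e.g. into an engine) cannot invalidate it.
  auto keyer = [d = std::move(domain)](int s) { return d.state_key(s); };
  return run_with_keyer(keyer, state);
}
```

Capturing `engine.domain()` through a named reference achieves the same thing when the engine outlives the lambda; the move-capture variant is the safer default when ownership is transferred.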
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Backtracking.H`:
- Around line 403-413: The visited-set insertion happens too early: delay
calling visited.insert(domain_.state_key(state)) until after depth/limit/prune
checks and after confirming the node will actually be expanded (i.e., after
updating result.stats.max_depth_reached and checking max_depth/prune
conditions); keep the visited.contains check only to detect true duplicates at
expansion time or move duplicate-goal suppression into the goal-handling branch
(handle suppression explicitly in the goal check in the same function).
Specifically, remove or defer the visited.insert call paired with
visited.contains and only insert the state key once all early-return conditions
have passed and the node is accepted for expansion (apply the same fix for the
analogous block around the other region that mirrors lines 435-448).
In `@Branch_And_Bound.H`:
- Around line 591-598: The code instantiates RankedMove<Move, Objective> and
Solution using member initializers like Priority priority = Priority{} and
Objective objective_value = Objective{}, which imposes an unintended
std::default_constructible requirement on Objective; change these to direct
initialization at use sites instead of relying on default member initializers
(e.g., construct RankedMove 'ranked' by setting its fields explicitly after
declaration or provide a constructor that accepts move/ordinal/priority, and
initialize Solution.objective_value when you have a computed value rather than
via Objective{}), updating symbols RankedMove, Priority, Solution, and
objective_value so Objective only needs to satisfy std::totally_ordered.
In `@Examples/ida_star_grid_example.cc`:
- Around line 51-55: The code uses std::abs when computing the Manhattan
distance (the std::abs call in the Manhattan distance calculation) but the
translation unit does not include the direct header; add the appropriate header
to ensure portability by inserting `#include` <cstdlib> (or `#include` <cmath> if
you prefer floating overloads) at the top of Examples/ida_star_grid_example.cc
so std::abs is declared before use.
In `@Negamax.H`:
- Around line 1043-1047: The negation of child.value into candidate can overflow
when child.value == std::numeric_limits<Score>::lowest(); update the logic in
Negamax (where Score candidate = -child.value, has_best, best.value,
best.principal_variation, prepend_move(move, child.principal_variation)) to
guard this case: detect if child.value equals numeric_limits<Score>::lowest()
and handle it explicitly (e.g., map to numeric_limits<Score>::max() or use a
conditional branch that compares without performing the negation) before
assigning/comparing to best.value and updating principal variation; apply the
identical fix in Alpha_Beta.H where the same pattern occurs.
- Around line 1452-1495: The lambda keyer captures domain by reference but
domain is moved afterwards causing UB; change the capture to own/move the domain
into the lambda (e.g., capture domain by value via a move-capture) so keyer
holds a valid Domain instance when called, and apply the same fix in the
no-tracer overload that forwards here; update the lambda in
iterative_deepening_negamax_search (symbol: keyer, template
iterative_deepening_negamax_search) to use a move-capture of domain instead of
[&domain].
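The overflow guard described in the first Negamax comment above can be isolated into a small helper. Negating `std::numeric_limits<Score>::lowest()` is undefined behavior for two's-complement integer scores, so that sentinel is mapped to `max()` instead of negated. `negate_score` is an illustrative name; the PR may fold the check inline.

```cpp
#include <limits>

// Safe negation for score types whose range is asymmetric
// (e.g. int: -(INT_MIN) overflows).
template <typename Score>
Score negate_score(Score value)
{
  return value == std::numeric_limits<Score>::lowest()
           ? std::numeric_limits<Score>::max()
           : static_cast<Score>(-value);
}
```

Using `-child.value` through such a helper keeps the `candidate = -child.value` pattern in both Negamax and Alpha-Beta overflow-free without changing the comparison logic.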
In `@README.md`:
- Around line 3647-3650: The README currently shows building and running the
state_search_benchmark and tests but the earlier configure example uses
-DBUILD_EXAMPLES=OFF; update the docs to note that the cmake --build target
state_search_benchmark and ctest --test-dir build/Examples -R StateSearch
require examples to be enabled at configure time by adding or replacing the
configure step with cmake -S . -B build -DBUILD_EXAMPLES=ON -DBUILD_TESTS=ON (or
explicitly call out that BUILD_EXAMPLES must be ON before running the shown
commands).
In `@state_search_common.H`:
- Around line 253-257: The mutable accessor to solutions_ is causing
desynchronization with the cached count_; either remove/make solutions() return
a const reference or stop using the cached count_ (count_) entirely and derive
bookkeeping from solutions_. Replace all uses of count_ in operator(), size(),
is_empty(), and the stop-after-N logic (e.g., operator(), size(), is_empty(),
and any code around solutions() access) with solutions_.size() (and compare
against limit_) or update count_ immediately after any external mutation; my
recommendation: make solutions() non-mutable (return const &) and compute stop
condition as (solutions_.size() < limit_) while removing the separate count_
updates to eliminate the desync.
In `@State_Search_IDA_Star.H`:
- Around line 152-159: The goal handling currently always returns immediately
after finding a goal (ignoring stop_at_first_solution and the on_solution
visitor); update the goal-hit logic in the block that creates Solution{state,
path, depth} (and the surrounding code that aborts the current threshold pass
and the outer loop) to respect the stop_at_first_solution flag and the boolean
result from on_solution: if stop_at_first_solution is true or on_solution
indicates to stop, behave as now and return/abort; otherwise do not return and
continue scanning the rest of the current threshold pass (do not set the
outer-loop termination flag or break out early). Ensure you check
stop_at_first_solution and the visitor return value where result.total_cost = g
/ on_solution(solution) is called, and remove/guard the early-abort logic in the
threshold-pass/outer-loop sections so enumeration within the same threshold
continues when stop_at_first_solution == false.
- Around line 162-179: The DFS in State_Search_IDA_Star.H doesn't enforce
SearchLimits::max_depth; add a depth check before expanding: if depth >=
result.limits.max_depth (and max_depth is finite) then increment
result.stats.pruned_by_depth, return {false,
std::numeric_limits<Distance>::max()} just like the other early-return cases;
place this check alongside the existing terminal/prune/limit checks so it uses
the same result/status behavior and references the existing depth parameter and
result.limits.max_depth.
In `@Tests/transposition_table_test.cc`:
- Around line 60-124: Add a deterministic stress test and edge-size cases for
Transposition_Table to exercise heavy-collision behavior and counters: create a
new test that uses CollidingKey (or a custom key that forces identical hashes)
to insert a large number (e.g., hundreds) of distinct keys that collide, verify
each is probe()-reachable and that size() equals the number inserted, then
update some entries to trigger Replaced/Rejected paths and assert stats()
(probes, hits, misses, stores, replacements, rejected_updates) reflect those
operations; also add small edge tests for empty and single-entry tables to
assert is_empty(), size(), probe(nullptr) behavior and stats remain correct.
Ensure tests reference Transposition_Table, store, probe, size, is_empty, and
stats so they cover collision-heavy and edge-size scenarios deterministically.
---
Nitpick comments:
In `@Alpha_Beta.H`:
- Around line 918-1072: The two iterative deepening functions duplicate the
aspiration-window retry loop; extract that common logic into a helper (e.g., a
function like perform_aspiration_retries or handle_aspiration_window) that takes
the engine, initial_state, current depth, center/half_window/options.aspiration,
full_alpha/full_beta, tracer and returns the final iteration.result,
aspiration_researches, aspiration_alpha and aspiration_beta (and accumulates
stats via adversarial_search_detail::accumulate_adversarial_stats when
appropriate); replace the inline retry loop in
iterative_deepening_alpha_beta_search with a call to this helper and do the same
for the other iterative deepening implementation to remove duplication while
preserving uses of grow_aspiration_half_window, emit_trace, first_move_of,
saturating_add/saturating_subtract and iteration/result field assignments.
- Around line 1264-1277: The lambda currently captures &engine to call
engine.domain().state_key(state); make this clearer and safer by taking a
reference to the engine's domain once (e.g. auto &domain_ref = engine.domain())
and use domain_ref both when calling iterative_deepening_alpha_beta_search and
in the lambda capture (e.g. [&domain_ref](const typename Domain::State &state){
return domain_ref.state_key(state); }); keep the Alpha_Beta<Domain> engine
creation as-is so the domain_ref lifetime matches the lambda.
In `@docs/adversarial_search_framework.md`:
- Around line 249-251: The phrase "very large searches" in the documentation
line should be reworded to avoid the overused intensifier; update the sentence
that currently reads "It is not meant to be enabled indiscriminately on very
large searches." (search for the exact phrase "very large searches") and replace
with a clearer alternative such as "extensive searches" or "deep searches" so
the intent remains the same but the wording is stylistically improved.
In `@docs/state_search_framework.md`:
- Around line 204-207: Reword one of the consecutive bullets that start with
"Add …" to avoid repetitive sentence openings; for example change the third
bullet "Add branch-and-bound memoization once incumbent-aware reuse rules are
explicit." to a variant like "Introduce branch-and-bound memoization once
incumbent-aware reuse rules are explicit." or "Enable branch-and-bound
memoization once incumbent-aware reuse rules are explicit." Update the
corresponding bullet in the list so the three items no longer all begin with
"Add".
In `@Examples/backtracking_nqueens_example.cc`:
- Around line 147-153: The signature(const NQueensState &state) function assumes
single-digit column indices by using '0' + state.queens[row], which breaks for n
> 9; either replace the char arithmetic with a safe conversion (e.g., append
std::to_string(state.queens[row]) for each row) to support multi-digit columns,
or add a concise comment above signature noting the single-digit (n <= 9)
assumption; reference NQueensState, its members n and queens, and the signature
function when making the change.
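The first option from the comment can be sketched as follows; `NQueensState` here is a hypothetical stand-in mirroring the example's shape (members `n` and `queens`), and the `.` separator is an assumption added so adjacent multi-digit columns stay unambiguous:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical mirror of the example's NQueensState: queens[row] holds the
// column index of the queen placed in that row.
struct NQueensState
{
  int n = 0;
  std::vector<int> queens;
};

// Digit-safe signature: std::to_string handles multi-digit columns (n > 9),
// and the '.' separator keeps "1,0" distinct from "10".
std::string signature(const NQueensState &state)
{
  std::string out;
  for (std::size_t row = 0; row < state.queens.size(); ++row)
  {
    if (row > 0)
      out += '.';
    out += std::to_string(state.queens[row]);
  }
  return out;
}
```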
In `@Examples/backtracking_subset_sum_example.cc`:
- Around line 64-71: The pruning logic in SubsetSumDomain::should_prune assumes
all values_ are non-negative but the constructor (SubsetSumDomain) does not
enforce or document this; update the constructor to validate the precondition by
checking that every element of values_ is >= 0 (or alternatively add a clear
comment on the class/domain and function should_prune documenting "requires
values >= 0"), and if you choose validation, throw or assert (consistent with
project error handling) when a negative value is found; reference
SubsetSumDomain, values_, suffix_remaining_, target_, and should_prune when
making the change so the check/documentation is colocated with the existing
suffix_remaining_ computation and the pruning implementation.
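The validation option can be sketched like this; the class below only mirrors the example's `SubsetSumDomain` shape (`values_`, `suffix_remaining_`, `target_`, `should_prune`) and is illustrative, not the project's actual implementation:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Sketch: reject negative values at construction so the suffix-remaining
// pruning precondition ("requires values >= 0") provably holds.
class SubsetSumDomain
{
public:
  SubsetSumDomain(std::vector<long> values, long target)
      : values_(std::move(values)), target_(target)
  {
    for (const long v : values_)
      if (v < 0)
        throw std::invalid_argument("SubsetSumDomain requires values >= 0");
    // suffix_remaining_[i] = sum of values_[i..end), colocated with the check.
    suffix_remaining_.assign(values_.size() + 1, 0);
    for (std::size_t i = values_.size(); i-- > 0;)
      suffix_remaining_[i] = suffix_remaining_[i + 1] + values_[i];
  }

  // Valid only because every value is non-negative: the remaining suffix is
  // an upper bound on what can still be added to current_sum.
  bool should_prune(long current_sum, std::size_t index) const
  {
    return current_sum + suffix_remaining_[index] < target_;
  }

private:
  std::vector<long> values_;
  std::vector<long> suffix_remaining_;
  long target_;
};
```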
In `@Examples/branch_and_bound_knapsack_example.cc`:
- Around line 106-125: The fractional bound computed in bound(const State
&state) uses items_ in original order, producing a valid but potentially loose
upper bound; modify the code so items_ are sorted once by decreasing
value/weight ratio (value-density) before branch-and-bound runs (e.g., in the
class constructor or an init method) so bound() iterates over items_ in density
order, or alternatively add a clear comment in bound() noting the current greedy
order and the benefit of sorting; reference the bound(), items_, capacity_,
State and Objective symbols when making the change.
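The sort-once variant can be sketched as below; `Item`, `sort_by_density`, and `fractional_bound` are illustrative names, assuming the classic Dantzig fractional relaxation rather than the example's actual `bound()`/`State` types:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical item mirroring the example's knapsack entries.
struct Item { double value; double weight; };

// Sort once by decreasing value/weight ratio (cross-multiplied to avoid
// division) so the greedy fractional bound below is as tight as possible.
void sort_by_density(std::vector<Item> &items)
{
  std::sort(items.begin(), items.end(), [](const Item &a, const Item &b)
            { return a.value * b.weight > b.value * a.weight; });
}

// Fractional relaxation bound over the items from `first` onward: take whole
// items while they fit, then a fraction of the next one.
double fractional_bound(const std::vector<Item> &items, std::size_t first,
                        double capacity)
{
  double bound = 0.0;
  for (std::size_t i = first; i < items.size() and capacity > 0.0; ++i)
  {
    const double take = std::min(capacity, items[i].weight);
    bound += items[i].value * (take / items[i].weight);
    capacity -= take;
  }
  return bound;
}
```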
In `@Examples/CMakeLists.txt`:
- Around line 60-95: The configure currently silently skips reference-example
tests by wrapping each add_test() in if(TARGET ...), so missing or renamed
targets don't fail CI; change each guarded block to explicitly error when the
expected targets are absent (e.g., check for TARGET and call a fatal/configure
error) so that missing targets like backtracking_nqueens_example,
branch_and_bound_knapsack_example, negamax_tictactoe_example and
state_search_benchmark cause an immediate configure failure rather than silently
omitting the add_test() calls; update the checks around add_test() to validate
target presence and fail fast.
In `@Examples/state_search_benchmark.cc`:
- Line 533: Add a brief explanatory comment above the static constexpr size_t
order[] definition (symbol: order) stating that the move ordering
{1,3,5,7,0,2,6,8,4} intentionally prioritizes edges over corners and center to
create a suboptimal ordering for benchmarking move-ordering improvements, and
explicitly note that cell 4 (center) is explored last for that reason so future
readers understand this design choice.
- Around line 127-132: The numeric parsing using std::stoull for
options.repetitions, options.warmup, and options.samples can throw exceptions
for malformed input; wrap each std::stoull call in a try-catch that catches
std::invalid_argument and std::out_of_range, log a clear error referencing the
offending flag (e.g., "--repetitions", "--warmup", "--samples") and the provided
value, and then either exit with a non-zero status or set a safe default;
alternatively factor parsing into a helper like ParseSizeTOrFail(const char*
str, const char* flag) that does the try/catch and returns the parsed size_t,
and replace the direct std::stoull calls with that helper to centralize
validation.
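The helper variant could look like the sketch below; the name `parse_size_t` is hypothetical (the comment suggests `ParseSizeTOrFail`), and returning `std::nullopt` instead of exiting is an assumption made here to keep the sketch testable:

```cpp
#include <cstddef>
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>

// Centralizes std::stoull validation so a malformed flag value produces a
// clear diagnostic naming the offending flag instead of an uncaught
// std::invalid_argument or std::out_of_range.
std::optional<std::size_t> parse_size_t(const std::string &text,
                                        const char *flag)
{
  try
  {
    std::size_t pos = 0;
    const unsigned long long value = std::stoull(text, &pos);
    if (pos != text.size()) // reject trailing junk such as "10x"
      throw std::invalid_argument("trailing characters");
    return static_cast<std::size_t>(value);
  }
  catch (const std::exception &e)
  {
    std::cerr << "invalid value '" << text << "' for " << flag
              << ": " << e.what() << '\n';
    return std::nullopt;
  }
}
```

The caller would then replace each direct `std::stoull` with `parse_size_t(arg, "--repetitions")` and exit (or fall back to a default) on `nullopt`.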
In `@Negamax.H`:
- Around line 54-58: Negamax.H is missing the direct include for <optional> even
though it uses std::optional in several places (occurrences of std::optional
around the file); add `#include` <optional> alongside the other headers (e.g.,
with <concepts>, <limits>, <type_traits>, <utility>) so the public header
doesn't rely on transitive includes and all uses of std::optional in this header
compile correctly.
In `@README.md`:
- Around line 3630-3633: Update the roadmap sentence to reflect that iterative
deepening is already shipped by this PR: remove "iterative deepening" from the
remaining roadmap list and, if useful, add a parenthetical noting that Negamax.H
exposes iterative_deepening_negamax_search(...) so readers know it is included
in the current release; keep the rest of the listed items unchanged.
In `@Tests/branch_and_bound_test.cc`:
- Line 711: The assertion uses an unsigned counter
best_result.stats.pruned_by_bound so EXPECT_GE(..., 0u) is vacuous; either
replace EXPECT_GE with EXPECT_GT(best_result.stats.pruned_by_bound, 0u) if you
intend to assert that best-first prunes at least one node, or remove the
assertion (or add a clarifying comment) if pruning is not guaranteed; update the
check in Tests::branch_and_bound_test (the EXPECT_* assertion referencing
best_result.stats.pruned_by_bound) accordingly to reflect the true test intent.
In `@Tests/state_search_framework_test.cc`:
- Around line 276-283: The template path_signature assumes Move has a member
label and thus fails for other Move types; to fix, either (A) make it
non-templated by changing its signature to accept
SearchPath<ArtificialDecisionTreeDomain::Move> (since tests only use that type)
and remove the template parameter, or (B) add a C++20 concept/constraint to the
template (e.g., require that Move has a char-like label member) and apply it to
template<typename Move> std::string path_signature(const SearchPath<Move>&) so
compilation errors are clear; update references to path_signature and keep it in
the anonymous namespace.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fbc959c7-35a1-4338-a0ab-d5fbcd796cb6
📒 Files selected for processing (35)
Alpha_Beta.H, Backtracking.H, Branch_And_Bound.H, CMakeLists.txt, Examples/CMakeLists.txt, Examples/adversarial_artificial_example.cc, Examples/alpha_beta_connect3_example.cc, Examples/backtracking_nqueens_example.cc, Examples/backtracking_subset_sum_example.cc, Examples/branch_and_bound_artificial_example.cc, Examples/branch_and_bound_assignment_example.cc, Examples/branch_and_bound_knapsack_example.cc, Examples/ida_star_grid_example.cc, Examples/negamax_tictactoe_example.cc, Examples/state_search_benchmark.cc, Examples/state_search_framework_example.cc, Examples/verify_reference_example.cmake, Negamax.H, README.es.md, README.md, State_Search.H, State_Search_IDA_Star.H, Tests/adversarial_search_test.cc, Tests/branch_and_bound_test.cc, Tests/state_search_framework_test.cc, Tests/transposition_table_test.cc, Transposition_Table.H, ah-zip.H, docs/adversarial_search_framework.md, docs/branch_and_bound_framework.md, docs/move_ordering_framework.md, docs/state_search_framework.md, docs/transposition_table_framework.md, search_move_ordering.H, state_search_common.H
Refactor codebase to use new Aleph namespace for search concepts, update Doxygen comments, and remove unnecessary blank lines in headers.
Actionable comments posted: 3
♻️ Duplicate comments (2)
Negamax.H (2)
1330-1337: ⚠️ Potential issue | 🔴 Critical — Lambda captures `domain` by reference but `domain` is moved before `keyer` is invoked.

At line 1330, `keyer` captures `domain` by reference. At lines 1335-1336, `domain` is moved into `iterative_deepening_negamax_search`, leaving the captured reference dangling. When the downstream function invokes `keyer(state)`, it accesses a moved-from object, causing undefined behavior.

🐛 Proposed fix: capture by value or pass keyer before moving domain

```diff
- auto keyer = [&domain](const typename Domain::State &state)
+ auto keyer = [](const Domain &d, const typename Domain::State &state)
  {
-   return domain.state_key(state);
+   return d.state_key(state);
  };
- return iterative_deepening_negamax_search(
-   std::move(domain), std::move(initial_state), table, keyer, tracer, policy, limits, options);
+ // Create keyer that will be bound to the domain inside the engine
+ Negamax<Domain> engine(std::move(domain), policy, limits);
+ auto bound_keyer = [&engine](const typename Domain::State &state)
+ {
+   return engine.domain().state_key(state);
+ };
+ // ... or refactor to avoid this pattern entirely
```

Alternative: restructure the overload to not use a lambda that outlives the moved object.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Negamax.H` around lines 1330 - 1337, The lambda keyer currently captures domain by reference but domain is moved into iterative_deepening_negamax_search, leaving a dangling reference; fix by changing the capture to capture domain by value (e.g., capture a copy: [domain] or std::move(domain) into the lambda) or alternatively create the keyer before moving domain and pass that keyer (so keyer owns the needed domain state) to iterative_deepening_negamax_search; update the lambda capture for keyer (and adjust any ownership semantics) to ensure keyer does not reference a moved-from domain when invoked.
981-987: ⚠️ Potential issue | 🔴 Critical — Guard score negation against `lowest()` overflow.

When `Score` is a signed integral type and `child.value == std::numeric_limits<Score>::lowest()`, negating it is undefined behavior (signed overflow). This is a common pattern in adversarial search where loss is represented as the most negative value.

🐛 Proposed fix using saturation

```diff
+ // Guard against signed overflow when negating the most negative value
+ Score candidate;
+ if constexpr (std::signed_integral<Score>)
+ {
+   if (child.value == std::numeric_limits<Score>::lowest())
+     candidate = std::numeric_limits<Score>::max();
+   else
+     candidate = -child.value;
+ }
+ else
+   candidate = -child.value;
- const Score candidate = -child.value;
  if (not has_best or candidate > best.value)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Negamax.H` around lines 981 - 987, The negation of child.value can overflow when Score is signed and child.value == std::numeric_limits<Score>::lowest(); to fix, replace the direct negation in the block that computes candidate (where Score candidate = -child.value;) with a saturated check: if child.value == std::numeric_limits<Score>::lowest() set candidate to std::numeric_limits<Score>::max(), otherwise set candidate = -child.value; ensure <limits> is available so std::numeric_limits<Score> can be used; keep the rest of the comparison against best.value and PV update logic (best.value, best.principal_variation, prepend_move) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Negamax.H`:
- Around line 1311-1314: The no-tracer overload creates a local
Null_Adversarial_Search_Tracer and forwards to the traced overload, which causes
a dangling reference; fix it by ensuring the tracer object is moved/owned into
the called function rather than referenced temporarily—construct a local
Null_Adversarial_Search_Tracer named (e.g.) tracer and call
iterative_deepening_negamax_search(..., std::move(tracer), ...) so the traced
overload receives ownership (or alternatively allocate tracer and pass a
persistent smart pointer) ensuring Null_Adversarial_Search_Tracer outlives the
call.
In `@search_move_ordering.H`:
- Around line 236-239: The clear() method currently calls table_.empty() (a
non-mutating query) instead of clearing the container; change the implementation
of clear() to call the container's mutator (e.g., table_.clear()) so the hash
map is actually emptied; locate the clear() function in search_move_ordering.H
and replace the table_.empty() call with table_.clear() (ensuring noexcept
signature remains).
In `@Transposition_Table.H`:
- Around line 171-182: Prefer_Deeper_Entry::operator() currently takes
RankedMove<Entry, size_t> parameters but the replacement policy concept and
Transposition_Table::store() pass raw Entry references; change the comparator
signature on Prefer_Deeper_Entry (the operator() method) to accept const Entry&
candidate and const Entry& current (preserve const and noexcept) and compare
candidate.depth > current.depth so it matches the expected policy interface used
by Transposition_Table and the replacement policy concept.
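The corrected comparator signature can be sketched as follows; `Entry` here is a minimal stand-in for the table's stored record (only `depth` matters for this policy), not the project's actual type:

```cpp
// Replacement policy matching the concept's expected interface: raw Entry
// references, not RankedMove wrappers.
struct Entry { int depth; long value; };

struct Prefer_Deeper_Entry
{
  // Return true when the candidate should replace the current entry,
  // i.e. only when it carries deeper (more reliable) search information.
  bool operator()(const Entry &candidate, const Entry &current) const noexcept
  {
    return candidate.depth > current.depth;
  }
};
```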
---
Duplicate comments:
In `@Negamax.H`:
- Around line 1330-1337: The lambda keyer currently captures domain by reference
but domain is moved into iterative_deepening_negamax_search, leaving a dangling
reference; fix by changing the capture to capture domain by value (e.g., capture
a copy: [domain] or std::move(domain) into the lambda) or alternatively create
the keyer before moving domain and pass that keyer (so keyer owns the needed
domain state) to iterative_deepening_negamax_search; update the lambda capture
for keyer (and adjust any ownership semantics) to ensure keyer does not
reference a moved-from domain when invoked.
- Around line 981-987: The negation of child.value can overflow when Score is
signed and child.value == std::numeric_limits<Score>::lowest(); to fix, replace
the direct negation in the block that computes candidate (where Score candidate
= -child.value;) with a saturated check: if child.value ==
std::numeric_limits<Score>::lowest() set candidate to
std::numeric_limits<Score>::max(), otherwise set candidate = -child.value;
ensure <limits> is available so std::numeric_limits<Score> can be used; keep the
rest of the comparison against best.value and PV update logic (best.value,
best.principal_variation, prepend_move) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 05e84318-0f7d-4d92-8713-64ca671d14c8
📒 Files selected for processing (10)
Alpha_Beta.H, Backtracking.H, Branch_And_Bound.H, Negamax.H, README.md, State_Search.H, State_Search_IDA_Star.H, Transposition_Table.H, search_move_ordering.H, state_search_common.H
✅ Files skipped from review due to trivial changes (2)
- README.md
- State_Search.H
🚧 Files skipped from review as they are similar to previous changes (4)
- State_Search_IDA_Star.H
- Backtracking.H
- Branch_And_Bound.H
- Alpha_Beta.H
Rework the `search` and `dfs_visited` methods in `Branch_And_Bound.H` to improve clarity and explicitly assert `VisitedBoundMap` concept. Simplify `Transposition_Table.H` operator signature.
Adds state_key method to ArtificialDecisionTreeDomain, NQueensDomain, and SubsetSumDomain to enable state hashing in search algorithms.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (2)
Transposition_Table.H (2)
155-182: Complete the Doxygen for this new public header API.

`Replace_Always`, `Prefer_Deeper_Entry`, and `Transposition_Table` add public callables, but most members still only have brief one-liners. The constructors, `probe()`, `clear()`, `reset_stats()`, `size()`, `is_empty()`, `stats()`, and both `operator()` overloads need the required tags/notes for a public header in this repo.

As per coding guidelines, "All public member functions must have `@brief`, `@param`, `@return`, and `@throws` Doxygen tags", "Document time complexity using Big-O notation in Doxygen comments", "Document thread-safety guarantees in class/function comments", and "Doxygen must generate no warnings for changed files".

Also applies to: 199-327
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Transposition_Table.H` around lines 155 - 182, The public API lacks complete Doxygen: for Replace_Always, Prefer_Deeper_Entry and especially Transposition_Table you must add full Doxygen blocks for all public members (constructors, probe(), clear(), reset_stats(), size(), is_empty(), stats(), and both operator() overloads) including `@brief`, `@param` for each parameter, `@return` for return values, `@throws` for any exceptions, Big-O time/space complexity notes, and explicit thread-safety guarantees; update the class-level comment to state concurrency expectations and ensure all tags conform to the repo’s style so the header produces no Doxygen warnings. Reference the types/names exactly (Replace_Always, Prefer_Deeper_Entry, Transposition_Table, Transposition_Table::probe, Transposition_Table::clear, Transposition_Table::reset_stats, Transposition_Table::size, Transposition_Table::is_empty, Transposition_Table::stats, and both Transposition_Table::operator() overloads) when adding the documentation.
247-275: Factor the duplicate probe/store bookkeeping into a helper.

Both `probe` overloads and both `store` overloads repeat the same stats updates and replace/reject flow. A small private helper would keep future changes from drifting apart between the overloads.

Also applies to: 286-326
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Transposition_Table.H` around lines 247 - 275, Both probe overloads and both store overloads duplicate stats updates and the replace/reject flow; factor that into small private helpers (e.g., probe_impl and store_impl) that accept a forwarded Key (or a template forwarding parameter) and perform ++stats_.probes, call table_.search(key) / the store logic, update ++stats_.misses or ++stats_.hits and return the Entry* or store result; then have the existing probe(const Key&)/probe(Key&&) and the two store overloads simply forward to these helpers (preserving use of table_.search, pair->second, and the existing replace/reject behavior).
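The factoring pattern can be sketched with a toy table; the types below are illustrative (a plain `std::unordered_map` stand-in, not Aleph's `Transposition_Table`), showing how a single `probe_impl` owns the stats bookkeeping so the public overloads cannot skew:

```cpp
#include <cstddef>
#include <unordered_map>

template <typename Key, typename Entry>
class Table
{
public:
  struct Stats { std::size_t probes = 0, hits = 0, misses = 0; };

  // Public overloads stay thin forwarders; all bookkeeping lives in one place.
  Entry *probe(const Key &key) { return probe_impl(key); }
  void store(const Key &key, const Entry &entry) { table_[key] = entry; }
  const Stats &stats() const noexcept { return stats_; }

private:
  Entry *probe_impl(const Key &key)
  {
    ++stats_.probes;
    auto it = table_.find(key);
    if (it == table_.end())
    {
      ++stats_.misses;
      return nullptr;
    }
    ++stats_.hits;
    return &it->second;
  }

  std::unordered_map<Key, Entry> table_;
  Stats stats_;
};
```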
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Transposition_Table.H`:
- Around line 117-121: The SearchStateKeyProvider concept must require a
non-mutating key extractor: change the requirement to call provider.state_key
with a const Provider& (or const reference) and a const typename
Provider::State& and require the result be convertible to typename
Provider::State_Key, ensuring state_key is usable on const providers and does
not mutate state; likewise tighten VisitedBoundMap to require that
map.search(const typename Map::Key&) (called on a non-const Map instance where
callers expect to update the stored objective) returns a pointer convertible to
the map's mapped-type pointer (i.e., a mutable pointer to the stored objective)
so callers can dereference and modify the stored objective — reference the
concept names SearchStateKeyProvider and VisitedBoundMap and the members
state_key() and search() when making these changes.
---
Nitpick comments:
In `@Transposition_Table.H`:
- Around line 155-182: The public API lacks complete Doxygen: for
Replace_Always, Prefer_Deeper_Entry and especially Transposition_Table you must
add full Doxygen blocks for all public members (constructors, probe(), clear(),
reset_stats(), size(), is_empty(), stats(), and both operator() overloads)
including `@brief`, `@param` for each parameter, `@return` for return values, `@throws`
for any exceptions, Big-O time/space complexity notes, and explicit
thread-safety guarantees; update the class-level comment to state concurrency
expectations and ensure all tags conform to the repo’s style so the header
produces no Doxygen warnings. Reference the types/names exactly (Replace_Always,
Prefer_Deeper_Entry, Transposition_Table, Transposition_Table::probe,
Transposition_Table::clear, Transposition_Table::reset_stats,
Transposition_Table::size, Transposition_Table::is_empty,
Transposition_Table::stats, and both Transposition_Table::operator() overloads)
when adding the documentation.
- Around line 247-275: Both probe overloads and both store overloads duplicate
stats updates and the replace/reject flow; factor that into small private
helpers (e.g., probe_impl and store_impl) that accept a forwarded Key (or a
template forwarding parameter) and perform ++stats_.probes, call
table_.search(key) / the store logic, update ++stats_.misses or ++stats_.hits
and return the Entry* or store result; then have the existing probe(const
Key&)/probe(Key&&) and the two store overloads simply forward to these helpers
(preserving use of table_.search, pair->second, and the existing replace/reject
behavior).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fd9c202a-27b4-43f2-bf31-b56a04564f8a
📒 Files selected for processing (3)
Branch_And_Bound.H, Tests/state_search_framework_test.cc, Transposition_Table.H
✅ Files skipped from review due to trivial changes (1)
- Tests/state_search_framework_test.cc
Introduce IDA* (Iterative Deepening A*) algorithm for optimal pathfinding in implicit state spaces. This includes a new `State_Search_IDA_Star.H` header for the algorithm implementation and corresponding tests in `state_search_framework_test.cc`.

Refactor state search framework:
- Improve `Negamax.H` by handling `std::numeric_limits<Score>::lowest()` for `candidate` score calculation.
- Optimize key handling in `Backtracking.H` by caching `domain_.state_key(state)` calls.
- Enhance `Transposition_Table.H` with improved `SearchStateKeyProvider` concept and clearer documentation for replacement policies and store/probe methods.
- Update `tpl_hash.H` and `tpl_olhash.H` to use `OLhashTable` as default for hash maps and improve `[[nodiscard]]` usage.
- Adjust `CMakeLists.txt` to include the new `State_Search_IDA_Star.H` in the project.
Actionable comments posted: 2
♻️ Duplicate comments (1)
Backtracking.H (1)
408-415: ⚠️ Potential issue | 🟠 Major — Must fix: defer global visited insertion until the node is accepted for expansion (or explicit goal handling).

Line 414 inserts the key before cutoff checks. With finite `max_depth`, a deeper first encounter can be inserted, pruned, and then incorrectly block a shallower revisit that should still be explored.

Suggested fix

```diff
  const auto key = domain_.state_key(state);
  if (visited.contains(key))
  {
    ++result.stats.pruned_by_domain;
    return false;
  }
- visited.insert(key);
  if (domain_.is_goal(state))
  {
+   visited.insert(key); // keep duplicate-goal suppression explicit
    ++result.stats.solutions_found;
    Solution solution{state, path, depth};
    result.best_solution.consider(solution);
@@
  if (state_search_detail::should_prune_state(domain_, state, depth))
  {
    ++result.stats.pruned_by_domain;
    return false;
  }
  if (expansion_limit_reached(result))
    return true;
+ visited.insert(key); // insert only when state survives early exits
  ++result.stats.expanded_states;
```

Based on learnings: Applies to `**/*.{H,cc,cpp,cxx,C}`: Verify visited set handling in graph traversal (BFS, DFS, topological sort).

Also applies to: 437-450
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backtracking.H` around lines 408 - 415, The code inserts domain_.state_key(state) into visited before cutoff/depth checks, causing a deep-first visit to prune a valid shallower revisit; change the logic in the Backtracking traversal so visited.insert(key) is deferred until the node is actually accepted for expansion (i.e., after cutoff/max_depth checks and any explicit goal acceptance), and only increment result.stats.pruned_by_domain when visited.contains(key) is checked at the start; update the analogous block around the other occurrence (the 437-450 region) to follow the same pattern to avoid prematurely blocking valid re-visits.
🧹 Nitpick comments (2)
tpl_hash.H (1)
60-91: Add a regression test that pins these new defaults.

Since Line 60 and Line 91 change default behavior, add/confirm a focused unit test (or static assertion test) to ensure `HashSet<Key>` resolves to `OLhashTable` and `HashMap<Key, Data>` resolves to `MapOLhash`.

As per coding guidelines, "If the PR changes behavior, require tests or explain why they don't apply."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tpl_hash.H` around lines 60 - 91, Add a focused regression test that asserts the template defaults didn't change by verifying HashSet<Key> instantiates to OLhashTable and HashMap<Key,Data> instantiates to MapOLhash; implement this using compile-time checks (e.g. std::is_same/static_assert) comparing HashSet<int> (or another simple Key) with the explicit HashSet<int, OLhashTable> and HashMap<int, int> with explicit HashMap<int,int,MapOLhash>, placed in the unit test suite so the build fails if defaults are altered.
Tests/state_search_framework_test.cc (1)
494-655: Should fix — add a regression test for visited-set + depth-limit interaction.

The suite validates tree-search depth behavior, but it does not cover `search(initial_state, visited, ...)` where duplicate-state pruning interacts with `max_depth`. Add a focused case to prevent regressions in graph-search pruning semantics.

As per coding guidelines: "If the PR changes behavior, require tests…" and tests "must verify algorithmic correctness." Based on learnings: tests should verify algorithmic correctness, not just absence of crashes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Backtracking.H`:
- Around line 134-191: Update the Doxygen for all new public members (explicit
Depth_First_Backtracking constructor, const Domain& domain() const, Domain&
domain(), const ExplorationPolicy& policy(), const SearchLimits& limits(), void
set_policy(...), void set_limits(...), and the execute/search method) to include
full tags: add `@param` for each parameter (name and semantics), `@return`
describing the returned SearchResult and its contents, `@throws` listing all
exceptions (e.g., std::invalid_argument and when it is thrown), and add
complexity (time/space) and thread-safety notes where relevant; ensure the
constructor documents ownership/move semantics for Domain and Solution_Compare,
setters document effects on future runs, and the search method documents how
limits, policy and compare affect results and when errors are raised. Apply the
same complete Doxygen style to the other public methods referenced (lines
~238-289) so all public member functions have `@brief`, `@param`, `@return`, `@throws`,
complexity, and thread-safety sections.
In `@Negamax.H`:
- Around line 280-283: The clear() implementation incorrectly calls
events_.empty() (a predicate) instead of actually clearing the container; update
the method body of clear() to invoke the correct clearing operation on events_
(e.g., events_.clear() or the equivalent DynList clear/reset method provided by
that type) so that events_ is emptied; ensure the function remains noexcept and
references the existing clear() method on the container rather than empty().
---
Duplicate comments:
In `@Backtracking.H`:
- Around line 408-415: The code inserts domain_.state_key(state) into visited
before cutoff/depth checks, causing a deep-first visit to prune a valid
shallower revisit; change the logic in the Backtracking traversal so
visited.insert(key) is deferred until the node is actually accepted for
expansion (i.e., after cutoff/max_depth checks and any explicit goal
acceptance), and only increment result.stats.pruned_by_domain when
visited.contains(key) is checked at the start; update the analogous block around
the other occurrence (the 437-450 region) to follow the same pattern to avoid
prematurely blocking valid re-visits.
---
Nitpick comments:
In `@tpl_hash.H`:
- Around line 60-91: Add a focused regression test that asserts the template
defaults didn't change by verifying HashSet<Key> instantiates to OLhashTable and
HashMap<Key,Data> instantiates to MapOLhash; implement this using compile-time
checks (e.g. std::is_same/static_assert) comparing HashSet<int> (or another
simple Key) with the explicit HashSet<int, OLhashTable> and HashMap<int, int>
with explicit HashMap<int,int,MapOLhash>, placed in the unit test suite so the
build fails if defaults are altered.
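The suggested compile-time pin can be sketched as below; the stub templates only mimic the shape of Aleph's `HashSet`/`OLhashTable` aliases (the real test would name the library's actual types), so this illustrates the static_assert technique rather than the project's code:

```cpp
#include <type_traits>

// Stub backends standing in for the library's hash table implementations.
template <typename Key> struct OLhashTable {};
template <typename Key> struct ODhashTable {};

// Alias with a default backend, mirroring the tpl_hash.H pattern.
template <typename Key, template <typename> class Table = OLhashTable>
using HashSet = Table<Key>;

// Pin the default at compile time: the build fails if it is ever changed.
static_assert(std::is_same_v<HashSet<int>, OLhashTable<int>>,
              "HashSet default backend must stay OLhashTable");
```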
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 23613c46-4053-47e0-ab88-8bbbea93be5a
📒 Files selected for processing (9)
Backtracking.H, CMakeLists.txt, Negamax.H, State_Search_IDA_Star.H, Tests/state_search_framework_test.cc, Transposition_Table.H, hashDry.H, tpl_hash.H, tpl_olhash.H
✅ Files skipped from review due to trivial changes (2)
- tpl_olhash.H
- hashDry.H
🚧 Files skipped from review as they are similar to previous changes (3)
- CMakeLists.txt
- State_Search_IDA_Star.H
- Transposition_Table.H
Refactor Backtracking search and update documentation with detailed descriptions for parameters, return values, complexity, and thread safety. Add static assertions to tpl_hash_macros to verify default hash table backends. Update IDA* search termination conditions.
Pull request overview
Copilot reviewed 39 out of 39 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (2)
tpl_hash.H:1
- Changing the default backend from `ODhashTable` to `OLhashTable` is an API/behavior change for users instantiating `HashSet<Key>` without specifying a table type (it may affect iteration order, memory behavior, and performance characteristics). If the intent is only to support the new search framework, consider keeping the old default and introducing a new alias for the preferred backend; otherwise, explicitly document this as a breaking change (and ensure release notes/versioning reflect it).
tpl_hash.H:1
- Similarly, changing the default `HashMapTable` from `MapODhash` to `MapOLhash` changes behavior for `HashMap<Key, Data>` instantiations site-wide. If this is intended, it should be called out as a breaking change; otherwise, consider preserving the previous default and adding an explicit `HashMapOL`/`HashMapDefault` alias to opt into the new backend.
Actionable comments posted: 3
♻️ Duplicate comments (2)
State_Search_IDA_Star.H (1)
152-161: ⚠️ Potential issue | 🔴 Critical — Must fix: `stop_at_first_solution=false` can re-run the same threshold forever.

At Line 161 `dfs` returns `{true, threshold}` on goal-hit, and at Line 411 the outer loop reassigns the same threshold. Combined with Line 217 (abort pass on first found child), this can repeatedly rediscover the same first solution when `policy_.stop_at_first_solution == false` and the callback keeps returning `true`.

🛠️ Minimal safe fix (until full same-threshold enumeration is implemented)

```diff
  [[nodiscard]] Result search(State initial_state, OnSolution &on_solution)
  {
    ah_invalid_argument_if(policy_.strategy != ExplorationPolicy::Strategy::Depth_First)
        << "IDA_Star_State_Search only supports depth-first strategy";
+   ah_invalid_argument_if(not policy_.stop_at_first_solution)
+       << "IDA_Star_State_Search currently requires stop_at_first_solution=true";
    ah_invalid_argument_if(limits_.max_solutions == 0)
        << "SearchLimits::max_solutions must be positive or Search_Unlimited";
```

Please also add a deterministic regression test for this mode (`stop_at_first_solution=false`) so this does not regress silently.

Based on learnings: Unit tests for new public methods must verify the method's contract (preconditions, postconditions, return values), and tests must be deterministic and repeatable.

Also applies to: 214-218, 386-412
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@State_Search_IDA_Star.H` around lines 152 - 161, The dfs function currently returns {true, threshold} on a goal hit which lets the outer IDA* loop reuse the same threshold and re-discover the same solution when policy_.stop_at_first_solution == false and on_solution returns true; change dfs so that when a goal is found but the search should continue (policy_.stop_at_first_solution == false and on_solution returned true), it returns a signal to advance the threshold (e.g., {false, threshold+1} or an equivalent "continue but bump threshold") instead of {true, threshold}, and only set result.status = SearchStatus::StoppedOnSolution when we truly must stop; update callers/loop that consume dfs to honor the new return meaning. Also add a deterministic unit test that runs the IDA* entry point with policy_.stop_at_first_solution = false and an on_solution callback that returns true repeatedly, asserting the loop advances thresholds and does not repeatedly report the same threshold/solution (verify postconditions: multiple distinct thresholds attempted or no infinite loop).

Backtracking.H (1)
578-600: ⚠️ Potential issue | 🟡 Minor

Should fix — complete the Doxygen contract on the public wrappers.
Same gap as the earlier public-API Doxygen finding: these two `backtracking_search(...)` overloads still only have `@brief`. Please add `@param`, `@return`, and `@throws` tags, plus the same complexity/thread-safety notes the class `search(...)` overloads already provide.

As per coding guidelines: "If a new public method or class is added, it MUST be fully documented with Doxygen" and "All public member functions must have `@brief`, `@param`, `@return`, and `@throws` Doxygen tags".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Backtracking.H` around lines 578 - 600, Add full Doxygen for both backtracking_search template overloads: expand the existing `@brief` to include `@param` entries for domain, initial_state, policy, limits (and on_solution for the overload that takes it), an `@return` describing the search result (i.e., whatever Depth_First_Backtracking<Domain>::search returns), an `@throws` noting that exceptions from Domain or the search engine are propagated, and the same complexity and thread-safety notes used on the class search(...) overloads; target the two backtracking_search(...) template declarations so the documentation mirrors the public search(...) overloads.
🧹 Nitpick comments (2)
State_Search_IDA_Star.H (1)
194-195: Nice to have: remove redundant `can_expand` guard

Lines 194-195 are effectively unreachable because Line 176 already prunes `depth >= result.limits.max_depth`. Keeping both checks makes control flow harder to read.
Verify each finding against the current code and only fix it if needed. In `@State_Search_IDA_Star.H` around lines 194 - 195, Remove the redundant expansion guard that declares can_expand and returns out (the `if (const bool can_expand = depth + 1 <= result.limits.max_depth; not can_expand) return out;` statement); pruning for `depth >= result.limits.max_depth` is already handled earlier, so delete this check to simplify control flow and leave the rest of the logic using `depth`, `result.limits.max_depth`, and `out` unchanged.

Backtracking.H (1)
514-518: Nice to have — split duplicate-state cuts from domain-pruning stats.

`state_search_common.H:134-147` documents `SearchStats::pruned_by_domain` as domain-side pruning, but this branch increments it for `visited.contains(key)`. A dedicated duplicate/transposition counter would make the result easier to interpret.
Verify each finding against the current code and only fix it if needed. In `@Backtracking.H` around lines 514 - 518, The branch that returns false on visited.contains(key) is incrementing result.stats.pruned_by_domain but per docs that counter should be domain pruning; add a dedicated duplicate/transposition counter in SearchStats (e.g., pruned_by_duplicate or pruned_by_transposition), increment that new counter in the visited.contains(key) branch instead of result.stats.pruned_by_domain, and update any code that constructs or reports SearchStats to include the new field (ensure constructors/merging/printing of SearchStats and any aggregation logic handle the new counter). Use the symbols visited.contains, domain_.state_key, SearchStats::pruned_by_domain, and result.stats to find where to change and to add the new counter usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Backtracking.H`:
- Around line 485-494: The traversal mutates path and state inside
domain_.for_each_successor (where Move is appended via path.append and applied
via domain_.apply then dfs is called) but relies on straight-line domain_.undo
and path.remove_last for cleanup; if apply() or dfs throws those undo/remove
calls are skipped. Fix by introducing an RAII rollback guard (or a
try/catch+rethrow) scoped immediately after path.append and domain_.apply that
in its destructor (or finally block) always calls domain_.undo(state, move) and
path.remove_last(), so undo/remove_last run even if dfs throws; apply the same
change for the other loop at the noted lines (562-571) referencing the same
functions.
- Around line 514-568: The current duplicate suppression uses visited keyed only
by state_key(state), which incorrectly blocks shallower revisits when
depth-sensitive pruning (limits_.max_depth or
state_search_detail::should_prune_state) matters; change visited from a plain
set into a transposition table that stores the best (shallowest) depth seen for
each key or the remaining depth budget, and update dfs_visited to consult and
update that depth map instead of contains/insert; specifically, modify uses of
visited, state_key(state), dfs_visited, should_prune_state, and checks against
limits_.max_depth/expansion_limit_reached so a state is only pruned if the
stored depth is <= current depth (i.e., no better remaining budget), otherwise
overwrite the stored depth and continue exploring.
In `@State_Search_IDA_Star.H`:
- Around line 155-157: The code unconditionally sets result.total_cost = g even
when result.best_solution.consider(solution) did not replace the incumbent;
change the logic so total_cost is updated only when the incumbent actually
changes—e.g., use the boolean return value of
result.best_solution.consider(solution) (or otherwise detect that best_solution
was updated) and set result.total_cost = g inside that conditional branch so
total_cost and best_solution remain consistent.
---
Duplicate comments:
In `@Backtracking.H`:
- Around line 578-600: Add full Doxygen for both backtracking_search template
overloads: expand the existing `@brief` to include `@param` entries for domain,
initial_state, policy, limits (and on_solution for the overload that takes it),
an `@return` describing the search result (i.e., whatever
Depth_First_Backtracking<Domain>::search returns), an `@throws` noting that
exceptions from Domain or the search engine are propagated, and the same
complexity and thread-safety notes used on the class search(...) overloads;
target the two backtracking_search(...) template declarations so the
documentation mirrors the public search(...) overloads.
In `@State_Search_IDA_Star.H`:
- Around line 152-161: The dfs function currently returns {true, threshold} on a
goal hit which lets the outer IDA* loop reuse the same threshold and re-discover
the same solution when policy_.stop_at_first_solution == false and on_solution
returns true; change dfs so that when a goal is found but the search should
continue (policy_.stop_at_first_solution == false and on_solution returned
true), it returns a signal to advance the threshold (e.g., {false, threshold+1}
or an equivalent "continue but bump threshold") instead of {true, threshold},
and only set result.status = SearchStatus::StoppedOnSolution when we truly must
stop; update callers/loop that consume dfs to honor the new return meaning. Also
add a deterministic unit test that runs the IDA* entry point with
policy_.stop_at_first_solution = false and an on_solution callback that returns
true repeatedly, asserting the loop advances thresholds and does not repeatedly
report the same threshold/solution (verify postconditions: multiple distinct
thresholds attempted or no infinite loop).
---
Nitpick comments:
In `@Backtracking.H`:
- Around line 514-518: The branch that returns false on visited.contains(key) is
incrementing result.stats.pruned_by_domain but per docs that counter should be
domain pruning; add a dedicated duplicate/transposition counter in SearchStats
(e.g., pruned_by_duplicate or pruned_by_transposition), increment that new
counter in the visited.contains(key) branch instead of
result.stats.pruned_by_domain, and update any code that constructs or reports
SearchStats to include the new field (ensure constructors/merging/printing of
SearchStats and any aggregation logic handle the new counter). Use the symbols
visited.contains, domain_.state_key, SearchStats::pruned_by_domain, and
result.stats to find where to change and to add the new counter usage.
In `@State_Search_IDA_Star.H`:
- Around line 194-195: Remove the redundant expansion guard that declares
can_expand and returns out (the `if (const bool can_expand = depth + 1 <=
result.limits.max_depth; not can_expand) return out;` statement); pruning for
`depth >= result.limits.max_depth` is already handled earlier, so delete this
check to simplify control flow and leave the rest of the logic using `depth`,
`result.limits.max_depth`, and `out` unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 780dac5d-0b72-486a-a340-7b24af3d512f
📒 Files selected for processing (4)
- Backtracking.H
- Negamax.H
- State_Search_IDA_Star.H
- Tests/test_tpl_hash_macros.cc
✅ Files skipped from review due to trivial changes (1)
- Tests/test_tpl_hash_macros.cc
🚧 Files skipped from review as they are similar to previous changes (1)
- Negamax.H
Add regression tests for HashSet/HashMap default backends (OLhashTable/MapOLhash) to verify functional equivalence with ODhashTable. Includes compile-time static_asserts to pin default backends.

refactor(backtracking): Update doxygen and fix exceptions. Refactor Backtracking.H to fix inconsistent doxygen and exception specifications. Update documentation for `search` overloads to correctly reflect `throws None` or `throws std::invalid_argument`. Modify dfs_visited to use VisitedBoundMap.

refactor(docs): Update CMakeLists for example command arguments. Fix argument escaping in CMakeLists.txt for example execution commands.

refactor(adversarial): Add `pruned_by_visited` to AdversarialSearchStats to track states skipped by visited-set duplicate suppression.

refactor(ida-star): Refine IDA* solution handling and docs. Improve IDA_Star_State_Search to explicitly stop at the first optimal solution and update doxygen to reflect this behavior.

refactor(hash): Fix parameter name in hashDry.H. Correct parameter names in hashDry.H from `__upper_alpha` to `_upper_alpha` for consistency.

refactor(search): Add `pruned_by_visited` to SearchStats. Introduce `pruned_by_visited` statistic in SearchStats to track states pruned by visited-set during search.
Summary by CodeRabbit
New Features
Examples
Documentation
Tests
Chores