Skip to content
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
42 changes: 42 additions & 0 deletions .config/nextest.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,42 @@
# nextest configuration for delaunay
# See: https://nexte.st/book/configuration.html

[profile.default]
# Run tests in parallel by default
test-threads = "num-cpus"

# Show test output for failures
failure-output = "immediate-final"
success-output = "never"

# Continue running tests after first failure
fail-fast = false

# Retry flaky tests (useful for proptests)
retries = 0

[profile.ci]
# CI profile: optimized for GitHub Actions
# Inherits from default profile

# Show more verbose output in CI
failure-output = "immediate-final"
success-output = "never"

# Continue running all tests even if some fail
fail-fast = false

# Retry flaky tests once in CI (helps with proptests and timing-sensitive tests)
retries = 1

# Show slow tests (> 60s) in CI
slow-timeout = { period = "60s", terminate-after = 2 }

# JUnit report configuration
[profile.ci.junit]
# Store reports in test-results directory (matches workflow)
path = "test-results/junit.xml"

# Store test output in JUnit report
store-success-output = false
store-failure-output = true
50 changes: 50 additions & 0 deletions .github/workflows/codecov.yml
Original file line number Diff line number Diff line change
Expand Up @@ -53,6 +53,38 @@ jobs:
with:
tool: just

- name: Install nextest
uses: taiki-e/install-action@cfdb446e391c69574ebc316dfb7d7849ec12b940 # v2.68.8
with:
tool: nextest

- name: Run tests with nextest (for JUnit XML)
run: |
# Note: We run tests twice in this workflow (nextest + tarpaulin).
# This is intentional and necessary because:
# 1. nextest: Fast parallel execution → high-quality JUnit XML for test analytics
# 2. tarpaulin: Instrumented execution → accurate code coverage metrics
# Trade-off: ~1-2 extra minutes of CI time for significantly better data quality.
# Using tarpaulin's experimental JUnit output would save time but produces
# less reliable test analytics data.

echo "::group::Running tests with nextest"
# Generate JUnit XML for Codecov Test Analytics
# JUnit path is configured in .config/nextest.toml [profile.ci.junit]
cargo nextest run --all-features --profile ci
echo "::endgroup::"

# Verify JUnit XML was generated at the expected location
# Nextest outputs to target/nextest/<profile>/<path-from-config>
if [ ! -f target/nextest/ci/test-results/junit.xml ]; then
echo "::error::target/nextest/ci/test-results/junit.xml not found"
exit 2
else
echo "::notice::Test results generated: $(wc -l < target/nextest/ci/test-results/junit.xml) lines"
fi
env:
RUST_BACKTRACE: 1

- name: Run coverage
run: |
# Create coverage directory with proper permissions
Expand Down Expand Up @@ -103,6 +135,17 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

- name: Upload test results to Codecov
if: ${{ success() && hashFiles('target/nextest/ci/test-results/junit.xml') != '' }}
uses: codecov/test-results-action@6ba3fdeec616fb91fd6a389b788a2366835a0fa2 # v1.2.1
with:
files: target/nextest/ci/test-results/junit.xml
flags: unittests
name: test-results
fail_ci_if_error: false
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

- name: Upload coverage to Codacy
if: ${{ success() && hashFiles('coverage/cobertura.xml') != '' }}
uses: codacy/codacy-coverage-reporter-action@89d6c85cfafaec52c72b6c5e8b2878d33104c699 # v1.3.0
Expand All @@ -118,3 +161,10 @@ jobs:
with:
name: coverage-report
path: coverage/

- name: Archive test results
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
if: always()
with:
name: test-results
path: target/nextest/ci/test-results/
18 changes: 18 additions & 0 deletions AGENTS.md
Original file line number Diff line number Diff line change
Expand Up @@ -75,6 +75,24 @@ just examples # Run all examples
- Never edit `CHANGELOG.md` directly - it's auto-generated from git commits
- Use `just changelog` to regenerate

### GitHub Issues

When creating or updating issues:

- **Labels**: Use appropriate labels: `enhancement`, `bug`, `performance`, `documentation`, `rust`, `python`, etc.
- **Milestones**: Assign to the appropriate milestone (e.g., `v0.7.2`, `v0.7.3`, `v0.8.0`)
- **Dependencies**: Document relationships in issue body and comments:
- "Depends on: #XXX" - this issue cannot start until #XXX is complete
- "Blocks: #YYY" - #YYY cannot start until this issue is complete
- "Related: #ZZZ" - related work but not blocking
- **Relationships**: GitHub automatically parses blocking keywords in comments to create visual relationships:
- Use `gh issue comment <number> --body "Blocked by #XXX"` to mark an issue as blocked
- Use `gh issue comment <number> --body "Blocks #YYY"` to mark an issue as blocking another
- GitHub will automatically create the relationship graph in the web UI
- Example: `gh issue comment 217 --body "Blocked by #207"` creates a blocking dependency
- **Issue body format**: Include clear sections: Summary, Current State, Proposed Changes, Benefits, Implementation Notes
- **Cross-referencing**: Always reference related issues/PRs using #XXX notation for automatic linking

Comment on lines +78 to +95
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if just command is available and run spell-check
which just && just spell-check 2>&1 | head -100

Repository: acgetchell/delaunay

Length of output: 108


🏁 Script executed:

#!/bin/bash
# Find justfile and check available commands
find . -name "justfile" -o -name "Justfile" 2>/dev/null | head -5

# Check for typos.toml
find . -name "typos.toml" 2>/dev/null | head -5

# List files in root directory
ls -la | grep -E "^-|justfile|Justfile"

Repository: acgetchell/delaunay

Length of output: 2098


🏁 Script executed:

#!/bin/bash
# Check the spell-check target in justfile
grep -A 20 "spell-check\|lint-docs" justfile | head -40

# Check typos.toml content
cat typos.toml

# Read AGENTS.md lines 70-100 for context
sed -n '70,100p' AGENTS.md

Repository: acgetchell/delaunay

Length of output: 4372


🏁 Script executed:

#!/bin/bash
# Get the full spell-check target definition
grep -A 10 "^spell-check" justfile

# Try to run typos on AGENTS.md to check for spelling issues
# typos-cli is the tool used by spell-check
if command -v typos &> /dev/null; then
    echo "=== Running typos on AGENTS.md ==="
    typos AGENTS.md --diff 2>&1 | head -50
else
    # Try via npm if typos is available globally
    npx typos AGENTS.md --diff 2>&1 | head -50 || echo "typos tool not directly available"
fi

Repository: acgetchell/delaunay

Length of output: 697


🏁 Script executed:

#!/bin/bash
# Find _ensure-typos target
grep -A 10 "^_ensure-typos" justfile

# Check if typos is installed or available
which typos 2>/dev/null || echo "typos not in PATH"

# Check if it's a cargo binary
find ~/.cargo/bin -name "typos" 2>/dev/null || echo "not in cargo bin"

# Look for typos in common locations
find /usr -name "typos" -type f 2>/dev/null | head -5

Repository: acgetchell/delaunay

Length of output: 517


🏁 Script executed:

#!/bin/bash
# Manual spelling review of AGENTS.md lines 78-95
sed -n '78,95p' AGENTS.md | cat -n

# Check for any obvious misspellings using aspell (if available) or basic patterns
sed -n '78,95p' AGENTS.md | grep -i "dependanc\|relat\|referenc\|block" | head -10

Repository: acgetchell/delaunay

Length of output: 2149


Run just spell-check after editing this documentation.

Please run the spell-check command for this added section and add any legitimate technical terms to typos.toml if flagged, as per the documentation guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` around lines 78 - 95, Run the repository spell-checker by
executing the "just spell-check" command after editing AGENTS.md, review the
reported flags, and for any legitimate technical terms (e.g., labels like
"enhancement", "rust", command snippets such as `gh issue comment`, or other
domain-specific tokens) add them to the typos.toml dictionary so they are not
flagged in future runs; ensure you commit the updated typos.toml alongside the
documentation change.

## Project Context

- **Rust** d-dimensional Delaunay triangulation library (MSRV 1.93.0, Edition 2024)
Expand Down
15 changes: 15 additions & 0 deletions src/core/boundary.rs
Original file line number Diff line number Diff line change
Expand Up @@ -431,9 +431,24 @@ mod tests {
}

#[test]
#[ignore = "Flaky: Fixed point configuration occasionally produces geometric degeneracies - needs investigation in #204"]
#[cfg(feature = "bench")]
fn test_boundary_analysis_performance_characteristics() {
// Test that boundary analysis methods have reasonable performance characteristics
// FIXME(#204): This test uses a fixed set of 8 points (lines 439-448) that occasionally
// produce geometric degeneracies during triangulation construction. The specific points:
// - 4 corners of a cube at (0,0,0), (3,0,0), (0,3,0), (0,0,3)
// - 4 interior/edge points at (1.5,1.5,0), (1.5,0,1.5), (0,1.5,1.5), (1.0,1.0,1.0)
// can form nearly-coplanar configurations that trigger:
// "Degenerate initial simplex: vertices are collinear/coplanar in 3D space"
//
// This should be fixed by either:
// 1. Using a seeded random generator for reproducible but non-degenerate points
// 2. Carefully choosing fixed points that are known to be in general position
// 3. Making the test robust to construction failures (skip if degenerate)
//
// Related: This is similar to the issue in tests/delaunay_repair_fallback.rs where
// the Hilbert curve rounding change exposed latent degeneracy issues.

// Create a moderately complex triangulation
let points: Vec<Point<f64, 3>> = vec![
Expand Down
6 changes: 5 additions & 1 deletion src/core/delaunay_triangulation.rs
Original file line number Diff line number Diff line change
Expand Up @@ -1226,7 +1226,11 @@ where
.into_iter()
.enumerate()
.map(|(input_index, vertex)| {
let idx = hilbert_index(vertex.point().coords(), bounds, bits_per_coord);
let idx = hilbert_index(vertex.point().coords(), bounds, bits_per_coord)
.unwrap_or_else(|_| {
// On error, fall back to lexicographic ordering based on input index
<u128 as From<u32>>::from(u32::try_from(input_index).unwrap_or(u32::MAX))
});
Comment on lines +1229 to +1233
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Fallback path is input-order, not lexicographic.

At Line 1232, the fallback key is derived from input_index, so failed Hilbert entries are sorted by original input order. That contradicts the lexicographic fallback intent and can make results permutation-dependent when Hilbert indexing fails.

💡 Proposed fix
-            let idx = hilbert_index(vertex.point().coords(), bounds, bits_per_coord)
-                .unwrap_or_else(|_| {
-                    // On error, fall back to lexicographic ordering based on input index
-                    <u128 as From<u32>>::from(u32::try_from(input_index).unwrap_or(u32::MAX))
-                });
+            let idx = hilbert_index(vertex.point().coords(), bounds, bits_per_coord)
+                .unwrap_or_else(|_| {
+                    // On error, force fallback ordering to rely on the existing
+                    // lexicographic + input-index tie-breakers in `sort_by`.
+                    u128::MAX
+                });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/delaunay_triangulation.rs` around lines 1229 - 1233, The fallback
currently uses input_index (via <u128 as
From<u32>>::from(u32::try_from(input_index)...)) which yields input-order, not
lexicographic order; change the closure passed to
hilbert_index(...).unwrap_or_else(...) to compute a lexicographic key from the
vertex coordinates instead. Specifically, use vertex.point().coords() with the
same quantization/bit width (bits_per_coord) and bounds used by hilbert_index to
convert each coordinate into an integer, clamp to the valid range, and then pack
those integer coord components into a single u128 lexicographic key (e.g., by
left-shifting and OR-ing components in coordinate order) so that failed Hilbert
entries sort by coordinate lexicographic order rather than input_index.

(idx, vertex, input_index)
})
.collect();
Expand Down
Loading
Loading