Conversation

@robfrank
Collaborator

What does this PR do?

@gemini-code-assist
Contributor

Summary of Changes

Hello @robfrank, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes the foundational documentation for a new performance benchmark within ArcadeDB. It includes a detailed design specification for an LDBC Social Network Benchmark-inspired graph, outlining its schema, data generation process, and a series of lookup and traversal queries. Complementing this, a comprehensive implementation plan is provided, breaking down the development into manageable tasks, ensuring a structured approach to building a robust and measurable performance testing framework for ArcadeDB's graph capabilities.

Highlights

  • New Graph Benchmark Design: A detailed design for an LDBC Social Network Benchmark-inspired graph benchmark for ArcadeDB has been introduced, outlining schema, data generation, and query phases.
  • Comprehensive Implementation Plan: A step-by-step plan for implementing this benchmark, covering dependency management, schema creation, data generation, and benchmark execution, has been provided.
  • Dual Query Language Support: The benchmark is designed to test both SQL and OpenCypher queries side-by-side for various graph operations.
  • Micrometer Integration: Micrometer is integrated for collecting granular performance metrics, including timing histograms and result counts, for each benchmark query.
  • Database Persistence for Benchmarking: The benchmark database is configured to be preserved between runs, eliminating the need for repeated data generation and speeding up subsequent tests.


Changelog
  • docs/plans/2026-02-11-graph-benchmark-design.md
    • Documented the design of an LDBC-inspired graph benchmark for ArcadeDB.
    • Outlined the schema, data generation process, and benchmark phases.
    • Specified the use of both SQL and OpenCypher for queries.
  • docs/plans/2026-02-11-graph-benchmark-impl.md
    • Detailed a step-by-step implementation plan for the graph benchmark.
    • Included tasks for dependency management, schema creation, data generation, and benchmark execution.
    • Provided code snippets and commit messages for each implementation task.
Activity
  • No human activity (comments, reviews) has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@mergify
Contributor

mergify bot commented Feb 11, 2026

🧪 CI Insights

Here's what we observed from your CI run for 9d36998.

🟢 All jobs passed!

But CI Insights is watching 👀


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive design document and a detailed implementation plan for a new LDBC-inspired graph benchmark. The plans are well-structured and thorough, covering schema, data generation, benchmark phases, and reporting.

My review focuses on ensuring the correctness of the benchmark design and improving the clarity of the implementation plan. I've identified a significant issue in the 'Friends-of-friends' query (4a) where the proposed SQL query is not logically equivalent to its Cypher counterpart, which would skew benchmark results. I've also suggested improvements to the implementation plan regarding hardcoded paths and code readability.

Overall, this is an excellent planning effort that will set a strong foundation for the benchmark implementation.

Comment on lines 1224 to 1227
benchmark("4a", "Friends of friends", COMPLEX_TRAVERSAL_ITERATIONS,
"SELECT expand(both('KNOWS').both('KNOWS')) FROM Person WHERE id = :id",
"MATCH (p:Person {id: $id})-[:KNOWS]-()-[:KNOWS]-(fof) " +
"WHERE fof <> p AND NOT (p)-[:KNOWS]-(fof) RETURN DISTINCT fof");

high

The implementation plan for query 4a ('Friends of friends') carries over an incorrect SQL query from the design document. The query SELECT expand(both('KNOWS').both('KNOWS')) FROM Person WHERE id = :id does not exclude direct friends, which makes the SQL benchmark not comparable to the Cypher benchmark for the same task. Given the complexity of writing a correct and efficient equivalent in SQL, it might be better to mark it as Cypher-only for now, similar to other complex queries.

Suggested change
benchmark("4a", "Friends of friends", COMPLEX_TRAVERSAL_ITERATIONS,
"SELECT expand(both('KNOWS').both('KNOWS')) FROM Person WHERE id = :id",
"MATCH (p:Person {id: $id})-[:KNOWS]-()-[:KNOWS]-(fof) " +
"WHERE fof <> p AND NOT (p)-[:KNOWS]-(fof) RETURN DISTINCT fof");
benchmark("4a", "Friends of friends", COMPLEX_TRAVERSAL_ITERATIONS,
null, // SQL equivalent is complex and needs to filter out 1-hop neighbors
"MATCH (p:Person {id: $id})-[:KNOWS]-()-[:KNOWS]-(fof) " +
"WHERE fof <> p AND NOT (p)-[:KNOWS]-(fof) RETURN DISTINCT fof");


**Step 3: Compile to verify skeleton**

Run: `cd /Users/frank/projects/arcade/worktrees/ldbc-bechmark && mvn compile test-compile -pl engine -q`

medium

The mvn command contains a hardcoded, user-specific absolute path: /Users/frank/projects/arcade/worktrees/ldbc-bechmark. This makes the command not directly runnable for other developers. It's better to use relative paths or placeholders. This applies to all shell commands in this document.

Suggested change
Run: `cd /Users/frank/projects/arcade/worktrees/ldbc-bechmark && mvn compile test-compile -pl engine -q`
Run: `mvn compile test-compile -pl engine -q`

Comment on lines 1011 to 1028
private void prepareSampleIds() {
final ThreadLocalRandom rnd = ThreadLocalRandom.current();
final int sampleSize = 100;

samplePersonIds = new long[sampleSize];
samplePostIds = new long[sampleSize];
sampleForumIds = new long[sampleSize];
sampleCityNames = new String[sampleSize];
sampleFirstNames = new String[sampleSize];

for (int i = 0; i < sampleSize; i++) {
samplePersonIds[i] = rnd.nextLong(NUM_PERSONS);
samplePostIds[i] = rnd.nextLong(NUM_POSTS);
sampleForumIds[i] = rnd.nextLong(NUM_FORUMS);
sampleCityNames[i] = "City_" + (CONTINENTS.length + Math.min(COUNTRIES.length, NUM_PLACES / 3) + rnd.nextInt(Math.max(1, NUM_PLACES - CONTINENTS.length - Math.min(COUNTRIES.length, NUM_PLACES / 3))));
sampleFirstNames[i] = FIRST_NAMES[rnd.nextInt(FIRST_NAMES.length)];
}
}

medium

The logic to generate a random city name is complex and duplicated from the data generation methods. This can be simplified for readability and maintainability by pre-calculating firstCityId and numCities and reusing them, just like in generatePersons() and generateOrganisations().

  private void prepareSampleIds() {
    final ThreadLocalRandom rnd = ThreadLocalRandom.current();
    final int sampleSize = 100;

    // Pre-compute city ID range for simplicity and consistency
    final int firstCountryId = CONTINENTS.length;
    final int numCountries = Math.min(COUNTRIES.length, NUM_PLACES / 3);
    final int firstCityId = firstCountryId + numCountries;
    final int numCities = NUM_PLACES - firstCityId;

    samplePersonIds = new long[sampleSize];
    samplePostIds = new long[sampleSize];
    sampleForumIds = new long[sampleSize];
    sampleCityNames = new String[sampleSize];
    sampleFirstNames = new String[sampleSize];

    for (int i = 0; i < sampleSize; i++) {
      samplePersonIds[i] = rnd.nextLong(NUM_PERSONS);
      samplePostIds[i] = rnd.nextLong(NUM_POSTS);
      sampleForumIds[i] = rnd.nextLong(NUM_FORUMS);
      if (numCities > 0) {
        sampleCityNames[i] = "City_" + (firstCityId + rnd.nextInt(numCities));
      } else {
        sampleCityNames[i] = "City_0"; // Fallback if no cities
      }
      sampleFirstNames[i] = FIRST_NAMES[rnd.nextInt(FIRST_NAMES.length)];
    }
  }

@claude

claude bot commented Feb 11, 2026

Pull Request Review: LDBC Graph Benchmark Design & Implementation Plan

Thank you for this comprehensive planning work! I have reviewed both design and implementation documents against the project CLAUDE.md guidelines and best practices.

Overall Assessment

Strengths:

  • Excellent architectural planning with clear separation of concerns
  • Comprehensive LDBC SNB schema coverage (8 vertex types, 14 edge types)
  • Side-by-side SQL and OpenCypher benchmarking approach
  • Well-structured task breakdown with clear verification steps
  • Database reuse strategy (close vs drop) is smart for iterative testing

Code Quality & Best Practices

Alignment with CLAUDE.md

  1. Dependency Management - Good choice on Micrometer:

    • Apache 2.0 license ✓
    • Test-scoped dependency ✓
    • However, you should still verify this: update ATTRIBUTIONS.md to include Micrometer and check whether it ships a NOTICE file
  2. Test-First Approach: Implementation plan lacks TDD verification

    • CLAUDE.md requires: Write the tests first (TDD approach) whenever possible
    • Suggestion: Add a Task 0 that creates minimal failing tests before implementation
    • Consider adding assertions beyond just running queries
  3. Code Style: Use of the final keyword is good, but the many System.out calls for progress logging should go through a proper logger

Potential Issues

1. Hard-coded Path (Critical)

The implementation plan shows absolute paths like /Users/frank/projects/arcade/worktrees/ldbc-bechmark

Issue: This is specific to your machine and will fail in CI/CD and for other developers.

Fix: Remove absolute paths from all bash commands in the implementation plan.

2. Missing Regression Test Requirement

  • CLAUDE.md requires: write a regression test
  • Current plan only has benchmark tests, not regression tests
  • Suggestion: Add assertions that verify expected graph structure (e.g., KNOWS is bidirectional, edge counts)
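As an illustration, the suggested structural assertions could look like this sketch over a plain adjacency map (the class and method names are illustrative, not ArcadeDB API):

```java
import java.util.*;

// Sketch of regression checks on graph structure, run here against a toy
// adjacency list rather than the real database.
public class GraphIntegrity {
  // Every KNOWS edge must be present in both directions.
  static boolean isBidirectional(Map<Long, Set<Long>> knows) {
    for (Map.Entry<Long, Set<Long>> e : knows.entrySet())
      for (long peer : e.getValue())
        if (!knows.getOrDefault(peer, Set.of()).contains(e.getKey()))
          return false;
    return true;
  }

  // Each undirected edge appears twice in the adjacency list.
  static long edgeCount(Map<Long, Set<Long>> knows) {
    return knows.values().stream().mapToLong(Set::size).sum() / 2;
  }
}
```

The real test would compare `edgeCount` against the expected total from the generator parameters and fail fast if generation was cut short.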

Performance Considerations

Good Practices

  1. Parallel insertion with PARALLEL bucket count
  2. Batch commits with COMMIT_EVERY = 5000
  3. Database reuse between runs

Concerns

1. Missing WAL Configuration

Design doc mentions WAL disabled during generation but implementation has no code for this.

Fix: Add WAL disable/enable in populateGraph() method

2. Index Lookups in Tight Loops

Millions of index lookups in KNOWS/LIKES generation could be slow. Consider batching or caching vertex references.

3. Memory Pressure

With 30K persons × 40 KNOWS edges = ~600K bidirectional edges, this could create memory pressure.

Suggestion: Add explicit transaction boundaries or increase commit frequency for KNOWS/LIKES generation.
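The commit-frequency suggestion boils down to the usual batch-commit pattern; a generic sketch, where `commit` is a hypothetical stand-in for the database's transaction calls:

```java
import java.util.function.LongConsumer;

// Sketch of the COMMIT_EVERY batching pattern: commit every N inserts
// plus one final commit for the tail batch.
public class BatchCommit {
  static long insertAll(long total, int commitEvery, LongConsumer commit) {
    long commits = 0;
    for (long i = 1; i <= total; i++) {
      // ... create vertex/edge i here ...
      if (i % commitEvery == 0) {
        commit.accept(i);
        commits++;
      }
    }
    if (total % commitEvery != 0) { // flush the remaining partial batch
      commit.accept(total);
      commits++;
    }
    return commits;
  }
}
```

Lowering `commitEvery` for the KNOWS/LIKES phases bounds the number of uncommitted edges held in memory at any point.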

Test Coverage

Missing Test Assertions

Current implementation has no assertions - it just runs queries and collects metrics.

Recommendation: Add verification tests to verify graph integrity, edge counts, and bidirectional relationships.

Specific Technical Issues

1. Potential NPE in formatRow

If iterations < 20, percentile index calculation could fail. Add validation for small sample sizes.
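One possible guard, sketched as a nearest-rank percentile whose index is clamped so small sample counts stay in bounds:

```java
import java.util.Arrays;

// Sketch of a percentile helper that cannot index out of bounds even
// when the benchmark runs fewer than 20 iterations.
public class Percentiles {
  static long percentile(long[] samples, double p) {
    if (samples.length == 0)
      throw new IllegalArgumentException("no samples");
    long[] sorted = samples.clone();
    Arrays.sort(sorted);
    // Nearest-rank index, clamped so p close to 1.0 or tiny arrays are safe.
    int idx = (int) Math.ceil(p * sorted.length) - 1;
    return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
  }
}
```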

2. Resource Leaks

IndexCursor might need explicit closing. Check if it implements AutoCloseable.

3. Data Quality

Birthday generation can create invalid dates like 1985-2-31. Consider using Java Time API.
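A sketch of the java.time approach, which by construction cannot yield dates like 1985-2-31 (the year range here is an assumption):

```java
import java.time.LocalDate;
import java.util.concurrent.ThreadLocalRandom;

// Sketch: pick a random epoch day inside a valid range; LocalDate
// guarantees the result is always a real calendar date.
public class Birthdays {
  static LocalDate randomBirthday(int minYear, int maxYear) {
    long start = LocalDate.of(minYear, 1, 1).toEpochDay();
    long end = LocalDate.of(maxYear, 12, 31).toEpochDay();
    return LocalDate.ofEpochDay(ThreadLocalRandom.current().nextLong(start, end + 1));
  }
}
```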

4. Benchmark Warmup

5 warmup iterations may be insufficient for JIT compilation. JMH typically uses 10-20.
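For reference, the warmup/measure split could be parameterized like this sketch, so the iteration counts are tunable rather than hardcoded:

```java
// Sketch of a warmup-then-measure loop: warmup runs are discarded
// (they exist only to warm caches and trigger JIT compilation),
// and only the measured runs are timed.
public class Warmup {
  static long[] run(Runnable query, int warmupIterations, int measuredIterations) {
    for (int i = 0; i < warmupIterations; i++)
      query.run(); // result discarded
    long[] nanos = new long[measuredIterations];
    for (int i = 0; i < measuredIterations; i++) {
      long t0 = System.nanoTime();
      query.run();
      nanos[i] = System.nanoTime() - t0;
    }
    return nanos;
  }
}
```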

Recommendations Summary

Must Fix:

  1. Remove hard-coded absolute paths from implementation plan
  2. Add Micrometer to ATTRIBUTIONS.md
  3. Add WAL disable/enable in populateGraph()
  4. Fix potential NPE in percentile calculation

Should Fix:
5. Add regression assertions to verify graph structure
6. Document verification strategy upfront (CLAUDE.md requirement)
7. Add resource cleanup for IndexCursor if needed
8. Fix date generation to avoid invalid dates

Nice to Have:
9. Implement actual parallel generation with database.async() as designed
10. Add proper logging instead of System.out
11. Increase warmup iterations to 10-20
12. Add batching/caching for index lookups

Conclusion

This is well-thought-out design work with excellent structure. Main issues:

  1. Missing TDD verification steps (required by CLAUDE.md)
  2. Hard-coded paths that will not work in CI/CD
  3. Missing dependency attribution
  4. Lack of regression test assertions

Once these are addressed, this will be a valuable addition to ArcadeDB's performance test suite.

Approval Status: ⚠️ Needs revisions before implementation begins.

Please update the plan documents to address the Must Fix items before proceeding with implementation.

@codacy-production

codacy-production bot commented Feb 11, 2026

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation | Diff coverage
-10.97%            | ∅ (not applicable)

Coverage variation details
Commit                           | Coverable lines | Covered lines | Coverage
Common ancestor commit (c3d17a5) | 83627           | 59578         | 71.24%
Head commit (9d36998)            | 113185 (+29558) | 68215 (+8637) | 60.27% (-10.97%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
Scope                | Coverable lines | Covered lines | Diff coverage
Pull request (#3410) | 0               | 0             | ∅ (not applicable)

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


@claude

claude bot commented Feb 11, 2026

Pull Request Review: Graph Benchmark Design & Implementation Plan

Overview

This PR adds comprehensive documentation for an LDBC-inspired graph benchmark. The design is well-thought-out and aligns well with ArcadeDB's architecture and the project's guidelines in CLAUDE.md.

✅ Strengths

1. Excellent Design Documentation

  • Clear separation of design and implementation concerns
  • Well-structured schema following LDBC SNB (8 vertex types, 14 edge types)
  • Thoughtful decisions on scale, metrics, and execution model

2. Adherence to Project Guidelines

  • ✅ Follows TDD approach (verification test in phase1_verifyGraphIntegrity)
  • ✅ Uses test-scoped Micrometer dependency (Apache 2.0 compatible)
  • ✅ Plans to update ATTRIBUTIONS.md
  • ✅ Located in correct module (engine/src/test/java/performance/)
  • ✅ Uses JUnit 5 with appropriate tags
  • ✅ Follows existing coding patterns (AssertJ assertions, final keyword usage)

3. Performance Considerations

  • Database reuse via close() instead of drop() - excellent for iterative benchmarking
  • WAL disabled during bulk loading, re-enabled for queries
  • Parallel buckets for insertion performance
  • Commit batching (5,000 records)
  • Warmup iterations before measurement

4. Comprehensive Coverage

  • Both SQL and OpenCypher queries side-by-side
  • Progressive complexity (lookups → simple traversals → complex traversals)
  • Realistic LDBC-inspired social network data model
  • Power-law distribution for KNOWS edges (realistic)
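For context, a bounded power-law degree can be drawn with inverse transform sampling; this sketch assumes a truncated continuous power law with exponent alpha (the plan's exact distribution may differ):

```java
import java.util.Random;

// Sketch: sample a KNOWS degree from a power law p(x) ∝ x^(-alpha)
// truncated to [min, max], via the inverse CDF.
public class PowerLawDegrees {
  static int sampleDegree(Random rnd, int min, int max, double alpha) {
    double a = Math.pow(min, 1 - alpha);
    double b = Math.pow(max, 1 - alpha);
    double u = rnd.nextDouble();
    // Inverse CDF maps uniform u back onto the truncated power law.
    return (int) Math.pow(a + u * (b - a), 1.0 / (1 - alpha));
  }
}
```

With alpha around 2, most persons get a handful of KNOWS edges while a few hubs get many, matching the "realistic" degree skew noted above.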

🔍 Issues & Recommendations

CRITICAL: Hardcoded Path in Implementation Plan

Location: Throughout Task 1-11 in implementation plan

Issue: The implementation plan contains hardcoded absolute paths specific to the author's machine:

cd /Users/frank/projects/arcade/worktrees/ldbc-bechmark

This appears in:

  • docs/plans/2026-02-11-graph-benchmark-impl.md:572
  • docs/plans/2026-02-11-graph-benchmark-impl.md:695
  • docs/plans/2026-02-11-graph-benchmark-impl.md:935
  • And many more locations (Tasks 1-11)

Recommendation: Update all commands to use a generic working directory variable or relative paths:

cd ${ARCADEDB_ROOT}  # or just assume proper working directory
mvn compile test-compile -pl engine -q

Medium Priority Issues

1. Typo in Path

The path contains "bechmark" instead of "benchmark":
/Users/frank/projects/arcade/worktrees/ldbc-bechmark

This typo propagates throughout the implementation plan.

2. Git Commit Commands in Implementation Plan

Location: Each task ends with commit commands

The implementation plan includes explicit git commit commands, but CLAUDE.md states:

"do not commit on git, I will do it after a review"

Recommendation: Either:

  • Remove the commit commands and just note completion milestones
  • Add a disclaimer that commits are for illustration only

3. Missing Import in Phase 1 Test

Location: docs/plans/2026-02-11-graph-benchmark-impl.md:1632

The implementation plan correctly notes adding:

import static org.assertj.core.api.Assertions.assertThat;

But this should be mentioned earlier in Task 1 skeleton imports, since it's used in Task 9's phase1_verifyGraphIntegrity test.

Recommendation: Move this import to Task 1's initial skeleton.

4. Micrometer Version Assumption

Location: docs/plans/2026-02-11-graph-benchmark-impl.md:322

The plan assumes ${micrometer.version} is defined as 1.16.2 in parent pom.xml.

Recommendation: Verify this version exists in the parent pom or specify the version explicitly if not defined.

5. Memory Considerations for Large Scale

The medium scale creates ~3M edges and ~780K vertices.

Recommendation: Consider documenting:

  • Minimum JVM heap size recommendations
  • Expected memory footprint
  • Optional environment variable for adjusting scale (e.g., BENCHMARK_SCALE=small|medium|large)

6. Test Isolation Concerns

The test preserves the database between runs which is good for benchmarking, but:

  • Concurrent test runs would conflict
  • Failed runs might leave corrupt data
  • No cleanup mechanism for developers

Recommendation: Add:

  • A @BeforeAll check for database version/schema compatibility
  • A system property to force database recreation: -Dbenchmark.forceCreate=true
  • Documentation on cleaning up: mvn clean will remove target/databases
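The proposed flag could be read with a one-liner; benchmark.forceCreate is the suggested name, not an existing property:

```java
// Sketch: -Dbenchmark.forceCreate=true on the Maven/JVM command line
// would make the test drop and regenerate the database.
public class BenchmarkFlags {
  static boolean forceCreate() {
    return Boolean.getBoolean("benchmark.forceCreate");
  }
}
```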

Minor/Nitpick Issues

7. Claude Attribution in Code

Location: docs/plans/2026-02-11-graph-benchmark-impl.md:292

The implementation plan references:

For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans

This is metadata for AI tooling. While the code correctly excludes Claude as author (per CLAUDE.md guidelines), this comment in the markdown might be confusing for human readers.

Recommendation: Consider moving this to a comment format or removing it from the final documentation.

8. Console Output Formatting

The fancy box-drawing characters in printReport() look great but may not render correctly on all terminals.

Recommendation: Test on Windows command prompt and consider a fallback format using plain ASCII.
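A possible fallback check, probing whether the default charset can encode box-drawing characters before using them:

```java
import java.nio.charset.Charset;

// Sketch: use Unicode box-drawing only when the default charset can
// encode it; otherwise fall back to plain ASCII dashes.
public class ReportChars {
  static String horizontalRule(int width) {
    boolean unicodeOk = Charset.defaultCharset().newEncoder().canEncode('─');
    return String.valueOf(unicodeOk ? '─' : '-').repeat(width);
  }
}
```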

9. Missing Edge Count in Stats

printDatasetStats() shows vertex counts but not edge counts by type, which would be valuable for understanding graph density.

Recommendation: Consider adding edge counts (though this may be expensive - so optional).

10. Query 4d: Shortest Path Robustness

Location: docs/plans/2026-02-11-graph-benchmark-impl.md:1560

The shortest path query might fail if no path exists between two random persons (disconnected graph).

Recommendation: Either:

  • Ensure high connectivity during KNOWS generation
  • Wrap in try-catch with logging
  • Use a maximum path length constraint in the query
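The maximum-path-length option can be illustrated with a depth-capped BFS over a plain adjacency map (a sketch, not the engine's shortest-path implementation):

```java
import java.util.*;

// Sketch: breadth-first search that gives up after maxDepth hops and
// returns -1 instead of failing when the two persons are disconnected.
public class CappedShortestPath {
  static int shortestPath(Map<Integer, List<Integer>> adj, int from, int to, int maxDepth) {
    if (from == to) return 0;
    Set<Integer> seen = new HashSet<>(List.of(from));
    List<Integer> frontier = List.of(from);
    for (int depth = 1; depth <= maxDepth; depth++) {
      List<Integer> next = new ArrayList<>();
      for (int v : frontier)
        for (int n : adj.getOrDefault(v, List.of()))
          if (seen.add(n)) {
            if (n == to) return depth;
            next.add(n);
          }
      frontier = next;
    }
    return -1; // disconnected, or farther than maxDepth hops
  }
}
```

Treating -1 as a legitimate benchmark result keeps the run going even when two randomly sampled persons share no path.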

📋 Testing Recommendations

Before merging, verify:

  1. ✅ Micrometer version is defined in parent pom
  2. ✅ Build succeeds: mvn clean install -pl engine -DskipTests
  3. ✅ No actual code changes in this PR (documentation only)
  4. ✅ All paths are generic/relative

🎯 Security Considerations

No security concerns - this is performance test code with synthetic data.

📊 Final Assessment

Quality Score: 8.5/10

This is high-quality design work with excellent alignment to project standards. The only significant issue is the hardcoded paths in the implementation plan, which makes it less useful as a generic implementation guide.

Recommendation: Approve with changes

  • Fix hardcoded paths before merge
  • Consider the other recommendations for improved usability

The actual implementation following this plan should produce a valuable benchmark tool for ArcadeDB.


Review conducted following CLAUDE.md guidelines for ArcadeDB project standards.

Single JUnit 5 test class that generates an LDBC Social Network
Benchmark-inspired graph (8 vertex types, 14 edge types, ~790K vertices)
and benchmarks ArcadeDB across creation, lookups, and traversals in
both SQL and OpenCypher. Database is preserved between runs so only the
first execution pays the generation cost.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@robfrank robfrank requested a review from lvca February 11, 2026 23:06
@robfrank robfrank added this to the 26.2.1 milestone Feb 11, 2026
@claude

claude bot commented Feb 11, 2026

Code Review: Graph Benchmark Implementation

Overview

This PR adds a comprehensive LDBC SNB-inspired graph benchmark for ArcadeDB. The implementation creates a realistic social network graph and benchmarks both SQL and OpenCypher queries.

Strengths ✅

  1. Excellent Documentation: The design document is thorough and well-structured
  2. Comprehensive Coverage: Tests 8 vertex types, 14 edge types, and multiple query patterns
  3. Good Test Structure: Uses JUnit 5 lifecycle properly with @Tag("benchmark") for selective execution
  4. Database Reuse: Smart approach to preserve database between runs
  5. Proper Licensing: Apache 2.0 headers included, Micrometer already in ATTRIBUTIONS.md

Critical Issues 🔴

1. Performance Issue: Repeated Index Lookups (Multiple locations)

Many generation methods perform individual index lookups inside tight loops, resulting in millions of index lookups during generation.

Recommendation: Cache vertex references in arrays to avoid ~3M+ index lookups.
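The caching suggestion amounts to memoizing the index lookup; a sketch where `lookup` is a hypothetical stand-in for the real index access:

```java
import java.util.*;
import java.util.function.LongFunction;

// Sketch: cache vertex references so each person is looked up once,
// not once per generated edge; `lookups` counts the expensive calls.
public class VertexCache {
  final Map<Long, Object> cache = new HashMap<>();
  long lookups = 0;

  Object get(long id, LongFunction<Object> lookup) {
    return cache.computeIfAbsent(id, k -> {
      lookups++;
      return lookup.apply(k);
    });
  }
}
```

With ~40 KNOWS edges per person, this turns roughly 40 lookups per vertex into one.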

2. Missing Exception Handling (GraphBenchmark.java:914-962)

The benchmark() method doesn't catch query exceptions. Queries could fail silently. Add try-catch blocks.

3. Invalid Date Generation (GraphBenchmark.java:600-604)

Birthday generation can create invalid dates like "2000-2-30". Use proper date formatting.

Major Issues 🟡

4. No Schema Validation on Database Reuse (GraphBenchmark.java:148-161)

If schema changes between runs, opening an existing database could cause errors. Add schema validation.

5. No Result Validation (GraphBenchmark.java:964-984)

The runQuery() method counts results but doesn't validate them. Add sanity checks for zero results on ID lookups.

Recommendations

Must Fix Before Merge:

  1. Add exception handling in benchmark method
  2. Fix date generation bug
  3. Verify edge creation API works correctly

Should Fix:
4. Implement vertex caching for performance
5. Add schema validation on database reuse
6. Add query result sanity checks

Nice to Have:
7. Remove unnecessary curly braces per CLAUDE.md style
8. Add configurable random seed for reproducibility
9. Increase warmup iterations from 5 to 10-20
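The configurable seed could be wired through a system property; benchmark.seed is a suggested name, not an existing setting:

```java
import java.util.Random;

// Sketch: -Dbenchmark.seed=123 makes data generation reproducible;
// without the property, a time-based seed keeps runs independent.
public class SeededRandom {
  static Random create() {
    long seed = Long.getLong("benchmark.seed", System.nanoTime());
    return new Random(seed);
  }
}
```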

Overall Assessment

This is a high-quality benchmark implementation with good design and comprehensive coverage. The main concerns are around performance optimizations and error handling. With the required fixes, this will be an excellent addition to ArcadeDB's test suite.

@codecov

codecov bot commented Feb 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 62.94%. Comparing base (c3d17a5) to head (9d36998).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3410      +/-   ##
==========================================
+ Coverage   62.44%   62.94%   +0.49%     
==========================================
  Files        1251     1251              
  Lines       83627    83627              
  Branches    17118    17118              
==========================================
+ Hits        52223    52639     +416     
+ Misses      24075    23604     -471     
- Partials     7329     7384      +55     

☔ View full report in Codecov by Sentry.