Conversation

@BD103
Member

@BD103 BD103 commented Dec 29, 2024

Objective

  • Part of bevyengine#16647.
  • The benchmarks for bezier curves have several issues and do not yet use the new bench! naming scheme.

Solution

  • Make all bevy_math benchmarks use the bench! macro for their name.
  • Delete the build_accel_cubic() benchmark, since it was an exact duplicate of build_pos_cubic().
  • Remove collect::<Vec<_>>() call in build_pos_cubic() and replace it with a for loop.
  • Combine all of the benchmarks that measure curve.position() under a single group, curve_position, and extract the common bench routine into a helper function.
  • Move the time calculation for the curve.ease() benchmark into the setup closure so it is not tracked.
  • Rename the benchmarks to be more descriptive of what they do.
    • easing_1000 -> segment_ease
    • cubic_position_Vec2 -> curve_position/vec2
    • cubic_position_Vec3A -> curve_position/vec3a
    • cubic_position_Vec3 -> curve_position/vec3
    • build_pos_cubic_100_points -> curve_iter_positions

Testing

  • cargo test -p benches --bench math
  • cargo bench -p benches --bench math
    • Then open ./target/criterion/report/index.html to see the report!

- Apply `bench!` macro to all names
- Rename benchmarks to be more descriptive of what they do
- Deduplicate `build_pos_cubic()` and `build_accel_cubic()`, since they both benchmark the exact same thing!
- Move calculation of time for curve easing outside main routine
- Move `curve.position()` benchmarks under the same group, and make their routine generic
- Remove unnecessary `Vec` allocation from `build_pos_cubic()`
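The "move calculation of time outside the main routine" commit can be illustrated with a minimal std-only sketch. The smoothstep function here is a hypothetical stand-in for the easing curve in bevy_math; in the actual benchmark, Criterion's setup closure (e.g. via `iter_batched`) plays the role of the untimed precomputation:

```rust
use std::time::Instant;

// Hypothetical easing function (smoothstep) standing in for the real
// easing curve measured by the `segment_ease` benchmark.
fn ease(t: f32) -> f32 {
    t * t * (3.0 - 2.0 * t)
}

fn main() {
    // Setup (untimed): precompute the 1000 sample parameters up front,
    // mirroring what moving the time calculation into the setup closure
    // accomplishes in the real benchmark.
    let ts: Vec<f32> = (0..1000).map(|i| i as f32 / 999.0).collect();

    // Measured routine: only the easing calls themselves.
    let start = Instant::now();
    let sum: f32 = ts.iter().map(|&t| ease(t)).sum();
    let elapsed = start.elapsed();

    println!("eased {} samples in {:?} (sum = {sum})", ts.len(), elapsed);
    assert_eq!(ease(0.0), 0.0);
    assert_eq!(ease(1.0), 1.0);
}
```

Keeping the parameter generation out of the timed section means the reported numbers reflect the easing work alone, not the cost of producing inputs.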
@BD103 BD103 added A-Math Fundamental domain-agnostic mathematical operations C-Benchmarks Stress tests and benchmarks used to measure how fast things are D-Modest A "normal" level of difficulty; suitable for simple features or challenging fixes S-Needs-Review Needs reviewer attention (from anyone!) to move forward labels Dec 29, 2024
Copy link
Contributor

@IQuick143 IQuick143 left a comment

Not entirely familiar with the benchmarking APIs, but the changes all seem reasonable.

@IQuick143 IQuick143 added S-Ready-For-Final-Review This PR has been approved by the community. It's ready for a maintainer to consider merging it and removed S-Needs-Review Needs reviewer attention (from anyone!) to move forward labels Dec 29, 2024
@alice-i-cecile alice-i-cecile added this pull request to the merge queue Dec 29, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to a conflict with the base branch Dec 29, 2024
auto-merge was automatically disabled December 29, 2024 20:39

Head branch was pushed to by a user without write access

@alice-i-cecile
Member

Thanks for formatting this for me: manual merge conflict resolution often causes that failure.

@BD103
Member Author

BD103 commented Dec 29, 2024

> Thanks for formatting this for me: manual merge conflict resolution often causes that failure.

No problem!

@alice-i-cecile alice-i-cecile added this pull request to the merge queue Dec 29, 2024
Merged via the queue into bevyengine:main with commit 291cb31 Dec 29, 2024
28 checks passed
ecoskey pushed a commit to ecoskey/bevy that referenced this pull request Jan 6, 2025
mrchantey pushed a commit to mrchantey/bevy that referenced this pull request Feb 4, 2025
@BD103 BD103 deleted the math-benches branch February 11, 2025 19:26
