
Added initial benchmarks #551

Merged · 18 commits into dimforge:master · Jul 18, 2024

Conversation

@Vrixyz (Contributor) commented Jul 2, 2024

Objective

bevy_rapier currently has no benchmarks, which makes it difficult to refactor with confidence or to evaluate performance improvements. This PR lays a foundation to address that.

  • Objective 1: take inspiration from the bevy_xpbd/avian benchmarks; it's useful to aim for benchmarks comparable with other Bevy physics crates. That said, bevy_xpbd's tests are simplistic.
  • Objective 2: implement the same benchmarks as rapier, to have a baseline to compare against.
    • criterion is not a good fit here because these benches are heavy and tailored for a single run, in order to check rapier's per-step update timings, so I dropped criterion:
      • Divan is used for quick benchmarking: it runs the same App multiple times and outputs statistics on update times (see the sketch after this list).
      • A custom bench is used for more granular measurements: it can compute the difference between rapier's inner update time and the total Bevy update time.
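For illustration, a minimal sketch of the divan side, assuming the benches_common helpers from this PR (default_app, wait_app_start) plus a hypothetical setup_cubes function standing in for the actual scene setup; the real benches may differ:

use benches_common::default_app;
use benches_common::wait_app_start;
use divan::Bencher;

fn main() {
    // Collect and run every #[divan::bench] in this binary (requires `harness = false`).
    divan::main();
}

#[divan::bench]
fn cubes_3x3_30_steps(bencher: Bencher) {
    bencher.bench_local(|| {
        // Each sample builds a fresh App, spawns the cube grid, then runs
        // 30 update steps; divan times the whole closure and reports statistics.
        let mut app = default_app();
        setup_cubes(&mut app, 3); // hypothetical helper spawning a 3x3 cube grid
        wait_app_start(&mut app);
        for _ in 0..30 {
            app.update();
        }
    });
}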

Avian comparison

I doubt these benchmarks are directly comparable with avian's: avian's time management is customized, so it might not be running at full speed in this setup.

bevy_rapier

Benchmarking cubes 3x3, 30 steps: Warming up for 3.0000 s

Warning: Unable to complete 100 samples in 20.0s. You may wish to increase target time to 22.0s, enable flat sampling, or reduce sample count to 60.
cubes 3x3, 30 steps     time:   [2.8347 ms 2.8435 ms 2.8534 ms]
Found 11 outliers among 100 measurements (11.00%)
  1 (1.00%) low mild
  3 (3.00%) high mild
  7 (7.00%) high severe

cubes 5x5, 30 steps     time:   [8.0240 ms 8.0784 ms 8.1379 ms]
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe

cubes 10x10, 30 steps   time:   [105.89 ms 107.98 ms 110.40 ms]
Found 8 outliers among 100 measurements (8.00%)
  4 (4.00%) high mild
  4 (4.00%) high severe

Avian

cubes 3x3, 30 steps     time:   [49.782 ms 50.203 ms 50.708 ms]
                        change: [+1638.8% +1656.7% +1676.8%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
  2 (2.00%) high mild
  3 (3.00%) high severe

cubes 5x5, 30 steps     time:   [109.87 ms 110.38 ms 111.03 ms]
                        change: [+1254.3% +1266.3% +1278.4%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
  7 (7.00%) high mild
  5 (5.00%) high severe

Benchmarking cubes 10x10, 30 steps: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 20.0s. You may wish to increase target time to 39.4s, or reduce sample count to 50.
cubes 10x10, 30 steps   time:   [390.36 ms 391.82 ms 393.67 ms]
                        change: [+254.70% +262.86% +270.08%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
  3 (3.00%) high mild
  6 (6.00%) high severe


Suspiciously bad performance

I looked into pipeline.counters.step_time.time(), originally to compare it with the total update time and get an idea of the overhead from bevy/bevy_rapier, but I got surprising results (a sketch of reading this counter follows the list below):

  • on the rapier testbed, a rapier step takes 116 ms on the first iteration, then steadily settles towards ~4 ms.
  • on bevy_rapier, I had to tune the pyramid count down to 3 (as opposed to 40 ⚠️), and still got 160 ms 😱 for a rapier step.
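For context, a minimal sketch of reading that counter from a Bevy system; it assumes RapierContext keeps rapier's PhysicsPipeline in its public pipeline field (as it does in this codebase) and that rapier's profiling counters are enabled (they can be turned on with counters.enable() if they aren't already):

use bevy::prelude::*;
use bevy_rapier3d::plugin::RapierContext;

// Print the duration of the last rapier step, as reported by rapier's own
// profiling counters (this is the value quoted in milliseconds above).
fn log_rapier_step_time(context: Res<RapierContext>) {
    println!(
        "rapier step time: {}",
        context.pipeline.counters.step_time.time()
    );
}

Registering this with app.add_systems(Update, log_rapier_step_time) prints the per-step cost on every frame, which can then be compared against the total Bevy update time.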
Flamegraph

flamegraph.zip

(flamegraph image)

The numbers above were reported with f3a3d79.

bevy_rapier3d/Cargo.toml — outdated review thread (resolved)
@Vrixyz Vrixyz marked this pull request as ready for review July 4, 2024 13:12
@Vrixyz Vrixyz requested a review from sebcrozet July 4, 2024 13:18
@sebcrozet (Member) commented:

Both approaches make sense: using divan for smaller benchmarks to quickly detect performance regressions, and the custom bench for larger simulations and per-step analysis (ideally outputting some sort of graph).

Is there any way to initialize the scene, and then have divan benchmark each frame of the bevy world update?

benches_common/src/lib.rs — outdated review thread (resolved)
@Vrixyz (Contributor, Author) commented Jul 10, 2024

Is there any way to initialize the scene, and then have divan benchmark each frame of the bevy world update?

I implemented that in 2cda622.
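For reference, a rough sketch of that pattern with divan (the actual implementation is in 2cda622; setup_cubes is a hypothetical stand-in for the scene setup):

use benches_common::default_app;
use benches_common::wait_app_start;
use divan::Bencher;

#[divan::bench]
fn cubes_3x3_update_only(bencher: Bencher) {
    // Build the scene and run the app startup once, outside the measured closure.
    let mut app = default_app();
    setup_cubes(&mut app, 3); // hypothetical scene-setup helper
    wait_app_start(&mut app);

    // divan then samples individual world updates on the pre-warmed scene.
    bencher.bench_local(move || app.update());
}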

@Vrixyz Vrixyz requested a review from sebcrozet July 10, 2024 14:15
@sebcrozet (Member) commented:

Instead of benches_common being its own crate, it should just be a subdirectory in the bevy_rapier3d/benches folder.

@Vrixyz (Contributor, Author) replied:

I followed the same hierarchy as avian: https://github.com/Jondolf/avian/tree/main/crates; I'd have preferred a way to enable a feature on benches_common to support either 2d or 3d, but I didn't try that.

Either way, it's not too much code, and we can probably focus on 3d for now.

@Vrixyz (Contributor, Author) commented Jul 12, 2024

That actually allows me to share some logic with the custom benchmark.

I added this logic to the custom benchmark, and I call into that custom benchmark library from the "simpler" benches.

custom_benches/src/main.rs — outdated review thread (resolved)
@sebcrozet (Member) commented:

The cargo bench section should be moved to the last position in this file. The Custom benches section should be a bit more wordy, saying that it runs benchmarks manually, without bench harnesses, to obtain more detailed results on long-running simulations.

The cargo bench section should say something like "For short-lived benchmarks based on statistical analysis, benchmarks can be run through the divan bench harness."

@Vrixyz (Contributor, Author) replied:

Side note: I imagine we could use the cargo bench approach (with our own custom bench harness) for better control over the custom benchmarks, e.g. running specific custom benches, but I'd need to research that a bit; it can probably be a follow-up.
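For the record, a rough sketch (not from this PR) of what such a hand-rolled harness entry point could look like; with harness = false in Cargo.toml, the arguments after cargo bench -- reach this main directly, so a name filter can select specific custom benches:

fn main() {
    // Optional substring filter passed through `cargo bench -- <filter>`.
    let filter = std::env::args().nth(1).unwrap_or_default();

    // Hypothetical bench list; the real scenes live in bevy_rapier_benches3d.
    let benches: &[(&str, fn())] = &[
        ("pyramids_3", || println!("run the 3-pyramid custom bench here")),
        ("pyramids_40", || println!("run the 40-pyramid custom bench here")),
    ];

    for &(name, bench) in benches {
        // An empty filter matches every bench name, so all benches run by default.
        if name.contains(filter.as_str()) {
            bench();
        }
    }
}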

Comment on lines 1 to 13
//! Translated from rapier benchmark.
//!
//! <https://github.com/dimforge/rapier/blob/87ada34008f4a1a313ccf8c3040040bab4f10e69/benchmarks3d/many_pyramids3.rs>

use benches_common::default_app;
use benches_common::wait_app_start;
use bevy::prelude::*;
use bevy_rapier3d::dynamics::RigidBody;
use bevy_rapier3d::geometry::Collider;
use bevy_rapier3d::math::Vect;
use bevy_rapier3d::plugin::RapierContext;

pub fn create_pyramid(commands: &mut Commands, offset: Vect, stack_height: usize, rad: f32) {
@Vrixyz (Contributor, Author) commented:

I'm not a fan of code duplication with bevy_rapier3d/benches ; but I'm not sure how to solve it.

@Vrixyz (Contributor, Author) commented:

I added shared setup code to the custom benchmark bevy_rapier_benches3d
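For illustration, a sketch of what that shared pyramid setup can look like, loosely translated from rapier's many_pyramids3 benchmark linked above (the exact code in bevy_rapier_benches3d may differ):

use bevy::prelude::*;
use bevy_rapier3d::dynamics::RigidBody;
use bevy_rapier3d::geometry::Collider;
use bevy_rapier3d::math::Vect;

// Spawn a triangular "pyramid" of dynamic cubes in the XY plane, shifted by `offset`.
pub fn create_pyramid(commands: &mut Commands, offset: Vect, stack_height: usize, rad: f32) {
    let shift = rad * 2.0;
    for i in 0..stack_height {
        for j in i..stack_height {
            let fi = i as f32;
            let fj = j as f32;
            // Each row is shifted half a cube inward and one cube shorter, like a brick wall.
            let x = fi * shift / 2.0 + (fj - fi) * shift;
            let y = fi * shift;
            commands.spawn((
                TransformBundle::from_transform(Transform::from_translation(
                    Vec3::new(x, y, 0.0) + offset,
                )),
                RigidBody::Dynamic,
                Collider::cuboid(rad, rad, rad),
            ));
        }
    }
}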

@Vrixyz Vrixyz requested a review from sebcrozet July 12, 2024 09:19
@Vrixyz Vrixyz removed the request for review from sebcrozet July 12, 2024 09:21
@Vrixyz Vrixyz requested a review from sebcrozet July 12, 2024 12:38
@sebcrozet (Member) left a review comment:

Looking good, thanks!

@Vrixyz Vrixyz merged commit 188dcc2 into dimforge:master Jul 18, 2024
5 checks passed
@Vrixyz Vrixyz changed the title from "added initial benchmarks, similar to bevy_xpbd" to "added initial benchmarks" Jul 18, 2024
@Vrixyz Vrixyz changed the title from "added initial benchmarks" to "Added initial benchmarks" Jul 18, 2024
Vrixyz added a commit to Vrixyz/bevy_rapier that referenced this pull request Aug 8, 2024
@Vrixyz Vrixyz mentioned this pull request Aug 13, 2024