
Evaluate using additional optimizations like LTO and PGO #22

Closed
zamazan4ik opened this issue Sep 12, 2024 · 7 comments

@zamazan4ik

Hi!

As I have done many times before, I decided to test the Profile-Guided Optimization (PGO) technique on this application. For reference, results for other projects are available at https://github.com/zamazan4ik/awesome-pgo . Since PGO has helped a lot for many other apps, I applied it to this project to see whether a performance win can be achieved. Here are my benchmark results.

This information may be interesting to anyone who wants to get more performance from the library in their own use cases.

Test environment

  • Fedora 40
  • Linux kernel 6.10.7
  • AMD Ryzen 9 5900X
  • 48 GiB RAM
  • SSD Samsung 980 Pro 2 TiB
  • Compiler: rustc 1.79.0
  • tex-fmt version: main branch at commit f2689ac7e2c713cfb6106220c09a44141770a638
  • Turbo Boost disabled

Benchmark

For benchmarking I use the benchmarks built into the project. For PGO optimization I use the cargo-pgo tool. All measurements use the same command, only with different binaries: taskset -c 0 tex_fmt tests/source/* tests/target/*.

taskset -c 0 is used to reduce the OS scheduler's influence on the results. All measurements were taken on the same machine, with the same background "noise" (as far as I can guarantee).
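For reference, the cargo-pgo build cycle behind these binaries looks roughly like this (a sketch, not the project's own script; the target path and binary name are illustrative):

```shell
#!/bin/sh
# Sketch of a cargo-pgo build cycle (assumes `cargo install cargo-pgo` and the
# llvm-tools-preview rustup component; paths here are illustrative).
set -e

if command -v cargo-pgo >/dev/null 2>&1; then
  # 1. Build an instrumented binary that writes profiles as it runs.
  cargo pgo build

  # 2. Run a representative workload; profiles land under target/pgo-profiles/.
  taskset -c 0 ./target/x86_64-unknown-linux-gnu/release/tex-fmt \
    tests/source/* tests/target/*

  # 3. Rebuild, letting rustc optimize with the collected profiles.
  cargo pgo optimize
else
  echo "cargo-pgo not installed; skipping the PGO cycle"
fi
```

The instrumented binary is noticeably slower (see Benchmark 4 above), which is expected: it only exists to collect profiles, not to be shipped.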

Results

I got the following results in hyperfine's format:

hyperfine --warmup 25 --min-runs 100 --prepare "cp -r ../tests/* tests" "taskset -c 0 ./tex_fmt_release tests/source/* tests/target/*" "taskset -c 0 ./tex_fmt_lto tests/source/* tests/target/*" "taskset -c 0 ./tex_fmt_optimized tests/source/* tests/target/*" "taskset -c 0 ./tex_fmt_instrumented tests/source/* tests/target/*"

Benchmark 1: taskset -c 0 ./tex_fmt_release tests/source/* tests/target/*
  Time (mean ± σ):      92.3 ms ±   1.2 ms    [User: 72.6 ms, System: 8.5 ms]
  Range (min … max):    90.6 ms …  98.6 ms    100 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs.

Benchmark 2: taskset -c 0 ./tex_fmt_lto tests/source/* tests/target/*
  Time (mean ± σ):      87.3 ms ±   1.0 ms    [User: 67.5 ms, System: 8.6 ms]
  Range (min … max):    85.5 ms …  91.1 ms    100 runs

Benchmark 3: taskset -c 0 ./tex_fmt_optimized tests/source/* tests/target/*
  Time (mean ± σ):      80.1 ms ±   0.6 ms    [User: 60.2 ms, System: 9.1 ms]
  Range (min … max):    78.3 ms …  81.2 ms    100 runs

Benchmark 4: taskset -c 0 ./tex_fmt_instrumented tests/source/* tests/target/*
  Time (mean ± σ):     133.0 ms ±   1.6 ms    [User: 110.6 ms, System: 9.8 ms]
  Range (min … max):   131.0 ms … 139.4 ms    100 runs

Summary
  taskset -c 0 ./tex_fmt_optimized tests/source/* tests/target/* ran
    1.09 ± 0.01 times faster than taskset -c 0 ./tex_fmt_lto tests/source/* tests/target/*
    1.15 ± 0.02 times faster than taskset -c 0 ./tex_fmt_release tests/source/* tests/target/*
    1.66 ± 0.02 times faster than taskset -c 0 ./tex_fmt_instrumented tests/source/* tests/target/*

where (binary sizes are included too, since they matter in some cases):

  • tex_fmt_release - default Release profile, 2.6 MiB
  • tex_fmt_lto - default Release profile + LTO, 2.4 MiB
  • tex_fmt_optimized - default Release profile + LTO + PGO-optimized, 2.4 MiB
  • tex_fmt_instrumented - default Release profile + LTO + PGO-instrumented, 4.5 MiB

According to the results, LTO and PGO measurably improve the application's performance.

Further steps

As a first easy step, I suggest enabling LTO only for Release builds, so as not to hurt the developer experience while working on the project: LTO adds noticeable compilation time. If you would rather leave the regular Release build unaffected as well, I suggest adding a separate release-lto profile that applies LTO on top of the regular release optimizations. Such a change makes life easier for maintainers and anyone else interested in building the most performant version of the application. Using ThinLTO should also help.
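Such an opt-in profile could look roughly like this in Cargo.toml (a sketch; the profile name release-lto is only a suggestion):

```toml
# Opt-in profile with fat LTO; regular `cargo build --release` stays fast.
[profile.release-lto]
inherits = "release"
lto = true          # or lto = "thin" for cheaper ThinLTO
codegen-units = 1   # lets LTO see more at once, at some compile-time cost
```

It would then be built with cargo build --profile release-lto, leaving the default release profile untouched for day-to-day development.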

Also, Post-Link Optimization (PLO) can be tested after PGO. It can be done by applying tools like LLVM BOLT to tex-fmt.
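A BOLT pass on top of the PGO build could be sketched like this (assuming llvm-bolt and perf are installed and the CPU/kernel support branch sampling; the binary path and flags are illustrative, not a tested recipe for tex-fmt):

```shell
#!/bin/sh
# Sketch of a post-link optimization pass with LLVM BOLT (illustrative paths).
set -e
BIN=./target/release/tex-fmt

if command -v llvm-bolt >/dev/null 2>&1 && [ -x "$BIN" ]; then
  # 1. Sample branches while running a representative workload.
  perf record -e cycles:u -j any,u -o perf.data -- \
    "$BIN" tests/source/* tests/target/*
  # 2. Convert the perf profile into BOLT's format.
  perf2bolt -p perf.data -o perf.fdata "$BIN"
  # 3. Re-link with profile-driven code layout.
  llvm-bolt "$BIN" -o "$BIN.bolt" -data=perf.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort
else
  echo "llvm-bolt or the binary is unavailable; skipping the BOLT pass"
fi
```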

Thank you.

P.S. It's just an idea, not an actual issue. Possibly, the Ideas category in GitHub Discussions is a better place for such proposals.

@WGUNDERWOOD
Owner

Thanks for this, it is very interesting. I will try to replicate these results myself and will definitely consider including this in release binaries if it makes a noticeable improvement.

@WGUNDERWOOD
Owner

Working on this in the lto-pgo branch, see cfdb644.

@WGUNDERWOOD
Owner

I have implemented this in extra/build.sh and extra/perf.sh -- could you take a look and let me know if it aligns with what you were thinking? I do get some improvement, but not as much as you seem to attain. This may be because I'm not using taskset (it seems to slow down the benchmark), or just a hardware difference.

Benchmark 1: tex-fmt
  Time (mean ± σ):      93.6 ms ±   0.7 ms    [User: 87.1 ms, System: 6.2 ms]
  Range (min … max):    92.6 ms …  96.3 ms    50 runs

Benchmark 1: tex-fmt (no PGO)
  Time (mean ± σ):      97.9 ms ±   1.6 ms    [User: 92.0 ms, System: 5.4 ms]
  Range (min … max):    96.3 ms … 105.3 ms    50 runs

@zamazan4ik
Author

zamazan4ik commented Sep 18, 2024

Excuse the late response.

Thank you a lot for implementing it in the scripts! Yep, that's exactly what I was thinking about. I have a few small notes:

I do get some improvement, but not as much as you seem to attain. This may be because I'm not using taskset (it seems to slow down the benchmark), or just a hardware difference.

I guess it's just a hardware difference. Anyway, we still get a nice user-time improvement (system time cannot be improved by PGO).

@WGUNDERWOOD
Owner

Thanks for the response!

  • I am running the script several times before optimization as I think this is what's recommended in the cargo-pgo README -- it says to run the binary for "at least a minute".
  • perf.sh#L33 is a comparison test without PGO -- the test with PGO begins at L19.
  • PGO is not currently applied to the GitHub release binaries, though I will include it if consistent performance gains are apparent.

@zamazan4ik
Author

I am running the script several times before optimization as I think this is what's recommended in the cargo-pgo README -- it says to run the binary for "at least a minute".

Oh, I see. No worries - in your case you can safely ignore it. That recommendation applies to larger applications. For example, when optimizing something large like a database with many internal subsystems, running the workload for a minute increases the chances that all (or almost all) subsystems are executed at least once during profiling (e.g. while running the database benchmarks). tex-fmt is a bit different :)

PGO is not currently applied for GitHub binaries at the moment, though I will include it if consistent performance gains are apparent.

Yep, sounds good

@WGUNDERWOOD
Owner

There have been some substantial improvements to the performance of tex-fmt over the last few releases, and I'm no longer seeing any advantages when using PGO. As such, I'm going to close this issue for now. Thank you very much for your help and for discussing this; I am more than happy to reopen the issue in the future if necessary.
