Allow G1 for javac workers. #12598


benjaminp (Collaborator)

G1 has a smoother memory profile than parallelGC, which is desirable for workers that users run locally, such as javac.

Unfortunately, dropping the parallelGC option entirely regresses Java build performance: Turbine does not run as a worker (see https://github.com/bazelbuild/bazel/issues/8006), and G1 seems to do worse than parallelGC for short-lived programs. So I simply moved the parallelGC option to the Turbine-specific JVM opts.
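
As a rough sketch of how that maps onto a toolchain definition, assuming the default_java_toolchain macro from tools/jdk and a Turbine-specific turbine_jvm_opts attribute (names here are assumptions, not the actual CL diff):

```
# Hypothetical tools/jdk/BUILD excerpt, not the real change.
load("@bazel_tools//tools/jdk:default_java_toolchain.bzl", "default_java_toolchain")

default_java_toolchain(
    name = "g1_worker_toolchain",
    # No explicit collector: long-lived javac workers fall back to the
    # JVM default, which is G1 on JDK 9+.
    jvm_opts = [],
    # Turbine is short-lived and does not run as a worker (issue #8006),
    # so it keeps the parallel collector.
    turbine_jvm_opts = ["-XX:+UseParallelGC"],
)
```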

I ran bazel-bench to test the performance impact of this change when building //src:bazel_nojdk:
```
  metric          mean                median          stddev      pval
    wall:  475.842s  ( -0.05%)  474.997s  ( -0.09%)   1.203s     0.00000
     cpu:   73.063s  ( -2.28%)   72.980s  ( -2.21%)   1.499s     0.40000
  system:   19.427s  ( +0.66%)   19.450s  ( +1.09%)   0.111s     0.40000
  memory:   89.667MB ( -0.74%)   90.000MB ( +0.00%)   0.471MB    0.00000
```
google-cla bot added the cla: yes label on Dec 2, 2020
comius (Contributor) commented Dec 2, 2020

cc @meisterT, @larsrc-google

meisterT (Member) commented Dec 2, 2020

Benjamin, for your benchmark, did you build a new version of java_tools and benchmark with that one? Just changing the code here won't have any immediate effect.

comius (Contributor) commented Dec 2, 2020

@meisterT you're not up to date: with the split of java_tools in #12552, you can configure java_toolchains without re-releasing java_tools (i.e., the whole configuration is in tools/jdk/BUILD).
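
For anyone following along, a hedged sketch of what such a user-side configuration might look like after #12552; the load path and macro are from @bazel_tools, while the target name is illustrative:

```
# BUILD in your own workspace; no java_tools release needed.
load("@bazel_tools//tools/jdk:default_java_toolchain.bzl", "default_java_toolchain")

default_java_toolchain(
    name = "my_java_toolchain",
    # Override only the worker JVM options; the compiler binaries still
    # come from the released java_tools.
    jvm_opts = ["-XX:+UseG1GC"],
)
```

It would then be selected with something like --java_toolchain=//:my_java_toolchain.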

meisterT (Member) commented Dec 2, 2020

@comius that's awesome news!

meisterT (Member) commented Dec 2, 2020

I tested on one of my machines, which gives a similar result:

```
  metric          mean                median          stddev      pval
    wall:  324.309s  ( -0.01%)  323.933s  ( -0.08%)   2.426s     0.00554
     cpu:  107.229s  ( -0.20%)  107.145s  ( -0.74%)   2.184s     0.21307
  system:   31.461s  ( -0.23%)   31.535s  ( -0.13%)   0.339s     0.21307
  memory:   92.900MB ( +0.11%)   93.000MB ( +0.00%)   0.300MB    0.00000
```

meisterT (Member) commented Dec 3, 2020

And on another more powerful machine:

```
  metric          mean                median          stddev      pval
    wall:   83.013s  ( -2.62%)   83.780s  ( -1.95%)   2.612s     0.96646
     cpu:  124.070s  ( +0.48%)  124.705s  ( +1.56%)   2.621s     0.66441
  system:   50.840s  ( -4.28%)   51.585s  ( -2.42%)   3.068s     0.66441
  memory:   99.200MB ( +0.00%)   99.000MB ( +0.00%)   0.927MB    0.00000
```

larsrc-google (Contributor) left a comment

This is a nice improvement. Benchmarks on the effect of compact strings would be nice, but that can be done afterwards.

benjaminp (Collaborator, Author)

> This is a nice improvement. Benchmarks on the effect of compact strings would be nice, but that can be done afterwards.

I did play around with compact strings a bit. I seemed to find a tiny regression—hence the code comment to that effect in this CL—but now I'm thinking a macro benchmark like building Bazel is too noisy to tell.
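
For context, compact strings (JEP 254) have been enabled by default since JDK 9 and are disabled with -XX:-CompactStrings. A hypothetical toolchain variant for that kind of A/B experiment (the target name is illustrative, not from the CL) could look like:

```
load("@bazel_tools//tools/jdk:default_java_toolchain.bzl", "default_java_toolchain")

# Hypothetical variant for benchmarking the compact-strings effect
# against the default toolchain.
default_java_toolchain(
    name = "no_compact_strings_toolchain",
    jvm_opts = ["-XX:-CompactStrings"],
)
```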

jin added the team-Performance label on Dec 4, 2020
meisterT (Member) commented Dec 4, 2020

> but now I'm thinking a macro benchmark like building Bazel is too noisy to tell

Yes, you have to crank up the number of runs in order to reduce the noise. I typically use 20 runs or so. Let's do that separately.

bazel-io closed this in d1f6b01 on Dec 4, 2020