Enable ExistentialSpecializer by default #19820
Conversation
Enabling ExistentialSpecializer by default.
@swift-ci Please benchmark
@swift-ci Please test compiler performance
@swift-ci Please test source compatibility
@swift-ci Please test
Build comment file: Build failed before running benchmark.
I don’t think this is you:
Build failed
Build failed
nope, not me.
@slavapestov anything I need to do here?
@swift-ci Please test
@swift-ci Please test compiler performance
@swift-ci Please test source compatibility
@swift-ci Please benchmark
Build failed
Build comment file:
Performance: -O
Code size: -O
Performance: -Osize
Code size: -Osize
Performance: -Onone
How to read the data: The tables contain differences in performance which are larger than 8% and differences in code size which are larger than 1%. If you see any unexpected regressions, you should consider fixing the regressions before you merge the PR.
Noise: Sometimes the performance results (not code size!) contain false alarms. Unexpected regressions which are marked with '(?)' are probably noise. If you see regressions which you cannot explain you can try to run the benchmarks again. If regressions still show up, please consult with the performance team (@eeckstein).
Hardware Overview
Build failed
Build comment file:
Summary for master full
Unexpected test results, excluded stats for RxSwift, Fluent, Wordy, ReactiveSwift
No regressions above thresholds
Debug-batch
debug-batch brief: Regressed (0), Improved (1), Unchanged (delta < 1.0% or delta < 100.0ms) (2)
debug-batch detailed: Regressed (0), Improved (2), Unchanged (delta < 1.0% or delta < 100.0ms) (92)
Release
release brief: Regressed (0), Improved (0), Unchanged (delta < 1.0% or delta < 100.0ms) (3)
release detailed: Regressed (0), Improved (0), Unchanged (delta < 1.0% or delta < 100.0ms) (23)
@rajbarik Can you see what changed with SuffixAnyCollection and DropLastAnyCollection too? Perhaps an optimization is no longer kicking in after your specialization pass. The source compat failures are also swiftpm failing to build, so we can't get more results until that is fixed.
@slavapestov will have a look today.
I cannot reproduce the error on my side:
command: ./swift/utils/build-script -r --assertions --no-swift-stdlib-assertions --skip-build-benchmarks --build-ninja --build-subdir=MyBuild --swiftpm --llbuild --swiftpm-build-type=Release 2>&1 | tee /tmp/log.tx
Linking /Users/rajbarik/workspace/swift-master-04132018/build/MyBuild/swiftpm-macosx-x86_64/x86_64-apple-macosx10.10/release/SwiftPMPackageTests.xctest/Contents/MacOS/SwiftPMPackageTests
@slavapestov any comment here?
@swift-ci Please test
Build failed
Build failed
Yes. That file houses a large family of 108 benchmarks, recently refactored to have smaller workloads in #20666. If I understand @atrick correctly, it's possible that my refactoring changed their semantics in a way that allows ExistentialSpecializer to kick in when it should not.
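For context, here is a minimal hand-written sketch (with invented names, not taken from the benchmark) of the pattern ExistentialSpecializer targets: a function that takes a protocol-typed (existential) argument but is only called with concrete types, which the pass rewrites into a generic form that later optimizations can specialize and devirtualize.

```swift
protocol Pingable {
  func ping() -> Int
}

struct Counter: Pingable {
  var value = 0
  func ping() -> Int { return value + 1 }
}

// Before specialization: `p` is an existential, so `p.ping()` is dispatched
// through the protocol witness table at run time.
func pingExistential(_ p: Pingable) -> Int {
  return p.ping()
}

// Conceptually what the pass produces: a generic function constrained to the
// protocol. When the caller passes a concrete Counter, later optimizations can
// specialize this for Counter and devirtualize the call to ping().
func pingGeneric<T: Pingable>(_ p: T) -> Int {
  return p.ping()
}

print(pingExistential(Counter()))  // dynamic dispatch through the existential
print(pingGeneric(Counter()))      // candidate for specialization and static dispatch
```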
Thank you @shahmishal !
@palimondo @atrick I do not see any mistake related to ES. Is this expected behavior?
OK. Is it fine then to go ahead with this one until all bugs around ExistentialPerformance are fixed (after it passes all tests)?
I'm just highlighting possibly related information; the decision is for someone with a lot more knowledge than me… @atrick?
@swift-ci please test Linux platform (These seem to be quite flaky the past few days… re-running, after checking the error was unrelated to your changes, seems to usually resolve it.)
@rajbarik As long as Linux tests pass, you can merge this. It's possible that the Linux test was already broken on master when the PR test ran, but it may be that the specializer is exposing an issue that needs to be fixed before we can enable it.
Build failed
@atrick I notice the following test failure on Linux/x86-64 [see extracted log at the bottom]:
It is quite likely that ES may have caused this since protocols are passed as arguments (e.g., callDynamicSelfExistential). I think we can pass "-Xllvm -sil-disable-pass=ExistentialSpecializer" to suppress this. Do you know how to pass this option to the interpreter with the following run command?
@rajbarik You should be able to add any -Xllvm options, if the option existed: Or I think it would be ok to just do this:
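As a hedged illustration of how such an option could be passed: the file names and lit substitutions below are assumptions for the sketch, not the test's actual RUN line.

```swift
// Passing the option to a plain compile of the test (hypothetical file and output names):
//   swiftc -O -Xllvm -sil-disable-pass=ExistentialSpecializer dynamic_self.swift -o dynamic_self
//
// Or as lit RUN lines inside the test file; the %target-* substitutions here are
// assumptions about the test setup, not copied from the real RUN line:
// RUN: %target-build-swift -O -Xllvm -sil-disable-pass=ExistentialSpecializer %s -o %t/a.out
// RUN: %target-run %t/a.out | %FileCheck %s
```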
We could update the test not to rely on unstable destructor order instead. Maybe we could CHECK-DAG: those lines.
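A rough sketch of that suggestion, with invented class names: CHECK-DAG lets FileCheck match the deinit lines in any order, so the test no longer depends on destructor ordering.

```swift
class Foo {
  deinit { print("Foo deinit") }
}
class Bar {
  deinit { print("Bar deinit") }
}

func makeAndDrop() {
  _ = Foo()
  _ = Bar()
}
makeAndDrop()

// Plain CHECK lines would pin the order the deinits run in; CHECK-DAG accepts
// either order:
// CHECK-DAG: Foo deinit
// CHECK-DAG: Bar deinit
```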
@rajbarik Looking at dynamic_self.swift, it seems that we should be able to fix the test without changing the RUN line. I would think that slapping @inline(never) on the relevant methods would be enough.
I do not have a Linux machine, so my tests will be on Mac. I am testing @inline(never) right now and will tell you if I run into any issue.
Using optimizer tweaks still feels more brittle to me than changing the test not to rely on the order in which the cleanups get executed. Maybe the deinits could set a flag variable when they run, and we print the flags at a well-defined point afterward, or something like that.
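A minimal sketch of that approach, again with invented names: each deinit records that it ran, and the test prints the flags at one well-defined point, so the checked output does not depend on cleanup order.

```swift
var fooDeinitRan = false
var barDeinitRan = false

class Foo { deinit { fooDeinitRan = true } }
class Bar { deinit { barDeinitRan = true } }

func exercise() {
  _ = Foo()
  _ = Bar()
}
exercise()

// A single deterministic print after both objects have been destroyed, so the
// check no longer depends on destructor order.
// CHECK: deinits ran: true true
print("deinits ran: \(fooDeinitRan) \(barDeinitRan)")
```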
@jckarter I think the intention of the test is to correctly IRGen the bodies of those methods. If they get inlined, then the test is broken regardless of whether the checks pass.
@atrick @rajbarik I have re-examined the performance changes I saw in
@swift-ci test |
@swift-ci please benchmark |
@swift-ci benchmark |
@swift-ci Please test compiler performance |
Summary for master full
Unexpected test results, excluded stats for Deferred, Tagged, Wordy, ProcedureKit
Regressions found (see below)
Debug-batch
debug-batch brief: Regressed (0), Improved (0), Unchanged (delta < 1.0% or delta < 100.0ms) (3)
debug-batch detailed: Regressed (0), Improved (1), Unchanged (delta < 1.0% or delta < 100.0ms) (105)
Release
release brief: Regressed (0), Improved (0), Unchanged (delta < 1.0% or delta < 100.0ms) (3)
release detailed: Regressed (1), Improved (0), Unchanged (delta < 1.0% or delta < 100.0ms) (21)
@atrick safe to merge this now?
@swift-ci benchmark.
Performance: -O
Code size: -O
Performance: -Osize
Code size: -Osize
Performance: -Onone
How to read the data: The tables contain differences in performance which are larger than 8% and differences in code size which are larger than 1%. If you see any unexpected regressions, you should consider fixing the regressions before you merge the PR.
Noise: Sometimes the performance results (not code size!) contain false alarms. Unexpected regressions which are marked with '(?)' are probably noise. If you see regressions which you cannot explain you can try to run the benchmarks again. If regressions still show up, please consult with the performance team (@eeckstein).
Hardware Overview
Enable ExistentialSpecializer by default.