Toolset cleanup & remove TE database dependencies #3574
Conversation
@msmith-techempower FYI, aspnet-mono has been failing but it's not being reported by travis as failed.
msmith-techempower left a comment:
Looking good.
```python
    for the given tests.
    '''
    docker_buildargs = {
        'MAX_CONCURRENCY': str(max(benchmarker_config.concurrency_levels)),
```
Is this no longer needed because we're just leveraging nproc now?
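Presumably the build arg became redundant because the concurrency level can be derived inside the container at run time instead of being baked into the image. A minimal sketch of that idea (not the toolset's actual code):

```python
import multiprocessing
import subprocess

# Run-time alternatives to a MAX_CONCURRENCY build arg: ask the kernel for
# the CPU count when the container starts, rather than fixing it at build time.
max_concurrency = multiprocessing.cpu_count()

# Equivalent result by shelling out to nproc, as a dockerfile CMD might do:
max_concurrency = int(subprocess.check_output(['nproc']).strip())
```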
```python
        tag="techempower/tfb.test.%s" %
            test_docker_file.replace(".dockerfile", ""),
        buildargs=docker_buildargs,
        forcerm=True):
```
Do we want forcerm anymore?
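For reference, in docker-py's low-level build API `forcerm=True` removes intermediate containers even when a build step fails, whereas plain `rm=True` only cleans up after successful builds. A sketch of the call shape, with a hypothetical path and tag:

```python
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

# forcerm trades debuggability (failed intermediate layers are gone) for a
# build host that doesn't accumulate dead containers across many test builds.
for line in client.build(
        path='frameworks/Example/example',   # hypothetical test directory
        dockerfile='example.dockerfile',
        tag='techempower/tfb.test.example',
        buildargs={},
        forcerm=True):
    print(line)
```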
@michaelhixson and I broke Travis on Friday by decoupling the failure state of the toolset from the failure state of any test. This should probably be fixed by switching on the running mode.
toolset/run-tests.py (outdated)
```diff
-    benchmarker.run()
+    all_passed = benchmarker.run()
+    if config.mode == "verify":
+        return 0 if all_passed else 1
```
Why not just return `all_passed`?
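For context on why the mapping is explicit, a sketch under the assumption that the return value feeds `sys.exit()` (the helper call is a stand-in, not the toolset's actual code):

```python
import sys

def main():
    all_passed = True                 # stand-in for benchmarker.run()
    # Exit codes invert Python truthiness: 0 means success, nonzero failure.
    # "return all_passed" would make a fully passing run exit with status 1,
    # so the boolean has to be mapped explicitly.
    return 0 if all_passed else 1

if __name__ == '__main__':
    sys.exit(main())
```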
toolset/benchmark/benchmarker.py (outdated)
```python
        # Get a list of all known tests that we can run.
        all_tests = gather_remaining_tests(self.config, self.results)

        all_passed = True
```
Ugh, change this to something like `any_failed` and reverse the binary logic so that you can just bubble the return value up. That line of `return 0 if all_passed else 1` is so gross.
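A minimal sketch of what the reviewer is suggesting, assuming a simplified test loop (the helper names here are hypothetical, not the PR's actual code):

```python
import sys

def run_one(test):
    # Stand-in verifier; the real toolset runs the test and checks results.
    return bool(test)

def run_tests(tests):
    # Track failures instead of passes so the boolean maps straight onto
    # the process exit code: False -> 0 (success), True -> 1 (failure).
    any_failed = False
    for test in tests:
        if not run_one(test):
            any_failed = True
    return any_failed

if __name__ == '__main__':
    # The value bubbles up without any "0 if ... else 1" dance, since
    # sys.exit(False) exits 0 and sys.exit(True) exits 1.
    sys.exit(run_tests([1, 2, 3]))
```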
…ing [ci lang-only Scala]
These are all different for each test, too >_<

Regarding the scala travis failures: I think the

I'll open a separate PR for the scala tests
We no longer have any tests that use local dependencies. Everything pulls from a dockerhub dependency.

- Remove `__build_dependencies`
- Remove `__gather_dependencies`
- Remove build arguments (`TFB_DATABASE`, `MAX_CONCURRENCY`)
- Move `wrk` and `database` up and remove `toolset/setup/docker` and `toolset/setup`
- Remove the techempower database dependencies and build the database images locally

Right now this will only minimally affect travis runs, especially since the 3 database images are pulling from `ubuntu:16.04` instead of the official database images (some of the official images had some quirks I haven't worked out yet). For continuous benchmarking, this will have virtually no time impact. See the `toolset cleanup` commit and the `build database dockerfiles locally` commit. Yes, I hoorayed my own PR.

Grouped more tests together with `TESTLANG` from "Group more languages for travis" (#3575), bringing down the total time impact of building the database tests locally.

Edit: JavaScript, which is using all 3 databases, went from 13:52 to 15:10, and this is with using the `ubuntu:16.04` base and doing `apt update`s. I'd say this is a win, especially when we can pull from the official bases for the databases.