Use pyperformance to run the benchmarks. #3
Conversation
This is amazing, thanks so much! I tested it and it works with our flow, where we use some of these benchmarks as a profiling task during our build. I'm having some trouble getting the
but it fails with:
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/macrobenchmark_env/lib/python3.8/site-packages/pyperformance/__main__.py", line 2, in <module>
pyperformance.cli.main()
File "/tmp/macrobenchmark_env/lib/python3.8/site-packages/pyperformance/cli.py", line 306, in main
_main()
File "/tmp/macrobenchmark_env/lib/python3.8/site-packages/pyperformance/cli.py", line 255, in _main
assert not options.inside_venv
AssertionError
Also, we like that our benchmarks currently record the time for every iteration, which lets us calculate p99 latency as well as warmup time. I'm happy to add this feature back myself once things get merged, but what do you think the best approach is? I was thinking of triggering this behavior from an environment variable so that pyperf wouldn't have to be changed, but it occurred to me that it might be possible to do this by recording all of the iterations as separate pyperf results.
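For illustration, here is a rough sketch of what the environment-variable approach could look like inside a pyperf benchmark script. The `RECORD_ALL_ITERATIONS` variable name and the workload are placeholders, not anything from this PR:

```python
# Sketch only: record per-iteration timings alongside pyperf, gated by a
# hypothetical RECORD_ALL_ITERATIONS env var, so pyperf itself is unchanged.
import json
import os
import time

import pyperf


def workload():
    # Stand-in for a real benchmark body (e.g. one request/response).
    sum(range(10_000))


def bench(loops):
    per_iteration = []
    for _ in range(loops):
        t0 = time.perf_counter()
        workload()
        per_iteration.append(time.perf_counter() - t0)

    # When the (hypothetical) env var is set, dump every iteration so p99
    # latency and warmup time can be computed later.
    out_path = os.environ.get("RECORD_ALL_ITERATIONS")
    if out_path:
        with open(out_path, "a") as f:
            json.dump(per_iteration, f)
            f.write("\n")

    # pyperf only needs the total elapsed time for all loops.
    return sum(per_iteration)


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_time_func("example", bench)
```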
@@ -0,0 +1,10 @@
[project]
name = "python-macrobenchmarks"
version = "0.9.0"  # XXX an arbitrary value; the repo doesn't have one
Why don't we call it 1.0.0 :)
I didn't want to steal your thunder. :) I'd be happy to set it to whatever you like.
Great!
After your feedback on #2, I started working on updating the benchmarks to optionally report the same info as before this PR. As part of that I'm mostly reverting my changes to run_all.sh and run_mypy.sh (and removing run_benchmarks.sh). Sorry for the confusion. I'm out for the next two weeks or so, but should have this PR updated soon after that.
This should not be hard to address. For now we can deal with it in the benchmarks code. I expect it would be worth looking into adding some functionality like this to pyperf (or pyperformance). Regardless, using an environment variable as a trigger would work, as would changing the scope of what's recorded. I'll look more closely when I'm back at work. Thanks for the feedback!
FYI, I added a
Ah, sorry I missed your changes! This looks great, thanks so much for updating it. Just a couple of small things I noticed when I tried to run it.
README.md
```shell
# Run the default benchmarks:
python3 -m pyperformance run --manifest ./benchmarks/MANIFEST
```
It looks like pyperformance main doesn't support manifests at relative directories:
$ ~/pyston2/build/system_env/bin/python -m pip install git+https://github.com/python/pyperformance
$ ~/pyston2/build/system_env/bin/python -m pyperformance run --manifest ./benchmarks/MANIFEST
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/__main__.py", line 2, in <module>
pyperformance.cli.main()
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 308, in main
_main()
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 285, in _main
benchmarks = _benchmarks_from_options(options)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 224, in _benchmarks_from_options
manifest = _manifest_from_options(options)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 218, in _manifest_from_options
return _manifest.load_manifest(options.manifest)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 28, in load_manifest
return BenchmarksManifest._from_sections(sections, resolve, filename)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 69, in _from_sections
self._add_sections(sections, resolve)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 113, in _add_sections
for filename, section, data in sections:
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 279, in _parse_manifest_file
filename = _utils.resolve_file(filename, relroot)
File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_utils.py", line 64, in resolve_file
raise NotImplementedError(relroot)
NotImplementedError: ./benchmarks
But `` `pwd`/benchmarks/MANIFEST `` seems to work.
FWIW, that's a bug in pyperformance. For now I've updated the README so the command at least works.
@@ -0,0 +1,21 @@
import sys
Looks like this is missing an `import json`.
Thanks for the feedback. I should be able to get those sorted out today.
Amazing, thank you!
(See #2.)
Changes:
Consequences:
Needs feedback:
Regarding that last one: it's a consequence of the switch to pyperf. I had to figure out what each benchmark was actually measuring, so that pyperf would track only that part (e.g. the request-response round trip vs. full app execution). For a few of the benchmarks, the subsequent refactor involved subtle changes to the logic of the benchmark script, so there is a chance I didn't get it right in a couple of cases. A careful look would be helpful.
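For illustration, this is the general shape of how a pyperf-based script can time only the interesting part. `FakeApp` and the benchmark name are made up for the example; the real benchmarks measure their own workloads:

```python
# Sketch only: keep setup (e.g. starting the app) outside the timed region
# so pyperf records just the request/response round trips.
import time

import pyperf


class FakeApp:
    # Hypothetical stand-in for the real application under test.
    def handle_request(self):
        sum(range(1_000))


def bench_requests(loops, app):
    # Only the round trips are timed; app startup is not.
    t0 = time.perf_counter()
    for _ in range(loops):
        app.handle_request()
    return time.perf_counter() - t0


if __name__ == "__main__":
    runner = pyperf.Runner()
    app = FakeApp()  # created once, before timing begins
    runner.bench_time_func("fake_requests", bench_requests, app)
```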
Note that until python/pyperformance#109 is merged, you'll have to use my branch of pyperformance in order to try out this change.