Use pyperformance to run the benchmarks. #3

Merged: 33 commits on Jan 20, 2022
Changes from 31 commits
Commits (33)
18ef346
Move the benchmarks into individual projects.
ericsnowcurrently Jul 1, 2021
3b1fbc9
Fix the manifest.
ericsnowcurrently Jul 20, 2021
431d5a3
Add a version to the base metadata.
ericsnowcurrently Jul 20, 2021
440762d
Add basic networking-related utils to share.
ericsnowcurrently Jul 20, 2021
5a683f7
Drop a bunch of unnecessary code in netutils.
ericsnowcurrently Jul 20, 2021
1959205
Clean up waitUntilUp().
ericsnowcurrently Jul 22, 2021
e5f8861
Finish updating the benchmarks to the pyperformance format.
ericsnowcurrently Jul 20, 2021
7e73fe0
Introduce metadata for "pre" and "post" scripts.
ericsnowcurrently Jul 22, 2021
50bae09
Drop "pre" and "post" script support.
ericsnowcurrently Jul 22, 2021
4c85a42
Use symlinks instead of the "libsdir" metadata.
ericsnowcurrently Jul 22, 2021
b131693
Don't rely on an internal Runner attr.
ericsnowcurrently Jul 22, 2021
e3a3970
Be clear about what is getting benchmarked.
ericsnowcurrently Jul 23, 2021
0919748
Update for changes to pyperformance.
ericsnowcurrently Nov 3, 2021
ef83537
metabase -> inherits
ericsnowcurrently Nov 5, 2021
8db13db
Ignore data generated during benchmark runs.
ericsnowcurrently Nov 5, 2021
8a6a487
Run pyperformance more uniformly.
ericsnowcurrently Nov 4, 2021
c1bbc79
Make the script better.
ericsnowcurrently Nov 5, 2021
3fd8188
Fix a numeric test in the script.
ericsnowcurrently Nov 5, 2021
18988a9
Print dividers.
ericsnowcurrently Nov 5, 2021
0061184
Prepare mypy even if not resetting it.
ericsnowcurrently Nov 5, 2021
094e170
Add --skip-setup CLI to the runner script.
ericsnowcurrently Nov 5, 2021
52a0858
Make the script executable.
ericsnowcurrently Nov 5, 2021
f810156
Run with mypyc 50 times.
ericsnowcurrently Nov 5, 2021
c71124f
Add --with-mypyc CLI to the runner script.
ericsnowcurrently Nov 5, 2021
c6072d6
Add --clean CLI to the runner script.
ericsnowcurrently Nov 6, 2021
30fe453
Merge branch 'main' into pyperformance
ericsnowcurrently Nov 9, 2021
08c9183
Make kinto_bench a pyperformance benchmark.
ericsnowcurrently Nov 9, 2021
1d82993
Show how to run the benchmarks.
ericsnowcurrently Nov 19, 2021
e6164a8
Allow running the benchmarks the old way.
ericsnowcurrently Dec 6, 2021
c533adb
Fix bm_kinto.
ericsnowcurrently Dec 7, 2021
7523c3f
Run all the benchmarks.
ericsnowcurrently Dec 7, 2021
9828f98
Use an absolute path in the README.
ericsnowcurrently Jan 20, 2022
766642a
Add a missing import.
ericsnowcurrently Jan 20, 2022
1 change: 1 addition & 0 deletions .gitignore
@@ -129,3 +129,4 @@ dmypy.json
.pyre/

results
benchmarks/bm_pytorch_alexnet_inference/data/dog.jpg
19 changes: 19 additions & 0 deletions README.md
@@ -1,2 +1,21 @@
# python-macrobenchmarks
A collection of macro benchmarks for the Python programming language


## usage

```shell
# Run the default benchmarks:
python3 -m pyperformance run --manifest ./benchmarks/MANIFEST
```

Contributor commented:

It looks like pyperformance main doesn't support manifests at relative directories:

```
$ ~/pyston2/build/system_env/bin/python -m pip install git+https://github.com/python/pyperformance
$ ~/pyston2/build/system_env/bin/python -m pyperformance run --manifest ./benchmarks/MANIFEST
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/__main__.py", line 2, in <module>
    pyperformance.cli.main()
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 308, in main
    _main()
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 285, in _main
    benchmarks = _benchmarks_from_options(options)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 224, in _benchmarks_from_options
    manifest = _manifest_from_options(options)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/cli.py", line 218, in _manifest_from_options
    return _manifest.load_manifest(options.manifest)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 28, in load_manifest
    return BenchmarksManifest._from_sections(sections, resolve, filename)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 69, in _from_sections
    self._add_sections(sections, resolve)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 113, in _add_sections
    for filename, section, data in sections:
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_manifest.py", line 279, in _parse_manifest_file
    filename = _utils.resolve_file(filename, relroot)
  File "/home/kmod/pyston2/build/system_env/lib/python3.8/site-packages/pyperformance/_utils.py", line 64, in resolve_file
    raise NotImplementedError(relroot)
NotImplementedError: ./benchmarks
```

But `` `pwd`/benchmarks/MANIFEST `` seems to work.

Contributor Author replied:

FWIW, that's a bug in pyperformance. For now I've updated the README so the command at least works.


The benchmarks can still be run without pyperformance. This will produce
the old results format.

```shell
# Run the benchmarks:
sh ./run_all.sh

# Run the mypy benchmark using mypyc:
sh ./run_mypy.sh
```
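
For reference, results from a pyperformance run can also be saved and compared between interpreters. A minimal sketch, assuming the standard `-o` option of `pyperformance run` and pyperf's `compare_to` command (the file names are illustrative):

```shell
# Save the results of a run:
python3 -m pyperformance run --manifest "`pwd`/benchmarks/MANIFEST" -o baseline.json

# Compare two result files:
python3 -m pyperf compare_to baseline.json patched.json
```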
21 changes: 21 additions & 0 deletions benchmarks/.libs/legacyutils.py
@@ -0,0 +1,21 @@
import sys
Contributor commented:

Looks like this is missing an `import json`.



def maybe_handle_legacy(bench_func, *args, loopsarg='loops', legacyarg=None):
    if '--legacy' not in sys.argv:
        return
    argv = list(sys.argv[1:])
    argv.remove('--legacy')

    kwargs = {}
    if legacyarg:
        kwargs[legacyarg] = True
    if argv:
        assert loopsarg
        kwargs[loopsarg] = int(argv[0])

    _, times = bench_func(*args, **kwargs)
    if len(argv) > 1:
        json.dump(times, open(argv[1], 'w'))

    sys.exit(0)
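
For context, a benchmark script is expected to call this helper at the top of its `__main__` block: with `--legacy [loops [outfile]]` it runs the bench function once and exits, otherwise it falls through to a normal pyperf run. A minimal sketch with a hypothetical `_bench_foo()` following the same `(elapsed, times)` convention as bm_aiohttp below:

```python
# Sketch only: _bench_foo and the "foo" name are hypothetical.
import pyperf
from legacyutils import maybe_handle_legacy


def _bench_foo(loops=1000, legacy=False):
    # Record a timestamp per iteration and return (elapsed, times).
    times = [pyperf.perf_counter()]
    for _ in range(loops):
        times.append(pyperf.perf_counter())
    return times[-1] - times[0], times


if __name__ == "__main__":
    # Handles "--legacy [loops [outfile]]" and exits; otherwise a normal pyperf run.
    maybe_handle_legacy(_bench_foo, legacyarg='legacy')

    runner = pyperf.Runner()
    runner.bench_time_func("foo", lambda loops: _bench_foo(loops)[0])
```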
88 changes: 88 additions & 0 deletions benchmarks/.libs/netutils.py
@@ -0,0 +1,88 @@
import contextlib
import ipaddress
import os.path
import socket
import subprocess
import time


@contextlib.contextmanager
def serving(argv, sitedir, addr, *,
            pause=None,
            kill=False,
            quiet=True,
            ):
    if os.path.exists(addr):
        sock = addr
        addr = None
        try:
            os.remove(sock)
        except FileNotFoundError:
            pass
    else:
        sock = None

    p = subprocess.Popen(
        argv,
        cwd=sitedir,
        stdout=subprocess.DEVNULL if quiet else None,
        stderr=subprocess.STDOUT if quiet else None,
    )
    try:
        if pause:
            time.sleep(pause)
        if not sock:
            try:
                waitUntilUp(addr)
            except NotImplementedError:
                sock = addr
                addr = None
        if sock:
            while not os.path.exists(sock):
                time.sleep(0.001)
        assert p.poll() is None, p.poll()
        yield
        assert p.poll() is None, p.poll()
    finally:
        p.terminate()
        if kill:
            p.kill()
        p.wait()


def waitUntilUp(addr, timeout=10.0):
    end = time.time() + timeout
    addr = parse_socket_addr(addr)
    started = False
    current = time.time()
    while not started or current <= end:
        try:
            with socket.create_connection(addr) as sock:
                return
        except ConnectionRefusedError:
            time.sleep(0.001)
        started = True
        current = time.time()
    raise Exception('Timeout reached when trying to connect')


def parse_socket_addr(addr, *, resolve=True):
    if not isinstance(addr, str):
        raise NotImplementedError(addr)
    host, _, port = addr.partition(':')

    if not host:
        raise NotImplementedError(addr)
    try:
        host = ipaddress.ip_address(host)
    except ValueError:
        raise NotImplementedError(addr)
    host = str(host)

    if not port:
        raise NotImplementedError(addr)
    if not port.isdigit():
        raise NotImplementedError(addr)
    port = int(port)

    return (host, port)
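
For context, `serving()` starts a benchmark's web server as a subprocess, waits until the given TCP address (or Unix socket path) accepts connections, and terminates the server on exit. A minimal usage sketch, mirroring how bm_aiohttp below uses it (the `serve.py` script, site directory, and port are illustrative):

```python
# Sketch only: assumes a sitedir containing a serve.py that listens on 127.0.0.1:8080.
import sys

import requests

import netutils

ARGV = [sys.executable, "serve.py"]


def fetch_once(sitedir):
    # The server is up before the body runs and is torn down afterwards.
    with netutils.serving(ARGV, sitedir, "127.0.0.1:8080"):
        return requests.get("http://127.0.0.1:8080/").text
```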
19 changes: 19 additions & 0 deletions benchmarks/MANIFEST
@@ -0,0 +1,19 @@
[benchmarks]

name metafile
aiohttp <local>
djangocms <local>
flaskblogging <local>
gevent_hub <local>
gunicorn <local>
json <local>
kinto <local>
mypy <local>
mypyc <local:mypy>
pycparser <local>
pylint <local>
pytorch_alexnet_inference <local>
thrift <local>

[group default]
-mypyc
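
For reference, the `-mypyc` entry in `[group default]` appears to exclude the mypyc variant from default runs; it can still be run explicitly by name, assuming pyperformance's standard `--benchmarks`/`-b` selection option:

```shell
# Run only the mypyc benchmark from this manifest (absolute manifest path, see the comment above):
python3 -m pyperformance run --manifest "`pwd`/benchmarks/MANIFEST" -b mypyc
```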
41 changes: 0 additions & 41 deletions benchmarks/aiohttp.py

This file was deleted.

1 change: 1 addition & 0 deletions benchmarks/base.toml
File renamed without changes.
1 change: 1 addition & 0 deletions benchmarks/bm_aiohttp/legacyutils.py
1 change: 1 addition & 0 deletions benchmarks/bm_aiohttp/netutils.py
12 changes: 12 additions & 0 deletions benchmarks/bm_aiohttp/pyproject.toml
@@ -0,0 +1,12 @@
[project]
name = "bm_aiohttp"
dependencies = [
"aiohttp",
"gunicorn",
"requests",
"uvloop",
]
dynamic = ["version"]

[tool.pyperformance]
inherits = ".."
70 changes: 70 additions & 0 deletions benchmarks/bm_aiohttp/run_benchmark.py
@@ -0,0 +1,70 @@
import os.path
import requests
import sys

import pyperf
import netutils


DATADIR = os.path.join(
    os.path.dirname(__file__),
    "data",
)
ARGV = [sys.executable, "serve.py"]


#############################
# benchmarks

def bench_aiohttp_requests(loops=3000):
    elapsed, _ = _bench_aiohttp_requests(loops)
    return elapsed


def _bench_aiohttp_requests(loops=3000, legacy=False):
    """Measure N HTTP requests to a local server.
    Note that the server is freshly started here.
    Only the time for requests is measured here. The following are not:
    * preparing the site the server will serve
    * starting the server
    * stopping the server
    Hence this should be used with bench_time_func()
    instead of bench_func().
    """
    start = pyperf.perf_counter()
    elapsed = 0
    times = []
    with netutils.serving(ARGV, DATADIR, "127.0.0.1:8080"):
        requests_get = requests.get
        for i in range(loops):
            # This is a macro benchmark for a Python implementation
            # so "elapsed" covers more than just how long a request takes.
            t0 = pyperf.perf_counter()
            requests_get("http://localhost:8080/blog/").text
            t1 = pyperf.perf_counter()

            elapsed += t1 - t0
            times.append(t0)
            if legacy and (i % 100 == 0):
                print(i, t0 - start)
        times.append(pyperf.perf_counter())
    if legacy:
        total = times[-1] - start
        print("%.2fs (%.3freq/s)" % (total, loops / total))
    return elapsed, times


#############################
# the script

if __name__ == "__main__":
    from legacyutils import maybe_handle_legacy
    maybe_handle_legacy(_bench_aiohttp_requests, legacyarg='legacy')

    runner = pyperf.Runner()
    runner.metadata['description'] = "Test the performance of aiohttp"
    runner.bench_time_func("aiohttp", bench_aiohttp_requests)
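
For reference, the `__main__` block above means the script can also be run on its own, either as a plain pyperf script or in the legacy mode handled by `maybe_handle_legacy()`. A sketch of a hypothetical invocation (the loop count is illustrative, and the benchmark's dependencies must already be installed):

```shell
cd benchmarks/bm_aiohttp

# Normal pyperf run:
python3 run_benchmark.py

# Legacy mode with an explicit loop count, as handled by maybe_handle_legacy():
python3 run_benchmark.py --legacy 100
```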
1 change: 1 addition & 0 deletions benchmarks/bm_djangocms/legacyutils.py
1 change: 1 addition & 0 deletions benchmarks/bm_djangocms/netutils.py
18 changes: 18 additions & 0 deletions benchmarks/bm_djangocms/pyproject.toml
@@ -0,0 +1,18 @@
[project]
name = "bm_djangocms"
dependencies = [
"Django",
"django-cms",
"djangocms-bootstrap4",
"djangocms-file",
"djangocms-googlemap",
"djangocms-installer",
"djangocms-snippet",
"djangocms-style",
"djangocms-video",
"requests",
]
dynamic = ["version"]

[tool.pyperformance]
inherits = ".."