
waitUntilUp() raise Exception('Timeout reached when trying to connect') #13

Open
bigjr-mkkong opened this issue Dec 20, 2022 · 4 comments


@bigjr-mkkong

Python version: PyPy 7.3.3-beta0 with GCC 7.3.1 20180303 (Red Hat 7.3.1-5)
OS: self-configured Linux 6.0.0 with networking enabled (able to use wget, pip, and ping)

I was trying to execute ./run_all.sh pypy3 after configuring all prerequisites, and I encountered this problem. Here is the trace log:

[1/1] flaskblogging...
# /root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/bin/python -u /root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py --output /tmp/tmpwxtbfkph --inherit-enviroD
Command failed with exit code 1
Traceback (most recent call last):
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py", line 76, in <module>
    with context:
  File "/root/pypy3.7-v7.3.3-linux64/lib-python/3/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 36, in serving
    waitUntilUp(addr)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 66, in waitUntilUp
    raise Exception('Timeout reached when trying to connect')
Exception: Timeout reached when trying to connect
ERROR: Benchmark flaskblogging failed: Benchmark died
Traceback (most recent call last):
  File "/tmp/benchmark_env/site-packages/pyperformance/run.py", line 147, in run_benchmarks
    verbose=options.verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 191, in run
    verbose=verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 232, in _run_perf_script
    raise RuntimeError("Benchmark died")
RuntimeError: Benchmark died
@kmod (Contributor) commented Dec 20, 2022

This is usually because the server process died so the benchmarking client was unable to connect.

You can change the netutils.serving() call to add quiet=False to get the output from the server process; feel free to post the output if it's not clear what's going wrong.
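For context, the error in the first traceback comes from a readiness check: the benchmark harness polls the server's address until a TCP connection succeeds, and gives up after a timeout. A minimal sketch of such a poll loop is below; this is a hypothetical reconstruction for illustration (the function name, timeout, and interval are assumptions), not the actual code in benchmarks/bm_flaskblogging/netutils.py.

```python
import socket
import time

def wait_until_up(addr, timeout=10.0, interval=0.1):
    """Poll a (host, port) address until a TCP connection succeeds.

    Hypothetical sketch of a waitUntilUp()-style readiness check: retry
    connecting until the deadline, then raise the timeout error seen in
    the traceback. If the server process died at startup, no connection
    ever succeeds and this raises.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection closes on context exit; success means
            # the server is accepting connections.
            with socket.create_connection(addr, timeout=interval):
                return
        except OSError:
            time.sleep(interval)  # not up yet; retry until the deadline
    raise Exception('Timeout reached when trying to connect')
```

This is why the timeout by itself says nothing about *why* the server died, and why `quiet=False` (which surfaces the server's own output) is the useful next step.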

@bigjr-mkkong (Author) commented:

I just changed benchmarks/bm_flaskblogging/run_benchmark.py line 72

context = netutils.serving(ARGV, DATADIR, "127.0.0.1:8000")

into

context = netutils.serving(ARGV, DATADIR, "127.0.0.1:8000", quiet=False)

but it doesn't help. Here is the new trace log:

[1/1] flaskblogging...
# /root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/bin/python -u /root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py --output /tmp/tmprzha9q44 --inherit-environ PYPERFORMANCE_RUNID
Traceback (most recent call last):
  File "serve.py", line 9, in <module>
    from flask_blogging import SQLAStorage, BloggingEngine
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/flask_blogging/__init__.py", line 1, in <module>
    from .engine import BloggingEngine
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/flask_blogging/engine.py", line 8, in <module>
    from .processor import PostProcessor
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/flask_blogging/processor.py", line 6, in <module>
    import markdown
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/markdown/__init__.py", line 29, in <module>
    from .core import Markdown, markdown, markdownFromFile  # noqa: E402
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/markdown/core.py", line 26, in <module>
    from . import util
  File "/root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/site-packages/markdown/util.py", line 86, in <module>
    INSTALLED_EXTENSIONS = metadata.entry_points().get('markdown.extensions', ())
AttributeError: 'EntryPoints' object has no attribute 'get'
Traceback (most recent call last):
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py", line 76, in <module>
    with context:
  File "/opt/pypy3.7-v7.3.3-linux64/lib-python/3/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 36, in serving
    waitUntilUp(addr)
  File "/root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/netutils.py", line 66, in waitUntilUp
    raise Exception('Timeout reached when trying to connect')
Exception: Timeout reached when trying to connect
Traceback (most recent call last):
  File "/tmp/benchmark_env/site-packages/pyperformance/run.py", line 147, in run_benchmarks
    verbose=options.verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 191, in run
    verbose=verbose,
  File "/tmp/benchmark_env/site-packages/pyperformance/_benchmark.py", line 232, in _run_perf_script
    raise RuntimeError("Benchmark died")
RuntimeError: Benchmark died
Command failed with exit code 1
ERROR: Benchmark flaskblogging failed: Benchmark died

ERROR: No benchmark was run

@kmod (Contributor) commented Dec 20, 2022

I'm not sure what's happening in your traceback, but I'm guessing it's an issue either with your setup or with PyPy. Can you try setting up a CPython (normal Python) environment and seeing if the same command works there?

You are also using the old benchmark runner (sorry for the confusion; that should probably be removed). Could you try using the new one? Something like:

pypy3 -m pip install --user pyperformance
pypy3 -m pyperformance run --manifest pyston-macrobenchmarks/benchmarks/WEB_MANIFEST -b flaskblogging

@bigjr-mkkong (Author) commented:

Reinstalling importlib-metadata pinned to version 4.13.0, together with changing the netutils.serving() call to add quiet=False, solved this problem. See this related question: https://stackoverflow.com/questions/73929564/entrypoints-object-has-no-attribute-get-digital-ocean.
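The underlying AttributeError ('EntryPoints' object has no attribute 'get') comes from an API change: older versions of importlib-metadata returned a dict-like mapping from entry_points(), while newer ones (importlib-metadata >= 5.0, and the stdlib importlib.metadata on Python 3.10+) return an EntryPoints object that is filtered with select(group=...) instead of .get(). Pinning to 4.13.0 restores the old API that this version of Markdown expects. A version-tolerant sketch of the lookup (the helper name is mine, not Markdown's):

```python
from importlib import metadata

def markdown_extensions():
    """Look up 'markdown.extensions' entry points across importlib(-)metadata versions.

    Newer versions: entry_points() returns an EntryPoints object with
    .select(group=...). Older versions: it returns a dict of lists, so
    .get(group, ()) works. Markdown's old code assumed the latter.
    """
    eps = metadata.entry_points()
    if hasattr(eps, 'select'):
        # New API (importlib-metadata >= 3.6 / Python 3.10+)
        return list(eps.select(group='markdown.extensions'))
    # Old dict-based API that 'markdown/util.py' relied on
    return list(eps.get('markdown.extensions', ()))
```

Newer Markdown releases include an equivalent compatibility fix, so upgrading Markdown is an alternative to pinning importlib-metadata.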

Here is my final output:

[1/1] flaskblogging...
# /root/vmshare/python-macrobenchmarks/venv/pypy3.7-37b672b9fc89-compat-84ebb708f58d/bin/python -u /root/vmshare/python-macrobenchmarks/benchmarks/bm_flaskblogging/run_benchmark.py --output /tmp/tmpgmzrc9ys --inherit-enviroD
 * Serving Flask app "serve" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
............
WARNING: the benchmark result may be unstable
* the standard deviation (1.08 ms) is 14% of the mean (7.98 ms)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m pyperf system tune' command to reduce the system jitter.
Use pyperf stats, pyperf dump and pyperf hist to analyze results.
Use --quiet option to hide these warnings.

flaskblogging: Mean +- std dev: 7.98 ms +- 1.08 ms

Performance version: 1.0.5
Python version: PyPy 7.3.3-beta0 (Python 3.7.9) (64-bit) revision 7e6e2bb30ac5
Report on Linux-6.0.0+-x86_64-with-glibc2.2.5
Number of logical CPUs: 2
Start date: 2022-12-20 06:32:51.086244
End date: 2022-12-20 06:34:50.998737

### flaskblogging ###
Mean +- std dev: 7.98 ms +- 1.08 ms
