
Bump up the timeout in test_pipes to hopefully reduce spurious CI failures #1715

Merged: 1 commit, Sep 5, 2020

Conversation

@njsmith (Member) commented Sep 5, 2020

No description provided.

@codecov (bot) commented Sep 5, 2020

Codecov Report

Merging #1715 into master will not change coverage.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master    #1715   +/-   ##
=======================================
  Coverage   99.61%   99.61%           
=======================================
  Files         115      115           
  Lines       14445    14445           
  Branches     1106     1106           
=======================================
  Hits        14389    14389           
  Misses         41       41           
  Partials       15       15           
Impacted Files                   Coverage Δ
trio/tests/test_subprocess.py    100.00% <100.00%> (ø)

@altendky (Member) left a comment

Over in QTrio I had plenty of timeouts on subprocessed pytest tests, including ones that did nearly nothing. They seemed to cluster a bit: there would be a few successful builds, and then one job would fail three or four such subprocessed tests. Bad luck getting slow workers? Overloaded? Bumping timeouts from 10 to 40 seconds seemed to end the flakiness, so 30 sounds reasonable to me.

@altendky altendky merged commit 301a1df into python-trio:master Sep 5, 2020
@njsmith njsmith deleted the deflake-test_pipes branch September 5, 2020 21:29
@njsmith (Member, Author) commented Sep 5, 2020

Yeah, I think these free CI systems are oversubscribed and have really noisy neighbors, so sometimes your whole VM will just freeze for a few seconds and then resume. So you need pretty generous timeouts.
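
For context, here is a minimal sketch of the kind of change being discussed, not the actual diff from this PR: it assumes the test wraps its body in trio.fail_after and that 30 seconds is the new budget. The test name and the pipe round-trip below are placeholders for illustration only.

```python
import trio

# A generous timeout absorbs noisy-neighbor stalls on shared CI workers.
# The 30-second value comes from the discussion above; everything else here
# is a hypothetical stand-in, not the real test_pipes code.
TIMEOUT = 30  # seconds


async def test_pipes_like_example():
    with trio.fail_after(TIMEOUT):  # raises trio.TooSlowError if the block overruns
        # Stand-in for the real pipe round-trip exercised by the actual test.
        send, receive = trio.open_memory_channel(1)
        async with send, receive:
            await send.send(b"ping")
            assert await receive.receive() == b"ping"


if __name__ == "__main__":
    trio.run(test_pipes_like_example)
```

The key point is simply that the timeout constant is large relative to how long the test should take, so a multi-second VM stall does not turn into a spurious failure.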
