Investigate flaky parallel/test-fs-read-stream-concurrent-reads #22339
Reopen:
CI stress test: https://ci.nodejs.org/job/node-stress-single-test/2088/ RUN_TESTS: …
Stress test above confirms flakiness: 15 failures in 100 runs. Stress test without parallelism: https://ci.nodejs.org/job/node-stress-single-test/2090/ RUN_TESTS: …
Easily recreated on AIX. Looking at the test case, especially the key assertion point:
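The snippet itself is not reproduced in this thread; the following is a paraphrase of the check in question, with illustrative names and the placeholder numbers that come up later in the discussion (a 170-byte fixture read 1000 times):

```js
'use strict';
const assert = require('assert');

// Paraphrase of the assertion, not the test's verbatim code. The test
// reads a small fixture many times and asserts that the memory retained
// by the chunks' backing buffers stays within a factor of 3 of the
// total content that was read.
const N = 1000;                // number of reads performed
const contentLength = 170;     // fixture size in bytes
const retainedMemory = 510000; // sum of byteLength over distinct buffers

assert(retainedMemory <= contentLength * N * 3,
       `Retained memory expected to be low, got ${retainedMemory}` +
       ` for ${N} reads of size ${contentLength}`);
```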
I was wondering what the significance of the factor 3 would be. I ran the test for several iterations with varying iteration counts and buffer lengths, with and without the fix (which the test was verifying), and have these observations:
So probably the number 3 should be relaxed by taking these fluctuations into account. /cc @addaleax
It means that we allocate at most 3 times as much memory as we need. The fix in #25415, where we increase the number to 8, seems somewhat excessive? I think having that kind of memory overhead in Node.js would be considered a bug…
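(Concretely, with the numbers from the failure discussed below, a 170-byte fixture read 1000 times, that allowance works out to 170 * 1000 * 3 = 510000 bytes of retained buffer memory.)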
@addaleax - thanks. If you look at the failure message:
It shows that the actual consumption stepped outside the assumed value of 510000 (170 * 1000 * 3). The reason I have identified (empirically) is that a few file reads took more than one iteration. As there is no guarantee on which reads complete atomically (in one shot) and which ones take two, I took the worst case of 2 for every read, and hence arrived at the number 8. If you think this is excessive, or over-generous to the extent of losing the meaning of this test, please suggest a better way; I am happy to modify it. I have confirmed that the current failures (on AIX and FreeBSD) do not reveal any issues with the original fix: with and without the fix I see a drastic difference (on the order of a few chunks vs. on the order of 1000 chunks), so it is just a matter of fine-tuning the expectation without causing flakes now and then.
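To illustrate the split-read effect described above, here is a minimal self-contained sketch, assuming bookkeeping similar to the test's (the `arrayBuffers` name and the file choice are illustrative, not quoted from the test):

```js
'use strict';
const fs = require('fs');

// Hypothetical illustration: a read that is delivered in two 'data'
// events may place its chunks in two different pooled ArrayBuffers,
// so counting every distinct backing buffer in full can roughly double
// the retained memory compared to a single-shot read.
const arrayBuffers = new Set();

fs.createReadStream(__filename) // any small file works for the demo
  .on('data', (chunk) => {
    arrayBuffers.add(chunk.buffer); // one chunk -> one backing buffer
  })
  .on('end', () => {
    const retained = [...arrayBuffers]
      .map((ab) => ab.byteLength)
      .reduce((a, b) => a + b, 0);
    console.log(`${arrayBuffers.size} backing buffer(s), ` +
                `${retained} bytes retained`);
  });
```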
@gireeshpunathil I think that is a part of the issue, yes – whether a read finishes in one piece does matter, you’re right about that.

I think the reason why this matters is that the test, in its current form, starts a new read for each chunk, not when a file stream ends – so it’s possible that the number of concurrent reads increases over time, depending on what the underlying fs operations do, which makes the test less reliable.

Also, the main issue for the high memory-to-content ratio that we are already seeing seems to be that we only use a handful of …

So, what I would suggest is:
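The concrete suggestions are not reproduced in this thread, but a minimal sketch of the scheduling change described above might look like this (the helpers `startReadPerChunk`, `startReadPerStream`, `started`, and `N` are assumed names for illustration, not the actual patch):

```js
'use strict';
const fs = require('fs');

const filename = __filename; // stand-in for the test fixture
const N = 6;                 // total reads to perform
let started = 0;

// Current form: a new read starts on every 'data' chunk, so a read
// delivered in several chunks spawns several new reads, and the number
// of in-flight reads can grow over time.
function startReadPerChunk() {
  ++started;
  fs.createReadStream(filename).on('data', () => {
    if (started < N) startReadPerChunk(); // fires once per chunk
  });
}

// Suggested form: start the next read only when a stream ends, which
// keeps the number of concurrent reads stable.
function startReadPerStream() {
  ++started;
  fs.createReadStream(filename).on('end', () => {
    if (started < N) startReadPerStream(); // fires once per completed read
  });
}

startReadPerStream();
```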
In parallel/test-fs-read-stream-concurrent-reads.js the number of data chunks used is being tested when a few concurrent reads are performed. The number of chunks can fluctuate based on the number of concurrent reads as well as the amount of data that was read in one shot. Accommodate these variations in the test. Fixes: #22339 PR-URL: #25415 Reviewed-By: James M Snell <jasnell@gmail.com> Reviewed-By: Anna Henningsen <anna@addaleax.net> Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
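A minimal sketch of what such an accommodation can look like, assuming the worst-case reasoning from the discussion above (illustrative only; the actual patch in #25415 may be structured differently):

```js
'use strict';
const assert = require('assert');

// Illustrative budget, not the verbatim patch from #25415: instead of
// assuming every read finishes in one shot, allow for each read to be
// delivered in up to two chunks, which roughly doubles the worst-case
// retained memory per read.
const N = 1000;                // reads performed by the test
const contentLength = 170;     // fixture size in bytes
const baseFactor = 3;          // original memory-overhead allowance
const splitFactor = 2;         // worst case: two chunks per read

const allowed = contentLength * N * baseFactor * splitFactor;
const retainedMemory = 510000; // placeholder measurement

assert(retainedMemory <= allowed,
       `Retained memory ${retainedMemory} exceeds budget ${allowed}`);
```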
https://ci.nodejs.org/job/node-test-commit-aix/16983/nodes=aix61-ppc64/console