Add basic client/cli test #176
base: master
Conversation
Codecov Report
@@ Coverage Diff @@
## master #176 +/- ##
==========================================
- Coverage 87.41% 87.34% -0.07%
==========================================
Files 45 45
Lines 1486 1486
Branches 229 229
==========================================
- Hits 1299 1298 -1
- Misses 103 104 +1
Partials 84 84
Continue to review full report at Codecov.
t.fail(message)
end()
}
if (message.includes('Imported blocks')) {
FYI: if a data chunk exceeds 16KB (the default value for readableHighWaterMark), it will be split into several chunks to fit the 16KB size limit. So it's possible to hit the situation where one part of 'Imported blocks' is in the first chunk and the rest is in the next one. This is quite likely if the script writes a big batch of data in a single libuv event loop cycle; I'm not sure the client will ever do that, but if it does, the tests will produce false negatives.
Ah, good to know, thanks. I was wondering how it might be chunked. Any suggestions for a better or more reliable way to parse the incoming data?
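One option (just a sketch, assuming `client` is the spawned client process and `t`/`end()` come from the tape test in the diff above) would be to accumulate incoming chunks and only match on complete lines, so a log line split across chunks is still caught once its newline arrives:

```js
// Sketch: buffer stdout data and only inspect complete lines, so a log
// line split across >16KB chunks is still matched correctly.
let buffered = ''

client.stdout.on('data', (data) => {
  buffered += data.toString()
  const lines = buffered.split('\n')
  // The last element may be an incomplete line; keep it for the next chunk
  buffered = lines.pop()
  for (const line of lines) {
    if (line.includes('Imported blocks')) {
      t.pass('successfully imported blocks')
      end()
    }
  }
})
```

Node's built-in `readline.createInterface({ input: client.stdout })` would give the same line-by-line behavior without the manual buffering.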
I'm occasionally getting this race condition in CI; I haven't run into it locally yet. If you look at the timestamps, you can see that it's not related to exceeding the test's 4-minute timeout:
I'm still looking into why this happens. One idea I have is that it may be because of the default 8s timeout in some other parts of the lib:
It's in a ping-pong method handler. It seems like the Node server is shutting down without waiting for the underlying sockets to be closed.
Yeah, I'm finding that the handshake is indeed timing out.

It looks like in the error stack trace above that it does eventually get the ping message back, so it's trying to send back the pong, but the socket is closed. If the socket does close in devp2p, then it should set the socket

I'm not seeing anywhere else currently that, if the handshake timeout limit is exceeded, anything is being stopped, closed, or destroyed. Still looking into it 🤔
Continuing to debug here: my hunch in the run below is that it's a race condition where a peer is disconnecting (TOO_MANY_PEERS) but still trying to send a pong response back to the socket? This is still confusing to me though, because at the top of
Force-pushed from 2335c4e to 5b98930
I have rebased this branch on the latest master. It is still working well for me locally (see my latest run below); however, I'm still debugging the intermittent timeout issue on CI.
What version of Node throws this error?
The CI runs on Node 12.x in this PR.
This PR adds a simple client/cli test that spins up the client and waits for it to successfully download a block.
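For reference, a minimal sketch of roughly what such a test can look like; the CLI path, the error check, and other details here are illustrative assumptions rather than the exact code in this PR:

```js
const { spawn } = require('child_process')
const test = require('tape')

test('should import a block from the network', { timeout: 240000 }, (t) => {
  // Path to the client CLI is an assumption for this sketch
  const client = spawn(process.execPath, ['./bin/cli.js'])

  // Stop the client and finish the test
  const end = () => {
    client.kill('SIGINT')
    t.end()
  }

  client.stdout.on('data', (data) => {
    const message = data.toString()
    if (message.includes('Error')) {
      // Surface logged errors as a test failure (assumed error format)
      t.fail(message)
      end()
    }
    if (message.includes('Imported blocks')) {
      t.pass('successfully downloaded a block')
      end()
    }
  })
})
```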
Sample output: