UDP engine crashes and exhibits weird behaviour #2009
Regarding the message size, UDP is currently limited to 8191 bytes.
Because you send everything first and then receive, a message count larger than the subscriber's high watermark will cause messages to drop: as UDP is unreliable, once the watermark is reached the internal buffers fill up and new messages get dropped.
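The drop behaviour described above can be sketched with a bounded queue: once the high watermark is reached, new messages are discarded rather than queued. This is a minimal Python simulation of the described semantics, not libzmq's actual implementation; the watermark and message counts are taken from the scenarios reported below.

```python
from collections import deque

class DroppingQueue:
    """Illustrative bounded buffer: drops new messages once the
    high watermark is reached (mirrors the described UDP behaviour)."""
    def __init__(self, hwm):
        self.hwm = hwm
        self.buf = deque()
        self.dropped = 0

    def push(self, msg):
        if len(self.buf) >= self.hwm:
            self.dropped += 1  # buffer full: message is silently lost
            return False
        self.buf.append(msg)
        return True

# Sender blasts 4500 messages at a subscriber whose HWM is 1000,
# without the receiver draining anything in between:
q = DroppingQueue(hwm=1000)
for i in range(4500):
    q.push(i)
print(len(q.buf), q.dropped)  # → 1000 3500
```

This matches the observation that runs with message-count <= 1000 succeed while larger counts lose messages: only the first 1000 fit in the buffer and the remaining 3500 are dropped.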
Can you attach the stack trace of the assert?
I will send the stack traces as soon as possible.
Also - for UDP - sending/receiving a message where (topic-length + message-size) is greater than 8191 bytes should throw an error at the API level if it's not supported; unless I am missing something!
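The kind of API-level guard being asked for can be sketched as follows. This is a hypothetical helper, not part of libzmq; it assumes the UDP datagram carries topic plus body within an 8191-byte limit, as stated earlier in the thread.

```python
MAX_UDP_PAYLOAD = 8191  # assumed limit: topic-length + message-size

def check_udp_frame(topic: bytes, body: bytes) -> None:
    """Hypothetical guard: raise a clear error instead of crashing
    when a frame would exceed the UDP engine's payload limit."""
    total = len(topic) + len(body)
    if total > MAX_UDP_PAYLOAD:
        raise ValueError(
            f"frame of {total} bytes exceeds UDP limit of {MAX_UDP_PAYLOAD}")

check_udp_frame(b"topic", b"x" * 8000)       # within the limit: passes
try:
    check_udp_frame(b"topic", b"x" * 16413)  # the crashing size from below
except ValueError as e:
    print("rejected:", e)
```

With a check like this, the message-size = 16413 case reported below would fail with an explicit error at send time rather than a core dump.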
@hitstergtd I will take a look next week. Regarding the message size, it is simple code that needs fixing; at the time it was less important. Regarding the crashing, do you have a stack trace of where it is crashing?
@somdoron No problem - I just thought I would report it so that it's hopefully fixed for the 4.2 release, as I see that being one of the important features. I also wanted to see throughput and latency numbers for the UDP transport, to see if it fares any better than the TCP stream engine. I will generate a stack trace in the next couple of days and put it up here. Do you need it for all crash scenarios or just one of them?
We can start with one of them.
I still get a core dump in the case of messages larger than the aforementioned ~16413 bytes. Is help appreciated? I can provide minimal working examples that reproduce this issue.
hey @StephanOpfer, do you have the backtrace of the core-dump? |
I came across these as part of adding throughput and latency tests. I may be doing something wrong, which is leading to these results, although I am fairly certain it's nothing unreasonable. :)
About the code:
The linked gist is basically a variant of the inproc_thr benchmark (thr_test), i.e. it is parameterised by message-size and message-count.
What works:
- message-count <= 1000 AND message-size >= 0 AND message-size <= 8183
What does NOT work:
- message-count > 1000 AND message-size > 0 (1 in 10 chance of a core dump)
- message-count = 4500 AND message-size = 999 (sometimes, or dumps core as per below)
- message-count >= 1 AND message-size = 50000
- message-count = 1 AND message-size = 16413 (try running this repeatedly)
- message-count = 4499 AND message-size = 999
- message-count = 1000 AND message-size = 8184
- message-count = 1000 AND message-size = 16413 (but one message dumps core!)
Code to reproduce this is available at:
https://gist.github.com/hitstergtd/68503600e353adb3155504982df54682
Not sure if these bugs have existed since day one or crept in progressively over the last few months. I haven't tried the above scenarios on Windows, only on Linux with a 4.4.0-22 kernel (latest Ubuntu), so I'm not sure whether they are somehow poller-dependent.