
Support for sending big messages #2666

Closed
GeertVanWonterghem opened this issue Sep 3, 2016 · 6 comments

Comments

@GeertVanWonterghem

I tested this, and when a message being sent to the client takes longer than the pingTimeout, the connection is dropped.
In my use case, however, I want clients to connect, download the full status (which can be 2 MB or more), and then resume normal operation.
This means that, on a low-quality network, the client will disconnect.
I see several possible solutions:
1- Allow ping to be disabled temporarily
2- The library shouldn't trigger a ping timeout while it is sending a message (because that obviously means the client is still connected)
3- Outgoing packets are automatically chunked so that ping/pong packets, which are sent with higher priority, can still be sent/received in between

My personal preference would be solution 2, but I don't know how hard that would be to implement...
Is anything like this implemented, or would you recommend a different approach?

@carpii

carpii commented Sep 5, 2016

I don't have a good answer to your question, but:

> My personal preference would be solution 2

I don't think you should prefer this approach.

It would create a DoS vulnerability in your Node server, since any client could connect and trickle data very slowly just to keep the connection open. Web servers have been vulnerable to this same technique in the past (e.g. Slowloris).

Since the keepalive heartbeat travels over the same WebSocket as your user data, this seems like a difficult problem for the library to fix without increasing server-side RAM requirements.

If I were you, I would look at segmenting the payload server-side and implementing your own 'chunk 2 of 40'-style messages, then reassembling the payload on the client.

Alternatively, look at ways to restructure your status data so it can be downloaded incrementally.
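The 'chunk 2 of 40' idea above can be sketched in plain JavaScript. This is a minimal illustration, not Socket.IO's own API: the chunk size and the `{ seq, total, data }` envelope are invented for the example, and in practice each chunk would be emitted on a socket (e.g. `socket.emit('status-chunk', chunk)`) rather than passed to a local callback.

```javascript
// Illustrative chunk size; in practice this would be tuned to the link.
const CHUNK_SIZE = 16 * 1024;

// Server side: split a payload (string or Buffer) into numbered chunks.
function toChunks(payload, size = CHUNK_SIZE) {
  const total = Math.max(1, Math.ceil(payload.length / size));
  const chunks = [];
  for (let seq = 0; seq < total; seq++) {
    chunks.push({ seq, total, data: payload.slice(seq * size, (seq + 1) * size) });
  }
  return chunks;
}

// Client side: collect chunks and invoke the callback once all have arrived.
// Chunks may arrive in any order; they are reordered by sequence number.
function makeReassembler(onComplete) {
  const parts = new Map();
  return function onChunk({ seq, total, data }) {
    parts.set(seq, data);
    if (parts.size === total) {
      const ordered = [...Array(total).keys()].map((i) => parts.get(i));
      onComplete(ordered.join(''));
    }
  };
}
```

Because each chunk is a separate message, ping/pong frames can be written between chunks, so the heartbeat is not starved by one huge write.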

@GeertVanWonterghem
Author

Interesting view, thanks.
The problem I see with segmenting is that we don't know how finely the message should be segmented, because we don't know the speed of the link to the client. In my use case, clients can connect from anywhere.

@MaffooBristol

You could always send a small packet first, say 100 bytes, measure how long it takes, and base the following chunk size on that
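The probe idea could look roughly like this. All the constants here (probe size, target transmission time, clamps) are made up for illustration, and the estimate is deliberately naive: it assumes throughput scales linearly from the probe and ignores the split between latency and bandwidth, so it will underestimate on high-latency links.

```javascript
const PROBE_BYTES = 100;      // size of the small test packet
const TARGET_MS = 2000;       // aim to keep each chunk well under pingTimeout
const MIN_CHUNK = 1024;       // never shrink below 1 KB
const MAX_CHUNK = 256 * 1024; // never grow beyond 256 KB

// Given the measured round-trip time of the probe (in ms), estimate a
// chunk size that should transmit within TARGET_MS, clamped to sane bounds.
function chunkSizeFromProbe(probeMs) {
  const bytesPerMs = PROBE_BYTES / Math.max(probeMs, 1);
  const size = Math.round(bytesPerMs * TARGET_MS);
  return Math.min(MAX_CHUNK, Math.max(MIN_CHUNK, size));
}
```

The probe could also be repeated periodically, since mobile link quality changes over time.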

@BuGlessRB

This could/should be implemented at the Socket.IO layer, which could automatically fragment packets larger than x bytes; the fragments would then be transmitted and reassembled on the other side.
It would also allow for predictable real-time performance, where small application packets can still sneak through even while a large transmission is going on.

In fact, I like the idea so much that I'm going to try to add that functionality to Socket.IO right now.
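The interleaving part of this proposal can be sketched as a small send queue. This is not Socket.IO internals, just an illustration of the scheduling idea: `transmit` stands in for the actual wire write, and only one fragment of a large payload is written per turn, so small packets can jump ahead of the remaining fragments.

```javascript
const FRAGMENT_SIZE = 8 * 1024; // illustrative fragmentation threshold

function makeSendQueue(transmit, fragmentSize = FRAGMENT_SIZE) {
  const queue = []; // pending fragments of large messages

  function drainOne() {
    if (queue.length) transmit(queue.shift());
  }

  return {
    // Small packets (e.g. ping/pong) bypass the queue entirely.
    sendSmall(packet) {
      transmit(packet);
    },
    // Large payloads are fragmented; only the first fragment is sent now,
    // so small packets can interleave before the rest go out.
    sendLarge(payload) {
      for (let i = 0; i < payload.length; i += fragmentSize) {
        queue.push(payload.slice(i, i + fragmentSize));
      }
      drainOne();
    },
    // Call after each write completes to send the next queued fragment.
    flushNext: drainOne,
    pending: () => queue.length,
  };
}
```

With a 4-byte fragment size, sending the 10-byte payload 'abcdefghij' followed by a small 'PING' yields the wire order 'abcd', 'PING', 'efgh', 'ij' as the queue drains, which is exactly the "small packets sneak through" behavior described above.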

@GeertVanWonterghem
Author

Great! Curious about the results...

@darrachequesne
Member

This issue was closed automatically. Please check whether it is fixed in the latest release, and reopen if needed (with a fiddle reproducing the issue, if possible).
