
Memory Management #598

Closed
8databit opened this issue Nov 3, 2017 · 8 comments · Fixed by #761

Comments

8databit commented Nov 3, 2017

Hey,

I'm kind of worried about the following. Say a user sends a 20 MB message of any type: as things stand now, the server will store the final 20 MB in RAM in a ByteBuffer, which is logical to me, but this can become an issue when multiple clients send large amounts of data. Is there a way to limit the size of an incoming message at the server, or more precisely, the amount being stored temporarily in memory? What if the size is bigger than 1 GB? Is this even allowed by most browsers?
I saw you already did some throttling.
Also, just curious: why the size of 16384 bytes for the receive buffer of the socket? Is there a logical explanation? Great work ;)

Expected Behavior

Throttle setting

Possible Solution

Throttle setting

Context

This will probably cause issues if it isn't managed properly. If a limit is already implemented, which class should I look in, and roughly which lines?

marci4 (Collaborator) commented Nov 4, 2017

Hello @8databit,

thank you for your concerns!
Yes, you are understanding this correctly. It will be stored as one big ByteBuffer and maybe later converted to a String as well.
Right now there is no limit in place. I know that you can send files of several MB over websockets without any problems (e.g. for streaming videos).

Where did you see throttling? There should not be any!
As for the buffer size: that is a mystery I have not solved yet. It was added a long time ago, before I was part of this project, and none of the former maintainers are around to answer this question.
I think they needed to set a limit for performance, so they decided on something like that!

Do you know of other projects which may have a throttle setting?

Right now I am not sure how to implement this without breaking everything ;)

Greetings
marci4
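One way to add such a limit without breaking existing users would be to default it to Integer.MAX_VALUE, so behavior only changes for callers who opt in. A minimal sketch; the class and method names (`PayloadLimit`, `accepts`) are illustrative, not part of the library's API:

```java
// Hypothetical sketch of a backward-compatible payload limit. The default
// keeps the old, effectively unlimited behavior; only opting in changes it.
public final class PayloadLimit {
    private final int maxFrameSize;

    /** Default: effectively unlimited, matching the current behavior. */
    public PayloadLimit() {
        this(Integer.MAX_VALUE);
    }

    public PayloadLimit(int maxFrameSize) {
        if (maxFrameSize < 1) {
            throw new IllegalArgumentException("limit must be positive");
        }
        this.maxFrameSize = maxFrameSize;
    }

    /** True if a frame declaring this payload length may be accepted. */
    public boolean accepts(long payloadLength) {
        return payloadLength >= 0 && payloadLength <= maxFrameSize;
    }
}
```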

8databit commented Nov 4, 2017

Hey,

Call me Joyie. Well, maybe I read the code wrong, but you're adding a queue which holds the ByteBuffers, if I'm correct. This queue limits how many buffers it holds, so it is some sort of throttle.

	protected void allocateBuffers(WebSocket c) throws InterruptedException {
		if (queuesize.get() >= 2 * decoders.size() + 1) {
			return;
		}
		queuesize.incrementAndGet();
		buffers.put(createBuffer());
	}

	private void pushBuffer(ByteBuffer buf) throws InterruptedException {
		if (buffers.size() > queuesize.intValue())
			return;
		buffers.put(buf);
	}

Since it doesn't put the buffer in the queue when no queue space is available, I thought this was a throttle on the server-side reading. Although this isn't per socket connection, and the code is somewhat unclear to me, a bit of a puzzle hahaha.

I have actually been struggling with this for a while myself using plain TCP. I'd like to mention that the receive and read buffers of the socket also allocate RAM. What I would do is add a feature that simply stops reading the socket's buffer when the limit is exceeded and just emits onMessage(). What happens after that is up to the user of this project: create a file, or keep it in a list/queue?
For now, just as a security measure, I would set a default limit of around 1 MB of RAM (ByteBuffer) that a socket/connection can use. This makes RAM = socket_rcvbuf + (socket_readbuf + bytebuffer), not counting the objects and everything created around it.
See:
if (key.isReadable()) inside the WebSocketServer class: if the data hasn't been read, it will just be left inside the socket's buffer. Correct me if I'm wrong; I don't know if this is possible because of the RFC implementation.
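Joyie's formula can be sketched as a quick worst-case arithmetic check. The buffer sizes below are illustrative assumptions (16384 matches the RCVBUF default mentioned in this thread), and the class is made up for the example:

```java
// Rough worst-case memory estimate per the formula
// RAM = socket_rcvbuf + (socket_readbuf + bytebuffer).
// All constants are illustrative assumptions, not measurements.
public final class MemoryEstimate {
    static final int SOCKET_RCVBUF = 16384; // kernel-side receive buffer
    static final int READ_BUF = 16384;      // application-side read buffer

    /** Worst-case bytes one connection may pin, given a message-size cap. */
    static long perConnection(long maxMessageBytes) {
        return SOCKET_RCVBUF + READ_BUF + maxMessageBytes;
    }

    /** Worst-case total across all connections. */
    static long total(int connections, long maxMessageBytes) {
        return connections * perConnection(maxMessageBytes);
    }
}
```

With a 1 MB cap, ten connections pin at most about 10.8 MB of buffer memory under these assumptions.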

Files should be written in parts, yeah ^^ although I think that protocol should not be implemented in this project. It's up to the user of this project to determine when the server is expecting a file or sending a file.

And no, unfortunately I do not know of any projects implementing this method.
I don't have enough knowledge about the websocket protocol yet to be useful; I have only been studying it for a few days. I would love to help once I'm finished with my own project. If I come across anything, I'll make sure to update it and post it here :)

marci4 commented Nov 5, 2017

Hello Joyie,

well, I always considered that part just a performance booster: it reduces the need to allocate new ByteBuffers by reusing them from the queue.
It can theoretically delay incoming frames, but it does not throttle them!

Your suggestion is not really intuitive to me. A user expects an onMessage call to contain a completely received message.
The problem with such a limit is also that the RFC does not set any specific limit, so setting one as low as 1 MB could break a lot of applications.

The websocket protocol does not define what it should or should not be used for! That is up to the application to choose, and I cannot enforce a specific usage!

Help is always welcome! That's the spirit of an open source project!

Greetings
marci4

8databit commented Nov 6, 2017

Heyy marci!

Thanks for your reply. I get what you mean; I was just giving an example ^^. You're right, it should be implemented with care. I have taken another look at the files, and maybe it's an idea to look at:

10.7. Handling of Invalid Data

of the RFC 6455 docs. This might be a solution, or not, lmao; I need you to verify this.

translateSingleFrame() inside Draft_6455:

		//CHECK LIMIT
		
		ByteBuffer payload = ByteBuffer.allocate(checkAlloc(payloadlength));
		if (MASK) {
			byte[] maskskey = new byte[4];
			buffer.get(maskskey);
			for (int i = 0; i < payloadlength; i++) {
				payload.put((byte) (buffer.get( /* payloadstart + i */ ) ^ maskskey[i % 4]));
			}
		} else {
			payload.put(buffer.array(), buffer.position(), payload.limit());
			buffer.position(buffer.position() + payload.limit());
		}

If we got to this point, we were able to fetch the payload size and are about to allocate space. What if we simply add a limit here? It sends a close frame (section 10.7 of RFC 6455) with status code 1009 if the payload crosses the limit:

1009 indicates that an endpoint is terminating the connection
because it has received a message that is too big for it to
process.

This simply closes the connection; be sure to force-close it if the close handshake takes too long. The client could then reconnect after a random delay (0-5 seconds, based on the docs) if it received a close message with this error code.
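For reference, such a close frame is small and fixed-shape. This sketch encodes an unmasked server-to-client close frame carrying status code 1009 per RFC 6455 sections 5.2 and 7.4.1; the class name is made up for illustration and is not the library's API:

```java
import java.nio.ByteBuffer;

// Minimal sketch: an unmasked server-to-client close frame with a 2-byte
// status code and no reason text, per RFC 6455 sections 5.2 and 7.4.1.
public final class CloseFrame1009 {
    static final int MESSAGE_TOO_BIG = 1009;

    /** Encodes a close frame whose payload is just the status code. */
    static ByteBuffer encode(int statusCode) {
        ByteBuffer frame = ByteBuffer.allocate(4);
        frame.put((byte) 0x88);             // FIN = 1, opcode 0x8 (close)
        frame.put((byte) 0x02);             // MASK = 0, payload length = 2
        frame.putShort((short) statusCode); // status code, network byte order
        frame.flip();
        return frame;
    }
}
```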

This should be seen more as a security feature than as a performance one. If I connect to some server randomly and send a large amount of data, I can just fill up the ByteBuffer list of Draft_6455, because it will keep looking for a complete frame. If I open 10 connections and keep sending data, the server does nothing about it while I'm sending.

	/**
	 * Attribute for the payload of the current continuous frame
	 */
	private List<ByteBuffer> byteBufferList;

	public static int RCVBUF = 16384;

I would also add a limit to the byteBufferList.

// For now we use an int and default to Integer.MAX_VALUE (the RFC allows frame sizes up to 2^63)
int limit = Integer.MAX_VALUE;

// Add a separate add method for the byteBufferList
int limitByteBufferList = (int) Math.ceil((double) limit / RCVBUF);

// Queue<T> | Convert byteBufferList to a queue (faster if used correctly) because we
// expect it to be called many times; store the total bytes in a separate integer variable
// to avoid iterating twice.
	private ByteBuffer getPayloadFromByteBufferList() throws LimitExedeedException {
		long totalSize = 0;
		for (ByteBuffer buffer : byteBufferList) {
			totalSize += buffer.limit();
		}
		if (totalSize > Integer.MAX_VALUE) {
			throw new LimitExedeedException("Payload size is too big...");
		}
		ByteBuffer resultingByteBuffer = ByteBuffer.allocate((int) totalSize);
		for (ByteBuffer buffer : byteBufferList) {
			resultingByteBuffer.put(buffer);
		}
		resultingByteBuffer.flip();
		return resultingByteBuffer;
	}

Based on your reply, this could be implemented as an optional feature, not as the default behavior.
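The queue-plus-running-total idea from the comments above could look roughly like this. `FrameAccumulator` and its methods are hypothetical names, and `IllegalStateException` stands in for the library's limit exception:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: keep the byte count incrementally as fragments arrive, so the
// limit check is O(1) per fragment and assembly needs only one pass.
public final class FrameAccumulator {
    private final Deque<ByteBuffer> parts = new ArrayDeque<>();
    private final long limit;
    private long totalBytes;

    public FrameAccumulator(long limit) {
        this.limit = limit;
    }

    /** Adds a fragment, rejecting it once the running total crosses the limit. */
    public void add(ByteBuffer part) {
        if (totalBytes + part.remaining() > limit) {
            throw new IllegalStateException("payload exceeds limit of " + limit + " bytes");
        }
        totalBytes += part.remaining();
        parts.add(part);
    }

    /** Assembles the full payload in a single pass over the stored fragments. */
    public ByteBuffer assemble() {
        ByteBuffer result = ByteBuffer.allocate((int) totalBytes);
        for (ByteBuffer part : parts) {
            result.put(part.duplicate());
        }
        result.flip();
        return result;
    }
}
```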

marci4 commented Nov 8, 2017

Hello Joyie,
sorry for the delayed answer.

I think https://tools.ietf.org/html/rfc6455#section-10.4 is a better solution.

At the point you mention, all the bytes have already been read out of the socket!
So that would not really be a good place, in my opinion. Throttling earlier would be better (maybe a quick check of the payload size of a not-yet-complete frame).
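That quick check is possible because RFC 6455 puts the declared payload length in the first 2 to 10 header bytes. A standalone sketch, not the library's code (the class name is hypothetical):

```java
// Sketch: read the declared payload length from a partial frame header,
// before the whole frame has arrived. Per RFC 6455 section 5.2, the low
// 7 bits of the second byte are the length; 126 means a 16-bit extended
// length follows, 127 means a 64-bit extended length follows.
public final class HeaderPeek {
    /** Returns the declared length, or -1 if too few header bytes arrived. */
    static long declaredPayloadLength(byte[] header, int available) {
        if (available < 2) return -1;
        int len7 = header[1] & 0x7F;             // mask off the MASK bit
        if (len7 <= 125) return len7;
        if (len7 == 126) {                       // 16-bit extended length
            if (available < 4) return -1;
            return ((header[2] & 0xFFL) << 8) | (header[3] & 0xFFL);
        }
        if (available < 10) return -1;           // 64-bit extended length
        long len = 0;
        for (int i = 2; i < 10; i++) {
            len = (len << 8) | (header[i] & 0xFFL);
        }
        return len;
    }
}
```

A server could run this check on the bytes it has so far and send the 1009 close frame before buffering the rest of an oversized frame.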

Yes, it is also necessary to check the length of the byteBufferList (for fragmented frames).

I am not sure when I will get the time to implement anything in this direction!

Feel free to try it out on your own!
Greetings
marci4

marci4 commented Nov 14, 2017

Hello Joyie,

I implemented your recommended changes.
Could you please take a look at them?
You can find them here:
https://github.com/marci4/Java-WebSocket-Dev/tree/ReceivedLimit

To set a lower limit than Integer.MAX_VALUE, call the Draft_6455 constructor with your specific limit in bytes.

Greetings
marci4

@marci4 marci4 self-assigned this Nov 15, 2017
@marci4 marci4 removed the Question label Nov 15, 2017
marci4 commented Jan 6, 2018

Hello @8databit,

did you have the time to take a look at this?

Greetings
marci4

marci4 commented Jan 28, 2018

@8databit

any update?

marci4 added a commit to marci4/Java-WebSocket-Dev that referenced this issue Aug 30, 2018
@marci4 marci4 added this to the Release 1.4.0 milestone Aug 30, 2018
@marci4 marci4 mentioned this issue Sep 6, 2018