
to buffer output? #318

Open
etnt opened this issue Jan 29, 2018 · 14 comments

Comments

@etnt (Collaborator) commented Jan 29, 2018

Does Yaws support buffering of the output?
I couldn't find anything about it, so I guess not...
Could perhaps be a nice little feature?

@klacke (Collaborator) commented Jan 29, 2018

So, what do you mean: that a page can return data, and then at the end return something like a flush, and Yaws will then write everything back to the client? No, you know this is not supported. Shouldn't be too hard to emulate yourself, though.

@etnt (Collaborator, Author) commented Jan 29, 2018

Well, Yaws could keep an X-byte buffer and flush it when it is "filled". I've been experimenting with our streaming REST application, and e.g. a 4 kB buffer gives a significant improvement for large data sets returned. So I guess, being a bit lazy, I was hoping Yaws already had some support for it ;-)

@vinoski (Collaborator) commented Jan 29, 2018

Have you tried using the Yaws streaming features, specifically the non-chunked delivery approaches described in the Yaws streaming documentation? Seems like that feature would give you complete control over your app's buffering.
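
For reference, the non-chunked approach looks roughly like the sketch below. It follows the example in the Yaws streaming docs for `yaws_api:stream_process_deliver/2` as I remember it, so treat the details (the `{ok, YawsPid}` / `{discard, YawsPid}` handshake and the `content_length` header) as illustrative rather than authoritative:

```erlang
%% Hypothetical example module; not part of Yaws.
-module(stream_example).
-export([out/1]).
-include("yaws_api.hrl").  %% for the #arg{} record

out(Arg) ->
    Sock = Arg#arg.clisock,
    Body = <<"hello world\n">>,
    Pid  = spawn(fun() -> deliver(Sock, Body) end),
    %% setting content_length lets Yaws avoid chunked transfer-encoding
    [{header, {content_length, byte_size(Body)}},
     {streamcontent_from_pid, "text/plain", Pid}].

deliver(Sock, Body) ->
    receive
        {ok, YawsPid} ->
            %% write straight to the client socket
            yaws_api:stream_process_deliver(Sock, Body),
            yaws_api:stream_process_end(Sock, YawsPid);
        {discard, YawsPid} ->
            %% nothing should be sent (e.g. a HEAD request)
            yaws_api:stream_process_end(Sock, YawsPid)
    end.
```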

@etnt (Collaborator, Author) commented Jan 29, 2018

Yes, I'm using it already.
It's no biggie to write your own buffering (as I have already done), but it could perhaps be convenient if Yaws supported it out of the box.

@vinoski (Collaborator) commented Jan 29, 2018

Adding some sort of optional buffering helper/utility for non-chunked streaming could be useful. I guess it would be a server process that would serve as the streaming pid, providing an API allowing callers to send data to it. It would take a buffer size as a start argument and would buffer an iolist of that size, flushing it as needed. Is this along the lines of what you were thinking?
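
Something along these lines, maybe; a rough sketch written as a plain process rather than a full gen_server. The module and function names are made up (nothing like this exists in Yaws today), and it assumes the `{ok, YawsPid}` / `{discard, YawsPid}` handshake plus `yaws_api:stream_process_deliver/2` and `stream_process_end/2` from the non-chunked streaming API:

```erlang
%% Hypothetical buffering helper; none of these names exist in Yaws today.
-module(yaws_stream_buffer).
-export([start/2, send/2, stop/1]).

%% Spawn the buffering process; hand it to Yaws as the streaming pid via
%% {streamcontent_from_pid, MimeType, Pid} from out/1.
start(Sock, BufSize) ->
    spawn(fun() -> wait_for_yaws(Sock, BufSize) end).

%% Callers hand data to the buffer instead of writing to the socket directly.
send(Pid, Data) ->
    Pid ! {send, Data},
    ok.

%% Flush whatever is left and end the stream.
stop(Pid) ->
    Pid ! stop,
    ok.

wait_for_yaws(Sock, BufSize) ->
    receive
        {ok, YawsPid} ->
            loop(Sock, YawsPid, BufSize, [], 0);
        {discard, YawsPid} ->
            %% nothing should be sent (e.g. a HEAD request)
            yaws_api:stream_process_end(Sock, YawsPid)
    end.

loop(Sock, YawsPid, BufSize, Acc, Size) ->
    receive
        {send, Data} ->
            NewAcc  = [Acc, Data],
            NewSize = Size + iolist_size(Data),
            if
                NewSize >= BufSize ->
                    %% buffer full: write it out in one go
                    yaws_api:stream_process_deliver(Sock, NewAcc),
                    loop(Sock, YawsPid, BufSize, [], 0);
                true ->
                    loop(Sock, YawsPid, BufSize, NewAcc, NewSize)
            end;
        stop ->
            case Size of
                0 -> ok;
                _ -> yaws_api:stream_process_deliver(Sock, Acc)
            end,
            yaws_api:stream_process_end(Sock, YawsPid)
    end.
```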

@etnt (Collaborator, Author) commented Jan 31, 2018

Ideally, it should not change the current API or add anything new. Just a config param 'StreamingBufferSize', which by default is zero, i.e. buffering is turned off, preserving the current behaviour.
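
In yaws.conf it could look something like this (the directive is purely hypothetical; only the surrounding server-block settings exist today):

```
<server www.example.org>
        port = 8000
        listen = 0.0.0.0
        docroot = /var/www
        # hypothetical directive corresponding to 'StreamingBufferSize';
        # 0 (the default) would mean no buffering, as today
        streaming_buffer_size = 4096
</server>
```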

@vinoski (Collaborator) commented Jan 31, 2018

Just so I'm clear, you're suggesting that the yaws_api:stream_process_deliver* functions learn to respect a stream buffer size configuration setting?

@etnt (Collaborator, Author) commented Feb 1, 2018

No, I mean yaws_api:stream_chunk_deliver/2 and friends...

So, for example, when the user code running in the Yaws process returns control, it can do so by returning either {streamcontent, MimeType, Data} or {streamcontent_with_timeout, MimeType, Data, Timeout}. Some other process will then start sending chunks of data to the Yaws process, to be streamed to the client.

Now, perhaps one could return something like {streamcontent, MimeType, Data, BufferSize}; Yaws would then buffer any streamed content sent to it until BufferSize is reached, at which point it writes the data to the outgoing socket.
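
Roughly like the sketch below: the 3-tuple return and `yaws_api:stream_chunk_deliver/2` / `stream_chunk_end/1` are the existing API, while the commented-out 4-tuple is the proposed, non-existent variant:

```erlang
%% Hypothetical example module; not part of Yaws.
-module(chunk_example).
-export([out/1]).

out(_Arg) ->
    YawsPid = self(),            %% the Yaws worker process
    spawn(fun() -> producer(YawsPid) end),
    %% today: return control and let the spawned process feed chunks
    {streamcontent, "text/plain", <<>>}.
    %% proposed (does not exist yet):
    %%   {streamcontent, "text/plain", <<>>, 4096}
    %% where Yaws would coalesce delivered chunks into ~4 kB socket writes.

producer(YawsPid) ->
    lists:foreach(
      fun(N) ->
              Chunk = [integer_to_list(N), "\n"],
              yaws_api:stream_chunk_deliver(YawsPid, Chunk)
      end,
      lists:seq(1, 1000)),
    yaws_api:stream_chunk_end(YawsPid).
```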

@vinoski (Collaborator) commented Feb 1, 2018

OK, thanks @etnt. Now, don't you think @klacke should implement this, since he's now apparently returned from his world tour? :-)

@etnt (Collaborator, Author) commented Feb 2, 2018

Indeed! He needs to get up to speed again... ;-)

@klacke (Collaborator) commented Feb 6, 2018 via email

@vinoski (Collaborator) commented Feb 6, 2018

Nice, @klacke!

I don't think it'll be too hard if we take the config variable route. The only hard part I can think of is how to test it?

@klacke (Collaborator) commented Feb 6, 2018 via email

@etnt (Collaborator, Author) commented Feb 12, 2018

Just a little observation: I did a version where I used the stream_process* functions to write directly to the socket. The result was a little bit slower than the "old" stream_content code.
(I tried various buffer sizes, but anything larger than a 4 kB buffer didn't seem to improve performance. But OTOH... performance measuring is tricky...)
