Allow XdrAble to create its own Xdr instance #64

Open · wants to merge 1 commit into master

Conversation

@pepijnve
Contributor

I ran into performance issues when generating large NFSv3 write requests. The data I want to send is already available in a ByteBuffer and I want to avoid having to copy the entire buffer.

This PR allows XdrAble implementations to optionally create an Xdr instance themselves. This enables gather-style writes where the final Xdr wraps a composite Grizzly buffer. For instance, in my prototype version of this PR, the NFS Write3Args creates an Xdr wrapping a composite buffer consisting of <write3args header><data buffer><padding buffer>. This improved write performance quite a bit.
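For illustration, a minimal sketch of the shape this takes. This is not the actual patch: the createXdr() hook name and the Xdr(Buffer) constructor are assumptions about the proposed API, and the class is a simplified stand-in for the generated Write3Args.

```java
import java.io.IOException;

import org.dcache.xdr.OncRpcException;
import org.dcache.xdr.Xdr;
import org.dcache.xdr.XdrAble;
import org.dcache.xdr.XdrDecodingStream;
import org.dcache.xdr.XdrEncodingStream;
import org.glassfish.grizzly.Buffer;
import org.glassfish.grizzly.memory.BuffersBuffer;

/**
 * Sketch of a gather-style Write3Args: the already-available payload buffer
 * is appended to a composite buffer instead of being copied into the Xdr.
 */
public class GatheringWrite3Args implements XdrAble {

    private final Buffer header;   // pre-encoded write3args fields
    private final Buffer data;     // application payload, already in a buffer
    private final Buffer padding;  // XDR 4-byte alignment padding

    public GatheringWrite3Args(Buffer header, Buffer data, Buffer padding) {
        this.header = header;
        this.data = data;
        this.padding = padding;
    }

    /** Hypothetical opt-in hook: let the XdrAble supply its own Xdr. */
    public Xdr createXdr() {
        BuffersBuffer composite = BuffersBuffer.create();
        composite.append(header);
        composite.append(data);     // gather: the payload is referenced, not copied
        composite.append(padding);
        return new Xdr(composite);  // assumed Xdr(Buffer) constructor
    }

    @Override
    public void xdrEncode(XdrEncodingStream xdr) throws OncRpcException, IOException {
        // Unused when createXdr() is taken; a real implementation would keep
        // the copying encode path here as a fallback.
    }

    @Override
    public void xdrDecode(XdrDecodingStream xdr) throws OncRpcException, IOException {
        // Decoding is out of scope for this sketch.
    }
}
```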

Note that this PR is mainly to get a discussion started; there might be better ways to provide this functionality. This is just a first draft.

Signed-off-by: Pepijn Van Eeckhoudt <pepijn@vaneeckhoudt.net>
@dcache-ci

Can one of the admins verify this patch?

@kofemann
Member

ok to test

@kofemann
Member

This is a nice idea. I have made several attempts myself to build a composite Xdr to avoid extra copies, but every time I failed to make it intuitive and clean. One option would be a kind of composite Xdr, or converting the underlying buffer into a composite buffer whenever a large byte buffer is encoded. I will play around with your patch. Thanks!

@kofemann
Member

@pepijnve as a continuation of the discussion, here is an alternative to your approach:

kofemann@36f2559

In a simple test of encoding a 64K byte buffer it gives me a 10x boost, but some real application tests are required. I have updated your NFS server to see the difference.
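A micro-test of this kind might look like the sketch below; the loop count is arbitrary, and it measures only the copying baseline (xdrEncodeDynamicOpaque copies the whole payload on every call), not the patched path from the commit above.

```java
import org.dcache.xdr.Xdr;

/**
 * Rough timing sketch of the copying baseline: each iteration copies the
 * full 64K payload into the Xdr buffer via xdrEncodeDynamicOpaque().
 */
public class EncodeBaseline {
    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[64 * 1024]; // the 64K buffer from the comment
        int iterations = 100_000;             // arbitrary

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Xdr xdr = new Xdr(128 * 1024);       // room for payload plus headers
            xdr.beginEncoding();
            xdr.xdrEncodeDynamicOpaque(payload); // copies all 64K on each call
            xdr.endEncoding();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("copying encode: %d ns/op%n", elapsed / iterations);
    }
}
```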

@kofemann
Member

Actually, in the real-life example I don't see any measurable difference.

@pepijnve
Contributor Author

At what data rate are you testing? We're trying to saturate a 10GbE link in our lab...
