HTTP2 bandwidth issues #31932 (Open)

Rantoledo opened this issue Feb 24, 2020 · 3 comments
Rantoledo commented Feb 24, 2020

  • Version: 13.9.0
  • Platform: Linux

Hi,
I've been trying to build a simple app that copies directories (containing files only) over HTTP2.
My implementation was simple: create one http2 session and use concurrent streams within that session to transfer all the files in the directory.
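For reference, a minimal sketch of that approach (the authority, upload path, and function names are placeholders, not the actual code from my repro):

```js
const http2 = require('http2');
const fs = require('fs');
const path = require('path');

// One long-lived session; one POST stream per file, all in flight concurrently.
const client = http2.connect('http://server.example:8000'); // placeholder authority

function sendFile(filePath) {
  return new Promise((resolve, reject) => {
    const req = client.request({
      ':method': 'POST',
      ':path': '/upload/' + encodeURIComponent(path.basename(filePath)),
    });
    fs.createReadStream(filePath).pipe(req); // stream the file as the request body
    req.on('close', resolve);
    req.on('error', reject);
  });
}

async function sendDir(dir) {
  const files = await fs.promises.readdir(dir);
  await Promise.all(files.map((f) => sendFile(path.join(dir, f))));
  client.close();
}
```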

First environment: the server and the client each ran on a Linux machine on AWS EC2, both in the same subnet.
At first, I tried to copy directories with lots of small files: 500k/600k/700k files of 1 KB each.
The performance was really good.
However, when I tried to copy directories with larger files (300/200/100 files of 1 GB each), the performance was poor. I tried changing maxSessionMemory, but it made no difference.

Then I changed my implementation to open a new session for each file. This significantly improved performance for the directories with large files, but performance for the directories with small files became worse than before.
I had no idea why this was happening, so I decided to set a size threshold "x": if a file was smaller than x, reuse the existing session; otherwise, open a new session to copy the file (see the sketch below). That way, I thought I had overcome this obstacle.
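Roughly, the workaround looked like this (reusing the requires from the sketch above; the threshold value and helper name are illustrative):

```js
// "x" from above; the value was tuned empirically.
const SMALL_FILE_THRESHOLD = 1024 * 1024;

// Small files share the long-lived session; each large file gets its own.
async function sessionFor(filePath, sharedSession, authority) {
  const { size } = await fs.promises.stat(filePath);
  return size < SMALL_FILE_THRESHOLD ? sharedSession : http2.connect(authority);
}
```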

But, unfortunately, I was wrong. My next goal was to copy the files over the internet, so I launched the server and the client on Linux machines in different AWS regions.
This time, the best performance in both scenarios came from creating a new session for each file, even for directories with lots of small files.

P.S. By "poor performance" I mean that I measured the bandwidth with bmon and saw big differences.
I also tried changing initialWindowSize, but that didn't help either.

I'm not very familiar with the HTTP2 protocol itself, but I think we may need some way to control the session memory and the per-stream memory within a session; maybe that's where the problem lies.
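For what it's worth, these are the two knobs I tried, as the Node http2 API exposes them at session creation (the values here are illustrative, and neither changed the results in my tests):

```js
const client = http2.connect('http://server.example:8000', {
  maxSessionMemory: 64,             // per-session memory cap, in megabytes (default 10)
  settings: {
    initialWindowSize: 1024 * 1024, // per-stream flow-control window, in bytes (default 65535)
  },
});
```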

To reproduce:
https://github.com/Rantoledo/http2_nodejs_client_server_example2

GrosSacASac (Contributor) commented
```js
const { openSync, closeSync, readFileSync, lstatSync } = require('fs');
```

Consider the async file system functions instead; the synchronous ones block the event loop.
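For example, something along these lines (a rough sketch, not tested against your repro; fs.promises is available on Node 13):

```js
const { promises: fsp, createReadStream } = require('fs');

// Non-blocking equivalent of the synchronous call quoted above:
async function statFile(filePath) {
  return fsp.lstat(filePath); // instead of lstatSync
}

// For file contents, a read stream avoids buffering a 1 GB file in memory
// the way readFileSync would:
function contentStream(filePath) {
  return createReadStream(filePath);
}
```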

Rantoledo (Author) commented

@GrosSacASac Hi, thank you for the suggestion.
I don't think that's the cause. I changed these operations to async (and updated the code in the reproduction link), but the problem remains exactly as described: the bandwidth is unchanged from the previous tests.

kanongil (Contributor) commented Nov 4, 2020

This will be caused by the problem described in #31084.

@targos targos added the http2 Issues or PRs related to the http2 subsystem. label Dec 27, 2020