Proxy memory leak over HTTP 1 #11012

Open
Champ-Goblem opened this issue Jun 13, 2023 · 4 comments

@Champ-Goblem

What is the issue?

In one of our production services, we see a memory leak that grows slowly over time until the proxy reaches its 500MB limit and is finally killed for running out of memory.

The service topology looks like so:

Istio Ingress Gateway + Linkerd Sidecar → http/2 → Varnish + Linkerd Sidecar → http/1.1 → Workload + Linkerd Sidecar

Below is a screenshot from our metrics showing the memory usage (in MiB) of the linkerd-proxy sidecar on the Workload service:

[Screenshot: linkerd-proxy sidecar memory usage in MiB, climbing steadily over time]

As you can see, the proxy appears to be continuously allocating memory, unlike the other services running with the linkerd proxy, where memory usage usually sits at around 20-30MB.

We have managed to capture a heap profile of the linkerd proxy via bytehound, and the majority of the allocations are in bytes::bytes_mut::BytesMut::reserve_inner, which, based on the call graph in the following screenshot, seems to be related to HTTP/1 keep-alive. This looks similar to an issue previously raised in the tokio bytes repository (tokio-rs/bytes#560) and to a hyper issue that references it (hyperium/hyper#2924). However, the Cargo.lock file for the linkerd proxy shows that version 1.2.1 of the bytes crate is already being used, which should already contain the fix referenced above.

[Screenshot: bytehound call graph with most allocations under bytes::bytes_mut::BytesMut::reserve_inner]
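
For illustration, below is a minimal Rust sketch of the allocation pattern discussed in tokio-rs/bytes#560: BytesMut::reserve can only reclaim its existing allocation when no other handle references it, so slices that are split off and kept alive force fresh allocations. This is a toy against the bytes crate only (assuming `bytes = "1"` as a dependency); the buffer-management loop is hypothetical and is not taken from the linkerd-proxy or hyper code paths.

```rust
use bytes::{BufMut, Bytes, BytesMut};

fn main() {
    // One read buffer reused across keep-alive requests, loosely mimicking
    // (hypothetically) how an HTTP/1 connection task might manage its input.
    let mut buf = BytesMut::with_capacity(8 * 1024);
    // Split-off slices that outlive each request, e.g. bodies still queued
    // somewhere downstream.
    let mut retained: Vec<Bytes> = Vec::new();

    for round in 0..8 {
        // A "response body" arrives and is split off as a frozen Bytes handle.
        buf.put_slice(&[0u8; 16 * 1024]);
        retained.push(buf.split().freeze());

        // reserve() can only reclaim the existing allocation when no other
        // handle references it; while `retained` keeps the shared buffer
        // alive, a fresh block is allocated, so total heap usage grows with
        // every keep-alive round trip.
        buf.reserve(16 * 1024);
        println!("round {round}: buf capacity = {} bytes", buf.capacity());
    }

    let total: usize = retained.iter().map(Bytes::len).sum();
    println!("bytes still retained: {total}");
}
```

If the proxy's per-connection read buffers followed a similar pattern and were only released when the connection closes, long-lived HTTP/1 keep-alive connections could plausibly account for the steady growth shown above.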

What is interesting is that the connections between the Varnish server and the backend workload appear to be stuck on HTTP/1 rather than being automatically upgraded to HTTP/2, even after restarting both workloads. To demonstrate:

  • First, exec into the Varnish pod and run a curl to the backend service over HTTP/1.1:
# curl --http1.1 site-production:7000 -vv 1>/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 10.0.34.97:7000...
* Connected to site-production (10.0.34.97) port 7000 (#0)
> GET / HTTP/1.1
> Host: site-production:7000
> User-Agent: curl/8.1.2
> Accept: */*
> 
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0< HTTP/1.1 200 OK
< x-powered-by: Express
< cache-control: max-age=0
< pragma: no-cache
< expires: 0
< p3p: CP="NOI ADM DEV COM NAV OUR STP
< content-type: text/html; charset=utf-8
< content-length: 60984
< x-response-time: 1373.572ms
< date: Tue, 13 Jun 2023 16:26:38 GMT
< 
{ [55563 bytes data]
100 60984  100 60984    0     0  43503      0  0:00:01  0:00:01 --:--:-- 43528
* Connection #0 to host site-production left intact
  • The corresponding linkerd log line on the backend service:
site-production-d868d9dd-ssvvz linkerd-proxy {"client.addr":"10.4.9.82:56416","client.id":"-","host":"site-production:7000","method":"GET","processing_ns":"175794","request_bytes":"","response_bytes":"60984","status":200,"timestamp":"2023-06-13T16:26:37.246457617Z","total_ns":"1376377061","trace_id":"","uri":"/","user_agent":"curl/8.1.2","version":"HTTP/1.1"}
  • Next, from a different pod, curl the backend service again:
# curl --http1.1 site-production:7000 -vv 1>/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 10.0.34.97:7000...
* Connected to site-production (10.0.34.97) port 7000 (#0)
> GET / HTTP/1.1
> Host: site-production:7000
> User-Agent: curl/8.1.2
> Accept: */*
> 
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0< HTTP/1.1 200 OK
< x-powered-by: Express
< cache-control: max-age=0
< pragma: no-cache
< expires: 0
< p3p: CP="NOI ADM DEV COM NAV OUR STP
< content-type: text/html; charset=utf-8
< content-length: 60984
< x-response-time: 1283.291ms
< date: Tue, 13 Jun 2023 16:28:46 GMT
< 
{ [7691 bytes data]
100 60984  100 60984    0     0  46104      0  0:00:01  0:00:01 --:--:-- 46095
* Connection #0 to host site-production left intact
  • The corresponding linkerd access log for the above curl:
site-production-d868d9dd-4hpxk linkerd-proxy {"client.addr":"10.4.25.242:36928","client.id":"default.default.serviceaccount.identity.linkerd-system.cluster.local","host":"site-production:7000","method":"GET","processing_ns":"214589","request_bytes":"","response_bytes":"60984","status":200,"timestamp":"2023-06-13T16:24:07.289773447Z","total_ns":"1391694628","trace_id":"","uri":"http://site-production:7000/","user_agent":"curl/8.1.2","version":"HTTP/2.0"}

The difference between the two requests is that the second appears to have been correctly upgraded to H2, as seen from the version entry in the log. It also looks like the client.id has been picked up correctly from the source pod in the second request but not in the first. All pods in the above test were automatically injected with the linkerd proxy by the control plane, and the backend service only accepts HTTP/1.1 requests.

It seems like the memory issue might not happen if the proxy-to-proxy connection could be successfully upgraded to HTTP/2?

Any thoughts or recommendations for debugging are welcome, thanks in advance.

How can it be reproduced?

Unfortunately, I have not been able to reproduce the issue in any test environment; it only seems to happen on this specific request route, through these specific services.

Logs, error output, etc

See the issue description

output of linkerd check -o short

Linkerd core checks
===================

linkerd-version
---------------
‼ cli is up-to-date
    is running version 2.12.4 but the latest stable version is 2.13.4
    see https://linkerd.io/2.12/checks/#l5d-version-cli for hints

control-plane-version
---------------------
‼ control plane is up-to-date
    is running version 2.12.4 but the latest stable version is 2.13.4
    see https://linkerd.io/2.12/checks/#l5d-version-control for hints

linkerd-control-plane-proxy
---------------------------
‼ control plane proxies are up-to-date
    some proxies are not running the current version:
	* linkerd-destination-db7888cfd-bzgxf (stable-2.12.4)
	* linkerd-destination-db7888cfd-rbrnh (stable-2.12.4)
	* linkerd-destination-db7888cfd-zkbpr (stable-2.12.4)
	* linkerd-identity-d788f947-7p6l9 (stable-2.12.4)
	* linkerd-identity-d788f947-xthbw (stable-2.12.4)
	* linkerd-identity-d788f947-xvs2s (stable-2.12.4)
	* linkerd-proxy-injector-7cdf67b86d-9fs5c (stable-2.12.4)
	* linkerd-proxy-injector-7cdf67b86d-h76jf (stable-2.12.4)
	* linkerd-proxy-injector-7cdf67b86d-z56w5 (stable-2.12.4)
    see https://linkerd.io/2.12/checks/#l5d-cp-proxy-version for hints

linkerd-ha-checks
-----------------
‼ pod injection disabled on kube-system
    kube-system namespace needs to have the label config.linkerd.io/admission-webhooks: disabled if injector webhook failure policy is Fail
    see https://linkerd.io/2.12/checks/#l5d-injection-disabled for hints

Status check results are √

Environment

  • K8s version: 1.25.8
  • environment: GKE
  • linkerd version: 2.12.4

Possible solution

No response

Additional context

No response

Would you like to work on fixing this bug?

None

@hawkw
Contributor

hawkw commented Jun 16, 2023

I double checked that the version of bytes that contains the fix is in fact being used by the proxy version in stable-2.12.4, and it appears to be. Looking at the tag for stable-2.12.4, we see that this release is on proxy version v2.188.0, and when we look at the proxy's lockfile on the tag for v2.188.0, we do see that the bytes dependency is on v1.2.1. So, the proxy in use here should have that fix.

I'd be interested to know whether this issue persists after updating to the latest stable (v2.13.x), as we've changed some aspects of the proxy's internals significantly between 2.12 and 2.13, and may have fixed this issue in the process.

Other than that, I'd love to know what's going on here, but without a self-contained reproduction, there's not a lot I can do...

@stale

stale bot commented Sep 16, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Sep 16, 2023
@stale stale bot closed this as completed Sep 30, 2023
@kflynn
Member

kflynn commented Oct 7, 2023

To clarify the recent label and status changes here: yup, we've seen some things around this that make us curious and we're doing further investigation.

@stale

stale bot commented Jan 6, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jan 6, 2024
@wmorgan wmorgan added pinned and removed wontfix labels Jan 7, 2024