
Handle HTTP traffic over opaque transport connections #1416

Merged
olix0r merged 6 commits into main from matei/opaque-n-http-traffic on Dec 23, 2021

Conversation

mateiidavid
Member

@mateiidavid commented on Dec 20, 2021

Closes linkerd/linkerd2#6178

When an endpoint is marked as opaque, but its logical service does not have an opaque annotation, the `TransportHeader` will not include an alternate name, but it will include the connection protocol:

[    72.279002s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.24:55848}:server{port=4143}:direct: linkerd_transport_header::server: Read transport header header=TransportHeader { port: 80, name: None, protocol: Some(Http2) }
[    72.279029s]  INFO ThreadId(01) inbound:accept{client.addr=10.42.0.24:55848}: linkerd_app_core::serve: Connection closed error=a named target must be provided on gateway connections client.addr=10.42.0.24:55848

The connection will be closed with an error. With this change, we handle HTTP traffic over opaque connections more gracefully: when a `TransportHeader` has a protocol but no alternate name for the target, instead of rejecting the connection we route it through the inbound HTTP stack. The full set of logs is attached in a spoiler tag below.

[    18.001257s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct: linkerd_transport_header::server: Read transport header header=TransportHeader { port: 80, name: None, protocol: Some(Http2) }
[    18.001266s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http: linkerd_proxy_http::server: Creating HTTP service version=H2
[    18.001281s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http: linkerd_proxy_http::server: Handling as HTTP version=H2
[    18.001490s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}: linkerd_proxy_http::orig_proto: translating HTTP2 to orig-proto: "HTTP/1.1"
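
To make the decision concrete, here is a minimal, self-contained Rust sketch of the dispatch described above. The type and variant names (`SessionProtocol`, `Target`, `dispatch`) are illustrative only and are not the proxy's actual types:

```rust
// Illustrative sketch only: hypothetical types modeling the dispatch decision.
#[derive(Debug, Clone, Copy)]
enum SessionProtocol {
    Http1,
    Http2,
}

#[derive(Debug)]
struct TransportHeader {
    port: u16,
    name: Option<String>,
    protocol: Option<SessionProtocol>,
}

#[derive(Debug)]
enum Target {
    /// The header names a target: treat it as a gateway connection.
    Gateway { name: String, port: u16 },
    /// No name, but a session protocol: serve via the inbound HTTP stack.
    LocalHttp { port: u16, protocol: SessionProtocol },
    /// No name and no protocol: proxy as opaque TCP to the local port.
    LocalTcp { port: u16 },
}

fn dispatch(header: TransportHeader) -> Target {
    match (header.name, header.protocol) {
        (Some(name), _) => Target::Gateway { name, port: header.port },
        // Previously this case was rejected ("a named target must be provided
        // on gateway connections"); now it goes through the HTTP stack.
        (None, Some(protocol)) => Target::LocalHttp { port: header.port, protocol },
        (None, None) => Target::LocalTcp { port: header.port },
    }
}
```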
Full set of logs from k3d test
[    18.001257s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct: linkerd_transport_header::server: Read transport header header=TransportHeader { port: 80, name: None, protocol: Some(Http2) }
[    18.001266s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http: linkerd_proxy_http::server: Creating HTTP service version=H2
[    18.001281s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http: linkerd_proxy_http::server: Handling as HTTP version=H2
[    18.001490s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}: linkerd_proxy_http::orig_proto: translating HTTP2 to orig-proto: "HTTP/1.1"
[    18.001499s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}: linkerd_app_inbound::policy::authorize::http: Request authorized permit=Permit { dst: OrigDstAddr(10.42.0.21:80), protocol: Detect { timeout: 10s }, labels: AuthzLabels { server: ServerLabel("default:all-unauthenticated"), authz: "default:all-unauthenticated" } } tls=Some(Established { client_id: Some(ClientId(Name("default.default.serviceaccount.identity.linkerd.cluster.local"))), negotiated_protocol: Some("transport.l5d.io/v1") }) client=10.42.0.12:51336
[    18.001525s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}: linkerd_app_inbound::http::router: using l5d-dst-canonical addr=nginx-svc.default.svc.cluster.local:80
[    18.001536s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}: linkerd_cache: Caching new service target=Logical { logical: Some(NameAddr { name: Name("nginx-svc.default.svc.cluster.local"), port: 80 }), addr: Remote(ServerAddr(10.42.0.21:80)), http: Http1, tls: Some(Established { client_id: Some(ClientId(Name("default.default.serviceaccount.identity.linkerd.cluster.local"))), negotiated_protocol: Some("transport.l5d.io/v1") }), permit: Permit { dst: OrigDstAddr(10.42.0.21:80), protocol: Detect { timeout: 10s }, labels: AuthzLabels { server: ServerLabel("default:all-unauthenticated"), authz: "default:all-unauthenticated" } }, labels: {"saz_name": "default:all-unauthenticated", "srv_name": "default:all-unauthenticated"} }
[    18.001580s] DEBUG ThreadId(01) evict{target=Logical { logical: Some(NameAddr { name: Name("nginx-svc.default.svc.cluster.local"), port: 80 }), addr: Remote(ServerAddr(10.42.0.21:80)), http: Http1, tls: Some(Established { client_id: Some(ClientId(Name("default.default.serviceaccount.identity.linkerd.cluster.local"))), negotiated_protocol: Some("transport.l5d.io/v1") }), permit: Permit { dst: OrigDstAddr(10.42.0.21:80), protocol: Detect { timeout: 10s }, labels: AuthzLabels { server: ServerLabel("default:all-unauthenticated"), authz: "default:all-unauthenticated" } }, labels: {"saz_name": "default:all-unauthenticated", "srv_name": "default:all-unauthenticated"} }}: linkerd_cache: Awaiting idleness
[    18.001608s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}: linkerd_stack::failfast: HTTP Logical service has become unavailable
[    18.001625s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_dns: resolve_srv name=linkerd-dst-headless.linkerd.svc.cluster.local.
[    18.001835s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_dns: ttl=4.99999445s addrs=[10.42.0.10:8086]
[    18.001845s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_proxy_dns_resolve: addrs=[10.42.0.10:8086] name=linkerd-dst-headless.linkerd.svc.cluster.local:8086
[    18.001862s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_proxy_discover::from_resolve: Changed change=Insert(10.42.0.10:8086, Target { addr: 10.42.0.10:8086, server_id: Some(ClientTls { server_id: ServerId(Name("linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local")), alpn: None }) })
[    18.001881s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}: linkerd_reconnect: Disconnected backoff=false
[    18.001891s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}: linkerd_reconnect: Creating service backoff=false
[    18.001901s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}: linkerd_proxy_transport::connect: Connecting server.addr=10.42.0.10:8086
[    18.001997s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}:h2: linkerd_proxy_transport::connect: Connected local.addr=10.42.0.21:48452 keepalive=Some(10s)
[    18.002496s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}:h2: linkerd_tls::client:
[    18.002539s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}:endpoint{addr=10.42.0.10:8086}: linkerd_reconnect: Connected
[    18.003042s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_reconnect: Disconnected backoff=false
[    18.003049s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_reconnect: Creating service backoff=false
[    18.003055s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_proxy_http::client: Building HTTP client settings=Http1
[    18.003060s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_reconnect: Connected
[    18.003066s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_service_profiles::http::proxy: Updating HTTP routes routes=0
[    18.003086s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile: linkerd_service_profiles::http::proxy: Updating HTTP routes routes=0
[    18.003097s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_proxy_http::client: method=GET uri=http://nginx-svc.default.svc.cluster.local/ version=HTTP/1.1
[    18.003104s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_proxy_http::client: headers={"host": "nginx-svc.default.svc.cluster.local", "user-agent": "curl/7.80.0-DEV", "accept": "*/*", "l5d-dst-canonical": "nginx-svc.default.svc.cluster.local:80", "l5d-client-id": "default.default.serviceaccount.identity.linkerd.cluster.local"}
[    18.003112s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_proxy_http::h1: Caching new client use_absolute_form=false
[    18.003128s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_proxy_transport::connect: Connecting server.addr=10.42.0.21:80
[    18.003175s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_proxy_transport::connect: Connected local.addr=10.42.0.21:40776 keepalive=None
[    18.003183s] DEBUG ThreadId(01) inbound:accept{client.addr=10.42.0.12:51336}:server{port=4143}:direct:opaque.http:http{v=h2}:http1{name=nginx-svc.default.svc.cluster.local:80}:profile:http1: linkerd_transport_metrics::client: client connection open
[    18.452910s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_tls::server: Peeked bytes from TCP stream sz=106
[    18.452981s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_detect: DetectResult protocol=Some(Http1) elapsed=12.557µs
[    18.453009s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_proxy_http::server: Creating HTTP service version=Http1
[    18.453102s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_proxy_http::server: Handling as HTTP version=Http1
[    18.453178s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_app_inbound::policy::authorize::http: Request authorized permit=Permit { dst: OrigDstAddr(0.0.0.0:4191), protocol: Detect { timeout: 10s }, labels: AuthzLabels { server: ServerLabel("default:all-unauthenticated"), authz: "default:all-unauthenticated" } } tls=None(NoClientHello) client=10.42.0.1:37420
[    18.453455s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_proxy_http::server: The client is shutting down the connection res=Ok(())
[    18.453526s] DEBUG ThreadId(02) daemon:admin{listen.addr=0.0.0.0:4191}:accept{client.addr=10.42.0.1:37420}: linkerd_app_core::serve: Connection closed

This seems to work in k3d. To support it, I had to implement a number of param traits on Local so that it can be used with the HTTP stack. In the process, I refactored some of the existing param implementations; I'd appreciate a more thorough look there to make sure we don't have redundancies or obvious failures.
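
For context on the param traits mentioned above, here is a rough, self-contained sketch of the pattern, with hypothetical `Local`, `OrigDstAddr`, and `HttpVersion` types standing in for the proxy's own. It only illustrates why the HTTP stack needs a `Param<T>` impl for each piece of target metadata it extracts:

```rust
// Hypothetical sketch of the "param" pattern: stack layers pull typed
// configuration out of a target via small Param<T> implementations.
trait Param<T> {
    fn param(&self) -> T;
}

#[derive(Clone, Debug)]
struct OrigDstAddr(std::net::SocketAddr);

#[derive(Clone, Debug)]
enum HttpVersion {
    Http1,
    H2,
}

// Stand-in for the PR's Local target.
#[derive(Clone, Debug)]
struct Local {
    addr: OrigDstAddr,
    http: HttpVersion,
}

impl Param<OrigDstAddr> for Local {
    fn param(&self) -> OrigDstAddr {
        self.addr.clone()
    }
}

impl Param<HttpVersion> for Local {
    fn param(&self) -> HttpVersion {
        self.http.clone()
    }
}

// A layer in the HTTP stack can then stay generic over any target type that
// provides the parameters it needs.
fn server_version<T: Param<HttpVersion>>(target: &T) -> HttpVersion {
    target.param()
}
```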

Edit: I see there are some leftover compiler assertions we used for the target type. Do we generally want those removed when the PR is submitted? Probably an obvious question but thought I'd ask anyway.

If an endpoint is set as opaque, but its logical service is not marked
as opaque, the connection will error out. The `TransportHeader` in this
case will not contain the logical name of the service, but it will still
carry the connection protocol.

Through this change, when an endpoint is marked as opaque and we connect
to the proxy's inbound port, if the `TransportHeader` has a protocol, we
go through the inbound HTTP stack instead of proxying TCP.
Add session protocol to local

Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Matei David <matei@buoyant.io>
Signed-off-by: Matei David <matei@buoyant.io>
@mateiidavid mateiidavid requested a review from a team December 20, 2021 16:48
Signed-off-by: Matei David <matei@buoyant.io>
Comment on lines -134 to -135
let permit =
allow.check_authorized(client.client_addr, &tls)?;
If I understand correctly, we are no longer checking the policy on opaque connections?

It's probably better to introduce a new target type--LocalHttp or something (and rename Local to LocalTcp?). This switch could return an Either<Either<LocalTcp, LocalHttp>, GatewayTransportHeader> and then the inner switch predicate can simply return the target... I can probably put a suggestion up to this effect.
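
A rough sketch of what that nested switch could look like, with hypothetical types and a locally defined `Either` (not the proxy's actual API):

```rust
// Illustrative only: the outer switch separates local from gateway traffic,
// and the inner switch splits local traffic into TCP vs. HTTP targets.
enum Either<A, B> {
    Left(A),
    Right(B),
}

struct LocalTcp { port: u16 }
struct LocalHttp { port: u16 }
struct GatewayTransportHeader { name: String, port: u16 }

struct TransportHeader {
    port: u16,
    name: Option<String>,
    protocol: Option<&'static str>,
}

fn switch(h: TransportHeader) -> Either<Either<LocalTcp, LocalHttp>, GatewayTransportHeader> {
    match h.name {
        // Named targets are gateway connections.
        Some(name) => Either::Right(GatewayTransportHeader { name, port: h.port }),
        // Unnamed targets stay local; a session protocol selects the HTTP path.
        None => match h.protocol {
            Some(_) => Either::Left(Either::Right(LocalHttp { port: h.port })),
            None => Either::Left(Either::Left(LocalTcp { port: h.port })),
        },
    }
}
```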


* Introduce a dedicated direct::LocalHttp target type

* -pub

* - needless borrow

* remove redundant check_policy
linkerd/app/inbound/src/server.rs
@olix0r olix0r merged commit 08060ee into main Dec 23, 2021
@olix0r olix0r deleted the matei/opaque-n-http-traffic branch December 23, 2021 16:55
olix0r added a commit to linkerd/linkerd2 that referenced this pull request Jan 6, 2022
In addition to dependency updates, this change updates the inbound proxy
to handle opaquely transported HTTP traffic. This fixes an issue
encountered when a `Service`'s opaque ports annotation does not match
that of the pods in the service. This situation should be rare.

---

* Handle HTTP traffic over opaque transport connections (linkerd/linkerd2-proxy#1416)
* build(deps): bump tracing-subscriber from 0.3.3 to 0.3.4 (linkerd/linkerd2-proxy#1421)
* build(deps): bump pin-project from 1.0.8 to 1.0.9 (linkerd/linkerd2-proxy#1422)
* build(deps): bump tracing-subscriber from 0.3.4 to 0.3.5 (linkerd/linkerd2-proxy#1423)
* build(deps): bump pin-project from 1.0.9 to 1.0.10 (linkerd/linkerd2-proxy#1425)
* build(deps): bump http from 0.2.5 to 0.2.6 (linkerd/linkerd2-proxy#1424)
* build(deps): bump serde_json from 1.0.73 to 1.0.74 (linkerd/linkerd2-proxy#1427)
* Decouple client connection metadata from the I/O type (linkerd/linkerd2-proxy#1426)
* tests: rename 'metrics' addr to 'admin' (linkerd/linkerd2-proxy#1429)