feat: forward headers to grpc services #1382
Conversation
📝 Walkthrough

Adds propagation of incoming HTTP headers into outgoing gRPC metadata for the gRPC datasource, plus tests and mock gRPC handlers that read those metadata keys and apply them to override response fields.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (HTTP request)
    participant Engine as Engine Datasource
    participant GRPC as gRPC Service
    Client->>Engine: HTTP request with headers
    Engine->>Engine: Lowercase headers -> key/value pairs
    Engine->>GRPC: gRPC call (context with metadata)
    GRPC->>GRPC: Read metadata (x-user-id, x-user-prefix, x-custom-name, x-count-offset)
    GRPC-->>Engine: Response (data possibly overridden by metadata)
    Engine-->>Client: GraphQL response
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
⚠️ Outside diff range comments (1)
v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go (1)
95-125: ⚠️ Potential issue | 🟠 Major

Use metadata.Join to preserve upstream outgoing context metadata.

metadata.NewOutgoingContext replaces any existing outgoing metadata on the context rather than merging with it. If upstream code sets auth, tracing, or other metadata before calling Load, it will be lost. Merge with existing metadata before creating the new outgoing context.

🛠️ Suggested fix

```diff
 // convert headers to grpc metadata and attach to ctx
 if len(headers) > 0 {
 	md := make(metadata.MD, len(headers))
 	for k, v := range headers {
 		md.Set(strings.ToLower(k), v...)
 	}
+	if existing, ok := metadata.FromOutgoingContext(ctx); ok {
+		md = metadata.Join(existing, md)
+	}
 	ctx = metadata.NewOutgoingContext(ctx, md)
 }
```
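The merge semantics behind this suggestion can be sketched without a gRPC dependency. The `MD` type below is a minimal stand-in for `metadata.MD` (a map of lowercase key to values), and `join` mirrors the documented behavior of `metadata.Join` — duplicate keys are concatenated, not replaced — which is why merging preserves upstream entries:

```go
package main

import "fmt"

// MD mirrors the shape of grpc metadata.MD: lowercase key -> values.
type MD map[string][]string

// join mimics metadata.Join: values for the same key from all inputs
// are appended, so existing outgoing metadata survives the merge.
func join(mds ...MD) MD {
	out := MD{}
	for _, md := range mds {
		for k, v := range md {
			out[k] = append(out[k], v...)
		}
	}
	return out
}

func main() {
	existing := MD{"authorization": {"Bearer t"}} // set by upstream code
	headers := MD{"x-user-id": {"42"}}            // from the HTTP request

	merged := join(existing, headers)
	fmt.Println(len(merged)) // 2: both keys present after the merge
}
```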
🧹 Nitpick comments (1)
v2/pkg/grpctest/mockservice_resolve.go (1)
833-852: Guard x-count-offset parsing against int32 overflow.

Line 843 uses strconv.Atoi then casts to int32, which can wrap on large values. Parsing with a 32‑bit bound keeps offsets predictable.

🔧 Suggested change

```diff
-	if v, err := strconv.Atoi(values[0]); err == nil {
-		offset = int32(v)
-	}
+	if v, err := strconv.ParseInt(values[0], 10, 32); err == nil {
+		offset = int32(v)
+	}
```
jensneuse
left a comment
Two minor points worth considering
NewOutgoingContext vs AppendToOutgoingContext: The code uses metadata.NewOutgoingContext, which replaces any existing outgoing metadata on the context. If the ctx ever arrives with pre-existing gRPC metadata, it would be lost. Using metadata.AppendToOutgoingContext would be safer for forward-compatibility, though today this isn’t an issue since the ctx comes from an HTTP request context.
No HTTP-specific header filtering: Headers like content-type, content-length, host, connection are meaningless as gRPC metadata and could theoretically conflict (e.g., content-type: application/json clashing with gRPC’s application/grpc). Headers starting with grpc- are reserved by gRPC itself. However, since the router controls what gets passed, this is a policy decision for the router, not this library — and the HTTP datasource makes the same trust assumption.
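A minimal sketch of the filtering policy described above — the skip list and the helper name `forwardable` are illustrative choices, not the router's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// skip lists HTTP-specific headers that are meaningless (or
// conflicting) as gRPC metadata; illustrative, not exhaustive.
var skip = map[string]bool{
	"content-type":   true,
	"content-length": true,
	"host":           true,
	"connection":     true,
}

// forwardable reports whether a header should be passed through as
// gRPC metadata. Keys starting with "grpc-" are reserved by gRPC.
func forwardable(name string) bool {
	name = strings.ToLower(name)
	return !skip[name] && !strings.HasPrefix(name, "grpc-")
}

func main() {
	fmt.Println(forwardable("X-User-Id"))    // true
	fmt.Println(forwardable("Content-Type")) // false
	fmt.Println(forwardable("grpc-timeout")) // false
}
```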
Good catches, I added a commit fixing this.

I'll add the filtering in the router PR then. I'll leave this one open until the router PR is ready, because if I merged this straight away we would get all headers immediately without filtering.
⚠️ Outside diff range comments (1)
v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go (1)
95-129: ⚠️ Potential issue | 🟠 Major

Filter/validate HTTP header names before converting to gRPC metadata.

gRPC metadata keys enforce strict validation: only ASCII 0-9, a-z, A-Z (normalized to lowercase), and - _ . are allowed. The grpc- prefix is reserved for gRPC internals, and forwarding arbitrary HTTP headers without validation risks gRPC rejecting the call or causing errors. Currently, the code only lowercases headers but does not validate the key format or skip reserved names before passing to metadata.AppendToOutgoingContext. Consider implementing a validation check to skip invalid keys and the grpc- prefix.

🔧 Suggested guard

```diff
 if len(headers) > 0 {
 	// assume that each header has exactly one value for default pairs size
 	pairs := make([]string, 0, len(headers)*2)
 	for headerName, headerValues := range headers {
 		headerName = strings.ToLower(headerName)
+		if !isValidMetadataKey(headerName) || strings.HasPrefix(headerName, "grpc-") {
+			continue
+		}
 		for _, v := range headerValues {
 			pairs = append(pairs, headerName, v)
 		}
 	}
 	ctx = metadata.AppendToOutgoingContext(ctx, pairs...)
 }
+
+func isValidMetadataKey(k string) bool {
+	if k == "" {
+		return false
+	}
+	for i := 0; i < len(k); i++ {
+		c := k[i]
+		if (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '-' || c == '_' || c == '.' {
+			continue
+		}
+		return false
+	}
+	return true
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go` around lines 95 - 129, In Load, before converting HTTP headers to gRPC metadata (the block that builds pairs and calls metadata.AppendToOutgoingContext), validate and filter header keys: normalize to lowercase, skip any key that begins with the reserved prefix "grpc-" and skip keys that contain characters outside the allowed set (ASCII letters a–z, digits 0–9, and the characters '-', '_' and '.'); only append pairs for keys that pass this check so metadata.AppendToOutgoingContext receives only valid gRPC keys. Use the existing headers iteration in Load (the pairs creation logic) to apply this filter so no change is needed to acquirePoolItem or newJSONBuilder usage.
🤖 I have created a release *beep* *boop*

---

## [2.0.0-rc.252](v2.0.0-rc.251...v2.0.0-rc.252) (2026-02-19)

### Features

* forward headers to grpc subgraphs ([#1382](#1382)) ([8459b34](8459b34))

### Bug Fixes

* **resolve:** fix flaky singleflight deduplication tests ([#1393](#1393)) ([4105082](4105082))
* **resolve:** guard OnFinished against nil loaderHookContext on skipped fetches ([#1394](#1394)) ([f79d071](f79d071))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
This is part of a Cosmo Connect related feature. It allows sending HTTP headers from user requests down to gRPC sources. Technically, it converts headers to gRPC metadata on datasource.Load. A header X-My-Custom-Header will be sent as a gRPC metadata field x-my-custom-header with all gRPC calls the datasource makes. The change is implemented solely in the gRPC datasource. All headers are mapped one-to-one as they are given to the datasource's Load method.

Checklist
Summary by CodeRabbit

* New Features
* Tests