mock: act as proxy #314
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pohly

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@msau42: GitHub didn't allow me to assign the following users: tsmetana. Note that only kubernetes-csi members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.

In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
We discussed a bit whether this functionality is needed and, if so, where it should be implemented: kubernetes/kubernetes#97069 (comment)
internal/proxy/proxy_test.go
Outdated
    limitations under the License.
    */

    // Package proxy makes it possible to forward a listening socket in
Nit: is it necessary to repeat the documentation in a test file?
No, that wasn't intentional. Will remove it.
internal/proxy/proxy.go
Outdated
        klog.V(5).Infof("proxy endpoint %s closed, shutting down", endpoint1)
        return
    }
    go func() {
It took me a while to understand how this works. A separate goroutine for s2 is started so that accepting on s1 can continue as soon as possible. OK, it looks like it works (a comment would be nice).
At the same time, the comment above says that the kernel accepts connections on listening sockets. Therefore, can the proxy call accept(s2) here without a goroutine? The kernel will keep incoming connections on s1 queued.
The way it is now, we collect connections from s1 as quickly as they come in and rely on the kernel only for s2. But it is indeed asymmetric and harder to understand. I'll try a simpler approach.
The unnecessarily complex approach also had a race condition: conn1 might change between the two accepts.
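For illustration only (this is a sketch with hypothetical names, not the code merged in this PR), the symmetric approach discussed above could look roughly like this: accept one connection on each listener in sequence, then copy data in both directions with a pair of goroutines, relying on the kernel to queue any further connections that arrive on either listening socket in the meantime.

```go
package proxy

import (
	"io"
	"net"
	"sync"
)

// forwardOnePair is a hypothetical helper, not the exact code in this PR:
// it accepts one connection on each listener, then copies data in both
// directions until both directions are finished. While the second Accept
// is pending, the kernel queues connections that arrive on the other
// listener, so accepting sequentially does not lose anything.
// A production proxy would also need to unblock the second copy when the
// first one ends, for example by closing the connections.
func forwardOnePair(l1, l2 net.Listener) error {
	conn1, err := l1.Accept()
	if err != nil {
		return err
	}
	conn2, err := l2.Accept()
	if err != nil {
		conn1.Close()
		return err
	}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		io.Copy(conn2, conn1) // endpoint1 -> endpoint2
	}()
	go func() {
		defer wg.Done()
		io.Copy(conn1, conn2) // endpoint2 -> endpoint1
	}()
	wg.Wait()
	conn1.Close()
	conn2.Close()
	return nil
}
```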
@jsafrane: I've pushed the simplified proxy code, fixed the proxy_test.go comment and modified log levels slightly: when run at -v3, all we'll get from the container is the "proxy established a new connection between ..." message. gRPC message logging will come from the embedded mock driver inside the e2e.test binary and the other sidecars (in particular, node-driver-registrar).
I've verified that e2e.test still works with this: the tests are still running, but so far everything has worked fine.
/lgtm
What type of PR is this?
/kind feature
What this PR does / why we need it:
By running the mock driver in proxy mode with the csi.sock made
available via another listening endpoint it becomes possible to run
the actual driver code elsewhere, for example embedded inside an
e2e.test binary.
The second endpoint can be a TCP port that can be reached from outside
of a Kubernetes cluster, either via a service or port-forwarding.
The same can be done with "socat UNIX-LISTEN:/csi/csi.sock,fork
TCP-LISTEN:9000,reuseport", but the implementation inside the mock
driver has some advantages compared to that:
- the same image as for normal mock driver testing can be used
- the mock driver keeps the second listen socket open at all times,
  so a client connecting to it doesn't get "connection refused"
  errors, which happens while socat hasn't forked to accept
  such a connection
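As an illustration of the idea (not the mock driver's actual implementation), a rough Go equivalent of the socat command above might look like the sketch below; the path and port are the example values from that command, and error handling plus connection cleanup are omitted.

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Both listeners are created up front and stay open for the lifetime
	// of the process; the path and port are the placeholders from the
	// socat example above.
	csiListener, err := net.Listen("unix", "/csi/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	tcpListener, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		// Pair one connection from each endpoint and forward between them.
		// Cleanup when a connection pair ends is omitted in this sketch.
		csiConn, err := csiListener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		tcpConn, err := tcpListener.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go io.Copy(tcpConn, csiConn) // csi.sock side -> TCP client
		go io.Copy(csiConn, tcpConn) // TCP client -> csi.sock side
	}
}
```

Because both listeners exist from process start, a client that connects to the TCP port before a CSI connection arrives is queued by the kernel rather than rejected, which is the advantage over the forking socat invocation described above.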
Special notes for your reviewer:
A corresponding change to Kubernetes is here: kubernetes-csi/node-driver-registrar#124
Does this PR introduce a user-facing change?: