VC-36510: Key Pair and Venafi Connection modes now use gzip compression#594
Conversation
By the way, I don't understand why we have set a client timeout of 1 minute in Key Pair service account mode... Timeouts are a good practice in most cases, but I don't think this is such a case. I also noted that the Venafi Kubernetes Agent's user agent is off: `user-agent: Go-http-client/2.0`. It should be something like `user-agent: venafi-kubernetes-agent/1.1.0`.
- Add some comments in the code explaining why the collected data is compressed using GZIP: to reduce the likelihood of exceeding the maximum request body size limit imposed by the server, or by load balancers and proxies in between.
- Show evidence that this still works with the old Jetstack TLSPK API server (does it handle requests with Content-Encoding: gzip?), or explain why that is not important.
- Did you consider adding a flag to allow users to turn off the gzip content encoding, in case it doesn't work with a corporate proxy?
- Or did you consider checking for HTTP server error 415 Unsupported Media Type and resending the request uncompressed?
- Does the API documentation explain that the API supports gzip encoded requests?
Good point, I forgot to explain why. I've added a few comments in 22def46 to explain that.
I haven't added
Good question... I hadn't thought about that. I've looked around and haven't found evidence of people struggling with GZIP-encoded requests, so I don't think there is a need to add a flag to disable GZIP encoding.
I hadn't considered this. It seems like some work to add this logic. Note: it looks like 415 Unsupported Media Type isn't the only possible error.
I had not thought of doing that. I tried a Google search and also checked the ad-hoc internal documentation I maintain. I conclude that the VCP API doesn't officially support gzip-encoded requests... Should I let the Production Engineering team know about this? I'm not sure what to do about it.
I realize that this PR is trickier than anticipated. @j-fuentes told me: "if it is low-hanging fruit, just enable compression; if it is not low-hanging fruit, discuss further actions". I'll have a chat with Jose to figure out what to do next. Update: we have decided to continue even though it is a little more difficult than anticipated. We have decided to go with
(force-pushed from 22def46 to fa29555)
@tfadeyi I'm done with the PR! I have added the necessary unit tests and have run a smoke test in Venafi Connection mode as well as in Key Pair mode. I am confident that this will work and will not break customers. You can proceed with the review, and with merging and releasing v1.2.0 once you feel confident. Before releasing, can you run "Test 7"? I've sent data to the tenant https://ven-tlspk.venafi.cloud/ at around 2024/10/18 19:21:37 UTC; can you check that the bucket contains the data?
For context, we have added gzip compression for the request bodies of data sent to the Venafi Control Plane API because we have hit 413 errors in NGINX, and we want to reduce the network load the Agent causes for intermediate proxies and for our NGINX frontend (which buffers requests at the moment).
(force-pushed from fa29555 to 179b748)
I've tested e2e (pushing data from this branch of the agent through to checking the blob storage) and the data is successfully uploaded to the backend in both modes. I'm going to look into the failing unit tests.
Signed-off-by: Oluwole Fadeyi <oluwole.fadeyi@venafi.com>

Summary:
- The NGINX limits were raised: `client_max_body_size` from 256 MB to 1024 MB, and `client_body_buffer_size` from 128 MB to 750 MB (gitlab MR, message).
- The 413 errors occurred when `Content-Length` was above 256 MB. When the request body goes above 750 MB, the body is written to a temporary file on disk rather than buffered in memory.
- Request buffering is controlled by `proxy_request_buffering` and is `on` by default.
- We decided to add `--disable-compression` and let the Support team tell customers about it if a problem occurs. I'm also in favor of adding `--disable-compression` for development purposes (e.g., when using mitmproxy).

Testing:
- `./hack/e2e/test.sh` (see results below)
- `--disable-compression` is used in Key Pair mode → unit test written
- `--disable-compression` is used in Venafi Connection mode → unit test written

Rollout Plan:
- `venctl` will not immediately get v1.2.0 for the reason explained in the `#venctl` channel. You can contact @SgtCoDFish or @tfadeyi to know more about that.
- Customers can use `--disable-compression` if a problem occurs.

Before: request body weighs 49 MB:
After: request body weighs 5 MB:
^ note that the above request was initially missing the `Content-Encoding: gzip` header. I fixed this in 22def46. (See the section below to learn how to reproduce this.)
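A small check along these lines could guard against that header regressing again; a hypothetical sketch using Go's `httptest` (invented names, not the PR's actual unit test):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// sentEncoding sends a gzip-compressed body to a throwaway test server
// and reports the Content-Encoding header the server actually received.
func sentEncoding() (string, error) {
	var got string
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got = r.Header.Get("Content-Encoding")
	}))
	defer srv.Close()

	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte(`{"data":"example"}`)); err != nil {
		return "", err
	}
	if err := zw.Close(); err != nil {
		return "", err
	}

	req, err := http.NewRequest(http.MethodPost, srv.URL, &buf)
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Encoding", "gzip") // the header the fix added
	if _, err := srv.Client().Do(req); err != nil {
		return "", err
	}
	return got, nil
}

func main() {
	enc, err := sentEncoding()
	if err != nil {
		panic(err)
	}
	fmt.Println("server received Content-Encoding:", enc) // prints "gzip"
}
```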
Manual Tests
Test 1
For this test, I've used the tenant https://ven-tlspk.venafi.cloud/. To access the API key, use the user `system.admin@tlspk.qa.venafi.io`; the password is visible in the Production Accounts page (private to Venafi). Then go to the settings, find the API key, and set it in the env var `APIKEY`.

The tenant https://ven-tlspk.venafi.cloud/ doesn't have the right tier to pull images, so I use an API key from the tenant https://glow-in-the-dark.venafi.cloud/; that's why I set the env var `APIKEY_GLOW_IN_THE_DARK`. Ask Atanas to get access to the tenant https://glow-in-the-dark.venafi.cloud/.

Then:
Finally, run the smoke test in Venafi Connection mode:
Result:
Test 2
For this test, I've decided to use mitmproxy to see exactly what is being sent on the wire.
First, copy the following script to a file called `bigjsonbody.go` (written in Go because bash is super slow):

Then:
Finally:
For this test, I've used the tenant https://ven-tlspk.venafi.cloud/. To access the API key, use the user `system.admin@tlspk.qa.venafi.io`; the password is visible in the Production Accounts page (private to Venafi). Then go to the settings, find the API key, and set it as an env var: `export APIKEY=...`

Create the service account and key pair:
Now, make sure to have `127.0.0.1 me` in your `/etc/hosts`.

Then, run mitmproxy with:
Run this:
Finally, run the Agent with:
Result: