Fix TA in clusters with global proxy #3187
Conversation
Signed-off-by: Pavol Loffay <p.loffay@gmail.com>
@@ -5,6 +5,7 @@ metadata:
   creationTimestamp: null
   name: targetallocator-kubernetessd
 spec:
+  namespace: chainsaw-targetallocator-kubernetessd
Switching to static namespaces for the tests that assert the collector config map with the TA endpoint (it now contains the namespace).
We will be able to get rid of this once we migrate to v1beta1 and use the namespace substitution provided by chainsaw, as sketched below.
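For illustration, a rough sketch of what that substitution could look like; this assumes chainsaw's built-in `($namespace)` binding with resource templating, and the exact shape of the migrated test file is an assumption, not part of this PR:

```yaml
# Hypothetical chainsaw test resource after the v1beta1 migration:
# ($namespace) is substituted by chainsaw with the ephemeral test
# namespace, removing the need for a hardcoded static namespace.
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: targetallocator-kubernetessd
  namespace: ($namespace)
```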
@@ -279,15 +279,15 @@ func TestAddHTTPSDConfigToPromConfig(t *testing.T) {
 			"job_name": "test_job",
 			"http_sd_configs": []interface{}{
 				map[string]interface{}{
-					"url": fmt.Sprintf("http://%s:80/jobs/%s/targets?collector_id=$POD_NAME", taServiceName, url.QueryEscape("test_job")),
+					"url": fmt.Sprintf("http://%s.default.svc.cluster.local:80/jobs/%s/targets?collector_id=$POD_NAME", taServiceName, url.QueryEscape("test_job")),
I think this should always work... do we have any concerns around custom DNS resolvers? I guess this would probably fail today for those use cases, but I want to be sure this couldn't possibly break a user.
I am not sure I follow you. Could you please explain more about how this fix could break users with a custom DNS resolver? Using `default.svc.cluster.local` is pretty standard on k8s.
Yes, I guess my concern is for users with custom DNS policies set: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config. I think I'm being overly cautious.
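For context, this is the kind of setup the concern is about; a minimal sketch following the linked Kubernetes docs, with placeholder resolver and search-domain values:

```yaml
# With dnsPolicy: "None" the pod ignores cluster DNS entirely and uses
# only the resolvers listed below, so a name like
# <svc>.default.svc.cluster.local may not resolve unless the custom
# resolver forwards cluster-domain queries appropriately.
apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-example        # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 192.0.2.10                # placeholder resolver address
    searches:
      - example.internal          # placeholder search domain
```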
I believe this change breaks us. Why is this hardcoded as `svc.cluster.local`? I assume many people use a custom cluster domain.
@foolusion sorry to hear that, would you be able to open a new issue explaining the bug you're running into?
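For reference on why the hardcoded suffix can break: the cluster domain is a kubelet-level setting, so clusters are free to use something other than `cluster.local`. A minimal sketch with a placeholder domain:

```yaml
# KubeletConfiguration with a non-default cluster domain. On such a
# cluster, services resolve under *.svc.example.local, and a URL
# hardcoded to *.svc.cluster.local fails to resolve.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDomain: example.local      # placeholder custom domain
clusterDNS:
  - 10.96.0.10                    # typical in-cluster DNS service IP
```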
Operator image: docker.io/pavolloffay/opentelemetry-operator:dev-b67d063e-1722432504
Description:
See the changelog for more info.
On clusters with a global proxy, the following error was happening on the collector.
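As background, a sketch of the kind of environment this fix targets; the variable values are illustrative assumptions, since the exact settings come from the cluster's global proxy configuration:

```yaml
# Proxy env vars as typically injected cluster-wide. A fully qualified
# TA URL (*.svc.cluster.local) matches the NO_PROXY suffixes and stays
# in-cluster; the previous short service hostname did not match, so the
# request was sent to the proxy, which cannot reach in-cluster services.
env:
  - name: HTTP_PROXY
    value: http://proxy.example.com:3128            # placeholder
  - name: HTTPS_PROXY
    value: http://proxy.example.com:3128            # placeholder
  - name: NO_PROXY
    value: .svc,.cluster.local,localhost,127.0.0.1  # assumed typical list
```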
Link to tracking Issue(s):
Testing:
Documentation: