TLS handshake error in opentelemetry-operator #1235
Comments
Can you share the logs from the operator or the pods that you said indicated a failure? Can you also share the operator version for your custom image?
@jaronoff97 opentelemetry-operator version: 0.102.0. Logs from the operator pod:
Logs from the cert-manager pod:
Just for an update: this is a GCP setup, and we have already whitelisted the port, allowing the master nodes access to port 9443/tcp on the worker nodes.
@jaronoff97 any update here?
I was away for the weekend. Unfortunately, this is related to a known issue with Go + Kubernetes. You can read more about this issue here. Please comment on this issue if you have the time. I'm going to close this in favor of the operator's tracking issue. The failures should be intermittent and non-permanent, which is visible in the timestamps of the EOF logs. If this is a permanent TLS failure, please let me know and I will reopen this issue.
@jaronoff97 this is a permanent issue for us, and I need one confirmation. After deploying the operator, we deployed the collector using the Helm chart below, and then we added a Python Instrumentation resource and the corresponding annotation on the python-app pod, but we have not seen any entries in the collector or in Jaeger (which stores the data from the collector). The Helm configuration is below.
I am not sure what I did wrong here, and I need one confirmation from your side: if the TLS handshake error appears in the operator pod, but the collector and the Instrumentation resource are created without errors, and on the application pod side we can see the init containers start correctly, does that mean the operator is working fine (I am assuming so)? Even so, we still do not receive any traces in Jaeger. Can you please suggest a next step, or tell us whether we are going in the wrong direction here? opentelemetry-collector version: 0.95.0
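For anyone comparing setups, a minimal sketch of what the Instrumentation resource and the pod annotation for Python auto-instrumentation typically look like — the resource names, namespace, and collector endpoint below are placeholders, not the configuration used in this report:

```yaml
# Illustrative Instrumentation resource. Python auto-instrumentation
# exports over OTLP http/protobuf, so the endpoint should point at the
# collector's 4318 port (the hostname below is a placeholder).
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: python-instrumentation
spec:
  exporter:
    endpoint: http://my-collector:4318
  propagators:
    - tracecontext
    - baggage
---
# The application opts in through this annotation on the pod template,
# which makes the operator inject the auto-instrumentation init container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
spec:
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
      annotations:
        instrumentation.opentelemetry.io/inject-python: "true"
    spec:
      containers:
        - name: python-app
          image: python-app:latest  # placeholder image
```

If the init container is injected but nothing reaches the collector, adding a debug/logging exporter on the collector side is a quick way to confirm whether spans arrive there at all before worrying about the Jaeger hop.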
@jaronoff97 just for an update: I think I found the issue with the Python auto-instrumentation. When we run the Python app with debug mode enabled, the instrumentation is not able to trace requests; after removing debug mode, we can see the traces. Is that the normal behaviour?
I'm not positive that's the normal behavior... but if you were using the debug exporter, that would be expected. I would also verify that your destination endpoint is correct:
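For illustration only (the service name and port here are assumptions, not the reporter's setup), a collector config fragment that forwards traces to Jaeger over OTLP looks roughly like this:

```yaml
# Sketch of a traces pipeline exporting to Jaeger's OTLP gRPC receiver.
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}
exporters:
  otlp/jaeger:
    endpoint: jaeger-collector.observability.svc.cluster.local:4317  # assumed Jaeger service
    tls:
      insecure: true  # only if Jaeger's OTLP receiver is plaintext
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
```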
@jaronoff97 Is there a way to achieve consistent service naming for rollout resources, similar to how it is done for Deployment resources? I am aware that adding the service name as an environment variable in each application's Helm file is a solution, but I am looking for an alternative approach that avoids this. Could you please help with this?
It sounds like this question is different from the topic of this issue and is probably better asked in the CNCF Slack, in the otel-helm-charts channel. Could you re-ask the question there, and we can continue the discussion there?
Hi, is there any progress on this issue? I have the same problem.
Hi All,
I am very new to OpenTelemetry. I was deploying the operator using this link,
but after deploying, I am seeing these logs in the operator pod: http: TLS handshake error from x.x.x.x:50516.
However, I can see from the API server that the request is received and the connection is established from the operator pod.
kube-apiserver logs:
I have already tried cert-manager, the auto-generated certificate, and passing our own certificates, but in every case we receive the same issue.
Using this values file for the operator:
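(The values file itself is not reproduced above; as a reference point, the webhook/certificate section of the operator chart's values looks roughly like the sketch below. Field names follow the chart's values layout as I understand it, and the values are placeholders, not the file actually used.)

```yaml
# Illustrative opentelemetry-operator chart values, not the reporter's file.
admissionWebhooks:
  certManager:
    enabled: true        # let cert-manager issue the webhook certificate
  # autoGenerateCert:
  #   enabled: true      # alternative path when cert-manager is not used
manager:
  collectorImage:
    repository: otel/opentelemetry-collector-contrib  # assumed default image
```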
helm version: 3.14
Kubernetes version: 1.28
Go version: go1.21.9
kubectl: 0.26.11
chart version: 0.62.0
I am not sure what I am doing wrong here. Can someone help, as we need to get tracing working with auto-instrumentation?