
Provide a mechanism to disable the 0.0.0.0 bind address warning #6938

Closed
sirianni opened this issue Jan 12, 2023 · 13 comments
@sirianni

Is your feature request related to a problem? Please describe.
After upgrading to v0.69.0 we get warnings on collector startup

Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks

We are deploying the collector in a Kubernetes environment and this warning does not apply. The collector's own docs mention this

OpenTelemetry Collector recommends binding receivers' servers to addresses that limit connections to authorized users. This is typically not needed in containerized environments, although the OpenTelemetry Collector logs the following....

Describe the solution you'd like
I would like a way to disable this warning since it is not applicable to our deployment.

This could be a new feature flag, or a stylized way to encode the bind address such that it indicates that the user has explicitly opted in to the 0.0.0.0 configuration.
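To make the second idea concrete, here is a purely hypothetical sketch; the unspecified_address_acknowledged key does not exist in the collector and is invented only to illustrate what an explicit opt-in could look like:

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
        # hypothetical key, not implemented: acknowledge that the wide bind is
        # intentional so the collector suppresses the startup warning
        unspecified_address_acknowledged: true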

Describe alternatives you've considered

  • Raise the collector log level to error so warnings are suppressed. Not desirable since I may miss legitimate warnings.

  • Ignore the warnings.

    • Not desirable since we try to follow a "warning free ops" model (like "warning free code" 🙂 ).
    • The warnings also cause angst/confusion for developers, since an ugly stack trace is emitted along with the warning.

Slack reference

@Aneurysm9
Member

Ignore the warnings.

  • Not desirable since we try to follow a "warning free ops" model (like "warning free code" 🙂 ).

Setting a listening address explicitly is the effective way to silence this warning that is in line with the model of having warning-free code. You don't get warning-free code by disabling the warnings, but by making your code not do the things that generate them.
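For illustration, "setting a listening address explicitly" in a receiver config looks like this; the address and port are placeholders for whichever interface you actually intend to expose:

receivers:
  otlp:
    protocols:
      grpc:
        # bind to one specific interface instead of the unspecified address 0.0.0.0
        endpoint: 10.0.12.34:4317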

@TylerHelmuth
Member

If k8s is not an exception to this security guidance and the expectation is that users deploying a collector in k8s should be explicitly setting the value, then we should remove the exception from our documentation.

@sirianni
Author

You don't get warning-free code by disabling the warnings, but by making your code not do the things that generate them.

I agree. Forgive my limited Kubernetes networking knowledge, but what address should I be binding to in order to allow traffic from outside the pod (via a k8s Service)? My understanding is that localhost or 127.0.0.1 is only the loopback interface.

@Aneurysm9
Member

You don't get warning-free code by disabling the warnings, but by making your code not do the things that generate them.

I agree. Forgive my limited Kubernetes networking knowledge, but what address should I be binding to in order to allow traffic from outside the pod (via a k8s Service)? My understanding is that localhost or 127.0.0.1 is only the loopback interface.

That depends on your configuration and isn't something I can answer for you. Options include using the downward API to get the pod IP into an environment variable and interpolating that into your config, using a service mesh or other proxy inside the pod and binding to localhost, or probably many others I'm not thinking of because I don't know your environment.
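As a minimal sketch of the downward API option (the receiver and port are illustrative; the ${POD_IP} expansion matches the syntax used later in this thread):

# pod spec fragment: expose the pod IP to the collector container
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP

# collector config fragment: interpolate the variable into the receiver endpoint
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${POD_IP}:4317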

@sirianni
Author

sirianni commented Jan 12, 2023

I don't know your environment.

Exactly. In my environment it's safe to bind to 0.0.0.0. Yet the code insists on warning.

Allowing the configuration to explicitly ignore that warning means that I understand the risks and have confirmed they don't apply in my setup.

@ethan256

ethan256 commented Jan 13, 2023

You don't get warning-free code by disabling the warnings, but by making your code not do the things that generate them.

I agree. Forgive my limited Kubernetes networking knowledge, but what address should I be binding to in order to allow traffic from outside the pod (via a k8s Service)? My understanding is that localhost or 127.0.0.1 is only the loopback interface.

That depends on your configuration and isn't something I can answer for you. Options include using the downward API to get the pod IP into an environment variable and interpolating that into your config, using a service mesh or other proxy inside the pod and binding to localhost, or probably many others I'm not thinking of because I don't know your environment.

I agree with this view. I think an environment variable should be provided for initialization. For example, provide an environment variable $HOSTIP; when this variable is not set, the endpoint defaults to 0.0.0.0:<port>, otherwise the value of $HOSTIP is used instead of 0.0.0.0 for the endpoint.

@ethan256

ethan256 commented Jan 13, 2023

@sirianni

The values in the configuration file can be overridden with --set. For example:

OpenTelemetryCollector

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
spec:
  args:
    set: receivers.otlp.protocols.http.endpoint=${POD_IP}:4318
  env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          http:

    exporters:
      otlphttp:
        endpoint: http://localhost:4318/v1/traces

    processors:
      batch:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlphttp]

@jpkrohling
Member

I don't know your environment.

Exactly.

This was taken way out of context. I'm with @Aneurysm9 here: you should understand your network before forcing a solution to make your deployment available on a broader surface than needed. Until you do that, I recommend living with the warning as a reminder that a better setting is available.

As others have noted, both here and on Slack, using 0.0.0.0 is rarely the best solution, as you'd be exposing your service to not only the current interfaces on the host but all future interfaces as well. The right solution is to expose only to the interface you intend your users to hit.

Forgive my limited Kubernetes networking knowledge, but what address should I be binding to in order to allow traffic from outside the pod (via a k8s Service)?

The pod's IP within the cluster, as outlined by @ethan256.

@mx-psi
Member

mx-psi commented Jan 16, 2023

After re-reading the conversation on Slack and here, I agree with @TylerHelmuth that this sounds like a docs issue to me. Would anyone volunteer to add an example to our docs on how to properly set the address on Kubernetes and remove the exception wording from the security docs?

@TylerHelmuth
Member

@mx-psi I can tackle that on Tuesday. The Helm charts will need to be updated to align with the new policy as well.

@TylerHelmuth
Member

@mx-psi I have finished updating both the collector's docs and the Helm chart.

@mx-psi
Member

mx-psi commented Jan 25, 2023

There is no consensus to add a configuration flag to disable the warning and we have updated both our docs and the Helm chart to address this, so I am closing this as done since I think the underlying problem has been addressed.

If there are other instances where we don't recommend a safe default, or where defaults should change, let's open new issues to address those individually.

@alexchowle

You don't get warning-free code by disabling the warnings, but by making your code not do the things that generate them.

I agree. Forgive my limited Kubernetes networking knowledge, but what address should I be binding to in order to allow traffic from outside the pod (via a k8s Service)? My understanding is that localhost or 127.0.0.1 is only the loopback interface.

That depends on your configuration and isn't something I can answer for you. Options include using the downward API to get the pod IP into an environment variable and interpolating that into your config, using a service mesh or other proxy inside the pod and binding to localhost, or probably many others I'm not thinking of because I don't know your environment.

How could one supply the container's IP to bind to (via an environment variable) when not using a container orchestrator, i.e. just plain Docker?
