Use docker compose in otel collector example #5244

Merged · 15 commits · May 2, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -48,6 +48,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)

````diff
 
 - Update `go.opentelemetry.io/proto/otlp` from v1.1.0 to v1.2.0. (#5177)
 - Improve performance of baggage member character validation in `go.opentelemetry.io/otel/baggage`. (#5214)
+- The `otel-collector` example now uses docker compose to bring up services instead of kubernetes. (#5244)
 
 ## [1.25.0/0.47.0/0.0.8/0.1.0-alpha] 2024-04-05
 
````
28 changes: 0 additions & 28 deletions example/otel-collector/Makefile

This file was deleted.

198 changes: 15 additions & 183 deletions example/otel-collector/README.md
@@ -13,165 +13,17 @@ App + SDK ---> OpenTelemetry Collector ---|

````diff
 
 # Prerequisites
 
-You will need access to a Kubernetes cluster for this demo. We use a local
-instance of [microk8s](https://microk8s.io/), but please feel free to pick
-your favorite. If you do decide to use microk8s, please ensure that dns
-and storage addons are enabled
-
-```bash
-microk8s enable dns storage
-```
-
-For simplicity, the demo application is not part of the k8s cluster, and will
-access the OpenTelemetry Collector through a NodePort on the cluster. Note that
-the NodePort opened by this demo is not secured.
-
-Ideally you'd want to either have your application running as part of the
-kubernetes cluster, or use a secured connection (NodePort/LoadBalancer with TLS
-or an ingress extension).
-
-If not using microk8s, ensure that cert-manager is installed by following [the
-instructions here](https://cert-manager.io/docs/installation/).
-
-# Deploying to Kubernetes
-
-All the necessary Kubernetes deployment files are available in this demo, in the
-[k8s](./k8s) folder. For your convenience, we assembled a [makefile](./Makefile)
-with deployment commands (see below). For those with subtly different systems,
-you are, of course, welcome to poke inside the Makefile and run the commands
-manually. If you use microk8s and alias `microk8s kubectl` to `kubectl`, the
-Makefile will not recognize the alias, and so the commands will have to be run
-manually.
-
-## Setting up the Prometheus operator
-
-If you're using microk8s like us, simply do
-
-```bash
-microk8s enable prometheus
-```
-
-and you're good to go. Move on to [Using the makefile](#using-the-makefile).
-
-Otherwise, obtain a copy of the Prometheus Operator stack from
-[prometheus-operator](https://github.com/prometheus-operator/kube-prometheus):
-
-```bash
-git clone https://github.com/prometheus-operator/kube-prometheus.git
-cd kube-prometheus
-kubectl create -f manifests/setup
-
-# wait for namespaces and CRDs to become available, then
-kubectl create -f manifests/
-```
-
-And to tear down the stack when you're finished:
-
-```bash
-kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
-```
-
-## Using the makefile
-
-Next, we can deploy our Jaeger instance, Prometheus monitor, and Collector
-using the [makefile](./Makefile).
-
-```bash
-# Create the namespace
-make namespace-k8s
-
-# Deploy Jaeger operator
-make jaeger-operator-k8s
-
-# After the operator is deployed, create the Jaeger instance
-make jaeger-k8s
-
-# Then the Prometheus instance. Ensure you have enabled a Prometheus operator
-# before executing (see above).
-make prometheus-k8s
-
-# Finally, deploy the OpenTelemetry Collector
-make otel-collector-k8s
-```
+# Deploying to docker compose
 
-If you want to clean up after this, you can use the `make clean-k8s` to delete
-all the resources created above. Note that this will not remove the namespace.
-Because Kubernetes sometimes gets stuck when removing namespaces, please remove
-this namespace manually after all the resources inside have been deleted,
-for example with
+This command will bring up the OpenTelemetry Collector, Jaeger, and Prometheus, and
+expose the necessary ports for you to view the data.
 
 ```bash
-kubectl delete namespaces observability
+docker compose up -d
 ```
 
-# Configuring the OpenTelemetry Collector
-
-Although the above steps should deploy and configure everything, let's spend
-some time on the [configuration](./k8s/otel-collector.yaml) of the Collector.
-
-One important part here is that, in order to enable our application to send data
-to the OpenTelemetry Collector, we need to first configure the `otlp` receiver:
-
-```yml
-...
-otel-collector-config: |
-  receivers:
-    # Make sure to add the otlp receiver.
-    # This will open up the receiver on port 4317.
-    otlp:
-      protocols:
-        grpc:
-          endpoint: "0.0.0.0:4317"
-  processors:
-...
-```
-
-This will create the receiver on the Collector side, and open up port `4317`
-for receiving traces.
-
-The rest of the configuration is quite standard, with the only mention that we
-need to create the Jaeger and Prometheus exporters:
-
-```yml
-...
-exporters:
-  jaeger:
-    endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
-
-  prometheus:
-    endpoint: 0.0.0.0:8889
-    namespace: "testapp"
-...
-```
-
-## OpenTelemetry Collector service
-
-One more aspect in the OpenTelemetry Collector [configuration](./k8s/otel-collector.yaml) worth looking at is the NodePort service used for accessing it:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  ...
-spec:
-  ports:
-  - name: otlp # Default endpoint for otlp receiver.
-    port: 4317
-    protocol: TCP
-    targetPort: 4317
-    nodePort: 30080
-  - name: metrics # Endpoint for metrics from our app.
-    port: 8889
-    protocol: TCP
-    targetPort: 8889
-  selector:
-    component: otel-collector
-  type:
-    NodePort
-```
-
-This service will bind the `4317` port used to access the otlp receiver to port `30080` on your cluster's node. By doing so, it makes it possible for us to access the Collector by using the static address `<node-ip>:30080`. In case you are running a local cluster, this will be `localhost:30080`. Note that you can also change this to a LoadBalancer or have an ingress extension for accessing the service.
-
 # Running the code
 
 You can find the complete code for this example in the [main.go](./main.go)
````
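The old README walked through the Collector's k8s config; the new setup instead mounts a local `./otel-collector.yaml` into the container. That file is not part of this diff, but given the ports the example uses (OTLP in on `4317`, Prometheus metrics out on `8889`, Jaeger in the same compose network), a compatible config plausibly looks like the sketch below. This is an assumption for illustration, not the shipped file — in particular, the `otlp/jaeger` exporter and pipeline layout are guesses, and sending to Jaeger on `4317` assumes its OTLP receiver is enabled.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Matches the 4317:4317 port mapping in docker-compose.yaml.
        endpoint: "0.0.0.0:4317"

processors:
  batch:

exporters:
  prometheus:
    # Scraped by the prometheus service; port 8889 as in the old config.
    endpoint: "0.0.0.0:8889"
  otlp/jaeger:
    # "jaeger" resolves to the compose service; assumes OTLP ingest is on.
    endpoint: "jaeger:4317"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```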
@@ -192,40 +44,20 @@ sample application

````diff
 
 ## Jaeger UI
 
-First, we need to enable an ingress provider. If you've been using microk8s,
-do
-
-```bash
-microk8s enable ingress
-```
-
-Then find out where the Jaeger console is living:
-
-```bash
-kubectl get ingress --all-namespaces
-```
-
-For us, we get the output
-
-```
-NAMESPACE       NAME           CLASS    HOSTS   ADDRESS     PORTS   AGE
-observability   jaeger-query   <none>   *       127.0.0.1   80      5h40m
-```
-
-indicating that the Jaeger UI is available at
-[http://localhost:80](http://localhost:80). Navigate there in your favorite
+The Jaeger UI is available at
+[http://localhost:16686](http://localhost:16686). Navigate there in your favorite
 web-browser to view the generated traces.
 
 ## Prometheus
 
-Unfortunately, the Prometheus operator doesn't provide a convenient
-out-of-the-box ingress route for us to use, so we'll use port-forwarding
-instead. Note: this is a quick-and-dirty solution for the sake of example.
-You *will* be attacked by shady people if you do this in production!
+The Prometheus UI is available at
+[http://localhost:9090](http://localhost:9090). Navigate there in your favorite
+web-browser to view the generated metrics.
+
+# Shutting down
+
+To shut down and clean the example, run
 
 ```bash
-kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
+docker compose down
 ```
-
-Then navigate to [http://localhost:9090](http://localhost:9090) to view
-the Prometheus dashboard.
````
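With the NodePort service gone, the example app reaches the Collector directly at `localhost:4317`. If you adapt this setup to your own application, the spec-defined OTLP exporter environment variables are the usual way to select that endpoint — a sketch, assuming an SDK that honours the standard variables:

```shell
# Standard OpenTelemetry SDK environment variables (OTLP exporter spec).
# localhost:4317 is the port published by the otel-collector compose service.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```

Most OpenTelemetry SDKs, including the Go SDK used by this example's `main.go`, read these variables at exporter construction time, so no code change is needed to repoint the app.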
23 changes: 23 additions & 0 deletions example/otel-collector/docker-compose.yaml
@@ -0,0 +1,23 @@

````diff
+# Copyright The OpenTelemetry Authors
+# SPDX-License-Identifier: Apache-2.0
+
+services:
+  otel-collector:
+    image: otel/opentelemetry-collector-contrib:0.91.0
+    command: ["--config=/etc/otel-collector.yaml"]
+    volumes:
+      - ./otel-collector.yaml:/etc/otel-collector.yaml
+    ports:
+      - 4317:4317
+
+  prometheus:
+    image: prom/prometheus:v2.45.2
+    volumes:
+      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
+    ports:
+      - 9090:9090
+
+  jaeger:
+    image: jaegertracing/all-in-one:1.52
+    ports:
+      - 16686:16686
````
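The compose file mounts a local `./prometheus.yaml` into the Prometheus container, but that file's contents are not shown in this diff. A minimal scrape config consistent with the Collector's `8889` metrics endpoint could look like the following sketch — the job name and interval are assumptions, not the shipped file:

```yaml
global:
  # Short interval so the demo's metrics show up quickly.
  scrape_interval: 5s

scrape_configs:
  - job_name: "otel-collector"
    static_configs:
      # "otel-collector" resolves via the compose network;
      # 8889 is the collector's Prometheus exporter port.
      - targets: ["otel-collector:8889"]
```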
8 changes: 0 additions & 8 deletions example/otel-collector/k8s/jaeger.yaml

This file was deleted.

7 changes: 0 additions & 7 deletions example/otel-collector/k8s/namespace.yaml

This file was deleted.
