Automated setup script for deploying OpenTelemetry Operator, Collector, Kubernetes monitoring, and Events collection to your Kubernetes cluster with Last9 integration.
- ✅ **One-command installation** - Deploy everything with a single command
- ✅ **Flexible deployment options** - Install only what you need (logs, traces, metrics, events)
- ✅ **Auto-instrumentation** - Automatic instrumentation for Java, Python, Node.js, and more
- ✅ **Kubernetes monitoring** - Full cluster observability with kube-prometheus-stack
- ✅ **Events collection** - Capture and forward Kubernetes events
- ✅ **Cluster identification** - Automatic cluster name detection and attribution
- ✅ **Tolerations support** - Deploy on tainted nodes (control-plane, spot instances, etc.)
- ✅ **Environment customization** - Override deployment environment and cluster name
Prerequisites:

- `kubectl` configured to access your Kubernetes cluster
- `helm` (v3+) installed
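A quick sanity check before running the script:

```bash
# Verify both prerequisites: kubectl can reach the cluster, helm is v3+
kubectl cluster-info       # should print the control plane address
helm version --short       # should report v3.x
```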
Installs OpenTelemetry Operator, Collector, Kubernetes monitoring stack, and Events agent:
```bash
./last9-otel-setup.sh \
  token="Basic <your-base64-token>" \
  endpoint="<your-otlp-endpoint>" \
  monitoring-endpoint="<your-metrics-endpoint>" \
  username="<your-username>" \
  password="<your-password>"
```

Or run directly from GitHub:

```bash
curl -fsSL https://raw.githubusercontent.com/last9/l9-otel-operator/main/last9-otel-setup.sh | bash -s -- \
  token="Basic <your-token>" \
  endpoint="<your-otlp-endpoint>" \
  monitoring-endpoint="<your-metrics-endpoint>" \
  username="<user>" \
  password="<pass>"
```

For applications that need distributed tracing:
```bash
./last9-otel-setup.sh operator-only \
  token="Basic <your-token>" \
  endpoint="<your-otlp-endpoint>"
```

For log collection use cases:
```bash
./last9-otel-setup.sh logs-only \
  token="Basic <your-token>" \
  endpoint="<your-otlp-endpoint>"
```

For cluster metrics and monitoring:
```bash
./last9-otel-setup.sh monitoring-only \
  monitoring-endpoint="<your-metrics-endpoint>" \
  username="<your-username>" \
  password="<your-password>"
```

For Kubernetes events collection:
```bash
./last9-otel-setup.sh events-only \
  endpoint="<your-otlp-endpoint>" \
  token="Basic <your-base64-token>" \
  monitoring-endpoint="<your-metrics-endpoint>"
```

To set the cluster name explicitly:

```bash
./last9-otel-setup.sh \
  token="..." \
  endpoint="..." \
  cluster="prod-us-east-1"
```

If not provided, the cluster name is auto-detected from `kubectl config current-context`.
To override the deployment environment:

```bash
./last9-otel-setup.sh \
  token="..." \
  endpoint="..." \
  env="production"
```

Defaults: `staging` for the collector, `local` for auto-instrumentation.
For deploying on nodes with taints (e.g., control-plane, monitoring nodes):
```bash
./last9-otel-setup.sh \
  token="..." \
  endpoint="..." \
  tolerations-file=/path/to/tolerations.yaml
```

Example tolerations files are provided in the `examples/` directory (a sketch of the file format follows the list):

- `tolerations-all-nodes.yaml` - Deploy on all nodes, including control-plane
- `tolerations-monitoring-nodes.yaml` - Deploy on dedicated monitoring nodes
- `tolerations-spot-instances.yaml` - Deploy on spot/preemptible instances
- `tolerations-multi-taint.yaml` - Handle multiple taints
- `tolerations-nodeSelector-only.yaml` - Use nodeSelector without tolerations
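As a rough sketch, such a file typically uses the standard Kubernetes toleration schema; treat the files in `examples/` as the source of truth for what the script expects:

```yaml
# Hypothetical tolerations file (standard Kubernetes toleration schema);
# see the examples/ directory for the exact format the script consumes.
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
```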
| File | Description |
|---|---|
| `last9-otel-collector-values.yaml` | OpenTelemetry Collector configuration for logs and traces |
| `k8s-monitoring-values.yaml` | kube-prometheus-stack configuration for metrics |
| `last9-kube-events-agent-values.yaml` | Events collection agent configuration |
| `collector-svc.yaml` | Collector service for application instrumentation |
| `instrumentation.yaml` | Auto-instrumentation configuration |
| `deploy.yaml` | Sample application deployment with auto-instrumentation |
| `tolerations.yaml` | Sample tolerations configuration |
The following placeholders are automatically replaced during installation:

- `{{AUTH_TOKEN}}` - Your Last9 authorization token
- `{{OTEL_ENDPOINT}}` - Your OTLP endpoint URL
- `{{MONITORING_ENDPOINT}}` - Your metrics endpoint URL
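For illustration, this is roughly how placeholders sit in a values file before substitution (a hypothetical excerpt; the shipped files are authoritative):

```yaml
# Hypothetical excerpt from a collector values file, pre-substitution.
exporters:
  otlphttp:
    endpoint: "{{OTEL_ENDPOINT}}"
    headers:
      Authorization: "{{AUTH_TOKEN}}"
```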
```bash
# Uninstall everything
./last9-otel-setup.sh uninstall-all
```

To remove individual components:

```bash
# Uninstall only monitoring stack
./last9-otel-setup.sh uninstall function="uninstall_last9_monitoring"

# Uninstall only events agent
./last9-otel-setup.sh uninstall function="uninstall_events_agent"

# Uninstall OpenTelemetry components (operator + collector)
./last9-otel-setup.sh uninstall
```

After installation, verify the deployment:
```bash
# Check all pods in the last9 namespace
kubectl get pods -n last9

# Check collector logs
kubectl logs -n last9 -l app.kubernetes.io/name=opentelemetry-collector

# Check monitoring stack
kubectl get prometheus -n last9

# Check events agent
kubectl get pods -n last9 -l app.kubernetes.io/name=last9-kube-events-agent
```

The script automatically sets up instrumentation for:
- ☕ **Java** - Automatic OTLP export
- 🐍 **Python** - Automatic OTLP export
- 💚 **Node.js** - Automatic OTLP export
- 🔵 **Go** - Manual instrumentation supported
- 💎 **Ruby** - Coming soon
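With the operator installed, opting a workload in is typically a single pod-template annotation (shown here for Python). This is a sketch using the OpenTelemetry Operator's standard inject annotation; see `instrumentation.yaml` for the Instrumentation resource the script actually creates:

```yaml
# Illustrative: enable operator-based auto-instrumentation for a pod template.
# The value "true" picks up the Instrumentation resource in the same namespace.
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-python: "true"
```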
The OpenTelemetry Collector can automatically discover and scrape application metrics using Kubernetes service discovery with Prometheus-compatible scraping.
Note: This is an optional feature. Use `last9-otel-collector-metrics-values.yaml` to enable metrics scraping.
To enable application metrics scraping, deploy with the additional metrics configuration file:
```bash
# Deploy with metrics scraping enabled
helm upgrade last9-opentelemetry-collector opentelemetry-collector \
  --namespace last9 \
  --values last9-otel-collector-values.yaml \
  --values last9-otel-collector-metrics-values.yaml
```

Configure Last9 Metrics Endpoint:
Before deploying, update these placeholders in `last9-otel-collector-metrics-values.yaml`:

- `{{LAST9_METRICS_ENDPOINT}}` - Your Last9 Prometheus remote write URL
- `{{LAST9_METRICS_USERNAME}}` - Your Last9 metrics username
- `{{LAST9_METRICS_PASSWORD}}` - Your Last9 metrics password
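As a rough sketch of where these land (the shipped file is authoritative), they typically feed the BasicAuth extension and the `prometheusremotewrite` exporter:

```yaml
# Illustrative sketch only; last9-otel-collector-metrics-values.yaml is the
# source of truth for the actual keys and names.
extensions:
  basicauth/last9:
    client_auth:
      username: "{{LAST9_METRICS_USERNAME}}"
      password: "{{LAST9_METRICS_PASSWORD}}"
exporters:
  prometheusremotewrite:
    endpoint: "{{LAST9_METRICS_ENDPOINT}}"
    auth:
      authenticator: basicauth/last9
```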
Add these annotations to your pod template or service to enable automatic metrics scraping:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"  # Optional, defaults to /metrics
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # placeholder image
```

That's it! Your application metrics will be automatically:
- **Discovered** - No manual configuration needed
- **Scraped** - Every 30 seconds by default
- **Enriched** - With pod, namespace, and node labels
- **Exported** - To Last9 via Prometheus remote write
- **Automatic Discovery** - The OTel Collector watches the Kubernetes API for all pods/services
- **Annotation-Based Filtering** - Only scrapes resources with `prometheus.io/scrape: "true"` (sketched below)
- **Metadata Enrichment** - Adds Kubernetes labels automatically (pod, namespace, node, app)
- **Direct Export** - Sends metrics to the Last9 Prometheus endpoint
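The annotation-based filtering is typically expressed as a Prometheus scrape job with a `keep` relabel rule; a rough sketch, with `last9-otel-collector-metrics-values.yaml` as the authoritative configuration:

```yaml
# Illustrative sketch; see last9-otel-collector-metrics-values.yaml for
# the shipped configuration.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod   # discover every pod via the Kubernetes API
          relabel_configs:
            # Keep only pods annotated prometheus.io/scrape: "true"
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: "true"
            # Honor prometheus.io/path if set (defaults to /metrics)
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
```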
| Annotation | Required | Default | Description |
|---|---|---|---|
| `prometheus.io/scrape` | Yes | - | Set to `"true"` to enable scraping |
| `prometheus.io/port` | Yes | - | Port number exposing `/metrics` |
| `prometheus.io/path` | No | `/metrics` | HTTP path for metrics endpoint |
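Since the annotations can also go on a Service (as noted above), a minimal hypothetical example:

```yaml
# Hypothetical Service carrying the same scrape annotations.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
```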
This setup scales automatically:
- 1 service → Automatically scraped
- 1000 services → Automatically scraped
- No configuration changes needed when adding new services
Base Configuration: `last9-otel-collector-values.yaml`

- Traces and logs collection
- Basic OTLP receiver
- No metrics scraping

Optional Metrics Configuration: `last9-otel-collector-metrics-values.yaml`

- Prometheus receiver with `kubernetes_sd_configs` for auto-discovery
- `prometheusremotewrite` exporter for sending to Last9
- RBAC for Kubernetes API access
- Increased resource limits for collector pods
- BasicAuth extension for the Last9 metrics endpoint

To use both: `--values last9-otel-collector-values.yaml --values last9-otel-collector-metrics-values.yaml`
Check if metrics are being scraped:
```bash
# Check collector logs for scraping
kubectl logs -n last9 -l app.kubernetes.io/name=last9-otel-collector | grep kubernetes-pods

# Port-forward to the collector's internal metrics endpoint
kubectl port-forward -n last9 daemonset/last9-otel-collector 8888:8888

# Check scrape status
curl http://localhost:8888/metrics | grep scrape_samples_scraped
```