
Commit 4339ddf

chore: some hardening
1 parent 0455720 commit 4339ddf

4 files changed, +18 -21 lines changed

Dockerfile (+3 -2)

@@ -1,6 +1,7 @@
-FROM golang:1.13.8-alpine3.11 as build
+FROM golang:1.14.1-alpine3.11 as build
 
-# Get prebuild libkafka
+# Get prebuild libkafka.
+# XXX stop using the edgecommunity channel once librdkafka 1.3.0 is officially published
 RUN echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories && \
     echo "@edgecommunity http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
     apk add --no-cache alpine-sdk 'librdkafka@edgecommunity>=1.3.0' 'librdkafka-dev@edgecommunity>=1.3.0'
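
The XXX comment anticipates dropping the edge pinning later. A minimal sketch of what that might look like, assuming librdkafka >= 1.3.0 has reached the stable Alpine repositories (this simplification is an assumption, not part of the commit):

```dockerfile
# Hypothetical follow-up: with librdkafka 1.3.0 in the stable channel,
# the extra repository entries are no longer needed.
FROM golang:1.14.1-alpine3.11 as build

RUN apk add --no-cache alpine-sdk librdkafka librdkafka-dev
```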

README.md (+3 -7)

@@ -38,7 +38,7 @@ The Avro-JSON serialization is the same. See the [Avro schema](./schemas/metric.
 
 ### prometheus-kafka-adapter
 
-There is a docker image `telefonica/prometheus-kafka-adapter:1.5.1` [available on Docker Hub](https://hub.docker.com/r/telefonica/prometheus-kafka-adapter/).
+There is a docker image `telefonica/prometheus-kafka-adapter:1.6.0` [available on Docker Hub](https://hub.docker.com/r/telefonica/prometheus-kafka-adapter/).
 
 Prometheus-kafka-adapter listens for metrics coming from Prometheus and sends them to Kafka. This behaviour can be configured with the following environment variables:
 
@@ -60,9 +60,7 @@ To connect to Kafka over SSL define the following additonal environment variable
 - `KAFKA_SSL_CLIENT_KEY_PASS`: Kafka SSL client certificate key password (optional), defaults to `""`
 - `KAFKA_SSL_CA_CERT_FILE`: Kafka SSL broker CA certificate file, defaults to `""`
 
-When deployed in a K8s Cluster using Helm and using a Kafka external to the cluster, it might be necessary to define the kafka hostname resolution locally (this fills the /etc/hosts of the container).
-
-Use a custom values.yaml file with section 'hostAliases' (as mentioned in default values.yaml).
+When deployed in a Kubernetes cluster using Helm and using a Kafka external to the cluster, it might be necessary to define the kafka hostname resolution locally (this fills the /etc/hosts of the container). Use a custom values.yaml file with section `hostAliases` (as mentioned in default values.yaml).
 
 ### prometheus
 
@@ -73,9 +71,7 @@ remote_write:
   - url: "http://prometheus-kafka-adapter:8080/receive"
 ```
-When deployed in a K8s Cluster using Helm and using an external Prometheus, it might be necessary to expose prometheus-kafka-adapter input port as a node port.
-
-Use a custom values.yaml file to set service.type: NodePort and service.nodeport:<PortNumber> (see comments in default values.yaml)
+When deployed in a Kubernetes cluster using Helm and using an external Prometheus, it might be necessary to expose prometheus-kafka-adapter input port as a node port. Use a custom values.yaml file to set `service.type: NodePort` and `service.nodeport: <PortNumber>` (see comments in default values.yaml)
 
 ## development
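
Both README notes above refer to overrides in the chart's values file. A hedged sketch of such a custom values file (key names follow the README; the chart's exact schema and all concrete values are assumptions):

```yaml
# custom-values.yaml -- illustrative only
# Resolve an external Kafka broker locally (fills /etc/hosts of the container).
hostAliases:
  - ip: "10.0.0.10"              # placeholder broker IP
    hostnames:
      - "kafka.example.internal" # placeholder broker hostname

# Expose the adapter's input port to an external Prometheus.
service:
  type: NodePort
  nodeport: 30080                # placeholder port number
```

It would be applied with something like `helm upgrade --install prometheus-kafka-adapter ./helm/prometheus-kafka-adapter -f custom-values.yaml`.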

helm/prometheus-kafka-adapter/values.yaml (+1 -1)

@@ -6,7 +6,7 @@ replicaCount: 1
 
 image:
   repository: telefonica/prometheus-kafka-adapter
-  tag: 1.4.1
+  tag: 1.6.0
   pullPolicy: IfNotPresent
 
 imagePullSecrets: []

main.go (+11 -11)

@@ -29,20 +29,20 @@ func main() {
 	log.Info("creating kafka producer")
 
 	kafkaConfig := kafka.ConfigMap{
-		"bootstrap.servers": kafkaBrokerList,
-		"compression.codec": kafkaCompression,
-		"batch.num.messages": kafkaBatchNumMessages,
-		"go.batch.producer": true, // Enable batch producer (for increased performance).
-		"go.delivery.reports": false, // per-message delivery reports to the Events() channel
+		"bootstrap.servers":   kafkaBrokerList,
+		"compression.codec":   kafkaCompression,
+		"batch.num.messages":  kafkaBatchNumMessages,
+		"go.batch.producer":   true,  // Enable batch producer (for increased performance).
+		"go.delivery.reports": false, // per-message delivery reports to the Events() channel
 	}
 
 	if kafkaSslClientCertFile != "" && kafkaSslClientKeyFile != "" && kafkaSslCACertFile != "" {
-		kafkaSslValidation = true
-		kafkaConfig["security.protocol"] = "ssl"
-		kafkaConfig["ssl.ca.location"] = kafkaSslCACertFile // CA certificate file for verifying the broker's certificate.
-		kafkaConfig["ssl.certificate.location"] = kafkaSslClientCertFile // Client's certificate
-		kafkaConfig["ssl.key.location"] = kafkaSslClientKeyFile // Client's key
-		kafkaConfig["ssl.key.password"] = kafkaSslClientKeyPass // Key password, if any.
+		kafkaSslValidation = true
+		kafkaConfig["security.protocol"] = "ssl"
+		kafkaConfig["ssl.ca.location"] = kafkaSslCACertFile              // CA certificate file for verifying the broker's certificate.
+		kafkaConfig["ssl.certificate.location"] = kafkaSslClientCertFile // Client's certificate
+		kafkaConfig["ssl.key.location"] = kafkaSslClientKeyFile          // Client's key
+		kafkaConfig["ssl.key.password"] = kafkaSslClientKeyPass          // Key password, if any.
 	}
 
 	producer, err := kafka.NewProducer(&kafkaConfig)
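
For context, a minimal standalone sketch of how a ConfigMap of this shape feeds confluent-kafka-go's producer. The broker address, topic name, and config values are placeholders, and the import path is assumed (the adapter may pin a different path or version); the adapter's real flow reads these from the environment:

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka" // import path assumed
)

func main() {
	// Same shape as the adapter's kafkaConfig above, with literal placeholders.
	cfg := kafka.ConfigMap{
		"bootstrap.servers":   "kafka:9092",
		"compression.codec":   "none",
		"batch.num.messages":  "10000",
		"go.batch.producer":   true,
		"go.delivery.reports": false,
	}

	producer, err := kafka.NewProducer(&cfg)
	if err != nil {
		log.Fatalf("creating producer: %v", err)
	}
	defer producer.Close()

	topic := "metrics"
	// Produce is asynchronous; with go.delivery.reports disabled, per-message
	// delivery reports are not surfaced on the Events() channel.
	if err := producer.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte(`{"name":"up","value":"1"}`),
	}, nil); err != nil {
		log.Fatalf("producing message: %v", err)
	}

	// Wait up to 15s for outstanding messages to be delivered.
	producer.Flush(15 * 1000)
}
```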
