Description
If we use the Prometheus Operator, we can easily configure the target pods we want to collect metrics from: we deploy a Prometheus server and select the pods with a PodMonitor CRD and label selectors.
The approximate specs of the PodMonitor and Prometheus resources are as follows:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: <pod monitor name>
  labels:
    <pod monitor labels for prometheus server>
spec:
  selector:
    matchLabels:
      <pod labels>
  namespaceSelector:
    matchNames:
      - <pod namespace>
  podMetricsEndpoints:
    - port: <container metric port name>
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: <prometheus server name>
spec:
  serviceAccountName: prometheus
  podMonitorSelector:
    matchLabels:
      <pod monitor labels>
  resources:
    <prometheus cpu/memory resources>
  enableAdminAPI: false
```
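For example, a concrete PodMonitor might look like this (illustrative only: it assumes a pod labeled app: my-app in the default namespace that exposes a container port named metrics):

```yaml
# Illustrative example; all names here are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-app-monitor
  labels:
    team: my-team            # matched by the Prometheus podMonitorSelector above
spec:
  selector:
    matchLabels:
      app: my-app
  namespaceSelector:
    matchNames:
      - default
  podMetricsEndpoints:
    - port: metrics
```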
The Spark operator submits the driver and executor pods and adds operator-related labels to each pod. If we want to collect the Spark pods' metrics, we need to add the sparkoperator.k8s.io labels to the PodMonitor.
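For reference, a driver pod launched by the operator carries labels along these lines (a sketch using the label keys from the selector below; the exact set can vary by operator version):

```yaml
# Sketch of the operator-added labels on a driver pod (exact set may vary).
metadata:
  labels:
    sparkoperator.k8s.io/app-name: <spark-app-name>
    sparkoperator.k8s.io/launched-by-spark-operator: "true"
    spark-role: driver
```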
To do this, today we have to look up each Spark pod's description by hand to fill in the following values:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: <pod monitor name>
  labels:
    <pod monitor labels for prometheus server>
  ownerReferences:
    - apiVersion: sparkoperator.k8s.io/v1beta2
      controller: true
      kind: SparkApplication
      name: <spark-app-name>
      uid: <spark-app-uid>
spec:
  selector:
    matchLabels:
      sparkoperator.k8s.io/app-name: <spark-app-name>
      spark-role: <spark-role>
      sparkoperator.k8s.io/launched-by-spark-operator: "true"
  namespaceSelector:
    matchNames:
      - <spark namespace>
  podMetricsEndpoints:
    - port: <container metrics port name>
```
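In practice, filling these values in means reading them off the live objects by hand, e.g.:

```sh
# Read the operator-added labels from a running driver pod.
kubectl get pod <driver-pod> -n <spark namespace> -o jsonpath='{.metadata.labels}'

# Read the SparkApplication UID needed for the ownerReference.
kubectl get sparkapplication <spark-app-name> -n <spark namespace> -o jsonpath='{.metadata.uid}'
```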
So my suggestion is to add a PodMonitor configuration to spec.monitoring, in the form below, and have the operator create the PodMonitor through it:
```yaml
spec:
  monitoring:
    ...
    prometheus:
      ...
      portName: <port-name>
      podMonitor:
        labels:
          <pod monitor labels for prometheus server>
        spark-role: <driver or executor>
```
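From that configuration, the operator could render the PodMonitor shown earlier automatically. Roughly (a sketch: the generated name is a hypothetical scheme, and the app name, UID, and namespace would come from the SparkApplication):

```yaml
# Sketch of the PodMonitor the operator might generate for a driver-role
# monitor; the metadata.name scheme is hypothetical.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: <spark-app-name>-driver            # hypothetical naming scheme
  labels:
    <pod monitor labels for prometheus server>
  ownerReferences:
    - apiVersion: sparkoperator.k8s.io/v1beta2
      controller: true
      kind: SparkApplication
      name: <spark-app-name>
      uid: <spark-app-uid>
spec:
  selector:
    matchLabels:
      sparkoperator.k8s.io/app-name: <spark-app-name>
      spark-role: driver
      sparkoperator.k8s.io/launched-by-spark-operator: "true"
  namespaceSelector:
    matchNames:
      - <spark namespace>
  podMetricsEndpoints:
    - port: <port-name>                     # from spec.monitoring.prometheus.portName
```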