| Status | |
| ------------- | --------- |
| Stability | alpha: metrics |
| Distributions | contrib |
| Code Owners | @djaglowski, @Caleb-Hurshman, @mrsillydog |
This receiver fetches metrics for an Apache Spark cluster through the Apache Spark REST API, specifically the `/metrics/json`, `/api/v1/applications/[app-id]/stages`, `/api/v1/applications/[app-id]/executors`, and `/api/v1/applications/[app-id]/jobs` endpoints.
The purpose of this component is to allow monitoring of Apache Spark clusters and the applications running on them through the collection of performance metrics like memory utilization, CPU utilization, shuffle operations, garbage collection time, I/O operations, and more.
This receiver supports Apache Spark versions:
- 3.3.2+
These configuration options are for connecting to an Apache Spark application.
The following settings are optional:
- `collection_interval` (default = `60s`): This receiver collects metrics on an interval. This value must be a string readable by Golang's `time.ParseDuration`. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, `h`.
- `initial_delay` (default = `1s`): defines how long this receiver waits before starting.
- `endpoint` (default = `http://localhost:4040`): Apache Spark endpoint to connect to in the form of `[http][://]{host}[:{port}]`.
- `application_names`: An array of Spark application names for which metrics should be collected. If no application names are specified, metrics will be collected for all Spark applications running on the cluster at the specified endpoint.
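For example: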
```yaml
receivers:
  apachespark:
    collection_interval: 60s
    endpoint: http://localhost:4040
    application_names:
      - PythonStatusAPIDemo
      - PythonLR
```
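The receiver only emits metrics once it is placed in a collector pipeline. The following is a minimal sketch of a complete collector configuration, assuming the core `debug` exporter is included in your collector build; any metrics exporter available in your distribution can be substituted:

```yaml
receivers:
  apachespark:
    collection_interval: 60s
    endpoint: http://localhost:4040

exporters:
  # The debug exporter prints collected metrics to the collector's
  # standard output; swap in any metrics exporter from your build.
  debug:

service:
  pipelines:
    metrics:
      receivers: [apachespark]
      exporters: [debug]
```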
The full list of settings exposed for this receiver is documented here, with detailed sample configurations here.

Details about the metrics produced by this receiver can be found in `metadata.yaml`.