
Enhancement: Implementation for MetricsReporter backed by Spring Boot Actuator #6227

Closed
@stepio

Description


Hello,

First of all, thanks for this great project and your efforts to develop and support it!

Recently I've spotted an interesting issue in the spring-kafka project:
spring-projects/spring-kafka#127
@dmachlin asked if there is an option to submit Kafka metrics to Spring Actuator.
As I've already been thinking about this for some time, I've decided to implement a POC for this.

As of now, it does not depend on the spring-kafka project itself, although it works well with it. I've used Kafka's native mechanism of implementing the MetricsReporter interface, periodically exposing the appropriate metrics to GaugeService.

Unfortunately, I did no auto-configuration here, so maybe you would consider it not Boot-able enough :) But that was not my goal anyway - I've just implemented the code to make Kafka expose its metrics to Spring Actuator.

To enable this you need to enrich consumer/producer properties - I provide a static "helper" method for this:

        KafkaStatisticsProvider.configureKafkaMetrics(props, gaugeService);

This method does the following:

    public static void configureKafkaMetrics(Map<String, Object> configs, GaugeService gaugeService) {
        configureKafkaMetrics(configs, gaugeService, METRICS_UPDATE_INTERVAL_DEFAULT);
    }

    public static void configureKafkaMetrics(Map<String, Object> configs, GaugeService gaugeService, long updateInterval) {
        Objects.requireNonNull(configs);
        if (gaugeService != null) {
            // Register this class as Kafka's metrics reporter
            configs.put(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG,
                    Collections.singletonList(KafkaStatisticsProvider.class.getCanonicalName()));
            // Smuggle the GaugeService instance through the config map so the
            // reporter can pick it up in its configure(...) callback
            configs.put(METRICS_GAUGE_SERVICE_IMPL, gaugeService);
            LOGGER.debug("Set property {} with provided GaugeService instance reference", METRICS_GAUGE_SERVICE_IMPL);
            configs.put(METRICS_UPDATE_INTERVAL_PARAM, updateInterval);
            LOGGER.debug("Set property {} with value {}", METRICS_UPDATE_INTERVAL_PARAM, updateInterval);
        }
    }
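For reference, the reporter behind this helper boils down to a periodic-submission loop. A minimal sketch with plain JDK types only (no Kafka or Spring on the classpath; the `GaugeService` interface and `PeriodicGaugeReporter` class here are stand-ins for Spring Boot 1.x's `org.springframework.boot.actuate.metrics.GaugeService` and the actual POC class, not its real code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.DoubleSupplier;

// Stand-in mimicking Spring Boot 1.x's GaugeService contract.
interface GaugeService {
    void submit(String metricName, double value);
}

// Sketch of the pattern: keep a registry of current-value suppliers
// (the real reporter gets these as KafkaMetric instances via
// MetricsReporter#init/#metricChange) and push a snapshot to the
// gauge service on a fixed interval.
class PeriodicGaugeReporter implements AutoCloseable {
    private final Map<String, DoubleSupplier> metrics = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    PeriodicGaugeReporter(GaugeService gaugeService, long updateIntervalMs) {
        scheduler.scheduleAtFixedRate(
                () -> metrics.forEach((name, supplier) ->
                        gaugeService.submit(name, supplier.getAsDouble())),
                updateIntervalMs, updateIntervalMs, TimeUnit.MILLISECONDS);
    }

    // Analogous to MetricsReporter#metricChange: (re)register a metric.
    void register(String name, DoubleSupplier supplier) {
        metrics.put(name, supplier);
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```

In the real POC the interval comes from the `METRICS_UPDATE_INTERVAL_PARAM` config entry shown above, and the gauge service from `METRICS_GAUGE_SERVICE_IMPL`.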

This allows Kafka to instantiate and use my MetricsReporter implementation, exposing the stats via the "/metrics" endpoint. For my Kafka version (0.9.0.1) I get the following stats:

gauge.kafka.consumer-metrics.request-size-max: 143,
gauge.kafka.consumer-metrics.connection-creation-rate: 0,
gauge.kafka.consumer-metrics.connection-close-rate: 0,
gauge.kafka.consumer-metrics.io-ratio: 0.00012978103445396397,
gauge.kafka.consumer-coordinator-metrics.heartbeat-response-time-max: 3,
gauge.kafka.consumer-coordinator-metrics.join-time-avg: 0,
gauge.kafka.consumer-metrics.io-wait-ratio: 1.0055865490422031,
gauge.kafka.consumer-fetch-manager-metrics.records-per-request-avg: 0,
gauge.kafka.consumer-metrics.request-rate: 2.5392582692950216,
gauge.kafka.consumer-metrics.incoming-byte-rate: 128.9007684597394,
gauge.kafka.consumer-metrics.response-rate: 2.5392582692950216,
gauge.kafka.consumer-metrics.network-io-rate: 5.078516538590043,
gauge.kafka.consumer-coordinator-metrics.sync-rate: 0,
gauge.kafka.consumer-node-metrics.request-latency-avg: 0,
gauge.kafka.consumer-fetch-manager-metrics.fetch-latency-max: 507,
gauge.kafka.consumer-coordinator-metrics.join-rate: 0,
gauge.kafka.consumer-coordinator-metrics.commit-latency-max: 19,
gauge.kafka.consumer-metrics.request-size-avg: 90.76315789473684,
gauge.kafka.consumer-fetch-manager-metrics.fetch-size-max: 0,
gauge.kafka.consumer-metrics.io-wait-time-ns-avg: 130610641.64208242,
gauge.kafka.consumer-node-metrics.request-latency-max: "-Infinity",
gauge.kafka.consumer-coordinator-metrics.last-heartbeat-seconds-ago: 1,
gauge.kafka.consumer-metrics.outgoing-byte-rate: 230.47109923154025,
gauge.kafka.consumer-fetch-manager-metrics.fetch-size-avg: 0,
gauge.kafka.consumer-fetch-manager-metrics.fetch-latency-avg: 501.97297297297297,
gauge.kafka.consumer-coordinator-metrics.sync-time-max: 0,
gauge.kafka.consumer-fetch-manager-metrics.fetch-throttle-time-max: 0,
gauge.kafka.consumer-coordinator-metrics.commit-rate: 0.2154581423590708,
gauge.kafka.consumer-node-metrics.outgoing-byte-rate: 0,
gauge.kafka.consumer-metrics.select-rate: 7.699116522203851,
gauge.kafka.consumer-metrics.io-time-ns-avg: 16856.61388286334,
gauge.kafka.consumer-node-metrics.request-size-max: "-Infinity",
gauge.kafka.consumer-metrics.connection-count: 4,
gauge.kafka.consumer-coordinator-metrics.join-time-max: 0,
gauge.kafka.consumer-node-metrics.response-rate: 2.004677581022386,
gauge.kafka.consumer-coordinator-metrics.sync-time-avg: 0,
gauge.kafka.consumer-fetch-manager-metrics.bytes-consumed-rate: 0,
gauge.kafka.consumer-node-metrics.incoming-byte-rate: 0,
gauge.kafka.consumer-fetch-manager-metrics.fetch-throttle-time-avg: 0,
gauge.kafka.consumer-coordinator-metrics.heartbeat-rate: 0.34012962718013645,
gauge.kafka.consumer-fetch-manager-metrics.fetch-rate: 2.0055287549460674,
gauge.kafka.consumer-fetch-manager-metrics.records-consumed-rate: 0,
gauge.kafka.consumer-node-metrics.request-rate: 0,
gauge.kafka.consumer-fetch-manager-metrics.records-lag-max: "-Infinity",
gauge.kafka.consumer-node-metrics.request-size-avg: 86,
gauge.kafka.consumer-coordinator-metrics.assigned-partitions: 1,
gauge.kafka.consumer-coordinator-metrics.commit-latency-avg: 7.454545454545454
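A note on the naming: Spring Boot 1.x's default GaugeService implementation prepends "gauge." to every submitted metric name, so the reporter presumably submits keys of the form kafka.&lt;metric-group&gt;.&lt;metric-name&gt;, yielding the gauge.kafka.* entries above. A tiny sketch of that assumed mapping (the GaugeNames helper and its prefix constant are hypothetical, not part of the POC):

```java
// Hypothetical helper illustrating the assumed key layout:
// "kafka." + Kafka metric group + "." + Kafka metric name.
// Actuator's GaugeService then adds the leading "gauge." itself.
class GaugeNames {
    static final String PREFIX = "kafka.";

    static String gaugeName(String metricGroup, String metricName) {
        return PREFIX + metricGroup + "." + metricName;
    }
}
```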

So the question... What do you think about this? Should I make a PR?


Labels

    status: declined - A suggestion or change that we don't feel we should currently apply
