[BUG] status=2/INVALIDARGUMENT error after version upgrade #701

@kamrajshahapure

Description

Describe the bug
We are trying to upgrade go-carbon from v0.14.0 to v0.17.1 but are getting errors that appear related to the Kafka config.
(The same error persists after upgrading to v0.18.0.)

A few observations:

  • The same config file works when I downgrade to v0.14.0.
  • If I remove the Kafka config in the newer versions (v0.17.1 and v0.18.0), the go-carbon service starts as expected. This led me to conclude that there is likely an issue with Kafka config handling in the newer versions.

Error:
When starting go-carbon.service, it fails with: go-carbon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

Logs
systemd reports: go-carbon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

strace shows the following towards the end:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xb88106]

goroutine 54 [running]:
github.com/go-graphite/go-carbon/receiver/kafka.(*Kafka).worker(0xc000493b00)
	/home/runner/work/go-carbon/go-carbon/receiver/kafka/kafka.go:515 +0x3c6
github.com/go-graphite/go-carbon/receiver/kafka.(*Kafka).consume.func4()
	/home/runner/work/go-carbon/go-carbon/receiver/kafka/kafka.go:417 +0x3f
created by github.com/go-graphite/go-carbon/receiver/kafka.(*Kafka).consume
	/home/runner/work/go-carbon/go-carbon/receiver/kafka/kafka.go:415 +0x106a
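
For context, this is the generic Go SIGSEGV panic from dereferencing a nil pointer inside a goroutine. The sketch below is purely illustrative and hypothetical, not go-carbon's actual code; it only shows how a worker goroutine that reads an uninitialized field produces the same class of panic:

// Minimal, hypothetical illustration of a nil-pointer panic in a worker goroutine.
package main

import "sync"

type consumer struct {
	// client is assumed to be set during connect(); if it is never
	// initialized, worker() crashes on first use.
	client *struct{ offset int64 }
}

func (c *consumer) worker(wg *sync.WaitGroup) {
	defer wg.Done()
	_ = c.client.offset // nil pointer dereference -> SIGSEGV panic
}

func main() {
	var wg sync.WaitGroup
	c := &consumer{} // client left nil on purpose
	wg.Add(1)
	go c.worker(&wg)
	wg.Wait()
}

Running this prints "panic: runtime error: invalid memory address or nil pointer dereference", matching the message in the trace above.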

The go-carbon logs show that it does connect to Kafka, but the service then stops with the INVALIDARGUMENT error from systemd:

[2025-02-25T09:38:34.160Z] INFO [kafka] previous state loaded {"offset": 1155539079397}
[2025-02-25T09:38:34.160Z] INFO [kafka] reconnect forced {}
[2025-02-25T09:38:34.160Z] INFO [kafka] connecting to kafka {}
[2025-02-25T09:38:34.162Z] INFO [carbonserver] starting carbonserver {"listen": "127.0.0.1:8080", "whisperData": "/var/lib/graphite/whisper", "maxGlobs": 100, "scanFrequency": "5m0s"}
[2025-02-25T09:38:34.163Z] INFO [carbonserver] file list updated {"handler": "fileListUpdated", "file_scan_runtime": 0.001004787, "indexing_runtime": 0.000005735, "rdtime_update_runtime": 0.000000031, "cache_index_runtime": 0.000000152, "total_runtime": 0.00101477, "Files": 1, "index_size": 1, "pruned_trigrams": 0, "cache_metric_len_before": 0, "cache_metric_len_after": 0, "metrics_known": 0, "index_type": "trigram", "read_from_cache": false}
[2025-02-25T09:38:34.164Z] INFO [main] started {}
[2025-02-25T09:38:34.169Z] INFO [kafka] connected to kafka {}
[2025-02-25T09:38:34.169Z] INFO [kafka] Worker started {}

Go-carbon Configuration:
The Kafka config looks like this:

[receiver.kafka]
 protocol = "kafka"
 parse-protocol = "plain"
 brokers = ['brokerhost:9092']
 topic = "customTopic"
 state-file = "/var/lib/graphite/kafka.state"
 partition = 0
 reconnect-interval = "5m"
 fetch-interval = "200ms"
 initial-offset = "-30m"
 kafka-version = "1.0.0"
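
As a sanity check on this block, the string-typed fields can be validated independently. The sketch below is hypothetical and not part of go-carbon; it assumes the Kafka receiver is built on Sarama (the Shopify/sarama import path and the Go-duration parsing of the interval/offset fields are assumptions, not taken from the go-carbon source):

// Hypothetical validation of the string fields used in [receiver.kafka] above.
package main

import (
	"fmt"
	"time"

	"github.com/Shopify/sarama" // assumption: newer setups may use IBM/sarama
)

func main() {
	// kafka-version must be a version string Sarama recognizes, e.g. "1.0.0".
	if v, err := sarama.ParseKafkaVersion("1.0.0"); err != nil {
		fmt.Println("bad kafka-version:", err)
	} else {
		fmt.Println("kafka-version ok:", v)
	}

	// reconnect-interval, fetch-interval and initial-offset look like
	// Go-style durations; "-30m" would mean "start 30 minutes back".
	for _, d := range []string{"5m", "200ms", "-30m"} {
		if _, err := time.ParseDuration(d); err != nil {
			fmt.Println("bad duration:", d, err)
		}
	}
}

All three duration strings and the version string in the posted config parse cleanly under these assumptions, which points at the receiver's runtime state rather than a malformed value.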
