
OTLP collector keeps crashing after enabling redaction for logs #35316

Closed
pretystar opened this issue Sep 20, 2024 · 4 comments · Fixed by #35331
Labels
bug (Something isn't working) · priority:p1 (High) · processor/redaction (Redaction processor) · release:blocker (The issue must be resolved before cutting the next release)

Comments

@pretystar

Component(s)

processor/redaction

What happened?

Description

The OTLP collector keeps crashing after enabling redaction for logs.

Steps to Reproduce

Enable redaction with the config below:

  processors:
    redaction:
      allow_all_keys: true
      summary: debug
  service:
    pipelines:
      logs:
        receivers:
          - otlp
        exporters:
          - debug
          - azuremonitor
        processors: 
          - attributes/log
          - redaction

Expected Result

Actual Result

The pod keeps crashing with the following log:


Collector version

0.109.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

No response

Log output

2024-09-20T08:26:45.108Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 6}
panic: runtime error: index out of range [1] with length 1

goroutine 858 [running]:
go.opentelemetry.io/collector/pdata/plog.LogRecordSlice.At(...)
	go.opentelemetry.io/collector/pdata@v1.15.0/plog/generated_logrecordslice.go:56
github.com/open-telemetry/opentelemetry-collector-contrib/processor/redactionprocessor.(*redaction).processResourceLog(0xc00320f620, {0xa1b9438, 0xc002ea3b60}, {0xc008e27440?, 0xc001e6b3e8?})
	github.com/open-telemetry/opentelemetry-collector-contrib/processor/redactionprocessor@v0.109.0/processor.go:110 +0x125
github.com/open-telemetry/opentelemetry-collector-contrib/processor/redactionprocessor.(*redaction).processLogs(0xc00320f620, {0xa1b9438, 0xc002ea3b60}, {0xc0028f47e0?, 0xc001e6b3e8?})
	github.com/open-telemetry/opentelemetry-collector-contrib/processor/redactionprocessor@v0.109.0/processor.go:67 +0x4d
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1({0xa1b9438, 0xc002ea3b60}, {0xc0028f47e0?, 0xc001e6b3e8?})
	go.opentelemetry.io/collector/processor@v0.109.0/processorhelper/logs.go:57 +0x13e
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs(...)
	go.opentelemetry.io/collector/consumer@v0.109.0/logs.go:26
go.opentelemetry.io/collector/processor/processorhelper.NewLogsProcessor.func1({0xa1b9438, 0xc002ea3b60}, {0xc0028f47e0?, 0xc001e6b3e8?})
	go.opentelemetry.io/collector/processor@v0.109.0/processorhelper/logs.go:67 +0x2a2
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs(...)
	go.opentelemetry.io/collector/consumer@v0.109.0/logs.go:26
go.opentelemetry.io/collector/consumer.ConsumeLogsFunc.ConsumeLogs(...)
	go.opentelemetry.io/collector/consumer@v0.109.0/logs.go:26
go.opentelemetry.io/collector/internal/fanoutconsumer.(*logsConsumer).ConsumeLogs(0xc0035452c0, {0xa1b9438, 0xc002ea3b60}, {0xc0028f47e0?, 0xc001e6b3e8?})
	go.opentelemetry.io/collector@v0.109.0/internal/fanoutconsumer/logs.go:62 +0x1e7
go.opentelemetry.io/collector/receiver/otlpreceiver/internal/logs.(*Receiver).Export(0xc003552f30, {0xa1b9438, 0xc002ea3ad0}, {0xc0028f47e0?, 0xc001e6b3e8?})
	go.opentelemetry.io/collector/receiver/otlpreceiver@v0.109.0/internal/logs/otlp.go:41 +0xd9
go.opentelemetry.io/collector/pdata/plog/plogotlp.rawLogsServer.Export({{0xa158408?, 0xc003552f30?}}, {0xa1b9438, 0xc002ea3ad0}, 0xc0028f47e0)
	go.opentelemetry.io/collector/pdata@v1.15.0/plog/plogotlp/grpc.go:88 +0xea
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler.func1({0xa1b9438?, 0xc002ea3ad0?}, {0x8e6f280?, 0xc0028f47e0?})
	go.opentelemetry.io/collector/pdata@v1.15.0/internal/data/protogen/collector/logs/v1/logs_service.pb.go:311 +0xcb
go.opentelemetry.io/collector/config/configgrpc.(*ServerConfig).toServerOption.enhanceWithClientInformation.func9({0xa1b9438?, 0xc002ea3a70?}, {0x8e6f280, 0xc0028f47e0}, 0x4111a5?, 0xc0028f47f8)
	go.opentelemetry.io/collector/config/configgrpc@v0.109.0/configgrpc.go:459 +0x46
go.opentelemetry.io/collector/pdata/internal/data/protogen/collector/logs/v1._LogsService_Export_Handler({0x7cb7520, 0xc0035a8ab0}, {0xa1b9438, 0xc002ea3a70}, 0xc0026fde00, 0xc00356afe0)
	go.opentelemetry.io/collector/pdata@v1.15.0/internal/data/protogen/collector/logs/v1/logs_service.pb.go:313 +0x143
google.golang.org/grpc.(*Server).processUnaryRPC(0xc003261200, {0xa1b9438, 0xc002ea3980}, {0xa202580, 0xc000201040}, 0xc001dbf320, 0xc0035bd3b0, 0x105ce5f0, 0x0)
	google.golang.org/grpc@v1.66.0/server.go:1393 +0xe11
google.golang.org/grpc.(*Server).handleStream(0xc003261200, {0xa202580, 0xc000201040}, 0xc001dbf320)
	google.golang.org/grpc@v1.66.0/server.go:1804 +0xe8b
google.golang.org/grpc.(*Server).serveStreams.func2.1()
	google.golang.org/grpc@v1.66.0/server.go:1029 +0x7f
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 223
	google.golang.org/grpc@v1.66.0/server.go:1040 +0x125

Additional context

No response

pretystar added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Sep 20, 2024
github-actions bot added the processor/redaction (Redaction processor) label on Sep 20, 2024
github-actions bot commented:

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

qrli commented Sep 20, 2024

Possible reason:
[screenshot attachment showing the suspected code in the redaction processor]
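
The screenshot itself is not recoverable here, but the stack trace points at `processResourceLog` in `processor.go:110`. As a hedged sketch (illustrative names only, not the actual redactionprocessor source), an inner loop bounded by the wrong slice's length would produce exactly this index-out-of-range panic when iterating pdata log records:

```go
package example

import "go.opentelemetry.io/collector/pdata/plog"

// Hypothetical sketch of the suspected bug shape: the inner loop is
// bounded by the number of scope logs instead of the number of log
// records, so LogRecords().At(k) can run past the end whenever a scope
// has fewer log records than the resource has scope logs.
func processResourceLog(rl plog.ResourceLogs) {
	for j := 0; j < rl.ScopeLogs().Len(); j++ {
		ils := rl.ScopeLogs().At(j)
		// BUG: the bound should be ils.LogRecords().Len()
		for k := 0; k < rl.ScopeLogs().Len(); k++ {
			lr := ils.LogRecords().At(k) // panics: index out of range [1] with length 1
			_ = lr.Attributes()
		}
	}
}
```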

mx-psi (Member) commented Sep 20, 2024

@qrli That definitely looks odd! Would you be up to filing a PR to fix this?

mx-psi added the priority:p2 (Medium) label and removed the needs triage (New item requiring triage) label on Sep 20, 2024
TylerHelmuth added the priority:p1 (High) label and removed the priority:p2 (Medium) label on Sep 20, 2024
TylerHelmuth (Member) commented:

Bumping the priority since this is a panic scenario.

TylerHelmuth added the release:blocker (The issue must be resolved before cutting the next release) label on Sep 20, 2024
jriguera pushed a commit to springernature/opentelemetry-collector-contrib that referenced this issue Oct 4, 2024
… pipeline. (open-telemetry#35331)

**Description:**
Fixes an index issue caused by an incorrect for loop condition.

**Link to tracking Issue:**
Closes open-telemetry#35316

**Testing:**
Updated the unit test; the updated test panics without the fix.
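
For context, a hedged sketch of what correcting such a loop condition looks like against the pdata API — illustrative only, not the actual diff from open-telemetry#35331:

```go
package example

import "go.opentelemetry.io/collector/pdata/plog"

// Corrected shape: each loop is bounded by the length of the slice it
// actually indexes into, so the inner index can never exceed the
// number of log records in the current scope.
func processResourceLogFixed(rl plog.ResourceLogs) {
	for j := 0; j < rl.ScopeLogs().Len(); j++ {
		ils := rl.ScopeLogs().At(j)
		for k := 0; k < ils.LogRecords().Len(); k++ { // bound matches LogRecords
			lr := ils.LogRecords().At(k)
			_ = lr.Attributes()
		}
	}
}
```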