Describe the bug
The TCP input logs many "invalid JSON message, skipping" errors, but no errors appear when the same data flows through Fluentd.
To Reproduce
I have a producer of TCP messages from an external system (APIGEE). Every request is forwarded to the Fluent Bit TCP input, parsed as JSON, and sent directly to OpenSearch.
There is one INPUT and one OUTPUT to route the data.
I tried enabling the Tap functionality on the input.
I tried raising the log level to trace to get more insight.
I tried disabling the json parser and enabling none, but then the logs disappeared, presumably because the message carries a JSON content type and is JSON formatted.
Another interesting test was to send a dummy JSON from the producer, with no dynamic data and no long strings, just a single dummy key-value pair, but the problem persisted.
My last attempt was to deploy Fluentd with the same configuration (none/json parser) and duplicate the data to try to reproduce the error; over exactly the same time range, Fluentd received roughly twice as many documents and logged no JSON parsing warnings.
As evidence, around 50% of the documents are dropped; a minimal send-and-count sketch follows below.
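For a controlled comparison of sent vs. indexed documents, this is a minimal repro sketch and not part of the original setup: it assumes Fluent Bit is reachable on 127.0.0.1 at port 2021 (from the configuration below) and that each message is a single newline-delimited JSON record. It sends a fixed number of dummy records and prints how many were written, so the count can be compared with what reaches OpenSearch.

#!/usr/bin/env python3
"""Sketch: send N newline-delimited JSON records to the Fluent Bit TCP input
and report how many were sent, for comparison with the indexed document count.
Host, port, and record shape are assumptions, not taken from the real producer."""
import json
import socket
import time

HOST, PORT = "127.0.0.1", 2021   # assumed; adjust to the real listener
COUNT = 10_000

sent = 0
with socket.create_connection((HOST, PORT)) as sock:
    for i in range(COUNT):
        record = {"seq": i, "ts": time.time(), "msg": "dummy key-value"}
        # one JSON map per line, newline-terminated
        sock.sendall((json.dumps(record) + "\n").encode("utf-8"))
        sent += 1

print(f"sent {sent} records to {HOST}:{PORT}")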
[INPUT]
    Name                tcp
    Alias               tcp-input-nonprod
    Listen              0.0.0.0
    Port                2021
    Format              json
    Tag                 apigee-nonprod.*
    Source_Address_Key  Off
    Buffer_Size         4096k
    Chunk_Size          4096k
    Threaded            On
[OUTPUT]
    Name                opensearch
    Alias               opensearch_apigee_nonprod
    Match               apigee-nonprod.*
    Host                ...
    Port                443
    Type                doc
    Logstash_Format     On
    Logstash_Prefix     logs-apigee-nonprod
    Include_Tag_Key     Off
    Replace_Dots        On
    AWS_Auth            On
    AWS_Region          eu-central-1
    tls                 On
    Suppress_Type_Name  On
    Buffer_Size         4MB
Expected behavior
All logs are parsed properly.
Your Environment
Environment name and version: Kubernetes v1.30
Server type and version: OpenSearch 2.11
Operating System and version: N/A
Filters and plugins: TCP input and OpenSearch output
Additional context
Any hint to solve this issue?
If that is not possible, could the parser report what exactly is wrong with the JSON message? Fluentd does this.
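As a stopgap for the missing diagnostics, here is a small listener sketch. It assumes the producer (or a duplicated copy of the stream) can be pointed at an alternative port such as 2022, and that records arrive one JSON map per line, the same way they are sent to the TCP input. It prints exactly which payloads fail to parse and at which position.

#!/usr/bin/env python3
"""Sketch of a diagnostic TCP listener: point a copy of the producer stream at
this port (2022 is an assumption) to see which payloads fail JSON parsing and why.
Assumes one newline-delimited JSON record per line."""
import json
import socketserver

class JSONCheckHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                       # read newline-delimited records
            line = raw.decode("utf-8", errors="replace").strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError as err:
                # report the exact parse error position plus a sample of the payload
                print(f"invalid JSON at pos {err.pos}: {err.msg}\n  payload: {line[:200]}")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2022), JSONCheckHandler) as srv:
        srv.serve_forever()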