Description
Describe the bug:
If a logging instance runs with multiple fluentd workers, the operator still allows a Flow to use the detectExceptions filter, even though this breaks the fluentd processes.
Expected behaviour:
Since this is a known limitation, it would be nice if the operator marked such a Flow as "not valid".
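As a rough illustration of the requested behaviour, the check could look something like the sketch below. This is not the operator's actual API; the type and field names (`Flow`, `Filter`, `validateFlow`) are hypothetical and only show the shape of the validation.

```go
package main

import "fmt"

// Filter is a hypothetical, simplified stand-in for a Flow filter entry.
type Filter struct {
	Name string
}

// Flow is a hypothetical, simplified stand-in for the Flow spec.
type Flow struct {
	Filters []Filter
}

// validateFlow rejects a Flow that uses the detectExceptions filter
// when fluentd runs with more than one worker, which is the known
// limitation this issue describes.
func validateFlow(flow Flow, fluentdWorkers int) error {
	if fluentdWorkers <= 1 {
		return nil
	}
	for _, f := range flow.Filters {
		if f.Name == "detectExceptions" {
			return fmt.Errorf("detectExceptions filter is not supported with %d fluentd workers", fluentdWorkers)
		}
	}
	return nil
}

func main() {
	flow := Flow{Filters: []Filter{{Name: "detectExceptions"}}}
	fmt.Println(validateFlow(flow, 2))
}
```

The same check could run in a validating webhook or during reconciliation, setting the Flow's status to invalid instead of silently rendering a broken fluentd config.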
Steps to reproduce the bug:
Create a logging instance with multiple fluentd workers, then create a Flow that uses the detectExceptions filter.
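For reference, a minimal Logging resource with multiple fluentd workers might look like the sketch below; the metadata name and namespace are illustrative.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: logging
spec:
  controlNamespace: logging
  fluentd:
    workers: 2
```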
Environment details:
- Kubernetes version (e.g. v1.15.2): 1.25
- Cloud-provider/provisioner (e.g. AKS, GKE, EKS, PKE etc): AKS
- logging-operator version (e.g. 2.1.1): 4.2.2
- Install method (e.g. helm or static manifests): Helm
- Logs from the misbehaving component (and any other relevant logs):
- Resource definition (possibly in YAML format) that caused the issue, without sensitive data:
```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow
spec:
  filters:
    - record_modifier:
        records:
          - fluentd_worker: ${ENV['HOSTNAME']}
    - record_transformer:
        remove_keys: $.kubernetes.docker_id, $.kubernetes.annotations, $.kubernetes.container_hash, $.kubernetes.pod_id
    - parser:
        key_name: message
        parse:
          type: json
        remove_key_name_field: true
        reserve_data: true
    - detectExceptions:
        languages:
          - java
          - python
        multiline_flush_interval: "0.1"
  localOutputRefs:
    - output
  match:
    - select: {}
```
/kind bug