This repository has been archived by the owner on May 16, 2023. It is now read-only.
Closed
Description
Opened on Sep 2, 2020
Chart version:
logstash-7.9.0
Kubernetes version:
v1.16.3
Kubernetes provider:
on prem
Helm Version:
v3.2.4
helm get release output
e.g. helm get elasticsearch (replace elasticsearch with the name of your helm release)
Be careful to obfuscate any secrets (credentials, tokens, public IPs, ...) that could be visible in the output before copy-pasting. If you find secrets in plain text in the helm get release output, you should use Kubernetes Secrets to manage them in a secure way (see Security Example).
Output of helm get release
NAME: logstash-syslog
LAST DEPLOYED: Wed Sep 2 11:35:27 2020
NAMESPACE: naas-tele-dev
STATUS: deployed
REVISION: 3
TEST SUITE: None
USER-SUPPLIED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
envFrom: []
extraContainers: ""
extraEnvs:
- name: MY_ENVIRONMENT_VAR
  value: dev-syslog
extraInitContainers: ""
extraPorts: []
extraVolumeMounts: ""
extraVolumes: ""
fullnameOverride: ""
httpPort: 9600
labels: {}
lifecycle: {}
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false
logstashJavaOpts: -Xmx1g -Xms1g
logstashPipeline:
  logstash.conf: |
    input {
      udp {
        port => 5514
        type => "syslog"
      }
    }
    filter {
      if [type] == "syslog" {
        grok { match => { "message" => "%{SYSLOGLINE}" } }
        date { match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ] }
      }
    }
    output {elasticsearch {hosts => ["http://elasticsearch-master:9200"] index => "syslog"} }
maxUnavailable: 1
nameOverride: ""
nodeAffinity: {}
nodeSelector: {}
persistence:
  annotations: {}
  enabled: false
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 1536Mi
  requests:
    cpu: 100m
    memory: 1536Mi
schedulerName: ""
secretMounts: []
secrets: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  ports:
  - name: syslog-udp
    port: 514
    protocol: UDP
    targetPort: 5515
  type: ClusterIP
terminationGracePeriod: 120
tolerations: []
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
envFrom: []
extraContainers: ""
extraEnvs:
- name: MY_ENVIRONMENT_VAR
  value: dev-syslog
extraInitContainers: ""
extraPorts: []
extraVolumeMounts: ""
extraVolumes: ""
fullnameOverride: ""
httpPort: 9600
image: docker.elastic.co/logstash/logstash
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.9.0
labels: {}
lifecycle: {}
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 300
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false
logstashJavaOpts: -Xmx1g -Xms1g
logstashPipeline:
  logstash.conf: |
    input {
      udp {
        port => 5514
        type => "syslog"
      }
    }
    filter {
      if [type] == "syslog" {
        grok { match => { "message" => "%{SYSLOGLINE}" } }
        date { match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ] }
      }
    }
    output {elasticsearch {hosts => ["http://elasticsearch-master:9200"] index => "syslog"} }
maxUnavailable: 1
nameOverride: ""
nodeAffinity: {}
nodeSelector: {}
persistence:
  annotations: {}
  enabled: false
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
priorityClassName: ""
rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 60
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 1
resources:
  limits:
    cpu: 1000m
    memory: 1536Mi
  requests:
    cpu: 100m
    memory: 1536Mi
schedulerName: ""
secretMounts: []
secrets: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  ports:
  - name: syslog-udp
    port: 514
    protocol: UDP
    targetPort: 5515
  type: ClusterIP
terminationGracePeriod: 120
tolerations: []
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
HOOKS:
MANIFEST:
---
# Source: logstash/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "logstash-syslog-logstash-pdb"
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "logstash-syslog-logstash"
---
# Source: logstash/templates/configmap-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-syslog-logstash-config
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
data:
  logstash.yml: |
    http.host: 0.0.0.0
    xpack.monitoring.enabled: false
---
# Source: logstash/templates/configmap-pipeline.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-syslog-logstash-pipeline
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
data:
  logstash.conf: |
    input {
      udp {
        port => 5514
        type => "syslog"
      }
    }
    filter {
      if [type] == "syslog" {
        grok { match => { "message" => "%{SYSLOGLINE}" } }
        date { match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ] }
      }
    }
    output {elasticsearch {hosts => ["http://elasticsearch-master:9200"] index => "syslog"} }
---
# Source: logstash/templates/service-headless.yaml
kind: Service
apiVersion: v1
metadata:
  name: "logstash-syslog-logstash-headless"
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
spec:
  clusterIP: None
  selector:
    app: "logstash-syslog-logstash"
  ports:
  - name: http
    port: 9600
---
# Source: logstash/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: "logstash-syslog-logstash"
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
  ports:
  - name: syslog-udp
    port: 514
    protocol: UDP
    targetPort: 5515
---
# Source: logstash/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-syslog-logstash
  labels:
    app: "logstash-syslog-logstash"
    chart: "logstash"
    heritage: "Helm"
    release: "logstash-syslog"
spec:
  serviceName: logstash-syslog-logstash-headless
  selector:
    matchLabels:
      app: "logstash-syslog-logstash"
      release: "logstash-syslog"
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: "logstash-syslog-logstash"
      labels:
        app: "logstash-syslog-logstash"
        chart: "logstash"
        heritage: "Helm"
        release: "logstash-syslog"
      annotations:
        configchecksum: 3c7dffd31dd4804cf26db421f63a59aa8654eddad0e100cfbe90fdc132db168
        pipelinechecksum: 4c5a966e1e2b10c5e73b15f97886c3c8fd9eb9f0662b12cff158ae27e34d50a
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "logstash-syslog-logstash"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      - name: logstashconfig
        configMap:
          name: logstash-syslog-logstash-config
      - name: logstashpipeline
        configMap:
          name: logstash-syslog-logstash-pipeline
      containers:
      - name: "logstash"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/logstash/logstash:7.9.0"
        imagePullPolicy: "IfNotPresent"
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 300
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        ports:
        - name: http
          containerPort: 9600
        resources:
          limits:
            cpu: 1000m
            memory: 1536Mi
          requests:
            cpu: 100m
            memory: 1536Mi
        env:
        - name: LS_JAVA_OPTS
          value: "-Xmx1g -Xms1g"
        - name: MY_ENVIRONMENT_VAR
          value: dev-syslog
        volumeMounts:
        - name: logstashconfig
          mountPath: /usr/share/logstash/config/logstash.yml
          subPath: logstash.yml
        - name: logstashpipeline
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          subPath: logstash.conf
Describe the bug:
When I try to pass values for extraPorts, helm upgrade gives me this validation error:
Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(Service.spec.ports[1]): unknown field "containerPort" in io.k8s.api.core.v1.ServicePort, ValidationError(Service.spec.ports[1]): missing required field "port" in io.k8s.api.core.v1.ServicePort]
Steps to reproduce:
- Update the values file with extraPorts:
extraPorts:
- name: syslogs
  containerPort: 5515
- Run helm upgrade logstash-syslog elastic/logstash --values ~/Documents/kube-elk-helm/logstash/values-dev-syslog.yaml
Expected behavior:
It should add the extra ports to the pod so I can ingest syslogs.
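The validation error itself points at the cause: the chart renders the extraPorts entries into Service.spec.ports, and the Kubernetes ServicePort type has a required port field and no containerPort field (containerPort belongs to ContainerPort in a pod spec). Assuming the chart forwards each extraPorts entry verbatim into the Service, a values snippet shaped like a ServicePort should pass validation; the port numbers below just mirror the UDP listener already used in this release and are illustrative:

```yaml
# Sketch of a workaround, not verified against the chart templates:
# give each entry ServicePort-shaped fields ("port" is required,
# "containerPort" is rejected by Service validation).
extraPorts:
- name: syslogs
  port: 5515
  targetPort: 5515
  protocol: UDP
```

Note that the release already exposes targetPort 5515 via service.ports (syslog-udp, port 514), so depending on what the chart does with extraPorts, adding the extra listener under service.ports instead may be the simpler route.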
Metadata
Assignees
Labels
Something isn't working · This issue or pull request already exists