No targetPort in kibana deployment #133
Description
Opened on May 15, 2019
Chart version:
7.0.1
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.10", GitCommit:"8d9b8641e72cf7c96efa61421e87f96387242ba1", GitTreeState:"clean", BuildDate:"2019-04-12T22:59:24Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes provider:
GKE (Google Kubernetes Engine)
Helm Version:
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Values.yaml:
---
replicas: 1

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: elastic-certificates
#    secretName: elastic-certificates
#    path: /usr/share/elasticsearch/config/certs

image: "docker.elastic.co/kibana/kibana"
imageTag: "7.0.1"
imagePullPolicy: "IfNotPresent"

resources:
  requests:
    cpu: "100m"
    memory: "500m"
  limits:
    cpu: "1000m"
    memory: "1Gi"

protocol: http

healthCheckPath: "/app/kibana"

# Allows you to add any config files in /usr/share/kibana/config/
# such as kibana.yml
kibanaConfig: {}
#  kibana.yml: |
#    key:
#      nestedkey: value

# If Pod Security Policy in use it may be required to specify security context as well as service account
podSecurityContext: {}
  #runAsUser: "place the user id here"
  #fsGroup: "place the group id here"

serviceAccount: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"

httpPort: 5601

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

updateStrategy:
  type: "Recreate"

service:
  type: LoadBalancer
  #port: 5601
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5

imagePullSecrets: []
nodeSelector: {}
tolerations: []
affinity: {}

nameOverride: ""
fullnameOverride: ""
Describe the bug:
targetPort is needed in the service definition to allow different port numbers to be used between the service and application levels.
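For illustration only (this is not the chart's template), a minimal Service showing the relationship between the two fields: port is what the Service/load balancer exposes, targetPort is the container port traffic is forwarded to, and when targetPort is omitted Kubernetes defaults it to the same value as port. Names and labels below are hypothetical.

apiVersion: v1
kind: Service
metadata:
  name: kibana-example    # hypothetical name
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80            # exposed by the Service / load balancer
      targetPort: 5601    # port the Kibana container actually listens on
      protocol: TCP
  selector:
    app: kibana           # assumed selector, for illustration only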
Steps to reproduce:
- Deploy using the above yaml
- This will create a LoadBalancer service with port 80 open, but traffic will not be routed through to Kibana, which is listening on 5601
- Change kibana/service.yaml and amend the spec clause as follows:
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      name: http
      targetPort: {{ .Values.httpPort }}
  selector:
    app: {{ .Chart.Name }}
    release: {{ .Release.Name | quote }}
(note the targetPort line.)
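For reference, with the values above the amended template should render a Service spec roughly like the following (metadata and labels omitted; the release name is a placeholder):

spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      name: http
      targetPort: 5601
  selector:
    app: kibana
    release: "my-release"   # hypothetical release name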
Expected behavior:
To be able to use different ports for the load balancer and the application (e.g. let Kibana listen on 5601 while the service is exposed on port 80).