This repository has been archived by the owner on May 16, 2023. It is now read-only.

[Kibana] remove useless maxUnavailable in Kibana chart #422

Merged · 2 commits · Dec 29, 2019
Changes from 1 commit
remove useless maxUnavailable from Kibana
victorsalaun committed Dec 29, 2019
commit eec9e30d786d0a082af7b5e35f905a4b9a0198d3
1 change: 0 additions & 1 deletion kibana/README.md
@@ -72,7 +72,6 @@ helm install --name kibana elastic/kibana --set imageTag=7.5.1
| `serviceAccount` | Allows you to overwrite the "default" [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for the pod | `[]` |
| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. | `5601` |
- | `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod | `1` |
| `updateStrategy` | Allows you to change the default update [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) for the deployment. A [standard upgrade](https://www.elastic.co/guide/en/kibana/current/upgrade-standard.html) of Kibana requires a full stop and start which is why the default strategy is set to `Recreate` | `Recreate` |
| `readinessProbe` | Configuration for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` |
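For reference, these values can be overridden with a custom values file. A minimal sketch (the file name `my-values.yaml` is hypothetical; the settings shown are simply the chart defaults from the table above):

```yaml
# my-values.yaml -- hypothetical override file for the Kibana chart
httpPort: 5601

# A standard Kibana upgrade requires a full stop and start,
# which is why the chart defaults to the Recreate strategy.
updateStrategy:
  type: "Recreate"

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
```

This would be installed with, for example, `helm install --name kibana elastic/kibana -f my-values.yaml`, matching the Helm 2 syntax used elsewhere in this README.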
6 changes: 1 addition & 5 deletions kibana/tests/kibana_test.py
@@ -191,15 +191,11 @@ def test_override_the_default_update_strategy():
config = '''
updateStrategy:
type: "RollingUpdate"
rollingUpdate:
Member

Actually, this test should stay as it is: it's testing adding a RollingUpdate strategy to the Kibana Deployment, which is still valid, while the maxUnavailable value should be used to define a PodDisruptionBudget resource, as in the Elasticsearch chart.

Contributor Author

Nice catch: it's indeed the same name, maxUnavailable, but for a different usage. I've reverted the deletion of the test.

maxUnavailable: 1
maxSurge: 1
'''

r = helm_template(config)
assert r['deployment'][name]['spec']['strategy']['type'] == 'RollingUpdate'
assert r['deployment'][name]['spec']['strategy']['rollingUpdate']['maxUnavailable'] == 1
assert r['deployment'][name]['spec']['strategy']['rollingUpdate']['maxSurge'] == 1


def test_using_a_name_override():
config = '''
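To make the distinction from the review thread above concrete: `rollingUpdate.maxUnavailable` on a Deployment and `maxUnavailable` on a PodDisruptionBudget are unrelated fields that happen to share a name. A minimal sketch of the two (metadata names and labels are hypothetical; `policy/v1beta1` was the current PodDisruptionBudget API version at the time of this PR):

```yaml
# Deployment: rollingUpdate.maxUnavailable caps how many pods a
# rollout may take down at once. This is what the test above covers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana              # hypothetical
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kibana
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.5.1
---
# PodDisruptionBudget: maxUnavailable caps how many pods voluntary
# disruptions (e.g. node drains) may evict. This is what the chart's
# top-level maxUnavailable value was meant to configure.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kibana-pdb          # hypothetical
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: kibana
```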
5 changes: 0 additions & 5 deletions kibana/values.yaml
@@ -73,11 +73,6 @@ priorityClassName: ""

httpPort: 5601

- # This is the max unavailable setting for the pod disruption budget
- # The default value of 1 will make sure that kubernetes won't allow more than 1
- # of your pods to be unavailable during maintenance
- maxUnavailable: 1

updateStrategy:
type: "Recreate"

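For context on where such a value belongs instead: in the Elasticsearch chart, maxUnavailable feeds a PodDisruptionBudget template. A rough sketch of that pattern (the name helper and labels here are illustrative, not the Elasticsearch chart's exact template):

```yaml
# templates/poddisruptionbudget.yaml -- sketch of the PDB pattern
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ template "elasticsearch.uname" . }}-pdb   # helper name is illustrative
spec:
  maxUnavailable: {{ .Values.maxUnavailable }}
  selector:
    matchLabels:
      app: {{ template "elasticsearch.uname" . }}
{{- end }}
```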