diff --git a/BREAKING_CHANGES.md b/BREAKING_CHANGES.md
new file mode 100644
index 000000000..5a9b76475
--- /dev/null
+++ b/BREAKING_CHANGES.md
@@ -0,0 +1,112 @@
+# Breaking changes
+
+
+- [7.6.2 - 2020/03/31](#762---20200331)
+  - [Kibana default resources](#kibana-default-resources)
+- [7.6.0 - 2020/02/11](#760---20200211)
+  - [Elasticsearch default resources](#elasticsearch-default-resources)
+- [7.5.0 - 2019/12/02](#750---20191202)
+  - [Metricbeat kube-state-metrics upgrade](#metricbeat-kube-state-metrics-upgrade)
+- [7.0.0-alpha1 - 2019/04/17](#700-alpha1---20190417)
+  - [Elasticsearch upgrade from 6.x](#elasticsearch-upgrade-from-6x)
+
+
+## 7.6.2 - 2020/03/31
+
+### Kibana default resources
+
+Kibana default resources (CPU/memory requests and limits) were increased in
+[#540][].
+
+This change may impact the available CPU/memory capacity of your Kubernetes
+cluster.
+
+To revert to the former default values, use the following values:
+
+```yaml
+extraEnvs:
+- name: "NODE_OPTIONS"
+  value: ""
+resources:
+  requests:
+    cpu: "100m"
+    memory: "500Mi"
+  limits:
+    cpu: "1000m"
+    memory: "1Gi"
+```
+
+
+## 7.6.0 - 2020/02/11
+
+### Elasticsearch default resources
+
+The Elasticsearch default CPU request was increased in [#458][], following our
+recommendation that resource requests and limits should have the same values.
+
+This change may impact the available CPU capacity of your Kubernetes cluster.
+
+To revert to the former default value, use the following values:
+
+```yaml
+resources:
+  requests:
+    cpu: "100m"
+```
+
+
+## 7.5.0 - 2019/12/02
+
+### Metricbeat kube-state-metrics upgrade
+
+The [kube-state-metrics][] chart dependency was upgraded from 1.6.0 to 2.4.1 in
+[#352][]. This causes Metricbeat chart upgrades from versions < 7.5.0 to fail
+with the following error:
+
+```
+UPGRADE FAILED
+Error: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.0", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
+Error: UPGRADE FAILED: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.0", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
+```
+
+The workaround is to pass the `--force` argument to the `helm upgrade` command
+(for example, `helm upgrade --force metricbeat elastic/metricbeat`), which
+forces the Metricbeat resources to be updated through a delete/recreate.
+
+
+## 7.0.0-alpha1 - 2019/04/17
+
+### Elasticsearch upgrade from 6.x
+
+If you were using the default Elasticsearch version from the previous release
+(6.6.2-alpha1) you will first need to upgrade to Elasticsearch 6.7.1 before
+being able to upgrade to 7.0.0.
You can do this by adding this to your values +file: + +```yaml +esMajorVersion: 6 +imageTag: 6.7.1 +``` + +If you are upgrading an existing cluster that did not override the default +`storageClassName` you will now need to specify the `storageClassName`. This +only affects existing clusters and was changed in [#94][]. The advantage of this +is that now the Helm chart will just use the default `storageClassName` rather +than needing to override it for any providers where it is not called `standard`. + +``` +volumeClaimTemplate: + storageClassName: "standard" +``` + + +[#94]: https://github.com/elastic/helm-charts/pull/94 +[#352]: https://github.com/elastic/helm-charts/pull/352 +[#458]: https://github.com/elastic/helm-charts/pull/458 +[#540]: https://github.com/elastic/helm-charts/pull/540 +[kube-state-metrics]: https://github.com/helm/charts/tree/master/stable/kube-state-metrics diff --git a/CHANGELOG.md b/CHANGELOG.md index 39382aa59..fed81d7a6 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,84 @@ # Changelog + + + + +- [7.6.2 - 2020/03/31](#762---20200331) + - [APM Server](#apm-server) + - [Elasticsearch](#elasticsearch) + - [Filebeat](#filebeat) + - [Kibana](#kibana) + - [Logstash](#logstash) +- [7.6.1 - 2020/03/04](#761---20200304) + - [APM Server](#apm-server-1) + - [Elasticsearch](#elasticsearch-1) +- [7.6.0 - 2020/02/11](#760---20200211) + - [APM Server](#apm-server-2) + - [Elasticsearch](#elasticsearch-2) + - [Filebeat](#filebeat-1) + - [Kibana](#kibana-1) + - [Logstash](#logstash-1) + - [Metricbeat](#metricbeat) +- [7.5.2 - 2020/01/21](#752---20200121) + - [Elasticsearch](#elasticsearch-3) + - [Filebeat](#filebeat-2) + - [Kibana](#kibana-2) + - [Logstash](#logstash-2) + - [Metricbeat](#metricbeat-1) +- [7.5.1 - 2019/12/18](#751---20191218) + - [Filebeat](#filebeat-3) + - [Kibana](#kibana-3) + - [Metricbeat](#metricbeat-2) +- [7.5.0 - 2019/12/02](#750---20191202) + - [Elasticsearch](#elasticsearch-4) + - [Filebeat](#filebeat-4) + - [Kibana](#kibana-4) + - [Logstash](#logstash-3) + - [Metricbeat](#metricbeat-3) +- [7.4.1 - 2019/10/23](#741---20191023) + - [Elasticsearch](#elasticsearch-5) + - [Kibana](#kibana-5) + - [Metricbeat](#metricbeat-4) +- [7.4.0 - 2019/10/01](#740---20191001) + - [Elasticsearch](#elasticsearch-6) + - [Kibana](#kibana-6) + - [Filebeat](#filebeat-5) + - [Metricbeat](#metricbeat-5) +- [7.3.2 - 2019/09/19](#732---20190919) + - [Elasticsearch](#elasticsearch-7) + - [Kibana](#kibana-7) + - [Filebeat](#filebeat-6) + - [Metricbeat](#metricbeat-6) +- [7.3.0 - 2019/07/31](#730---20190731) + - [Elasticsearch](#elasticsearch-8) + - [Kibana](#kibana-8) +- [7.2.1-0 - 2019/07/18](#721-0---20190718) + - [Elasticsearch](#elasticsearch-9) + - [Kibana](#kibana-9) + - [Filebeat](#filebeat-7) + - [Metricbeat](#metricbeat-7) +- [7.2.0 - 2019/07/01](#720---20190701) + - [Elasticsearch](#elasticsearch-10) + - [Kibana](#kibana-10) + - [Filebeat](#filebeat-8) +- [7.1.1 - 2019/06/07](#711---20190607) + - [Elasticsearch](#elasticsearch-11) + - [Kibana](#kibana-11) + - [Filebeat](#filebeat-9) +- [7.1.0 - 2019/05/21](#710---20190521) + - [Elasticsearch](#elasticsearch-12) + - [Kibana](#kibana-12) + - [Filebeat](#filebeat-10) +- [7.0.1-alpha1 - 2019/05/01](#701-alpha1---20190501) + - [Elasticsearch](#elasticsearch-13) + - [Kibana](#kibana-13) +- [7.0.0-alpha1 - 2019/04/17](#700-alpha1---20190417) + - [Elasticsearch](#elasticsearch-14) + + + + + ## 7.6.2 - 2020/03/31 @@ -33,12 +113,6 @@ ### Kibana -**Warning** -[#540](https://github.com/elastic/helm-charts/pull/540) increase 
default CPU and memory requests/limits. This may impact the resources (nodes) required in your Kubernetes cluster to deploy Kibana chart.
-
-If you wish to come back to former values, you need to override CPU and Memory requests/limits as well as `NODE_OPTIONS` `extraEnvs` variable when deploying your Helm Chart.
-
-
 | PR | Author | Title |
 | ------------------------------------------------------ | ---------------------------------------- | -------------------------------------------------------------------------------------- |
 |[#493](https://github.com/elastic/helm-charts/pull/493) | [@jamoflaw](https://github.com/jamoflaw) | Fix Mismatch Between Service Selector and Pod Labels when using Helm Aliases in Kibana |
@@ -239,9 +313,6 @@ If you wish to come back to former values, you need to override CPU and Memory r
 ### Metricbeat
-**Warning**
-[#352](https://github.com/elastic/helm-charts/pull/352) is introducing a breaking change, please refer to [Metricbeat Breaking Changes](./metricbeat/README.md#breaking-changes) section for users upgrading from a chart version < 7.5.0.
-
 | PR | Author | Title |
 | ------------------------------------------------------ | ------------------------------------------------ | ----------------------------------------------------------------------------------------- |
 |[#352](https://github.com/elastic/helm-charts/pull/352) | [@masterkain](https://github.com/masterkain) | Bump kube-state-metrics to latest chart and app version |
@@ -507,19 +578,3 @@ If you wish to come back to former values, you need to override CPU and Memory r
 ### Elasticsearch
 * [#94](https://github.com/elastic/helm-charts/pull/94) - @kimxogus - Remove hardcoded storageClassName
-
-### Notes
-
-If you were using the default Elasticsearch version from the previous release (6.6.2-alpha1) you will first need to upgrade to Elasticsearch 6.7.1 before being able to upgrade to 7.0.0. You can do this by adding this to your values file:
-
-```
-esMajorVersion: 6
-imageTag: 6.7.1
-```
-
-If you are upgrading an existing cluster that did not override the default `storageClassName` you will now need to specify the `storageClassName`. This only affects existing clusters and was changed in https://github.com/elastic/helm-charts/pull/94. The advantage of this is that now the helm chart will just use the default storageClassName rather than needing to override it for any providers where it is not called `standard`.
-
-```
-volumeClaimTemplate:
-  storageClassName: "standard"
-```
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 8dd908c8f..0886b3791 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,21 +1,256 @@
-# Contributing to the Elastic helm charts
+# Contributing to the Elastic Helm charts
+
+
+- [Adding new features](#adding-new-features)
+- [Requirements for submitting a pull request](#requirements-for-submitting-a-pull-request)
+- [CLA (Contributor License Agreement)](#cla-contributor-license-agreement)
+- [How We Use Git and GitHub](#how-we-use-git-and-github)
+  - [Forking](#forking)
+  - [Branching](#branching)
+  - [Commits and Merging](#commits-and-merging)
+  - [Rebasing and fixing merge conflicts](#rebasing-and-fixing-merge-conflicts)
+  - [What Goes Into a Pull Request](#what-goes-into-a-pull-request)
+- [Releases](#releases)
+- [Testing](#testing)
+  - [Templating tests](#templating-tests)
+  - [Integration tests](#integration-tests)
+
+
+## Adding new features
+
+If you aren't 100% sure that this is a feature that makes sense for everyone,
+please open an issue first to discuss it with the maintainers before investing a
+lot of time into it.
+
+
+## Requirements for submitting a pull request
+
+Before submitting a pull request, make sure you have validated the following
+requirements:
+
+* CLA should be signed (see [CLA section][] for more details).
+
+* Charts version shouldn't be bumped (see [Releases section][] for more
+details).
+
+* Charts `README.md` should be updated if required (especially updating default
+values if they have been changed).
+
+* Templating tests should be added/updated (see [Templating tests section][] for
+more details).
+
+* Integration tests should be added/updated (see [Integration tests section][]
+for more details).
+

 ## CLA (Contributor License Agreement)

-If you haven't already you will need to sign the [CLA](https://www.elastic.co/contributor-agreement) before your pull request can be reviewed and merged.
+Please make sure you have signed our [Contributor License Agreement][]. We are
+not asking you to assign copyright to us, but to give us the right to distribute
+your code without restriction. We ask this of all contributors in order to
+assure our users of the origin and continuing existence of the code.
+You only need to sign the CLA once.

-## Version bumps

-Just like with the rest of the stack, all versions in this helm chart repo are bumped and released at the same time. There is no need to bump the version in your pull request.
+## How We Use Git and GitHub

-## Testing and documentation
+### Forking

-When making any changes be sure to also update the following:
+We follow the [GitHub forking model][] for collaborating on Helm charts code.
+This model assumes that you have a remote called `upstream` which points to the
+official helm-charts repo, which we'll refer to in later code snippets.

-* Charts README.md
-* The templating tests which can be found in `${CHART}/tests/*.py`. [Example](/elasticsearch/tests/elasticsearch_test.py)
-* The integration tests which can be found in `${CHART}/examples/*/test/goss.yaml`. [Example](/elasticsearch/examples/default/test/goss.yaml)
+### Branching
+
+* All work on the next major release (`8.0.0`) goes into master.
+* Past major release branches are named `{majorVersion}.x`. They contain work
+that will go into the next minor release. For example, if the next minor release
+is `7.8.0`, work for it should go into the `7.x` branch.
+* Past minor release branches are named `{majorVersion}.{minorVersion}`. They
+contain work that will go into the next patch release. For example, if the next
+patch release is `7.7.1`, work for it should go into the `7.7` branch.
+* All work is done on feature branches and merged into one of these branches.
+* Where appropriate, we'll backport changes into older release branches.
+
+### Commits and Merging
+
+* Feel free to make as many commits as you want, while working on a branch.
+* Please use your commit messages to include helpful information on your
+changes and an explanation of *why* you made the changes that you did.
+* Resolve merge conflicts by rebasing the target branch over your feature
+branch, and force-pushing (see below for instructions).
+* When merging, we'll squash your commits into a single commit.
+
+#### Rebasing and fixing merge conflicts
+
+Rebasing can be tricky, and fixing merge conflicts can be even trickier because
+it involves force pushing.
+This is all compounded by the fact that attempting to
+push a rebased branch remotely will be rejected by git, and you'll be prompted
+to do a `pull`, which is not at all what you should do (this will really mess up
+your branch's history).
+
+Here's how you should rebase your branch on top of master, and how to fix merge
+conflicts when they arise.
+
+First, make sure master is up-to-date.
+
+```shell
+git checkout master
+git fetch upstream
+git rebase upstream/master
+```
+
+Then, check out your branch and rebase it on top of master. This will apply all
+of the new commits on master to your branch, and then apply all of your branch's
+new commits after that.
+
+```shell
+git checkout name-of-your-branch
+git rebase master
+```
+
+You want to make sure there are no merge conflicts. If there are merge
+conflicts, git will pause the rebase and allow you to fix the conflicts before
+continuing.
+
+You can use `git status` to see which files contain conflicts. They'll be the
+ones that aren't staged for commit. Open those files, and look for where git has
+marked the conflicts. Resolve the conflicts so that the changes you want to make
+to the code have been incorporated in a way that doesn't destroy work that's
+been done in master. Refer to master's commit history on GitHub if you need to
+gain a better understanding of how code is conflicting and how best to resolve
+it.
+
+Once you've resolved all of the merge conflicts, use `git add -A` to stage them
+to be committed, and then use `git rebase --continue` to tell git to continue
+the rebase.
+
+When the rebase has completed, you will need to force push your branch because
+the history is now completely different from what's on the remote. **This is
+potentially dangerous** because it will completely overwrite what you have on
+the remote, so you need to be sure that you haven't lost any work when resolving
+merge conflicts. (If there weren't any merge conflicts, then you can force push
+without having to worry about this.)
+
+```shell
+git push origin name-of-your-branch --force
+```
+
+This will overwrite the remote branch with what you have locally. You're done!
+
+**Note that you should not run `git pull`**, for example in response to a push
+rejection like this:
+
+```
+! [rejected] name-of-your-branch -> name-of-your-branch (non-fast-forward)
+error: failed to push some refs to 'https://github.com/YourGitHubHandle/helm-charts.git'
+hint: Updates were rejected because the tip of your current branch is behind
+hint: its remote counterpart. Integrate the remote changes (e.g.
+hint: 'git pull ...') before pushing again.
+hint: See the 'Note about fast-forwards' in 'git push --help' for details.
+```
+
+Assuming you've successfully rebased and you're happy with the code, you should
+force push instead.
+
+### What Goes Into a Pull Request
+
+* Please include an explanation of your changes in your PR description.
+* Links to relevant issues, external resources, or related PRs are very
+important and useful.
+* Please update any tests that pertain to your code, and add new tests where
+appropriate.
+* See [Requirements for submitting a pull request](#requirements-for-submitting-a-pull-request)
+for more info.
+
+
+## Releases
+
+Just like with the rest of the stack, all versions in this Helm chart repo are
+bumped and released at the same time. There is no need to bump the version in
+your pull request.
+
+Charts are released from version branches (for example, the `7.7` branch).
+
+The [Elastic Helm repository][] is updated only during releases.
+
+The current release process is documented in [release.md][].
+
+
+## Testing
+
+### Templating tests
+
+Templating tests can be found in `${CHART}/tests/*.py`
+([example][templating test example]).
+
+These charts use [pytest][] to test the templating logic. The dependencies for
+testing can be installed from the [requirements.txt][] in the parent directory:
+
+```
+pip install -r ./requirements.txt
+```
+
+Tests can then be run from each chart directory using `make pytest`.
+
+You can also use `make template` (equivalent to `helm template`) to look at the
+YAML being generated.
+
+It is possible to run all of the tests and linting inside of a Docker container
+using `make test`.
+
+Note that templating tests are formatted using [Black][]. You should run
+`make lint-python` (equivalent to `black --diff --check .`) to validate them, or
+`black .` to apply formatting, before submitting a pull request that modifies
+them.
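To give a feel for the shape of these tests, here is a condensed, hypothetical sketch of a templating test. It assumes a Helm 2 client on the `PATH` and PyYAML installed (both covered by the steps above), is run from a chart directory, and inlines a simplified stand-in for the `helm_template` helper the real test suites share:

```python
import subprocess

import yaml  # PyYAML, installed via requirements.txt


def helm_template(config):
    # Render the chart in the current directory against the given values
    # string and index the resulting manifests by kind and name.
    with open("temp.yaml", "w") as values_file:
        values_file.write(config)
    output = subprocess.check_output(
        ["helm", "template", "--values", "temp.yaml", "."]
    )
    manifests = {}
    for doc in yaml.safe_load_all(output):
        if doc:
            kind = doc["kind"].lower()
            manifests.setdefault(kind, {})[doc["metadata"]["name"]] = doc
    return manifests


def test_defaults():
    # With no overrides, the Elasticsearch chart should render a StatefulSet
    # named after the default clusterName and nodeGroup values.
    r = helm_template("")
    assert "elasticsearch-master" in r["statefulset"]
```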
+### Integration tests
+
+Integration tests can be found in `${CHART}/examples/*/test/goss.yaml`
+([example][integration test example]).
+
+Integration tests are run using [goss][], which is a [Serverspec][]-like tool
+written in Go. See [integration test example][] for an example of what the
+tests look like.
+
+The different integration tests are present in each chart's `examples`
+directory.
+
+Each chart contains an `examples/default` integration test which validates the
+chart deployment with default values.
+
+The `examples` directory also contains integration tests for other use cases
+(for example, using `oss` Docker images, the `6.x` version, or `security`).
+
+Every directory which contains a `test` subdirectory is an integration test
+(the `examples` directory also contains some configuration examples for
+specific scenarios without tests, such as configurations for specific
+Kubernetes providers).
+
+To run the goss tests against the default example:
+
+```
+cd examples/default
+make goss
+```
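For reference, here is a trimmed-down, hypothetical `goss.yaml` in the style of the Elasticsearch default example (the health endpoint and expected `green` status are assumptions based on that example):

```yaml
http:
  # The cluster on the default Elasticsearch port should respond
  # and report a green health status.
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    body:
      - "green"
```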
+[black]: https://black.readthedocs.io/en/stable/
+[cla section]: #cla-contributor-license-agreement
+[contributor license agreement]: https://www.elastic.co/contributor-agreement
+[elastic helm repository]: https://helm.elastic.co
+[github forking model]: https://help.github.com/articles/fork-a-repo/
+[goss]: https://github.com/aelsabbahy/goss/blob/master/docs/manual.md
+[integration test example]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/default/test/goss.yaml
+[integration tests section]: #integration-tests
+[pytest]: https://docs.pytest.org/en/latest/
+[serverspec]: https://serverspec.org
+[templating test example]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/tests/elasticsearch_test.py
+[templating tests section]: #templating-tests
+[release.md]: https://github.com/elastic/helm-charts/blob/master/helpers/release.md
+[releases section]: #releases
+[requirements.txt]: https://github.com/elastic/helm-charts/blob/master/requirements.txt
diff --git a/README.md b/README.md
index 64db4cc88..9b0b1d4d6 100644
--- a/README.md
+++ b/README.md
@@ -2,26 +2,40 @@

 [![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+master.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+master/)

-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
+This functionality is in beta and is subject to change. The design and code is
+less mature than official GA features and is being provided as-is with no
+warranties. Beta features are not subject to the support SLA of official GA
+features.

 ## Charts

-Please look in the chart directories for the documentation for each chart. These helm charts are designed to be a lightweight way to configure our official docker images. Links to the relevant docker image documentation has also been added below.
+Please look in the chart directories for the documentation for each chart. These
+Helm charts are designed to be a lightweight way to configure our official
+Docker images. Links to the relevant Docker image documentation have also been
+added below.

 | Chart | Docker documentation |
-| ------------------------------------------ | ------------------------------------------------------------------------------- |
+|--------------------------------------------|---------------------------------------------------------------------------------|
+| [APM-Server](./apm-server/README.md) | https://www.elastic.co/guide/en/apm/server/current/running-on-docker.html |
 | [Elasticsearch](./elasticsearch/README.md) | https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html |
+| [Filebeat](./filebeat/README.md) | https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html |
 | [Kibana](./kibana/README.md) | https://www.elastic.co/guide/en/kibana/current/docker.html |
 | [Logstash](./logstash/README.md) | https://www.elastic.co/guide/en/logstash/current/docker.html |
-| [Filebeat](./filebeat/README.md) | https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html |
 | [Metricbeat](./metricbeat/README.md) | https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html |
-| [APM-Server](./apm-server/README.md) | https://www.elastic.co/guide/en/apm/server/current/running-on-docker.html |

 ## Kubernetes Versions

-The charts are [currently tested](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) against all GKE versions available. The exact versions are defined under `KUBERNETES_VERSIONS` in [helpers/matrix.yml](/helpers/matrix.yml)
+The charts are [currently tested][] against all GKE versions available. The
+exact versions are defined under `KUBERNETES_VERSIONS` in
+[helpers/matrix.yml][].

 ## Helm versions

-While we are checking backward compatibility, the charts are only tested with Helm version mentioned in [helm-tester Dockerfile](helpers/helm-tester/Dockerfile) (currently 2.16.6).
-Note that we don't support [Helm 3](https://v3.helm.sh/) version.
+While we are checking backward compatibility, the charts are only tested with
+the Helm version mentioned in [helm-tester Dockerfile][] (currently 2.16.6).
+Note that we don't support [Helm 3][].
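As a quick sanity check before installing any of the charts, you can confirm which version your Helm client is running; the sample output below is an assumption based on the tested version mentioned above:

```
helm version --client
# Client: &version.Version{SemVer:"v2.16.6", GitCommit:"...", GitTreeState:"clean"}
```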
+ +[currently tested]: https://devops-ci.elastic.co/job/elastic+helm-charts+master/ +[helm 3]: https://v3.helm.sh +[helm-tester Dockerfile]: https://github.com/elastic/helm-charts/blob/master/helpers/helm-tester/Dockerfile +[helpers/matrix.yml]: https://github.com/elastic/helm-charts/blob/master/helpers/matrix.yml diff --git a/apm-server/README.md b/apm-server/README.md index 6040cca68..d8cfa4696 100644 --- a/apm-server/README.md +++ b/apm-server/README.md @@ -1,56 +1,73 @@ # APM Server Helm Chart + + + + +- [Requirements](#requirements) +- [Installing](#installing) + - [Using Helm repository](#using-helm-repository) + - [Using master branch](#using-master-branch) +- [Upgrading](#upgrading) +- [Compatibility](#compatibility) +- [Usage notes](#usage-notes) +- [Configuration](#configuration) +- [Examples](#examples) + - [Default](#default) +- [Contributing](#contributing) + + + + + This functionality is in alpha and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Alpha features are not subject to the support SLA of official GA features. -This helm chart is a lightweight way to configure and run our official -[APM Server docker image](https://www.elastic.co/guide/en/apm/server/current/running-on-docker.html). +This Helm chart is a lightweight way to configure and run our official +[APM Server Docker image][]. + ## Requirements * Kubernetes >= 1.9 -* [Helm](https://helm.sh/) >= 2.8.0 +* [Helm][] >= 2.8.0 -## Usage notes and getting started -* The default APM Server configuration file for this chart is configured to use an -Elasticsearch endpoint as configured by the rest of these helm charts. This can -easily be overridden in the config value `apmConfig.apm-server.yml`. -* Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine). ## Installing -* Add the elastic helm charts repo - ``` - helm repo add elastic https://helm.elastic.co - ``` -* Install it - ``` - helm install --name apm-server elastic/apm-server - ``` +### Using Helm repository + +* Add the Elastic Helm charts repo: +`helm repo add elastic https://helm.elastic.co` + +* Install it: `helm install --name apm-server elastic/apm-server` ### Using master branch -* Clone the git repo - ``` - git clone git@github.com:elastic/helm-charts.git - ``` -* Install it - ``` - helm install --name apm-server ./helm-charts/apm-server - ``` +* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git` + +* Install it: `helm install --name apm-server ./helm-charts/apm-server` + + +## Upgrading + +Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before +upgrading to a new chart version. + ## Compatibility -This chart is tested with the latest supported versions. The currently tested versions are: +This chart is tested with the latest supported versions. The currently tested +versions are: | 6.x | 7.x | -| ----- | ----- | +|-------|-------| | 6.8.8 | 7.6.2 | -Examples of installing older major versions can be found in the -[examples](https://github.com/elastic/helm-charts/tree/master/apm-server/examples) directory. +Examples of installing older major versions can be found in the [examples][] +directory. While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. 
To install version `7.6.2` of APM @@ -61,94 +78,108 @@ helm install --name apm-server elastic/apm-server --set imageTag=7.6.2 ``` +## Usage notes + +* The default APM Server configuration file for this chart is configured to use +an Elasticsearch endpoint as configured by the rest of these Helm charts. This +can easily be overridden in the config value `apmConfig.apm-server.yml`. + +* Automated testing of this chart is currently only run against GKE (Google +Kubernetes Engine). + + ## Configuration -| Parameter | Description | Default | -| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | -| `apmConfig` | Allows you to add any config files in `/usr/share/apm-server/config` such as `apm-server.yml`. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml) for an example of the formatting with the default configuration. | see [values.yaml](https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml) | -| `replicas` | Number of APM servers to run | `1` | -| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` | -| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` | -| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` | -| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables) which will be appended to the `envFrom:` definition for the container | `[]` | -| `extraVolumeMounts` | List of additional volumeMounts | `[]` | -| `extraVolumes` | List of additional volumes | `[]` | -| `image` | The APM Server docker image | `docker.elastic.co/apm/apm-server` | -| `imageTag` | The APM Server docker image tag | `7.6.2` | -| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` | -| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` | -| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this helm chart. Set this to `false` in order to manage your own service account and related roles. 
| `true` | -| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all APM Server pods | `{}` | -| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all APM server pods | `{}` | -| `podSecurityContext` | Configurable [podSecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for APM Server pod execution environment | `runAsUser: 0`
`privileged: false` | -| `livenessProbe` | Parameters to pass to [liveness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `readinessProbe` | Parameters to pass to [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the `Deployment` | `requests.cpu: 100m`
`requests.memory: 100Mi`
`limits.cpu: 1000m`
`limits.memory: 200Mi` | -| `serviceAccount` | Custom [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) that APM Server will use during execution. By default will use the service account created by this chart. | `""` | -| `secretMounts` | Allows you easily mount a secret as a file inside the `Deployment`. Useful for mounting certificates and other secrets. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml) for an example | `[]` | -| `terminationGracePeriod` | Termination period (in seconds) to wait before killing APM Server pod process on pod shutdown | `30` | -| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` | -| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) | `{}` | -| `affinity` | Configurable [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) | `{}` | -| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` | -| `updateStrategy` | Allows you to change the default update [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) for the deployment. | `RollingUpdate` | -| `autoscaling.enabled` | Enable the pod [horizonatal auto scaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) | `false` | -| `ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the APM Server service. See [`values.yaml`](https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml) for an example | `enabled: false` | -| `service` | Configurable [service](https://kubernetes.io/docs/concepts/services-networking/service/) to expose the APM Server service. See [`values.yaml`](https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml) for an example | `type: ClusterIP`
`port: 8200`
`nodePort:`
`annotations: {}` | -| `lifecycle` | Configurable [livecycle hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) | `false` | -| `nameOverride` | Overrides the chart name for resources. If not set the name will default to `.Chart.Name` | `""` | -| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to `.Release.Name`-`.Values.nameOverride` or `.Chart.Name` | `""` | + +| Parameter | Description | Default | +|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------| +| `affinity` | Configurable [affinity][] | `{}` | +| `apmConfig` | Allows you to add any config files in `/usr/share/apm-server/config` such as `apm-server.yml` | see [values.yaml][] | +| `autoscaling` | Enable the [horizontal pod autoscaler][] | `enabled: false` | +| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` | +| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` | +| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` | +| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` | +| `extraVolumeMounts` | List of additional `volumeMounts` | `[]` | +| `extraVolumes` | List of additional `volumes` | `[]` | +| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to `.Release.Name` - `.Values.nameOverride` or `.Chart.Name` | `""` | +| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` | +| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` | +| `imageTag` | The APM Server Docker image tag | `7.6.2` | +| `image` | The APM Server Docker image | `docker.elastic.co/apm/apm-server` | +| `ingress` | Configurable [ingress][] to expose the APM Server service | see [values.yaml][] | +| `labels` | Configurable [labels][] applied to all APM server pods | `{}` | +| `lifecycle` | Configurable [lifecycle hooks][] | `false` | +| `livenessProbe` | Parameters to pass to liveness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] | +| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this Helm chart. Set this to `false` in order to manage your own service account and related roles | `true` | +| `nameOverride` | Overrides the chart name for resources. If not set the name will default to `.Chart.Name` | `""` | +| `nodeSelector` | Configurable [nodeSelector][] | `{}` | +| `podAnnotations` | Configurable [annotations][] applied to all APM Server pods | `{}` | +| `podSecurityContext` | Configurable [podSecurityContext][] for APM Server pod execution environment | see [values.yaml][] | +| `priorityClassName` | The name of the [PriorityClass][]. 
No default is supplied as the `PriorityClass` must be created first | `""` |
+| `readinessProbe` | Parameters to pass to readiness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] |
+| `replicas` | Number of APM servers to run | `1` |
+| `resources` | Allows you to set the [resources][] for the `Deployment` | see [values.yaml][] |
+| `secretMounts` | Allows you to easily mount a secret as a file inside the `Deployment`. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `serviceAccount` | Custom [serviceAccount][] that APM Server will use during execution. By default will use the `serviceAccount` created by this chart | `""` |
+| `service` | Configurable [service][] to expose the APM Server service. See [values.yaml][] for an example | see [values.yaml][] |
+| `terminationGracePeriod` | Termination period (in seconds) to wait before killing APM Server pod process on pod shutdown | `30` |
+| `tolerations` | Configurable [tolerations][] | `[]` |
+| `updateStrategy` | Allows you to change the default [updateStrategy][] for the deployment | see [values.yaml][] |
+

 ## Examples

-In [examples/](ahttps://github.com/elastic/helm-charts/tree/master/apm-server/examples) you will find some example configurations. These examples
-are used for the automated testing of this helm chart.
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

 ### Default

-* Deploy the [default Elasticsearch helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default)
-* Deploy APM Server with the default values
+* Deploy the [default Elasticsearch Helm chart][].
+
+* Deploy APM Server with the default values:
+
 ```
 cd examples/default
 make
 ```
+
+* You can now set up a port forward for Elasticsearch to observe APM indices:
+
 ```
 kubectl port-forward svc/elasticsearch-master 9200
 curl localhost:9200/_cat/indices
 ```

-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating
-logic. The dependencies for testing can be installed from the
-[`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-
-```
-pip install -r ../requirements.txt
-make pytest
-```
-
-You can also use `helm template` to look at the YAML being generated
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-
-```
-make test
-```
-
-## Integration Testing
-
-Integration tests are run using
-[goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a
-serverspec like tool written in golang. See [goss.yaml](https://github.com/elastic/helm-charts/tree/master/apm-server/examples/default/test/goss.yaml)
-for an example of what the tests look like.
-
-To run the goss tests against the default example:
-```
-cd examples/default
-make goss
-```
+## Contributing
+
+Please check [CONTRIBUTING.md][] before any contribution or for any questions
+about our development and testing process.
+ + +[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md +[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md +[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md +[affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity +[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ +[apm server docker image]: https://www.elastic.co/guide/en/apm/server/current/running-on-docker.html +[default elasticsearch helm chart]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default +[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config +[examples]: https://github.com/elastic/helm-charts/tree/master/apm-server/examples +[helm]: https://helm.sh +[horizontal pod autoscaler]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images +[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret +[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/ +[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ +[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/ +[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector +[podSecurityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ +[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ +[service]: https://kubernetes.io/docs/concepts/services-networking/service/ +[serviceAccount]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ +[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment +[values.yaml]: https://github.com/elastic/helm-charts/tree/master/apm-server/values.yaml diff --git a/elasticsearch/README.md b/elasticsearch/README.md index 3e0218315..7f356177c 100644 --- a/elasticsearch/README.md +++ b/elasticsearch/README.md @@ -1,170 +1,220 @@ # Elasticsearch Helm Chart + + + + +- [Requirements](#requirements) +- [Installing](#installing) + - [Using Helm repository](#using-helm-repository) + - [Using master branch](#using-master-branch) +- [Upgrading](#upgrading) +- [Compatibility](#compatibility) +- [Usage notes](#usage-notes) +- [Migration from helm/charts stable](#migration-from-helmcharts-stable) +- [Configuration](#configuration) + - [Deprecated](#deprecated) +- [Try it out](#try-it-out) + - [Default](#default) + - [Multi](#multi) + - [Security](#security) +- [FAQ](#faq) + - [How to install plugins?](#how-to-install-plugins) + - [How to use the keystore?](#how-to-use-the-keystore) + - [Basic example](#basic-example) + - [Multiple keys](#multiple-keys) + - [Custom paths and keys](#custom-paths-and-keys) + - [How to enable 
snapshotting?](#how-to-enable-snapshotting) +- [Local development environments](#local-development-environments) + - [Minikube](#minikube) + - [Docker for Mac - Kubernetes](#docker-for-mac---kubernetes) + - [KIND - Kubernetes](#kind---kubernetes) + - [MicroK8S](#microk8s) +- [Clustering and Node Discovery](#clustering-and-node-discovery) +- [Contributing](#contributing) + + + + + + +This functionality is in beta and is subject to change. The design and code is +less mature than official GA features and is being provided as-is with no +warranties. Beta features are not subject to the support SLA of official GA +features. + +This Helm chart is a lightweight way to configure and run our official +[Elasticsearch Docker image][]. -This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features. - -This helm chart is a lightweight way to configure and run our official [Elasticsearch docker image](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) - -## Notice - -[7.6.1](https://github.com/elastic/helm-charts/releases/tag/7.6.1) release is introducing a change for Elasticsearch users upgrading from a previous chart version. -Following our recommandations, the change tracked in [#458](https://github.com/elastic/helm-charts/pull/458) is setting CPU request to the same value as CPU limit. - -For users which don't overwrite default values for CPU requests, Elasticsearch pod will now request `1000m` CPU instead of `100m` CPU. This may impact the resources (nodes) required in your Kubernetes cluster to deploy Elasticsearch chart. - -If you wish to come back to former values, you just need to override CPU requests when deploying your Helm Chart. - -- Overriding CPU requests in commandline argument: -``` -helm install --name elasticsearch --set resources.requests.cpu=100m elastic/elasticsearch -``` - -- Overriding CPU requests in your custom `values.yaml` file: -``` -resources: - requests: - cpu: "100m" -``` ## Requirements -* [Helm](https://helm.sh/) >=2.8.0 and <3.0.0 (see parent [README](https://github.com/elastic/helm-charts/tree/master/README.md) for more details) +* [Helm][] >=2.8.0 and <3.0.0 (see [parent README][] for more details) * Kubernetes >=1.8 -* Minimum cluster requirements include the following to run this chart with default settings. All of these settings are configurable. +* Minimum cluster requirements include the following to run this chart with +default settings. All of these settings are configurable. * Three Kubernetes nodes to respect the default "hard" affinity settings * 1GB of RAM for the JVM heap -## Usage notes and getting started - -* This repo includes a number of [example](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples) configurations which can be used as a reference. They are also used in the automated testing of this chart -* Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine). -* The chart deploys a statefulset and by default will do an automated rolling update of your cluster. It does this by waiting for the cluster health to become green after each instance is updated. 
If you prefer to update manually you can set [`updateStrategy: OnDelete`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete) -* It is important to verify that the JVM heap size in `esJavaOpts` and to set the CPU/Memory `resources` to something suitable for your cluster -* To simplify chart and maintenance each set of node groups is deployed as a separate helm release. Take a look at the [multi](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/multi) example to get an idea for how this works. Without doing this it isn't possible to resize persistent volumes in a statefulset. By setting it up this way it makes it possible to add more nodes with a new storage size then drain the old ones. It also solves the problem of allowing the user to determine which node groups to update first when doing upgrades or changes. -* We have designed this chart to be very un-opinionated about how to configure Elasticsearch. It exposes ways to set environment variables and mount secrets inside of the container. Doing this makes it much easier for this chart to support multiple versions with minimal changes. - -## Migration from helm/charts stable - -If you currently have a cluster deployed with the [helm/charts stable](https://github.com/helm/charts/tree/master/stable/elasticsearch) chart you can follow the [migration guide](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/migration/README.md) ## Installing ### Using Helm repository -* Add the elastic helm charts repo - ``` - helm repo add elastic https://helm.elastic.co - ``` -* Install it - ``` - helm install --name elasticsearch elastic/elasticsearch - ``` +* Add the Elastic Helm charts repo: +`helm repo add elastic https://helm.elastic.co` + +* Install it: `helm install --name elasticsearch elastic/elasticsearch` ### Using master branch -* Clone the git repo - ``` - git clone git@github.com:elastic/helm-charts.git - ``` -* Install it - ``` - helm install --name elasticsearch ./helm-charts/elasticsearch - ``` +* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git` + +* Install it: `helm install --name elasticsearch ./helm-charts/elasticsearch` + + +## Upgrading + +Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before +upgrading to a new chart version. + ## Compatibility -This chart is tested with the latest supported versions. The currently tested versions are: +This chart is tested with the latest supported versions. The currently tested +versions are: | 6.x | 7.x | -| ----- | ----- | +|-------|-------| | 6.8.8 | 7.6.2 | -Examples of installing older major versions can be found in the [examples](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples) directory. +Examples of installing older major versions can be found in the [examples][] +directory. -While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.6.2` of Elasticsearch it would look like this: +While only the latest releases are tested, it is possible to easily install old +or new releases by overriding the `imageTag`. To install version `7.6.2` of +Elasticsearch it would look like this: ``` helm install --name elasticsearch elastic/elasticsearch --set imageTag=7.6.2 ``` + +## Usage notes + +* This repo includes a number of [examples][] configurations which can be used +as a reference. They are also used in the automated testing of this chart. 
+* Automated testing of this chart is currently only run against GKE (Google
+Kubernetes Engine).
+* The chart deploys a StatefulSet and by default will do an automated rolling
+update of your cluster. It does this by waiting for the cluster health to become
+green after each instance is updated. If you prefer to update manually you can
+set `OnDelete` [updateStrategy][].
+* It is important to verify that the JVM heap size in `esJavaOpts` is
+appropriate for your workload and to set the CPU/Memory `resources` to
+something suitable for your cluster.
+* To simplify the chart and its maintenance, each set of node groups is
+deployed as a separate Helm release. Take a look at the [multi][] example to get
+an idea of how this works. Without doing this it isn't possible to resize
+persistent volumes in a StatefulSet. By setting it up this way it is possible to
+add more nodes with a new storage size and then drain the old ones. It also lets
+the user determine which node groups to update first when doing upgrades or
+changes.
+* We have designed this chart to be very un-opinionated about how to configure
+Elasticsearch. It exposes ways to set environment variables and mount secrets
+inside of the container. Doing this makes it much easier for this chart to
+support multiple versions with minimal changes.
+
+
+## Migration from helm/charts stable
+
+If you currently have a cluster deployed with the [helm/charts stable][] chart
+you can follow the [migration guide][].
+
+
 ## Configuration

| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------ |
| `clusterName` | This will be used as the Elasticsearch [cluster.name](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html) and should be unique per cluster in the namespace | `elasticsearch` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X`, `nameOverride-nodeGroup-X` if a nameOverride is specified, and `fullnameOverride-X` if a fullnameOverride is specified | `master` |
| `masterService` | Optional. The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#clustering-and-node-discovery) for more information | `` |
| `roles` | A hash map with the [specific roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) for the node group | `master: true`
`data: true`
`ingest: true` | -| `replicas` | Kubernetes replica count for the statefulset (i.e. how many pods) | `3` | -| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/discovery-settings.html#minimum_master_nodes). Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7. | `2` | -| `esMajorVersion` | Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` | -| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/elasticsearch/values.yaml) for an example of the formatting. | `{}` | -| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` | -| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables) which will be appended to the `envFrom:` definition for the container | `[]` | -| `extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function | `[]` | -| `extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function | `[]` | -| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `[]` | -| `extraInitContainers` | Templatable string of additional init containers to be passed to the `tpl` function | `[]` | -| `secretMounts` | Allows you easily mount a secret as a file inside the statefulset. Useful for mounting certificates and other secrets. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/elasticsearch/values.yaml) for an example | `[]` | -| `image` | The Elasticsearch docker image | `docker.elastic.co/elasticsearch/elasticsearch` | -| `imageTag` | The Elasticsearch docker image tag | `7.6.2` | -| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` | -| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Elasticsearch pods | `{}` | -| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Elasticsearch pods | `{}` | -| `esJavaOpts` | [Java options](https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html) for Elasticsearch. This is where you should configure the [jvm heap size](https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html) | `-Xmx1g -Xms1g` | -| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 1000m`
`requests.memory: 2Gi`
`limits.cpu: 1000m`
`limits.memory: 2Gi` | -| `initResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the initContainer in the statefulset | {} | -| `sidecarResources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the sidecar containers in the statefulset | {} | -| `networkHost` | Value for the [network.host Elasticsearch setting](https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html) | `0.0.0.0` | -| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for statefulsets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | `accessModes: [ "ReadWriteOnce" ]`
`resources.requests.storage: 30Gi` | -| `persistence.annotations` | Additional persistence annotations for the `volumeClaimTemplate` | `{}` | -| `persistence.enabled` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) which don't require persistent data. | `true` | -| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` | -| `antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` | -| `antiAffinity` | Setting this to hard enforces the [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). If it is set to soft it will be done "best effort". Other values will be ignored. | `hard` | -| `nodeAffinity` | Value for the [node affinity settings](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature) | `{}` | -| `podManagementPolicy` | By default Kubernetes [deploys statefulsets serially](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This deploys them in parallel so that they can discover eachother | `Parallel` | -| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` | -| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings) in `extraEnvs` | `9200` | -| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings) in `extraEnvs` | `9300` | -| `service.labels` | Labels to be added to non-headless service | `{}` | -| `service.labelsHeadless` | Labels to be added to headless service | `{}` | -| `service.loadBalancerIP` | Some cloud providers allow you to specify the loadBalancerIP. If the loadBalancerIP field is not specified, the IP is dynamically assigned. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadbalancerIP field is ignored. [LoadBalancer options](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) | `""` | -| `service.type` | Type of elasticsearch service. [Service Types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) | `ClusterIP` | -| `service.nodePort` | Custom [nodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) port that can be set if you are using `service.type: nodePort`. | `` | -| `service.annotations` | Annotations that Kubernetes will use for the service. 
This will configure load balancer if `service.type` is `LoadBalancer` [Annotations](https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws) | `{}` | -| `service.httpPortName` | The name of the http port within the service | `http` | -| `service.transportPortName` | The name of the transport port within the service | `transport` | -| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access | `[]` | -| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` | -| `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` | -| `fsGroup (DEPRECATED)` | The Group ID (GID) for [securityContext.fsGroup](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) so that the Elasticsearch user can read from the persistent volume | `` | -| `podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000`
`runAsUser: 1000` | -| `securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`
`runAsNonRoot: true`
`runAsUser: 1000` | -| `terminationGracePeriod` | The [terminationGracePeriod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) in seconds used when trying to stop the pod | `120` | -| `sysctlInitContainer.enabled` | Allows you to disable the sysctlInitContainer if you are setting vm.max_map_count with another method | `true` | -| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count) needed for Elasticsearch | `262144` | -| `readinessProbe` | Configuration fields for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params) that will be used by readinessProbe command | `wait_for_status=green&timeout=1s` | -| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` | -| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Elasticsearch cluster | `{}` | -| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` | -| `ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the Elasticsearch service. See [`values.yaml`](https://github.com/elastic/helm-charts/tree/master/elasticsearch/values.yaml) for an example | `enabled: false` | -| `schedulerName` | Name of the [alternate scheduler](https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods) | `nil` | -| `masterTerminationFix` | A workaround needed for Elasticsearch < 7.2 to prevent master status being lost during restarts [#63](https://github.com/elastic/helm-charts/issues/63) | `false` | -| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/elasticsearch/values.yaml) for an example of the formatting. | `{}` | -| `keystore` | Allows you map Kubernetes secrets into the keystore. See the [config example](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/config/values.yaml) and [how to use the keystore](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#how-to-use-the-keystore) | `[]` | -| `rbac` | Configuration for creating a role, role binding and service account as part of this helm chart with `create: true`. Also can be used to reference an external service account with `serviceAccountName: "externalServiceAccountName"`. | `create: false`
`serviceAccountName: ""` | -| `podSecurityPolicy` | Configuration for create a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | `create: false`
`name: ""` | -| `nameOverride` | Overrides the clusterName when used in the naming of resources | `""` | -| `fullnameOverride` | Overrides the clusterName and nodeGroup when used in the naming of resources. This should only be used when using a single nodeGroup, otherwise you will have name conflicts | `""` | +| Parameter | Description | Default | +|------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------| +| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` | +| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` | +| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` | +| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` | +| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` | +| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` | +| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you should configure the [jvm heap size][] | `-Xmx1g -Xms1g` | +| `esMajorVersion` | Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` | +| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` | +| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` | +| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` | +| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` | +| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` | +| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` | +| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` | +| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` | +| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` | +| `imageTag` | The Elasticsearch Docker image tag | `7.6.2` | +| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` | +| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. 
See [values.yaml][] for an example | see [values.yaml][] |
+| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
+| `keystore` | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
+| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
+| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml][] for an example of the formatting | `{}` |
+| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
+| `masterTerminationFix` | A workaround needed for Elasticsearch < 7.2 to prevent master status being lost during restarts [#63][] | `false` |
+| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
+| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `2` |
+| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
+| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
+| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
+| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X`, `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
+| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
+| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
+| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
+| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
+| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
+| `podSecurityPolicy` | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
+| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
+| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
+| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Also can be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"` | see [values.yaml][] |
+| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
+| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `3` |
+| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
+| `roles` | A hash map with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
+| `schedulerName` | Name of the [alternate scheduler][] | `""` |
+| `secretMounts` | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
+| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
+| `service.httpPortName` | The name of the http port within the service | `http` |
+| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
+| `service.labels` | Labels to be added to non-headless service | `{}` |
+| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
+| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access | `[]` |
+| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: nodePort` | `""` |
+| `service.transportPortName` | The name of the transport port within the service | `transport` |
+| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
+| `sidecarResources` | Allows you to set the [resources][] for the sidecar containers in the StatefulSet | `{}` |
+| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
+| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
+| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
+| `tolerations` | Configurable [tolerations][] | `[]` |
+| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
+| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
+| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
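
To illustrate how several of these parameters combine, here is a minimal sketch
of a custom values file (the sizing numbers are illustrative assumptions, not
recommendations):

```
# values-custom.yaml -- example overrides only, tune for your own cluster
replicas: 3
esJavaOpts: "-Xmx2g -Xms2g"
resources:
  requests:
    cpu: "2000m"
    memory: "4Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "standard"
  resources:
    requests:
      storage: 100Gi
```

It could then be installed with
`helm install --name elasticsearch elastic/elasticsearch -f values-custom.yaml`
(Helm 2 syntax, matching the rest of this document).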
+
+### Deprecated
+
+| Parameter | Description | Default |
+|-----------|---------------------------------------------------------------------------------------------------------------|---------|
+| `fsGroup` | The Group ID (GID) for [securityContext][] so that the Elasticsearch user can read from the persistent volume | `""` |
+

## Try it out

-In [examples/](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples) you will find some example configurations. These examples are used for the automated testing of this helm chart
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

### Default

-To deploy a cluster with all default values and run the integration tests
+To deploy a cluster with all default values and run the integration tests:

```
cd examples/default
make
```

@@ -173,7 +223,7 @@ make

### Multi

-A cluster with dedicated node types
+A cluster with dedicated node types:

```
cd examples/multi
make
```

### Security

-A cluster with node to node security and https enabled. This example uses autogenerated certificates and password, for a production deployment you want to generate SSL certificates following the [official docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#node-certificates).
+A cluster with node to node security and https enabled. This example uses
+autogenerated certificates and passwords; for a production deployment you will
+want to generate SSL certificates following the [official docs][node-certificates].

-* Generate the certificates and install Elasticsearch
-  ```
-  cd examples/security
-  make
+Generate the certificates and install Elasticsearch:

-  # Run a curl command to interact with the cluster
-  kubectl exec -ti security-master-0 -- sh -c 'curl -u $ELASTIC_USERNAME:$ELASTIC_PASSWORD -k https://localhost:9200/_cluster/health?pretty'
-  ```

+```
+cd examples/security
+make
+
+# Run a curl command to interact with the cluster
+kubectl exec -ti security-master-0 -- sh -c 'curl -u $ELASTIC_USERNAME:$ELASTIC_PASSWORD -k https://localhost:9200/_cluster/health?pretty'
+```

-### FAQ

-#### How to install plugins?
+## FAQ

-The [recommended](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image) way to install plugins into our docker images is to create a custom docker image.
+### How to install plugins?
+
+The recommended way to install plugins into our Docker images is to create a
+[custom Docker image][].

The Dockerfile would look something like:

@@ -212,15 +267,19 @@
And then updating the `image` in values to point to your custom image.

There are a couple reasons we recommend this.

-1. Tying the availability of Elasticsearch to the download service to install plugins is not a great idea or something that we recommend. Especially in Kubernetes where it is normal and expected for a container to be moved to another host at random times.
-2. Mutating the state of a running docker image (by installing plugins) goes against best practices of containers and immutable infrastructure.
-
-#### How to use the keystore?
+1. Tying the availability of Elasticsearch to the download service to install
+plugins is not a great idea or something that we recommend. Especially in
+Kubernetes where it is normal and expected for a container to be moved to
+another host at random times.
+2. Mutating the state of a running Docker image (by installing plugins) goes
+against best practices of containers and immutable infrastructure.
+### How to use the keystore?

-##### Basic example
+#### Basic example

-Create the secret, the key name needs to be the keystore key path. In this example we will create a secret from a file and from a literal string.
+Create the secret; the key name needs to be the keystore key path. In this
+example we will create a secret from a file and from a literal string.
```
kubectl create secret generic encryption_key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
@@ -228,15 +287,17 @@ kubectl create secret generic slack_hook --from-literal=xpack.notification.slack
```

To add these secrets to the keystore:
+
```
keystore:
  - secretName: encryption_key
  - secretName: slack_hook
```

-##### Multiple keys
+#### Multiple keys

-All keys in the secret will be added to the keystore. To create the previous example in one secret you could also do:
+All keys in the secret will be added to the keystore. To create the previous
+example in one secret you could also do:

```
kubectl create secret generic keystore_secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```

@@ -247,15 +308,21 @@ keystore:
  - secretName: keystore_secrets
```

-##### Custom paths and keys
+#### Custom paths and keys

-If you are using these secrets for other applications (besides the Elasticsearch keystore) then it is also possible to specify the keystore path and which keys you want to add. Everything specified under each `keystore` item will be passed through to the `volumeMounts` section for [mounting the secret](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets). In this example we will only add the `slack_hook` key from a secret that also has other keys. Our secret looks like this:
+If you are using these secrets for other applications (besides the Elasticsearch
+keystore) then it is also possible to specify the keystore path and which keys
+you want to add. Everything specified under each `keystore` item will be passed
+through to the `volumeMounts` section for mounting the [secret][]. In this
+example we will only add the `slack_hook` key from a secret that also has other
+keys. Our secret looks like this:

```
kubectl create secret generic slack_secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```

-We only want to add the `slack_hook` key to the keystore at path `xpack.notification.slack.account.monitoring.secure_url`.
+We only want to add the `slack_hook` key to the keystore at path
+`xpack.notification.slack.account.monitoring.secure_url`:

```
keystore:
@@ -265,25 +332,37 @@ keystore:
    path: xpack.notification.slack.account.monitoring.secure_url
```

-You can also take a look at the [config example](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/config/) which is used as part of the automated testing pipeline.
+You can also take a look at the [config example][] which is used as part of the
+automated testing pipeline.
+
+### How to enable snapshotting?

-#### How to enable snapshotting?
+1. Install your [snapshot plugin][] into a custom Docker image following the
+[how to install plugins guide][].
+2. Add any required secrets or credentials into an Elasticsearch keystore
+following the [how to use the keystore][] guide.
+3. Configure the [snapshot repository][] as you normally would (a sketch follows
+this list).
+4. To automate snapshots you can use a tool like [curator][]. In the future
+there are plans to have Elasticsearch manage automated snapshots with
+[Snapshot Lifecycle Management][].
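
For step 3, a minimal sketch of registering a repository from within one of the
pods; the repository name `my_backups`, the bucket name and the use of the S3
plugin are illustrative assumptions, not chart defaults:

```
curl -X PUT "localhost:9200/_snapshot/my_backups" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "my-es-snapshots"}}'
```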
-1. Install your [snapshot plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository.html) into a custom docker image following the [how to install plugins guide](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#how-to-install-plugins)
-2. Add any required secrets or credentials into an Elasticsearch keystore following the [how to use the keystore guide](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#how-to-use-the-keystore)
-3. Configure the [snapshot repository](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) as you normally would.
-4. To automate snapshots you can use a tool like [curator](https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html). In the future there are plans to have Elasticsearch manage automated snapshots with [Snapshot Lifecycle Management](https://github.com/elastic/elasticsearch/issues/38461).

-### Local development environments
+## Local development environments

-This chart is designed to run on production scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run them against local Kubernetes environments such as minikube. Below are some examples of how to get this working locally.
+This chart is designed to run on production scale Kubernetes clusters with
+multiple nodes, lots of memory and persistent storage. For that reason it can be
+a bit tricky to run it against local Kubernetes environments such as minikube.
+Below are some examples of how to get this working locally.

-#### Minikube
+### Minikube

-This chart also works successfully on [minikube](https://kubernetes.io/docs/setup/minikube/) in addition to typical hosted Kubernetes environments.
-An example `values.yaml` file for minikube is provided under `examples/`.
+This chart also works successfully on [minikube][] in addition to typical hosted
+Kubernetes environments. An example `values.yaml` file for minikube is provided
+under `examples/`.

-In order to properly support the required persistent volume claims for the Elasticsearch `StatefulSet`, the `default-storageclass` and `storage-provisioner` minikube addons must be enabled.
+In order to properly support the required persistent volume claims for the
+Elasticsearch `StatefulSet`, the `default-storageclass` and
+`storage-provisioner` minikube addons must be enabled:

```
minikube addons enable default-storageclass
@@ -292,29 +371,32 @@ cd examples/minikube
make
```

-Note that if `helm` or `kubectl` timeouts occur, you may consider creating a minikube VM with more CPU cores or memory allocated.
+Note that if `helm` or `kubectl` timeouts occur, you may consider creating a
+minikube VM with more CPU cores or memory allocated.
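
If you do hit those timeouts, one option is to create the minikube VM with more
resources up front; the sizing below is an illustrative assumption, not a tested
recommendation:

```
minikube start --cpus 4 --memory 8192
```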
-#### Docker for Mac - Kubernetes
+### Docker for Mac - Kubernetes

-It is also possible to run this chart with the built in Kubernetes cluster that comes with [docker-for-mac](https://docs.docker.com/docker-for-mac/kubernetes/).
+It is also possible to run this chart with the built-in Kubernetes cluster that
+comes with [docker-for-mac][]:

```
cd examples/docker-for-mac
make
```

-#### KIND - Kubernetes
+### KIND - Kubernetes

-It is also possible to run this chart using a Kubernetes [KIND (Kubernetes in Docker)](https://github.com/kubernetes-sigs/kind) cluster:
+It is also possible to run this chart using a Kubernetes [KIND][] (Kubernetes in
+Docker) cluster:

```
cd examples/kubernetes-kind
make
```

-#### MicroK8S
+### MicroK8S

-It is also possible to run this chart using [MicroK8S](https://microk8s.io):
+It is also possible to run this chart using [MicroK8S][]:

```
microk8s.enable dns
@@ -324,46 +406,97 @@ cd examples/microk8s
make
```

-## Clustering and Node Discovery
-
-This chart facilitates Elasticsearch node discovery and services by creating two `Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup` and another named `$clusterName-$nodeGroup-headless`.
-Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
-
-If your group of master nodes has the default `nodeGroup: master` then you can just add new groups of nodes with a different `nodeGroup` and they will automatically discover the correct master. If your master nodes have a different `nodeGroup` name then you will need to set `masterService` to `$clusterName-$masterNodeGroup`.
-
-The chart value for `masterService` is used to populate `discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to contact master nodes and form a cluster.
-Therefore, to add a group of nodes to an existing cluster, setting `masterService` to the desired `Service` name of the related cluster is sufficient.
-
-For an example of deploying both a group master nodes and data nodes using multiple releases of this chart, see the accompanying values files in `examples/multi`.
-
-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-
-```
-pip install -r ../requirements.txt
-make pytest
-```
-
-You can also use `helm template` to look at the YAML being generated
-
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-```
-make test
-```
-
-## Integration Testing
-
-Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec like tool written in golang. See [goss.yaml](https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/default/test/goss.yaml) for an example of what the tests look like.
-
-To run the goss tests against the default example:
-
-```
-cd examples/default
-make goss
-```
+## Clustering and Node Discovery

+This chart facilitates Elasticsearch node discovery and services by creating two
+`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
+and another named `$clusterName-$nodeGroup-headless`.
+Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
+pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
+
+If your group of master nodes has the default `nodeGroup: master` then you can
+just add new groups of nodes with a different `nodeGroup` and they will
+automatically discover the correct master. If your master nodes have a different
+`nodeGroup` name then you will need to set `masterService` to
+`$clusterName-$masterNodeGroup`.
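
As an illustration, a values file for a data-only node group joining a cluster
whose masters were deployed with a non-default `nodeGroup: main` might look like
this (a sketch modelled on the [multi][] example; the names are assumptions):

```
clusterName: "elasticsearch"
nodeGroup: "data"
masterService: "elasticsearch-main"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```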
+
+The chart value for `masterService` is used to populate
+`discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to
+contact master nodes and form a cluster.
+Therefore, to add a group of nodes to an existing cluster, setting
+`masterService` to the desired `Service` name of the related cluster is
+sufficient.
+
+For an example of deploying both a group of master nodes and data nodes using
+multiple releases of this chart, see the accompanying values files in
+`examples/multi`.
+
+
+## Contributing
+
+Please check [CONTRIBUTING.md][] before any contribution or for any questions
+about our development and testing process.
+
+
+[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
+[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
+[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
+[#63]: https://github.com/elastic/helm-charts/issues/63
+[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
+[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.name.html
+[clustering and node discovery]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#clustering-and-node-discovery
+[config example]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/config/values.yaml
+[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html
+[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_c_customized_image
+[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html#minimum_master_nodes
+[docker-for-mac]: https://docs.docker.com/docker-for-mac/kubernetes/
+[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params
+[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
+[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
+[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
+[examples]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/
+[helm]: https://helm.sh
+[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
+[how to install plugins guide]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#how-to-install-plugins
+[how to use the keystore]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#how-to-use-the-keystore
+[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings
+[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
+[imagePullSecrets]:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret +[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/ +[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html +[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html +[kind]: https://github.com/kubernetes-sigs/kind +[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ +[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws +[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer +[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget +[migration guide]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/migration/README.md +[minikube]: https://kubernetes.io/docs/setup/minikube/ +[microk8s]: https://microk8s.io +[multi]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/multi/ +[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/current/network.host.html +[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature +[node-certificates]: https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#node-certificates +[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport +[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector +[parent readme]: https://github.com/elastic/helm-charts/tree/master/README.md +[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ +[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ +[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html +[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets +[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types +[snapshot lifecycle management]: https://github.com/elastic/elasticsearch/issues/38461 +[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/current/repository.html +[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html +[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html#vm-max-map-count +[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods +[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings +[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ +[values.yaml]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/values.yaml +[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage diff --git 
a/elasticsearch/examples/kubernetes-kind/README.md b/elasticsearch/examples/kubernetes-kind/README.md
index 0f5fced5d..b1877f233 100644
--- a/elasticsearch/examples/kubernetes-kind/README.md
+++ b/elasticsearch/examples/kubernetes-kind/README.md
@@ -9,13 +9,16 @@ for production.

## Current issue

-There is currently an [kind issue][] with mount points created from PVCs not writeable by non-root users.
-[kubernetes-sigs/kind#1157][] should fix it in a future release.
+There is currently a [kind issue][] with mount points created from PVCs not
+writable by non-root users. [kubernetes-sigs/kind#1157][] should fix it in a
+future release.

-Meanwhile, the workaround is to install manually [Rancher Local Path Provisioner][] and use `local-path` storage class for Elasticsearch volumes (see [Makefile][] instructions).
+Meanwhile, the workaround is to manually install
+[Rancher Local Path Provisioner][] and use the `local-path` storage class for
+Elasticsearch volumes (see [Makefile][] instructions).

[Kind]: https://kind.sigs.k8s.io/
[Kind issue]: https://github.com/kubernetes-sigs/kind/issues/830
[Kubernetes-sigs/kind#1157]: https://github.com/kubernetes-sigs/kind/pull/1157
[Rancher Local Path Provisioner]: https://github.com/rancher/local-path-provisioner
-[Makefile]: ./Makefile#L5
\ No newline at end of file
+[Makefile]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/kubernetes-kind/Makefile#L5
\ No newline at end of file
diff --git a/elasticsearch/examples/migration/README.md b/elasticsearch/examples/migration/README.md
index e5f4b1a79..1d2015793 100644
--- a/elasticsearch/examples/migration/README.md
+++ b/elasticsearch/examples/migration/README.md
@@ -1,86 +1,167 @@
# Migration Guide from helm/charts

-There are two viable options for migrating from the community Elasticsearch helm chart from the [helm/charts](https://github.com/helm/charts/tree/master/stable/elasticsearch) repo.
+There are two viable options for migrating from the community Elasticsearch Helm
+chart in the [helm/charts][] repo.

1. Restoring from Snapshot to a fresh cluster
2. Live migration by joining a new cluster to the existing cluster.

## Restoring from Snapshot

-This is the recommended and preferred option. The downside is that it will involve a period of write downtime during the migration. If you have a way to temporarily stop writes to your cluster then this is the way to go. This is also a lot simpler as it just involves launching a fresh cluster and restoring a snapshot following the [restoring to a different cluster guide](https://www.elastic.co/guide/en/elasticsearch/reference/6.6/modules-snapshots.html#_restoring_to_a_different_cluster).
+This is the recommended and preferred option. The downside is that it will
+involve a period of write downtime during the migration. If you have a way to
+temporarily stop writes to your cluster then this is the way to go. This is also
+a lot simpler as it just involves launching a fresh cluster and restoring a
+snapshot following the [restoring to a different cluster guide][].
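
A minimal sketch of that restore flow, assuming the snapshot repository is
already registered on the fresh cluster as `my_repo` and contains a snapshot
named `snapshot_1` (both names are illustrative):

```
# Run from a pod that can reach the new cluster
curl -X POST "localhost:9200/_snapshot/my_repo/snapshot_1/_restore"

# Watch recovery progress
curl "localhost:9200/_cat/recovery?v"
```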
## Live migration

-If restoring from a snapshot is not possible due to the write downtime then a live migration is also possible. It is very important to first test this in a testing environment to make sure you are comfortable with the process and fully understand what is happening.
+If restoring from a snapshot is not possible due to the write downtime then a
+live migration is also possible. It is very important to first test this in a
+testing environment to make sure you are comfortable with the process and fully
+understand what is happening.

-This process will involve joining a new set of master, data and client nodes to an existing cluster that has been deployed using the [helm/charts](https://github.com/helm/charts/tree/master/stable/elasticsearch) community chart. Nodes will then be replaced one by one in a controlled fashion to decommission the old cluster.
+This process will involve joining a new set of master, data and client nodes to
+an existing cluster that has been deployed using the [helm/charts][] community
+chart. Nodes will then be replaced one by one in a controlled fashion to
+decommission the old cluster.

-This example will be using the default values for the existing helm/charts release and for the elastic helm-charts release. If you have changed any of the default values then you will need to first make sure that your values are configured in a compatible way before starting the migration.
+This example will be using the default values for the existing helm/charts
+release and for the Elastic helm-charts release. If you have changed any of the
+default values then you will need to first make sure that your values are
+configured in a compatible way before starting the migration.

-The process will involve a re-sync and a rolling restart of all of your data nodes. Therefore it is important to disable shard allocation and perform a synced flush like you normally would during any other rolling upgrade. See the [rolling upgrades guide](https://www.elastic.co/guide/en/elasticsearch/reference/6.6/rolling-upgrades.html) for more information.
+The process will involve a re-sync and a rolling restart of all of your data
+nodes. Therefore it is important to disable shard allocation and perform a synced
+flush like you normally would during any other rolling upgrade (a sketch of
+those commands appears below). See the [rolling upgrades guide][] for more
+information.
+
+* The default image for this chart is
+`docker.elastic.co/elasticsearch/elasticsearch` which contains the default
+distribution of Elasticsearch with a [basic license][]. Make sure to update the
+`image` and `imageTag` values to the correct Docker image and Elasticsearch
+version that you currently have deployed.
+
+* Convert your current helm/charts configuration into something that is
+compatible with this chart.
+
+* Take a fresh snapshot of your cluster. If something goes wrong you want to be
+able to restore your data no matter what.
+
+* Check that your cluster's health is green. If not, abort and make sure your
+cluster is healthy before continuing:

-* The default image for this chart is `docker.elastic.co/elasticsearch/elasticsearch` which contains the default distribution of Elasticsearch with a [basic license](https://www.elastic.co/subscriptions). Make sure to update the `image` and `imageTag` values to the correct Docker image and Elasticsearch version that you currently have deployed.
-* Convert your current helm/charts configuration into something that is compatible with this chart.
-* Take a fresh snapshot of your cluster. If something goes wrong you want to be able to restore your data no matter what.
-* Check that your clusters health is green. If not abort and make sure your cluster is healthy before continuing.

```
curl localhost:9200/_cluster/health
```
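
For reference, the shard allocation toggle and synced flush mentioned above look
something like this on Elasticsearch 6.x (a sketch; apply them at the points
where a normal rolling upgrade would call for them, and re-enable allocation by
setting the value back to `null` afterwards):

```
# Disable shard allocation
curl -X PUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '
{"transient": {"cluster.routing.allocation.enable": "none"}}'

# Perform a synced flush
curl -X POST localhost:9200/_flush/synced
```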
-* Deploy new data nodes which will join the existing cluster. Take a look at the configuration in [data.yml](./data.yml)
+
+* Deploy new data nodes which will join the existing cluster. Take a look at the
+configuration in [data.yml][]:
+
```
make data
```

-* Check that the new nodes have joined the cluster (run this and any other curl commands from within one of your pods).
+
+* Check that the new nodes have joined the cluster (run this and any other curl
+commands from within one of your pods):
+
```
curl localhost:9200/_cat/nodes
```

-* Check that your cluster is still green. If so we can now start to scale down the existing data nodes. Assuming you have the default amount of data nodes (2) we now want to scale it down to 1.
+
+* Check that your cluster is still green. If so we can now start to scale down
+the existing data nodes. Assuming you have the default number of data nodes (2)
+we now want to scale it down to 1:
+
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=1
```

-* Wait for your cluster to become green again
+
+* Wait for your cluster to become green again:
+
```
watch 'curl -s localhost:9200/_cluster/health'
```

-* Once the cluster is green we can scale down again.
+
+* Once the cluster is green we can scale down again:
+
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=0
```

* Wait for the cluster to be green again.
-* OK. We now have all data nodes running in the new cluster. Time to replace the masters by firstly scaling down the masters from 3 to 2. Between each step make sure to wait for the cluster to become green again, and check with `curl localhost:9200/_cat/nodes` that you see the correct amount of master nodes. During this process we will always make sure to keep at least 2 master nodes as to not lose quorum.
+* OK. We now have all data nodes running in the new cluster. Time to replace the
+masters by first scaling down the masters from 3 to 2. Between each step make
+sure to wait for the cluster to become green again, and check with
+`curl localhost:9200/_cat/nodes` that you see the correct number of master
+nodes. During this process we will always make sure to keep at least 2 master
+nodes so as not to lose quorum:
+
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=2
```

-* Now deploy a single new master so that we have 3 masters again. See [master.yml](./master.yml) for the configuration.
+
+* Now deploy a single new master so that we have 3 masters again. See
+[master.yml][] for the configuration:
+
```
make master
```

-* Scale down old masters to 1
+
+* Scale down old masters to 1:
+
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=1
```

-* Edit the masters in [masters.yml](./masters.yml) to 2 and redeploy
+
+* Edit the masters in [masters.yml][] to 2 and redeploy:
+
```
make master
```

-* Scale down the old masters to 0
+
+* Scale down the old masters to 0:
+
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=0
```

-* Edit the [masters.yml](./masters.yml) to have 3 replicas and remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs` then redeploy the masters. This will make sure all 3 masters are running in the new cluster and are pointing at each other for discovery.
+
+* Edit the [masters.yml][] to have 3 replicas and remove the
+`discovery.zen.ping.unicast.hosts` entry from `extraEnvs`, then redeploy the
+masters.
+This will make sure all 3 masters are running in the new cluster and
+are pointing at each other for discovery:
+
```
make master
```

-* Remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs` then redeploy the data nodes to make sure they are pointing at the new masters.
+
+* Remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs`, then
+redeploy the data nodes to make sure they are pointing at the new masters:
+
```
make data
```

-* Deploy the client nodes
+
+* Deploy the client nodes:
+
```
make client
```

-* Update any processes that are talking to the existing client nodes and point them to the new client nodes. Once this is done you can scale down the old client nodes
+
+* Update any processes that are talking to the existing client nodes and point
+them to the new client nodes. Once this is done you can scale down the old
+client nodes:
+
```
kubectl scale deployment my-release-elasticsearch-client --replicas=0
```

-* The migration should now be complete. After verifying that everything is working correctly you can cleanup leftover resources from your old cluster.
+
+* The migration should now be complete. After verifying that everything is
+working correctly you can clean up leftover resources from your old cluster.
+
+[basic license]: https://www.elastic.co/subscriptions
+[data.yml]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/data.yml
+[helm/charts]: https://github.com/helm/charts/tree/master/stable/elasticsearch
+[master.yml]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/master.yml
+[masters.yml]: https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/masters.yml
+[restoring to a different cluster guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/modules-snapshots.html#_restoring_to_a_different_cluster
+[rolling upgrades guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.6/rolling-upgrades.html
diff --git a/elasticsearch/values.yaml b/elasticsearch/values.yaml
index 888d14f51..93ea93ded 100755
--- a/elasticsearch/values.yaml
+++ b/elasticsearch/values.yaml
@@ -184,10 +184,6 @@ podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

-# The following value is deprecated,
-# please use the above podSecurityContext.fsGroup instead
-fsGroup: ""
-
securityContext:
  capabilities:
    drop:
@@ -265,3 +261,7 @@ sysctlInitContainer:
  enabled: true

keystore: []
+
+# Deprecated
+# please use the above podSecurityContext.fsGroup instead
+fsGroup: ""
diff --git a/filebeat/README.md b/filebeat/README.md
index 388c09b68..21dab903b 100644
--- a/filebeat/README.md
+++ b/filebeat/README.md
@@ -1,138 +1,191 @@
# Filebeat Helm Chart
+
+
-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
-This helm chart is a lightweight way to configure and run our official [Filebeat docker image](https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html).
+- [Requirements](#requirements)
+- [Installing](#installing)
+  - [Using Helm repository](#using-helm-repository)
+  - [Using master branch](#using-master-branch)
+- [Upgrading](#upgrading)
+- [Compatibility](#compatibility)
+- [Usage notes](#usage-notes)
+- [Configuration](#configuration)
+- [Examples](#examples)
+  - [Default](#default)
+- [Contributing](#contributing)
+
+
+
+
+
+This functionality is in beta and is subject to change.
The design and code is +less mature than official GA features and is being provided as-is with no +warranties. Beta features are not subject to the support SLA of official GA +features. + +This Helm chart is a lightweight way to configure and run our official +[Filebeat Docker image][]. + ## Requirements -* [Helm](https://helm.sh/) >=2.8.0 and <3.0.0 (see parent [README](https://github.com/elastic/helm-charts/tree/master/README.md) for more details) +* [Helm][] >=2.8.0 and <3.0.0 (see [parent README][] for more details) * Kubernetes >=1.9 -## Usage notes and getting started -* The default Filebeat configuration file for this chart is configured to use an Elasticsearch endpoint. Without any additional changes, Filebeat will send documents to the service URL that the Elasticsearch helm chart sets up by default. You may either set the `ELASTICSEARCH_HOSTS` environment variable in `extraEnvs` to override this endpoint or modify the default `filebeatConfig` to change this behavior. -* The default Filebeat configuration file is also configured to capture container logs and enrich them with Kubernetes metadata by default. This will capture all container logs in the cluster. -* This chart disables the [HostNetwork](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces) setting by default for compatibility reasons with the majority of kubernetes providers and scenarios. Some kubernetes providers may not allow enabling `hostNetwork` and deploying multiple Filebeat pods on the same node isn't possible with `hostNetwork`. However Filebeat does recommend activating it. If your kubernetes provider is compatible with `hostNetwork` and you don't need to run multiple Filebeat daemonsets, you can activate it by setting `hostNetworking: true` in [values.yaml](https://github.com/elastic/helm-charts/tree/master/filebeat/values.yaml). ## Installing ### Using Helm repository -* Add the elastic helm charts repo - ``` - helm repo add elastic https://helm.elastic.co - ``` -* Install it - ``` - helm install --name filebeat elastic/filebeat - ``` +* Add the Elastic Helm charts repo: +`helm repo add elastic https://helm.elastic.co` + +* Install it: `helm install --name filebeat elastic/filebeat` ### Using master branch -* Clone the git repo - ``` - git clone git@github.com:elastic/helm-charts.git - ``` -* Install it - ``` - helm install --name filebeat ./helm-charts/filebeat - ``` +* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git` + +* Install it: `helm install --name filebeat ./helm-charts/filebeat` + + +## Upgrading + +Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before +upgrading to a new chart version. + ## Compatibility -This chart is tested with the latest supported versions. The currently tested versions are: +This chart is tested with the latest supported versions. The currently tested +versions are: | 6.x | 7.x | -| ----- | ----- | +|-------|-------| | 6.8.8 | 7.6.2 | -Examples of installing older major versions can be found in the [examples](https://github.com/elastic/helm-charts/tree/master/filebeat/examples) directory. +Examples of installing older major versions can be found in the [examples][] +directory. -While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.6.2` of Filebeat it would look like this: +While only the latest releases are tested, it is possible to easily install old +or new releases by overriding the `imageTag`. 
To install version `7.6.2` of
+Filebeat it would look like this:

 ```
 helm install --name filebeat elastic/filebeat --set imageTag=7.6.2
 ```

+## Usage notes
+
+* The default Filebeat configuration file for this chart is configured to use an
+Elasticsearch endpoint. Without any additional changes, Filebeat will send
+documents to the service URL that the Elasticsearch Helm chart sets up by
+default. You may either set the `ELASTICSEARCH_HOSTS` environment variable in
+`extraEnvs` to override this endpoint or modify the default `filebeatConfig` to
+change this behavior.
+* The default Filebeat configuration file is also configured to capture
+container logs and enrich them with Kubernetes metadata by default. This will
+capture all container logs in the cluster.
+* This chart disables the [HostNetwork][] setting by default for compatibility
+reasons with the majority of Kubernetes providers and scenarios. Some Kubernetes
+providers may not allow enabling `hostNetwork`, and deploying multiple Filebeat
+pods on the same node isn't possible with `hostNetwork`. However, Filebeat does
+recommend activating it. If your Kubernetes provider is compatible with
+`hostNetwork` and you don't need to run multiple Filebeat DaemonSets, you can
+activate it by setting `hostNetworking: true` in [values.yaml][].
+
+
 ## Configuration

-| Parameter | Description | Default |
-| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------- |
-| `filebeatConfig` | Allows you to add any config files in `/usr/share/filebeat` such as `filebeat.yml`. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/filebeat/values.yaml) for an example of the formatting with the default configuration. | see [values.yaml](https://github.com/elastic/helm-charts/tree/master/filebeat/values.yaml) |
-| `extraContainers` | List of additional init containers to be added at the Daemonset | `""` |
-| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` |
-| `extraInitContainers` | List of additional init containers to be added at the Daemonset. 
It also accepts a templatable string of additional containers to be passed to the `tpl` function | `[]` | -| `extraVolumeMounts` | List of additional volumeMounts to be mounted on the Daemonset | `[]` | -| `extraVolumes` | List of additional volumes to be mounted on the Daemonset | `[]` | -| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables) which will be appended to the `envFrom:` definition for the container | `[]` | -| `hostPathRoot` | Fully-qualified [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) that will be used to persist Filebeat registry data | `/var/lib` | -| `hostNetworking` | Use host networking in the daemonset so that hostname is reported correctly | `false` | -| `image` | The Filebeat docker image | `docker.elastic.co/beats/filebeat` | -| `imageTag` | The Filebeat docker image tag | `7.6.2` | -| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` | -| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` | -| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this helm chart. Set this to `false` in order to manage your own service account and related roles. | `true` | -| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Filebeat pods | `{}` | -| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Filebeat pods | `{}` | -| `podSecurityContext` | Configurable [podSecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for Filebeat pod execution environment | `runAsUser: 0`
`privileged: false` | -| `livenessProbe` | Parameters to pass to [liveness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `readinessProbe` | Parameters to pass to [readiness probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) checks for values such as timeouts and thresholds. | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the `DaemonSet` | `requests.cpu: 100m`
`requests.memory: 100Mi`
`limits.cpu: 1000m`
`limits.memory: 200Mi` |
-| `serviceAccount` | Custom [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) that Filebeat will use during execution. By default will use the service account created by this chart. | `""` |
-| `secretMounts` | Allows you easily mount a secret as a file inside the `DaemonSet`. Useful for mounting certificates and other secrets. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/filebeat/values.yaml) for an example | `[]` |
-| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Filebeat pod process on pod shutdown | `30` |
-| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` |
-| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) | `{}` |
-| `affinity` | Configurable [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) | `{}` |
-| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` |
-| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) for the `DaemonSet`. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually. | `RollingUpdate` |
-| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to "`.Release.Name`-`.Values.nameOverride or .Chart.Name`" | `""` |
+
+| Parameter | Description | Default |
+|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|
+| `affinity` | Configurable [affinity][] | `{}` |
+| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
+| `extraContainers` | List of additional containers to be added to the DaemonSet | `""` |
+| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
+| `extraInitContainers` | List of additional init containers to be added to the DaemonSet. It also accepts a templatable string of additional containers to be passed to the `tpl` function | `[]` |
+| `extraVolumeMounts` | List of additional volumeMounts to be mounted on the DaemonSet | `[]` |
+| `extraVolumes` | List of additional volumes to be mounted on the DaemonSet | `[]` |
+| `filebeatConfig` | Allows you to add any config files in `/usr/share/filebeat` such as `filebeat.yml` | see [values.yaml][] |
+| `fullnameOverride` | Overrides the full name of the resources. 
If not set the name will default to " `.Release.Name` - `.Values.nameOverride or .Chart.Name` " | `""` |
+| `hostNetworking` | Use host networking in the DaemonSet so that hostname is reported correctly | `false` |
+| `hostPathRoot` | Fully-qualified [hostPath][] that will be used to persist Filebeat registry data | `/var/lib` |
+| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
+| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
+| `imageTag` | The Filebeat Docker image tag | `7.6.2` |
+| `image` | The Filebeat Docker image | `docker.elastic.co/beats/filebeat` |
+| `labels` | Configurable [labels][] applied to all Filebeat pods | `{}` |
+| `livenessProbe` | Parameters to pass to liveness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] |
+| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this Helm chart. Set this to `false` in order to manage your own service account and related roles | `true` |
+| `nameOverride` | Overrides the chart name for resources. If not set the name will default to `.Chart.Name` | `""` |
+| `nodeSelector` | Configurable [nodeSelector][] | `{}` |
+| `podAnnotations` | Configurable [annotations][] applied to all Filebeat pods | `{}` |
+| `podSecurityContext` | Configurable [podSecurityContext][] for Filebeat pod execution environment | see [values.yaml][] |
+| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
+| `readinessProbe` | Parameters to pass to readiness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] |
+| `resources` | Allows you to set the [resources][] for the `DaemonSet` | see [values.yaml][] |
+| `secretMounts` | Allows you to easily mount a secret as a file inside the `DaemonSet`. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `serviceAccount` | Custom [serviceAccount][] that Filebeat will use during execution. By default it will use the service account created by this chart | `""` |
+| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Filebeat pod process on pod shutdown | `30` |
+| `tolerations` | Configurable [tolerations][] | `[]` |
+| `updateStrategy` | The [updateStrategy][] for the `DaemonSet`. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually | `RollingUpdate` |
+

 ## Examples

-In [examples/](https://github.com/elastic/helm-charts/tree/master/filebeat/examples) you will find some example configurations. These examples are used for the automated testing of this helm chart.
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

 ### Default

-* Deploy the [default Elasticsearch helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default)
-* Deploy Filebeat with the default values
+* Deploy the [default Elasticsearch Helm chart][]. 
+* Deploy Filebeat with the default values:
+
 ```
 cd examples/default
 make
 ```
-* You can now setup a port forward for Elasticsearch to observe Filebeat indices
+
+* You can now set up a port forward for Elasticsearch to observe Filebeat
+indices:
+
 ```
 kubectl port-forward svc/elasticsearch-master 9200
 curl localhost:9200/_cat/indices
 ```
-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-```
-pip install -r ../requirements.txt
-make pytest
-```
-
-You can also use `helm template` to look at the YAML being generated
-
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-
-```
-make test
-```
-
-## Integration Testing
-
-Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec like tool written in golang. See [goss.yaml](https://github.com/elastic/helm-charts/tree/master/filebeat/examples/default/test/goss.yaml) for an example of what the tests look like.
-
-To run the goss tests against the default example:
-```
-cd examples/default
-make goss
-```
+## Contributing
+
+Please check [CONTRIBUTING.md][] before any contribution or for any questions
+about our development and testing process.
+
+
+[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
+[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
+[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
+[affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+[default Elasticsearch Helm chart]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default
+[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
+[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
+[examples]: https://github.com/elastic/helm-charts/tree/master/filebeat/examples
+[filebeat docker image]: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html
+[helm]: https://helm.sh
+[hostNetwork]: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces
+[hostPath]: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
+[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
+[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
+[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+[parent readme]: https://github.com/elastic/helm-charts/tree/master/README.md
+[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+[podSecurityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+[serviceAccount]: 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+[updateStrategy]: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy
+[values.yaml]: https://github.com/elastic/helm-charts/tree/master/filebeat/values.yaml
diff --git a/kibana/README.md b/kibana/README.md
index 606edcd1a..a3f5b14c8 100644
--- a/kibana/README.md
+++ b/kibana/README.md
@@ -1,117 +1,164 @@
 # Kibana Helm Chart
+
+
-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
-This helm chart is a lightweight way to configure and run our official [Kibana docker image](https://www.elastic.co/guide/en/kibana/current/docker.html)
+- [Requirements](#requirements)
+- [Installing](#installing)
+  - [Using Helm repository](#using-helm-repository)
+  - [Using master branch](#using-master-branch)
+- [Upgrading](#upgrading)
+- [Compatibility](#compatibility)
+- [Configuration](#configuration)
+  - [Deprecated](#deprecated)
+- [Examples](#examples)
+  - [Default](#default)
+  - [Security](#security)
+- [FAQ](#faq)
+  - [How to install plugins?](#how-to-install-plugins)
+- [Contributing](#contributing)
+
+
+
+
+
+This functionality is in beta and is subject to change. The design and code are
+less mature than official GA features and are being provided as-is with no
+warranties. Beta features are not subject to the support SLA of official GA
+features.
+
+This Helm chart is a lightweight way to configure and run our official
+[Kibana Docker image][].
+
 ## Requirements

-* [Helm](https://helm.sh/) >=2.8.0 and <3.0.0 (see parent [README](https://github.com/elastic/helm-charts/tree/master/README.md) for more details)
+* [Helm][] >=2.8.0 and <3.0.0 (see [parent README][] for more details)
 * Kubernetes >=1.9

+
 ## Installing

 ### Using Helm repository

-* Add the elastic helm charts repo
-  ```
-  helm repo add elastic https://helm.elastic.co
-  ```
-* Install it
-  ```
-  helm install --name kibana elastic/kibana
-  ```
+* Add the Elastic Helm charts repo:
+`helm repo add elastic https://helm.elastic.co`
+
+* Install it: `helm install --name kibana elastic/kibana`

 ### Using master branch

-* Clone the git repo
-  ```
-  git clone git@github.com:elastic/helm-charts.git
-  ```
-* Install it
-  ```
-  helm install --name kibana ./helm-charts/kibana
-  ```
+* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
+
+* Install it: `helm install --name kibana ./helm-charts/kibana`
+
+
+## Upgrading
+
+Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
+upgrading to a new chart version.
+

 ## Compatibility

-This chart is tested with the latest supported versions. The currently tested versions are:
+This chart is tested with the latest supported versions. The currently tested
+versions are:

 | 6.x   | 7.x   |
-| ----- | ----- |
+|-------|-------|
 | 6.8.8 | 7.6.2 |

-Examples of installing older major versions can be found in the [examples](https://github.com/elastic/helm-charts/tree/master/kibana/examples) directory.
+Examples of installing older major versions can be found in the [examples][]
+directory.

-While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. 
To install version `7.6.2` of Kibana it would look like this: +While only the latest releases are tested, it is possible to easily install old +or new releases by overriding the `imageTag`. To install version `7.6.2` of +Kibana it would look like this: ``` helm install --name kibana elastic/kibana --set imageTag=7.6.2 ``` + ## Configuration -| Parameter | Description | Default | -| ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | -| `elasticsearchHosts` | The URLs used to connect to Elasticsearch. | `http://elasticsearch-master:9200` | -| `elasticsearchURL` | The URL used to connect to Elasticsearch. Deprecated, needs to be used for Kibana versions < 6.6 | | -| `replicas` | Kubernetes replica count for the deployment (i.e. how many pods) | `1` | -| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `name: NODE_OPTIONS`
`value: "--max-old-space-size=1800"` | -| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables) which will be appended to the `envFrom:` definition for the container | `[]` | -| `secretMounts` | Allows you easily mount a secret as a file inside the deployment. Useful for mounting certificates and other secrets. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml) for an example | `[]` | -| `image` | The Kibana docker image | `docker.elastic.co/kibana/kibana` | -| `imageTag` | The Kibana docker image tag | `7.6.2` | -| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` | -| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Kibana pods | `{}` | -| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 1000m`
`requests.memory: 2Gi`
`limits.cpu: 1000m`
`limits.memory: 2Gi` | -| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `server.ssl.enabled: true` set | `http` | -| `serverHost` | The [`server.host`](https://www.elastic.co/guide/en/kibana/current/settings.html) Kibana setting. This is set explicitly so that the default always matches what comes with the docker image. | `0.0.0.0` | -| `healthCheckPath` | The path used for the readinessProbe to check that Kibana is ready. If you are setting `server.basePath` you will also need to update this to `/${basePath}/app/kibana` | `/app/kibana` | -| `kibanaConfig` | Allows you to add any config files in `/usr/share/kibana/config/` such as `kibana.yml`. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml) for an example of the formatting. | `{}` | -| `podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000` | -| `securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`
`runAsNonRoot: true`
`runAsUser: 1000` | -| `serviceAccount` | Allows you to overwrite the "default" [serviceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) for the pod | `[]` | -| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` | -| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. | `5601` | -| `updateStrategy` | Allows you to change the default update [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) for the deployment. A [standard upgrade](https://www.elastic.co/guide/en/kibana/current/upgrade-standard.html) of Kibana requires a full stop and start which is why the default strategy is set to `Recreate` | `Recreate` | -| `readinessProbe` | Configuration for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`
`initialDelaySeconds: 10`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` | -| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Kibana instances | `{}` | -| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` | -| `ingress` | Configurable [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose the Kibana service. See [`values.yaml`](https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml) for an example | `enabled: false` | -| `service` | Configurable [service](https://kubernetes.io/docs/concepts/services-networking/service/) to expose the Kibana service. See [`values.yaml`](https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml) for an example | `type: ClusterIP`
`port: 5601`
`nodePort:`
`annotations: {}`
`loadBalancerSourceRanges: {}` |
-| `labels` | Configurable [label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Kibana pods | `{}` |
-| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml) for an example of the formatting. | `{}` |
-| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to "`.Release.Name`-`.Values.nameOverride or .Chart.Name`" | `""` |
-| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
-| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+
+| Parameter | Description | Default |
+|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------|
+| `affinity` | Configurable [affinity][] | `{}` |
+| `elasticsearchHosts` | The URLs used to connect to Elasticsearch | `http://elasticsearch-master:9200` |
+| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
+| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | see [values.yaml][] |
+| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to " `.Release.Name` - `.Values.nameOverride or .Chart.Name` " | `""` |
+| `healthCheckPath` | The path used for the readinessProbe to check that Kibana is ready. If you are setting `server.basePath` you will also need to update this to `/${basePath}/app/kibana` | `/app/kibana` |
+| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service | `5601` |
+| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
+| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
+| `imageTag` | The Kibana Docker image tag | `7.6.2` |
+| `image` | The Kibana Docker image | `docker.elastic.co/kibana/kibana` |
+| `ingress` | Configurable [ingress][] to expose the Kibana service. | see [values.yaml][] |
+| `kibanaConfig` | Allows you to add any config files in `/usr/share/kibana/config/` such as `kibana.yml`. See [values.yaml][] for an example of the formatting | `{}` |
+| `labels` | Configurable [labels][] applied to all Kibana pods | `{}` |
+| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml][] for an example of the formatting | `{}` |
+| `nameOverride` | Overrides the chart name for resources. If not set the name will default to `.Chart.Name` | `""` |
+| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Kibana instances | `{}` |
+| `podAnnotations` | Configurable [annotations][] applied to all Kibana pods | `{}` |
+| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
+| `priorityClassName` | The name of the [PriorityClass][]. 
No default is supplied as the PriorityClass must be created first | `""` |
+| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `server.ssl.enabled: true` set | `http` |
+| `readinessProbe` | Configuration for the readiness [probe][] | see [values.yaml][] |
+| `replicas` | Kubernetes replica count for the Deployment (i.e. how many pods) | `1` |
+| `resources` | Allows you to set the [resources][] for the Deployment | see [values.yaml][] |
+| `secretMounts` | Allows you to easily mount a secret as a file inside the Deployment. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
+| `serverHost` | The [server.host][] Kibana setting. This is set explicitly so that the default always matches what comes with the Docker image | `0.0.0.0` |
+| `serviceAccount` | Allows you to overwrite the "default" [serviceAccount][] for the pod | `[]` |
+| `service` | Configurable [service][] to expose the Kibana service. | see [values.yaml][] |
+| `tolerations` | Configurable [tolerations][] | `[]` |
+| `updateStrategy` | Allows you to change the default [updateStrategy][] for the Deployment. A [standard upgrade][] of Kibana requires a full stop and start which is why the default strategy is set to `Recreate` | `type: Recreate` |
+
+### Deprecated
+
+| Parameter | Description | Default |
+|--------------------|---------------------------------------------------------------------------------------|---------|
+| `elasticsearchURL` | The URL used to connect to Elasticsearch. Needs to be used for Kibana versions < 6.6 | `""` |
+

 ## Examples

-In [examples/](https://github.com/elastic/helm-charts/tree/master/kibana/examples) you will find some example configurations. These examples are used for the automated testing of this helm chart
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

 ### Default

-* Deploy the [default Elasticsearch helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default)
-* Deploy Kibana with the default values
+* Deploy the [default Elasticsearch Helm chart][].
+* Deploy Kibana with the default values:
+
 ```
 cd examples/default
 make
 ```
-* You can now setup a port forward and access Kibana at http://localhost:5601
+
+* You can now set up a port forward and access Kibana at http://localhost:5601:
+
 ```
 kubectl port-forward deployment/helm-kibana-default-kibana 5601
 ```

 ### Security

-* Deploy a [security enabled Elasticsearch cluster](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#security)
-* Deploy Kibana with the security example
+* Deploy a [security enabled Elasticsearch cluster][]. 
+* Deploy Kibana with the security example:
+
 ```
 cd examples/security
 make
 ```
-* Setup a port forward and access Kibana at https://localhost:5601
+
+* Set up a port forward and access Kibana at https://localhost:5601:
+
 ```
 # Set up the port forward
 kubectl port-forward deployment/helm-kibana-security-kibana 5601
@@ -119,18 +166,20 @@ In [examples/](https://github.com/elastic/helm-charts/tree/master/kibana/example
 # Run this in a separate terminal
 # Get the auto generated password
 password=$(kubectl get secret elastic-credentials -o jsonpath='{.data.password}' | base64 --decode)
 echo $password

 # Test Kibana is working with curl or access it with your browser at https://localhost:5601
 # The example certificate is self signed so you may see a warning about the certificate
 curl -I -k -u elastic:$password https://localhost:5601/app/kibana
 ```

+
 ## FAQ

 ### How to install plugins?

-The recommended way to install plugins into our docker images is to create a custom docker image.
+The recommended way to install plugins into our Docker images is to create a
+custom Docker image.

 The Dockerfile would look something like:

@@ -143,29 +192,46 @@ RUN bin/kibana-plugin install

 And then updating the `image` in values to point to your custom image, as sketched below.

-There are a couple reasons we recommend this.
-
-1. Tying the availability of Kibana to the download service to install plugins is not a great idea or something that we recommend. Especially in Kubernetes where it is normal and expected for a container to be moved to another host at random times.
-2. Mutating the state of a running docker image (by installing plugins) goes against best practices of containers and immutable infrastructure.
-
-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-
-```
-pip install -r ../requirements.txt
-make test
-```
-
-
-You can also use `helm template` to look at the YAML being generated
-
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-
-```
-make test
-```
+There are a couple reasons we recommend this:
+
+1. Tying the availability of Kibana to the download service to install plugins
+is not a great idea or something that we recommend. Especially in Kubernetes
+where it is normal and expected for a container to be moved to another host at
+random times.
+2. Mutating the state of a running Docker image (by installing plugins) goes
+against best practices of containers and immutable infrastructure.
+
+
+## Contributing
+
+Please check [CONTRIBUTING.md][] before any contribution or for any questions
+about our development and testing process. 
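+As a usage note for the plugin FAQ above, a hypothetical build-and-deploy flow
+might look like the following sketch. The image name
+`my-registry/kibana-with-plugins` and its registry are placeholders, not values
+shipped with this chart:
+
+```
+# Build the custom image from a Dockerfile like the one above and push it
+# to a registry that your Kubernetes nodes can pull from.
+docker build -t my-registry/kibana-with-plugins:7.6.2 .
+docker push my-registry/kibana-with-plugins:7.6.2
+
+# Install the chart pointing image and imageTag at the custom image.
+helm install --name kibana elastic/kibana \
+  --set image=my-registry/kibana-with-plugins \
+  --set imageTag=7.6.2
+```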
+
+
+[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
+[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
+[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
+[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+[default elasticsearch helm chart]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default
+[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
+[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
+[kibana docker image]: https://www.elastic.co/guide/en/kibana/current/docker.html
+[examples]: https://github.com/elastic/helm-charts/tree/master/kibana/examples
+[helm]: https://helm.sh
+[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
+[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
+[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+[parent readme]: https://github.com/elastic/helm-charts/tree/master/README.md
+[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+[security enabled elasticsearch cluster]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#security
+[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
+[server.host]: https://www.elastic.co/guide/en/kibana/current/settings.html
+[service]: https://kubernetes.io/docs/concepts/services-networking/service/
+[serviceAccount]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+[standard upgrade]: https://www.elastic.co/guide/en/kibana/current/upgrade-standard.html
+[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
+[values.yaml]: https://github.com/elastic/helm-charts/tree/master/kibana/values.yaml
diff --git a/kibana/values.yaml b/kibana/values.yaml
index 185a98377..2f9ccd111 100755
--- a/kibana/values.yaml
+++ b/kibana/values.yaml
@@ -1,6 +1,4 @@
 ---
-
-elasticsearchURL: "" # "http://elasticsearch-master:9200"
 elasticsearchHosts: "http://elasticsearch-master:9200"

 replicas: 1
@@ -144,3 +142,6 @@ lifecycle: {}
 #   postStart:
 #     exec:
 #       command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
+
+# Deprecated - use only with versions < 6.6
+elasticsearchURL: "" # "http://elasticsearch-master:9200"
diff --git a/logstash/README.md b/logstash/README.md
index f0aaa1af3..20e3e03a4 100644
--- a/logstash/README.md
+++ b/logstash/README.md
@@ -1,173 +1,236 @@
 # Logstash Helm Chart
+
+
-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. 
Beta features are not subject to the support SLA of official GA features.
-This helm chart is a lightweight way to configure and run our official [Logstash docker image](https://www.elastic.co/guide/en/logstash/current/docker.html)
+- [Requirements](#requirements)
+- [Installing](#installing)
+  - [Using Helm repository](#using-helm-repository)
+  - [Using master branch](#using-master-branch)
+- [Upgrading](#upgrading)
+- [Compatibility](#compatibility)
+- [Usage notes](#usage-notes)
+- [Configuration](#configuration)
+- [Try it out](#try-it-out)
+  - [Default](#default)
+- [FAQ](#faq)
+  - [How to install plugins?](#how-to-install-plugins)
+- [Contributing](#contributing)
+
+
+
+
+
+This functionality is in beta and is subject to change. The design and code are
+less mature than official GA features and are being provided as-is with no
+warranties. Beta features are not subject to the support SLA of official GA
+features.
+
+This Helm chart is a lightweight way to configure and run our official
+[Logstash Docker image][].
+
 ## Requirements

-* [Helm](https://helm.sh/) >=2.8.0 and <3.0.0 (see parent [README](https://github.com/elastic/helm-charts/tree/master/README.md) for more details)
+* [Helm][] >=2.8.0 and <3.0.0 (see [parent README][] for more details)
 * Kubernetes >=1.8

-## Usage notes and getting started
-
-* This repo includes a number of [example](https://github.com/elastic/helm-charts/tree/master/logstash/examples) configurations which can be used as a reference. They are also used in the automated testing of this chart
-* Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine).
-* The chart deploys a statefulset and by default will do an automated rolling update of your cluster. It does this by waiting for the cluster health to become green after each instance is updated. If you prefer to update manually you can set [`updateStrategy: OnDelete`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete)
-* It is important to verify that the JVM heap size in `logstashJavaOpts` and to set the CPU/Memory `resources` to something suitable for your cluster
-* We have designed this chart to be very un-opinionated about how to configure Logstash. It exposes ways to set environment variables and mount secrets inside of the container. Doing this makes it much easier for this chart to support multiple versions with minimal changes.
-* `logstash.yml` configuration files can be set either by a ConfigMap using `logstashConfig` in `values.yml` or by environment variables using `extraEnvs` in `values.yml`, however Logstash Docker image can't mix both methods as defining settings with environment variables causes `logstash.yml` to be modified in place while using ConfigMap bind-mount the same file (more details in this [Note](https://www.elastic.co/guide/en/logstash/6.7/docker-config.html#docker-env-config)).
-* When overriding `logstash.yml`, `http.host: 0.0.0.0` should always be included to make default probes work. If restricting HTTP API to 127.0.0.1 is required by using `http.host: 127.0.0.1`, default probes should be disabled or overrided (see [values.yaml](https://github.com/elastic/helm-charts/tree/master/logstash/values.yaml) for the good syntax). 
## Installing

 ### Using Helm repository

-* Add the elastic helm charts repo
-  ```
-  helm repo add elastic https://helm.elastic.co
-  ```
-* Install it
-  ```
-  helm install --name logstash elastic/logstash
-  ```
+* Add the Elastic Helm charts repo:
+`helm repo add elastic https://helm.elastic.co`
+
+* Install it: `helm install --name logstash elastic/logstash`

 ### Using master branch

-* Clone the git repo
-  ```
-  git clone git@github.com:elastic/helm-charts.git
-  ```
-* Install it
-  ```
-  helm install --name logstash ./helm-charts/logstash
-  ```
+* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
+
+* Install it: `helm install --name logstash ./helm-charts/logstash`
+
+
+## Upgrading
+
+Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
+upgrading to a new chart version.
+

 ## Compatibility

-This chart is tested with the latest supported versions. The currently tested versions are:
+This chart is tested with the latest supported versions. The currently tested
+versions are:

 | 6.x   | 7.x   |
-| ----- | ----- |
+|-------|-------|
 | 6.8.8 | 7.6.2 |

-Examples of installing older major versions can be found in the [examples](https://github.com/elastic/helm-charts/tree/master/logstash/examples) directory.
+Examples of installing older major versions can be found in the [examples][]
+directory.

-While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.6.2` of Logstash it would look like this:
+While only the latest releases are tested, it is possible to easily install old
+or new releases by overriding the `imageTag`. To install version `7.6.2` of
+Logstash it would look like this:

 ```
 helm install --name logstash elastic/logstash --set imageTag=7.6.2
 ```

+## Usage notes
+
+* This repo includes a number of example configurations in [examples][] which
+can be used as a reference. They are also used in the automated testing of this
+chart.
+* Automated testing of this chart is currently only run against GKE (Google
+Kubernetes Engine).
+* The chart deploys a StatefulSet and by default will do an automated rolling
+update of your cluster. It does this by waiting for the cluster health to become
+green after each instance is updated. If you prefer to update manually you can
+set the [updateStrategy][] to `OnDelete`.
+* It is important to verify the JVM heap size in `logstashJavaOpts` and to set
+the CPU/memory `resources` to something suitable for your cluster.
+* We have designed this chart to be very un-opinionated about how to configure
+Logstash. It exposes ways to set environment variables and mount secrets inside
+of the container. Doing this makes it much easier for this chart to support
+multiple versions with minimal changes.
+* `logstash.yml` configuration files can be set either by a ConfigMap using
+`logstashConfig` in `values.yml` or by environment variables using `extraEnvs`
+in `values.yml`. However, the Logstash Docker image can't mix both methods, as
+defining settings with environment variables causes `logstash.yml` to be
+modified in place, while using a ConfigMap bind-mounts the same file (more
+details in this [note][]).
+* When overriding `logstash.yml`, `http.host: 0.0.0.0` should always be included
+to make default probes work. If restricting the HTTP API to 127.0.0.1 is
+required by using `http.host: 127.0.0.1`, default probes should be disabled or
+overridden (see [values.yaml][] for the correct syntax and the sketch below). 
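+For illustration, a minimal `logstashConfig` override that keeps the default
+probes working could look like the sketch below. It follows the
+`logstashConfig` format described in the Configuration table; only the
+`http.host` line is needed for the probes, anything else would be your own
+settings:
+
+```
+logstashConfig:
+  logstash.yml: |
+    http.host: 0.0.0.0
+```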
+ + ## Configuration -| Parameter | Description | Default | -| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | -| `antiAffinity` | Setting this to hard enforces the [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). If it is set to soft it will be done "best effort". Other values will be ignored. | `hard` | -| `antiAffinityTopologyKey` | The [anti-affinity topology key](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). By default this will prevent multiple Logstash nodes from running on the same Kubernetes node | `kubernetes.io/hostname` | -| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` | -| `extraEnvs` | Extra [environment variables](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config) which will be appended to the `env:` definition for the container | `[]` | -| `envFrom` | Templatable string of envFrom to be passed to the [environment from variables](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables) which will be appended to the `envFrom:` definition for the container | `[]` | -| `extraInitContainers` | Templatable string of additional init containers to be passed to the `tpl` function | `""` | -| `extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function | `""` | -| `extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function | `""` | -| `image` | The Logstash docker image | `docker.elastic.co/logstash/logstash` | -| `imagePullPolicy` | The Kubernetes [imagePullPolicy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) value | `IfNotPresent` | -| `imagePullSecrets` | Configuration for [imagePullSecrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) so that you can use a private registry for your image | `[]` | -| `imageTag` | The Logstash docker image tag | `7.6.2` | -| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. | `9600` | -| `extraPorts` | An array of extra ports to open on the pod | `[]` | -| `labels` | Configurable [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) applied to all Logstash pods | `{}` | -| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/logstash/values.yaml) for an example of the formatting. | `{}` | -| `livenessProbe` | Configuration fields for the [livenessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`
`initialDelaySeconds: 300`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `logstashConfig` | Allows you to add any config files in `/usr/share/logstash/config/` such as `logstash.yml` and `log4j2.properties`. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/logstash/values.yaml) for an example of the formatting. | `{}` | -| `logstashJavaOpts` | Java options for Logstash. This is where you should configure the jvm heap size | `-Xmx1g -Xms1g` | -| `logstashPipeline` | Allows you to add any pipeline files in `/usr/share/logstash/pipeline/`. | `{}` | -| `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` | -| `nodeAffinity` | Value for the [node affinity settings](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature) | `{}` | -| `nodeSelector` | Configurable [nodeSelector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) so that you can target specific nodes for your Logstash cluster | `{}` | -| `persistence.annotations` | Additional persistence annotations for the `volumeClaimTemplate` | `{}` | -| `persistence.enabled` | Enables a persistent volume for Logstash data | `false` | -| `podAnnotations` | Configurable [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) applied to all Logstash pods | `{}` | -| `podManagementPolicy` | By default Kubernetes [deploys statefulsets serially](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies). This deploys them in parallel so that they can discover each other | `Parallel` | -| `podSecurityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) for the pod | `fsGroup: 1000`
`runAsUser: 1000` | -| `podSecurityPolicy` | Configuration for create a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | `create: false`
`name: ""` | -| `priorityClassName` | The [name of the PriorityClass](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass). No default is supplied as the PriorityClass must be created first. | `""` | -| `readinessProbe` | Configuration fields for the [readinessProbe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) | `failureThreshold: 3`
`initialDelaySeconds: 60`
`periodSeconds: 10`
`successThreshold: 3`
`timeoutSeconds: 5` | -| `replicas` | Kubernetes replica count for the statefulset (i.e. how many pods) | `1` | -| `resources` | Allows you to set the [resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for the statefulset | `requests.cpu: 100m`
`requests.memory: 1536Mi`
`limits.cpu: 1000m`
`limits.memory: 1536Mi` | -| `schedulerName` | Name of the [alternate scheduler](https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods) | `""` | -| `secretMounts` | Allows you easily mount a secret as a file inside the statefulset. Useful for mounting certificates and other secrets. See [values.yaml](https://github.com/elastic/helm-charts/tree/master/logstash/values.yaml) for an example | `[]` | -| `securityContext` | Allows you to set the [securityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container) for the container | `capabilities.drop:[ALL]`
`runAsNonRoot: true`
`runAsUser: 1000` | -| `terminationGracePeriod` | The [terminationGracePeriod](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) in seconds used when trying to stop the pod | `120` | -| `tolerations` | Configurable [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) | `[]` | -| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` | -| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for statefulsets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | `accessModes: [ "ReadWriteOnce" ]`
`resources.requests.storage: 1Gi` | -| `rbac` | Configuration for creating a role, role binding and service account as part of this helm chart with `create: true`. Also can be used to reference an external service account with `serviceAccountName: "externalServiceAccountName"`. | `create: false`
`serviceAccountName: ""` |
-| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to "`.Release.Name`-`.Values.nameOverride or .Chart.Name`" | `""` |
+
+| Parameter | Description | Default |
+|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
+| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Logstash nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
+| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
+| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
+| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
+| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
+| `extraPorts` | An array of extra ports to open on the pod | `[]` |
+| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
+| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
+| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to " `.Release.Name` - `.Values.nameOverride or .Chart.Name` " | `""` |
+| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service | `9600` |
+| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
+| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
+| `imageTag` | The Logstash Docker image tag | `7.6.2` |
+| `image` | The Logstash Docker image | `docker.elastic.co/logstash/logstash` |
+| `labels` | Configurable [labels][] applied to all Logstash pods | `{}` |
+| `lifecycle` | Allows you to add lifecycle configuration. See [values.yaml][] for an example of the formatting | `{}` |
+| `livenessProbe` | Configuration fields for the liveness [probe][] | see [values.yaml][] |
+| `logstashConfig` | Allows you to add any config files in `/usr/share/logstash/config/` such as `logstash.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
+| `logstashJavaOpts` | Java options for Logstash. This is where you should configure the JVM heap size | `-Xmx1g -Xms1g` |
+| `logstashPipeline` | Allows you to add any pipeline files in `/usr/share/logstash/pipeline/` | `{}` |
+| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
+| `nameOverride` | Overrides the chart name for resources. 
If not set the name will default to `.Chart.Name` | `""` |
+| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
+| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Logstash cluster | `{}` |
+| `persistence` | Enables a persistent volume for Logstash data | see [values.yaml][] |
+| `podAnnotations` | Configurable [annotations][] applied to all Logstash pods | `{}` |
+| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
+| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
+| `podSecurityPolicy` | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
+| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
+| `rbac` | Configuration for creating a role, role binding and service account as part of this Helm chart with `create: true`. Also can be used to reference an external service account with `serviceAccountName: "externalServiceAccountName"` | see [values.yaml][] |
+| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
+| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `1` |
+| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
+| `schedulerName` | Name of the [alternate scheduler][] | `""` |
+| `secretMounts` | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
+| `service` | Configurable [service][] to expose the Logstash service. | see [values.yaml][] |
+| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
+| `tolerations` | Configurable [tolerations][] | `[]` |
+| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
+| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi`) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
+

 ## Try it out

-In [examples/](https://github.com/elastic/helm-charts/tree/master/logstash/examples) you will find some example configurations. These examples are used for the automated testing of this helm chart
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

 ### Default

-To deploy a cluster with all default values and run the integration tests
+To deploy a cluster with all default values and run the integration tests:

 ```
 cd examples/default
 make
 ```

-### FAQ
-#### How to install plugins?
+## FAQ

-The [recommended](https://www.elastic.co/guide/en/logstash/current/docker-config.html#_custom_images) way to install plugins into our docker images is to create a custom docker image. 
+### How to install plugins?
+
+The recommended way to install plugins into our Docker images is to create a
+[custom Docker image][]. The Dockerfile would look something like:

```
ARG logstash_version
FROM docker.elastic.co/logstash/logstash:${logstash_version}
-
RUN bin/logstash-plugin install logstash-output-kafka
```

And then updating the `image` in values to point to your custom image.
+A values sketch for such an override is included at the end of this README.

-There are a couple reasons we recommend this.
-
-1. Tying the availability of Logstash to the download service to install plugins is not a great idea or something that we recommend. Especially in Kubernetes where it is normal and expected for a container to be moved to another host at random times.
-2. Mutating the state of a running docker image (by installing plugins) goes against best practices of containers and immutable infrastructure.
-
-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-
-```
-pip install -r ../requirements.txt
-make pytest
-```
-
-You can also use `helm template` to look at the YAML being generated
-
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-
-```
-make test
-```
-
-## Integration Testing
-
-Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec like tool written in golang. See [goss.yaml](https://github.com/elastic/helm-charts/tree/master/logstash/examples/default/test/goss.yaml) for an example of what the tests look like.
-
-To run the goss tests against the default example:
-
-```
-cd examples/default
-make goss
-```
+There are a couple of reasons we recommend this:
+
+1. Tying the availability of Logstash to the plugin download service is not a
+great idea, especially in Kubernetes, where it is normal and expected for a
+container to be moved to another host at random times.
+2. Mutating the state of a running Docker image (by installing plugins) goes
+against best practices of containers and immutable infrastructure.
+
+
+## Contributing
+
+Please check [CONTRIBUTING.md][] before any contribution or for any questions
+about our development and testing process.
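+For reference, the values override for the custom image built in the plugin FAQ
+above might look like the following sketch, where `myrepo/logstash-plugins` is
+a placeholder for your own image repository:
+
+```yaml
+image: "myrepo/logstash-plugins"
+imageTag: "7.6.2"
+```
+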
+
+
+[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
+[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
+[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
+[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
+[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+[custom docker image]: https://www.elastic.co/guide/en/logstash/current/docker-config.html#_custom_images
+[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
+[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
+[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
+[examples]: https://github.com/elastic/helm-charts/tree/master/logstash/examples
+[helm]: https://helm.sh
+[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
+[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
+[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+[logstash docker image]: https://www.elastic.co/guide/en/logstash/current/docker.html
+[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
+[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
+[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+[note]: https://www.elastic.co/guide/en/logstash/current/docker-config.html#docker-env-config
+[parent readme]: https://github.com/elastic/helm-charts/tree/master/README.md
+[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
+[service]: https://kubernetes.io/docs/concepts/services-networking/service/
+[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
+[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+[updateStrategy]: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
+[values.yaml]: https://github.com/elastic/helm-charts/tree/master/logstash/values.yaml
+[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage
diff --git a/metricbeat/README.md b/metricbeat/README.md
index 8e3e2ed36..3094e6f36 100644
--- a/metricbeat/README.md
+++ b/metricbeat/README.md
@@ -1,63 +1,77 @@
# Metricbeat Helm Chart
+
+
-This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
-This helm chart is a lightweight way to configure and run our official [Metricbeat docker image](https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html).

+- [Requirements](#requirements)
+- [Installing](#installing)
+  - [Using Helm repository](#using-helm-repository)
+  - [Using master branch](#using-master-branch)
+- [Upgrading](#upgrading)
+- [Compatibility](#compatibility)
+- [Configuration](#configuration)
+  - [Deprecated](#deprecated)
+- [Examples](#examples)
+  - [Default](#default)
+- [Contributing](#contributing)

-## Breaking Changes
+
+
+
+
-[7.5.1](https://github.com/elastic/helm-charts/releases/tag/7.5.1) release is introducing a breaking change for Metricbeat users upgrading from a previous chart version.

-The breaking change tracked in [#395](https://github.com/elastic/helm-charts/issues/395) is failing `helm upgrade` command with the following error:

-```
-UPGRADE FAILED
-Error: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.1", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
-Error: UPGRADE FAILED: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.1", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
-```

-This is caused by the update of [kube-state-metrics](https://github.com/helm/charts/tree/master/stable/kube-state-metrics) chart dependency which is renaming some labels in [helm/charts#15261](https://github.com/helm/charts/pull/15261).
+This functionality is in beta and is subject to change. The design and code are
+less mature than official GA features and are being provided as-is with no
+warranties. Beta features are not subject to the support SLA of official GA
+features.
+
+This Helm chart is a lightweight way to configure and run our official
+[Metricbeat Docker image][].

-The workaround is to use `--force` argument for `helm upgrade` command which will force Metricbeat resources update through delete/recreate.
## Requirements

-* [Helm](https://helm.sh/) >=2.8.0 and <3.0.0 (see parent [README](https://github.com/elastic/helm-charts/tree/master/README.md) for more details)
+* [Helm][] >=2.8.0 and <3.0.0 (see [parent README][] for more details)
* Kubernetes >=1.9
+

## Installing

### Using Helm repository

-* Add the elastic helm charts repo
-  ```
-  helm repo add elastic https://helm.elastic.co
-  ```
-* Install it
-  ```
-  helm install --name metricbeat elastic/metricbeat
-  ```
+* Add the Elastic Helm charts repo:
+`helm repo add elastic https://helm.elastic.co`
+
+* Install it: `helm install --name metricbeat elastic/metricbeat`

### Using master branch

-* Clone the git repo
-  ```
-  git clone git@github.com:elastic/helm-charts.git
-  ```
-* Install it
-  ```
-  helm install --name metricbeat ./helm-charts/metricbeat
-  ```
+* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
+
+* Install it: `helm install --name metricbeat ./helm-charts/metricbeat`
+
+
+## Upgrading
+
+Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
+upgrading to a new chart version.
+

## Compatibility

-This chart is tested with the latest supported versions. The currently tested versions are:
+This chart is tested with the latest supported versions. The currently tested
+versions are:

| 6.x | 7.x |
-| ----- | ----- |
+|-------|-------|
| 6.8.8 | 7.6.2 |

-Examples of installing older major versions can be found in the [examples](https://github.com/elastic/helm-charts/tree/master/metricbeat/examples) directory.
+Examples of installing older major versions can be found in the [examples][]
+directory.

-While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.6.2` of metricbeat it would look like this:
+While only the latest releases are tested, it is possible to easily install old
+or new releases by overriding the `imageTag`. To install version `7.6.2` of
+Metricbeat it would look like this:

```
helm install --name metricbeat elastic/metricbeat --set imageTag=7.6.2
@@ -65,130 +79,123 @@ helm install --name metricbeat elastic/metricbeat --set imageTag=7.6.2
```

## Configuration

-| Parameter | Description | Default |
-| --- | --- | --- |
-| `daemonset.affinity` | Configurable [affinity][] for Metricbeat `DaemonSet`. | `{}` |
-| `daemonset.envFrom` | Templatable string of `envFrom` to be passed to the [environment from variables][] which will be appended to Metricbeat container for `DaemonSet`. | `[]` |
-| `daemonset.extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for `DaemonSet`. | `[]` |
-| `daemonset.extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function or `DaemonSet`. | `[]` |
-| `daemonset.extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function or `DaemonSet`. | `[]` |
-| `daemonset.hostNetworking` | Enable Metricbeat `DaemonSet` to use host network | `false` |
-| `daemonset.metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for Metricbeat `DaemonSet`. | see [values.yaml][] |
-| `daemonset.nodeSelector` | Configurable [nodeSelector][] for Metricbeat `DaemonSet`. | `{}` |
-| `daemonset.secretMounts` | Allows you easily mount a secret as a file inside the `DaemonSet`. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
-| `daemonset.securityContext` | Configurable [securityContext][] for Metricbeat `DaemonSet` pod execution environment. | `runAsUser: 0`<br>`privileged: false` |
-| `daemonset.resources` | Allows you to set the [resources][] for Metricbeat `DaemonSet`. | `requests.cpu: 100m`<br>`requests.memory: 100Mi`<br>`limits.cpu: 1000m`<br>`limits.memory: 200Mi` |
-| `daemonset.tolerations` | Configurable [tolerations][] for Metricbeat `DaemonSet`. | `[]` |
-| `deployment.affinity` | Configurable [affinity][] for Metricbeat `Deployment`. | `{}` |
-| `deployment.envFrom` | Templatable string of `envFrom` to be passed to the [environment from variables][] which will be appended to Metricbeat container for `Deployment`. | `[]` |
-| `deployment.extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for `Deployment`. | `[]` |
-| `deployment.extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function or `Deployment`. | `[]` |
-| `deployment.extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function or `DaemonSet`. | `[]` |
-| `deployment.metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for Metricbeat `Deployment`. | see [values.yaml][] |
-| `deployment.nodeSelector` | Configurable [nodeSelector][] for Metricbeat `Deployment`. | `{}` |
-| `deployment.secretMounts` | Allows you easily mount a secret as a file inside the `Deployment`. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
-| `deployment.securityContext` | Configurable [securityContext][] for Metricbeat `Deployment` pod execution environment. | `runAsUser: 0`<br>`privileged: false` |
-| `deployment.resources` | Allows you to set the [resources][] for Metricbeat `Deployment`. | `requests.cpu: 100m`<br>`requests.memory: 100Mi`<br>`limits.cpu: 1000m`<br>`limits.memory: 200Mi` |
-| `deployment.tolerations` | Configurable [tolerations][] for Metricbeat `Deployment`. | `[]` |
-| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
-| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
-| `hostPathRoot` | Fully-qualified [hostPath][] that will be used to persist Metricbeat registry data | `/var/lib` |
-| `image` | The Metricbeat docker image | `docker.elastic.co/beats/metricbeat` |
-| `imageTag` | The Metricbeat docker image tag | `7.6.2` |
-| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
-| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
-| `labels` | Configurable [label][] applied to all Metricbeat pods | `{}` |
-| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this helm chart. Set this to `false` in order to manage your own service account and related roles. | `true` |
-| `clusterRoleRules` | Configurable [cluster role rules][] that Metricbeat uses to access Kubernetes resources. | see [values.yaml][] |
-| `podAnnotations` | Configurable [annotations][] applied to all Metricbeat pods | `{}` |
-| `livenessProbe` | Parameters to pass to [liveness probe][] checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
-| `readinessProbe` | Parameters to pass to [readiness probe][] checks for values such as timeouts and thresholds. | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
-| `serviceAccount` | Custom [serviceAccount][] that Metricbeat will use during execution. By default will use the service account created by this chart. | `""` |
-| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Metricbeat pod process on pod shutdown | `30` |
-| `updateStrategy` | The [updateStrategy][] for the `DaemonSet`. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually. | `RollingUpdate` |
-| `priorityClassName` | The [name of the PriorityClass][]. No default is supplied as the PriorityClass must be created first. | `""` |
-| `replicas` | The replica count for the metricbeat deployment talking to kube-state-metrics | `1` |
-| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to "`.Release.Name`-`.Values.nameOverride or .Chart.Name`" | `""` |
+
+| Parameter | Description | Default |
+|--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------|
+| `clusterRoleRules` | Configurable [cluster role rules][] that Metricbeat uses to access Kubernetes resources | see [values.yaml][] |
+| `daemonset.affinity` | Configurable [affinity][] for Metricbeat DaemonSet | `{}` |
+| `daemonset.envFrom` | Templatable string of `envFrom` to be passed to the [environment from variables][] which will be appended to Metricbeat container for DaemonSet | `[]` |
+| `daemonset.extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for DaemonSet | `[]` |
+| `daemonset.extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function for the DaemonSet | `[]` |
+| `daemonset.extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function for the DaemonSet | `[]` |
+| `daemonset.hostNetworking` | Enable Metricbeat DaemonSet to use `hostNetwork` | `false` |
+| `daemonset.metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for Metricbeat DaemonSet | see [values.yaml][] |
+| `daemonset.nodeSelector` | Configurable [nodeSelector][] for Metricbeat DaemonSet | `{}` |
+| `daemonset.resources` | Allows you to set the [resources][] for Metricbeat DaemonSet | see [values.yaml][] |
+| `daemonset.secretMounts` | Allows you to easily mount a secret as a file inside the DaemonSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `daemonset.securityContext` | Configurable [securityContext][] for Metricbeat DaemonSet pod execution environment | see [values.yaml][] |
+| `daemonset.tolerations` | Configurable [tolerations][] for Metricbeat DaemonSet | `[]` |
+| `deployment.affinity` | Configurable [affinity][] for Metricbeat Deployment | `{}` |
+| `deployment.envFrom` | Templatable string of `envFrom` to be passed to the [environment from variables][] which will be appended to Metricbeat container for Deployment | `[]` |
+| `deployment.extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for Deployment | `[]` |
+| `deployment.extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function for the Deployment | `[]` |
+| `deployment.extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function for the Deployment | `[]` |
+| `deployment.metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for Metricbeat Deployment | see [values.yaml][] |
+| `deployment.nodeSelector` | Configurable [nodeSelector][] for Metricbeat Deployment | `{}` |
+| `deployment.resources` | Allows you to set the [resources][] for Metricbeat Deployment | see [values.yaml][] |
+| `deployment.secretMounts` | Allows you to easily mount a secret as a file inside the Deployment. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
+| `deployment.securityContext` | Configurable [securityContext][] for Metricbeat Deployment pod execution environment | see [values.yaml][] |
+| `deployment.tolerations` | Configurable [tolerations][] for Metricbeat Deployment | `[]` |
+| `extraContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+| `extraInitContainers` | Templatable string of additional containers to be passed to the `tpl` function | `""` |
+| `fullnameOverride` | Overrides the full name of the resources. If not set the name will default to "`.Release.Name`-`.Values.nameOverride or .Chart.Name`" | `""` |
+| `hostPathRoot` | Fully-qualified [hostPath][] that will be used to persist Metricbeat registry data | `/var/lib` |
+| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
+| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
+| `imageTag` | The Metricbeat Docker image tag | `7.6.2` |
+| `image` | The Metricbeat Docker image | `docker.elastic.co/beats/metricbeat` |
+| `labels` | Configurable [labels][] applied to all Metricbeat pods | `{}` |
+| `livenessProbe` | Parameters to pass to liveness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] |
+| `managedServiceAccount` | Whether the `serviceAccount` should be managed by this Helm chart. Set this to `false` in order to manage your own service account and related roles | `true` |
+| `nameOverride` | Overrides the chart name for resources. If not set the name will default to `.Chart.Name` | `""` |
+| `podAnnotations` | Configurable [annotations][] applied to all Metricbeat pods | `{}` |
+| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
+| `readinessProbe` | Parameters to pass to readiness [probe][] checks for values such as timeouts and thresholds | see [values.yaml][] |
+| `replicas` | The replica count for the Metricbeat deployment talking to kube-state-metrics | `1` |
+| `serviceAccount` | Custom [serviceAccount][] that Metricbeat will use during execution. By default will use the service account created by this chart | `""` |
+| `terminationGracePeriod` | Termination period (in seconds) to wait before killing Metricbeat pod process on pod shutdown | `30` |
+| `updateStrategy` | The [updateStrategy][] for the DaemonSet. By default Kubernetes will kill and recreate pods on updates. Setting this to `OnDelete` will require that pods be deleted manually | `RollingUpdate` |

### Deprecated

-| Parameter | Description | Default |
-| --- | --- | --- |
-| `affinity` | Configurable [affinity][] for Metricbeat `DaemonSet`. | `{}` |
-| `extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for both `DaemonSet` and `Deployment`. | `[]` |
-| `extraVolumes` | Templatable string of additional volumes to be passed to the `tpl` function for both `DaemonSet` and `Deployment`. | `[]` |
-| `extraVolumeMounts` | Templatable string of additional volumeMounts to be passed to the `tpl` function for both `DaemonSet` and `Deployment`. | `[]` |
-| `deployment.envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to Metricbeat container for both `DaemonSet` and `Deployment`. | `[]` |
-| `metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for both Metricbeat `DaemonSet` and `Deployment`. | `{}` |
-| `nodeSelector` | Configurable [nodeSelector][] for Metricbeat `DaemonSet`. | `{}` |
-| `podSecurityContext` | Configurable [securityContext][] for Metricbeat `DaemonSet` and `Deployment` pod execution environment. | `{}` |
-| `resources` | Allows you to set the [resources][] for both Metricbeat `DaemonSet` and `Deployment`. | `{}` |
-| `secretMounts` | Allows you easily mount a secret as a file inside `DaemonSet` and `Deployment`. Useful for mounting certificates and other secrets. | `[]` |
-| `tolerations` | Configurable [tolerations][] for both Metricbeat `DaemonSet` and `Deployment`. | `[]` |
+
+| Parameter | Description | Default |
+|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `affinity` | Configurable [affinity][] for Metricbeat DaemonSet | `{}` |
+| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to Metricbeat container for both DaemonSet and Deployment | `[]` |
+| `extraEnvs` | Extra [environment variables][] which will be appended to Metricbeat container for both DaemonSet and Deployment | `[]` |
+| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function for both DaemonSet and Deployment | `[]` |
+| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function for both DaemonSet and Deployment | `[]` |
+| `metricbeatConfig` | Allows you to add any config files in `/usr/share/metricbeat` such as `metricbeat.yml` for both Metricbeat DaemonSet and Deployment | `{}` |
+| `nodeSelector` | Configurable [nodeSelector][] for Metricbeat DaemonSet | `{}` |
+| `podSecurityContext` | Configurable [securityContext][] for Metricbeat DaemonSet and Deployment pod execution environment | `{}` |
+| `resources` | Allows you to set the [resources][] for both Metricbeat DaemonSet and Deployment | `{}` |
+| `secretMounts` | Allows you to easily mount a secret as a file inside DaemonSet and Deployment. Useful for mounting certificates and other secrets | `[]` |
+| `tolerations` | Configurable [tolerations][] for both Metricbeat DaemonSet and Deployment | `[]` |
+

## Examples

-In [examples/](https://github.com/elastic/helm-charts/tree/master/metricbeat/examples) you will find some example configurations. These examples are used for the automated testing of this helm chart.
+In [examples][] you will find some example configurations. These examples are
+used for the automated testing of this Helm chart.

### Default

-* Deploy the [default Elasticsearch helm chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default)
-* Deploy Metricbeat with the default values
+* Deploy the [default Elasticsearch Helm chart][].
+* Deploy Metricbeat with the default values:
+
```
cd examples/default
make
```
-* You can now setup a port forward for Elasticsearch to observe Metricbeat indices
+
+* You can now set up a port forward for Elasticsearch to observe Metricbeat
+indices:
+
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```

-## Testing
-
-This chart uses [pytest](https://docs.pytest.org/en/latest/) to test the templating logic. The dependencies for testing can be installed from the [`requirements.txt`](https://github.com/elastic/helm-charts/tree/master/requirements.txt) in the parent directory.
-
-```
-pip install -r ../requirements.txt
-make pytest
-```
-
-You can also use `helm template` to look at the YAML being generated
-
-```
-make template
-```
-
-It is possible to run all of the tests and linting inside of a docker container
-
-```
-make test
-```
-
-## Integration Testing
+## Contributing

-Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md) which is a serverspec like tool written in golang. See [goss.yaml](https://github.com/elastic/helm-charts/tree/master/metricbeat/examples/default/test/goss.yaml) for an example of what the tests look like.
+Please check [CONTRIBUTING.md][] before any contribution or for any questions +about our development and testing process. -To run the goss tests against the default example: -``` -cd examples/default -make goss -``` +[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md +[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md +[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md [affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity [annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ +[default elasticsearch helm chart]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/README.md#default [cluster role rules]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole [environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config [environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables +[examples]: https://github.com/elastic/helm-charts/tree/master/metricbeat/examples +[helm]: https://helm.sh [hostPath]: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath [imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images [imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret -[label]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ -[liveness probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ -[name of the PriorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass +[kube-state-metrics]: https://github.com/helm/charts/tree/master/stable/kube-state-metrics +[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ +[metricbeat docker image]: https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html +[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass [nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector -[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ -[readiness probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ +[parent readme]: https://github.com/elastic/helm-charts/tree/master/README.md +[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes [resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ +[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ [serviceAccount]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ [tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ [updateStrategy]: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy