Autocreate data folder in Kubernetes deployments #7437
Conversation
We need to persist the Beats data folder, so that Pod restarts can still access the previous data. This is particularly important in the case of Filebeat, as it needs the registry file to avoid sending all logs again.
      - name: data
-       emptyDir: {}
+       hostPath:
+         path: /var/lib/filebeat-data
First reading this, I thought it should be filebeat/data, as that is kind of a standard directory we use for our data files. How does k8s know about this directory? Is it configured somewhere, or is this based on convention?
This is mapping a folder from the container to the host. Kubernetes doesn't know anything about it; we request it to be created here and mount it in the container, so the data folder is persisted to the host.
I wanted to avoid using the default folder, in case someone is messing around by deploying Filebeat both on the host and as a container, which could end up with two Filebeat instances sharing the data folder.
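Putting the two halves of the explanation together, the relevant DaemonSet fragments would look roughly like this. This is a sketch, not the full manifest: the container mount path shown is Filebeat's default data directory inside the container, while the host path is the non-default location chosen above to avoid clashing with a host-installed Filebeat.

```yaml
# Sketch of the relevant DaemonSet spec fragments (abridged).
containers:
- name: filebeat
  volumeMounts:
  - name: data
    # Filebeat's default data directory inside the container,
    # so no filebeat.yml change is needed.
    mountPath: /usr/share/filebeat/data
volumes:
- name: data
  hostPath:
    # Deliberately not the host's default Filebeat data folder,
    # so a host-installed Filebeat cannot share this registry.
    path: /var/lib/filebeat-data
```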
It seems the Travis tests suggest that the YAML is now invalid? https://travis-ci.org/elastic/beats/jobs/397307177 Related?
It is related indeed, it seems the path autocreate ( We should think about this twice, as it would break manifest compatibility with old versions, but it is also a much better default for new deploys.
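The autocreation being discussed here is presumably the hostPath volume `type` field, which was introduced in Kubernetes 1.8 (which would also explain dropping 1.7 from the test matrix later in this thread). Assuming that is the mechanism, the change would be a sketch like:

```yaml
volumes:
- name: data
  hostPath:
    path: /var/lib/filebeat-data
    # DirectoryOrCreate asks the kubelet to create the directory on the
    # node if it does not already exist. This field is rejected by
    # Kubernetes versions older than 1.8, which is the compatibility
    # break mentioned above.
    type: DirectoryOrCreate
```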
So our options are:
@exekias Just a small question: don't you need to update the Filebeat ConfigMap too?
What kind of update? This change should be transparent to Filebeat, as we only change the mount point.
@exekias
No, as this is mounted to the default path inside the container: https://github.com/exekias/beats/blob/43340ee3b983638bd7972623db5d6a056bf6cd05/deploy/kubernetes/filebeat-kubernetes.yaml#L101
Force-pushed 573424a to 10208b6
I've updated the tests to remove k8s 1.7 and earlier, and to add the latest 1.11 release. I also updated the docs with manual settings to use the manifest on deprecated versions. This should be ready to go.
I think we should not backport this to 6.3, so it will go to 6.4 once merged.
Force-pushed 7915b26 to 117b3d6
Force-pushed 117b3d6 to 833e317
I think the Travis failures are related?
It seems the new minikube is failing to deploy a local k8s; I will have a look.
Until kubernetes/minikube#2704 is fixed we are stuck on this version of minikube, so I rolled back my changes to ensure the tests pass.
@@ -14,6 +14,7 @@ env:
   - GOX_FLAGS="-arch amd64"
   - DOCKER_COMPOSE_VERSION=1.11.1
   - GO_VERSION="$(cat .go-version)"
+  # Newer versions of minikube fail on travis, see: https://github.com/kubernetes/minikube/issues/2704
It would be good to get these tests on Jenkins, where we could test newer versions because we control the environment.