diff --git a/docs/KUBERNETES.md b/docs/KUBERNETES.md index 5b932e093..0ec8f8174 100644 --- a/docs/KUBERNETES.md +++ b/docs/KUBERNETES.md @@ -20,7 +20,7 @@ The agent is compatible with Kubernetes versions 1.9 and greater; however, we al ## Installing -The agent can be effortless installed in your cluster using a set of yamls we provide. These yamls contain the minimum necessary Kubernetes Objects and settings to run the agent. Teams should review and modify these yamls for the specific needs of their clusters. +The agent can be installed in your cluster using a set of YAML files we provide. These files contain the minimum necessary Kubernetes Objects and settings to run the agent. Teams should review and modify these YAML files for the specific needs of their clusters. ### Installation Prerequisites @@ -54,9 +54,9 @@ There are two components that can be upgraded independent of each other for each ### Upgrading the Configuration -Not every version update of the agent makes a change to our supplied configuration yamls. These changes will be outlined in our release page to help you determine if you need to update your configuration. +Not every version update of the agent makes a change to our supplied configuration YAML files. These changes will be outlined in our release page to help you determine if you need to update your configuration. -Due to how the agent has evolved over time, certain versions of the agent configuration yamls require different paths to be updated successfully. +Due to how the agent has evolved over time, certain versions of the agent configuration YAML files require different paths to be updated successfully. If you are unsure of what version of the configuration you have, you can always check the `app.kubernetes.io/version` label of the DaemonSet: @@ -101,11 +101,11 @@ Older versions of our configurations do not provide these labels. In that case, 2. Apply the latest configuration yaml 1. 
Run `kubectl apply -f k8s/agent-resources.yaml` -> :warning: Exporting Kubernetes Objects with "kubectl get \ -o yaml" includes extra information about the Object's state. This data does not need to be copied over to the new yaml. +> :warning: Exporting Kubernetes Objects with "kubectl get \ -o yaml" includes extra information about the Object's state. This data does not need to be copied over to the new YAML file. ### Upgrading the Image -The image contains the actual agent code that is run on the Pods created by the DaemonSet. New versions of the agent always strive to be backwards compatibility with old configuration versions. Any breaking changes will be outlined on our release page. We always recommend upgrading to the latest configuration to get the best feature support for the agent. +The image contains the actual agent code that is run on the Pods created by the DaemonSet. New versions of the agent always strive for backwards compatibility with old configuration versions. Any breaking changes will be outlined on our release page. We always recommend upgrading to the latest configuration to get the best feature support for the agent. The upgrade path for the image depends on which image tag you are using in your DaemonSet. @@ -121,12 +121,13 @@ Otherwise, if your DaemonSet is configured with a different tag e.g. `logdna/log kubectl patch daemonset -n logdna-agent logdna-agent --type json -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"logdna/logdna-agent:2.2.0"}]' ``` -The specific tag you should use depends on your requirements, we offer a list of tags for varying compatibility: -1. `stable` - Updates with each major, minor, and patch version updates -2. `2` - Updates with each minor and patch version updates under `2.x.x` -3. `2.2` - Updates with each patch version update under `2.2.x` -4. `2.2.0` - Targets a specific version of the agent -5. 
**Note:** This list isn't exhaustive; for a full list check out the [logdna-agent dockerhub page](https://hub.docker.com/r/logdna/logdna-agent) +1. `latest` - Updates with each new revision including public betas +2. `stable` - Updates with each major, minor, and patch version update +3. `2` - Updates with each minor and patch version update under `2.x.x` +4. `2.2` - Updates with each patch version update under `2.2.x` +5. `2.2.0` - Targets a specific version of the agent + +**Note:** This list isn't exhaustive; for a full list check out the [logdna-agent Docker Hub page](https://hub.docker.com/r/logdna/logdna-agent) ## Uninstalling @@ -190,7 +191,7 @@ To enable Journald monitoring in the agent, add a new environment variable, `LOG ```console kubectl patch daemonset -n logdna-agent logdna-agent --type json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"LOGDNA_JOURNALD_PATHS","value":"/var/log/journald/-"}}]' ``` -* If you are modifying a yaml: +* If you are modifying a YAML file: 1. Add the new environment variable to the envs section of the DaemonSet Object in `k8s/agent-resources.yaml` [`spec.template.spec.containers.0.env`] 2. Apply the new configuration file, run `kubectl apply -f k8s/agent-resources.yaml` diff --git a/docs/OPENSHIFT.md b/docs/OPENSHIFT.md index c96d5c759..b605facce 100644 --- a/docs/OPENSHIFT.md +++ b/docs/OPENSHIFT.md @@ -1,6 +1,6 @@ # LogDNA Agent on OpenShift -The agent has been tested on OpenShift 4.4, but should be compatible with any OpenShift cluster packaged with Kubernetes version 1.9 or greater. +The agent is supported for OpenShift 4.6. ## Table of Contents @@ -19,7 +19,7 @@ The agent has been tested on OpenShift 4.4, but should be compatible with any Op ## Installing -The agent can be effortless installed in your cluster using a set of yamls we provide. These yamls contain the minimum necessary OpenShift Objects and settings to run the agent. 
Teams should review and modify these yamls for the specific needs of their clusters. +The agent can be installed in your cluster using a set of YAML files we provide. These files contain the minimum necessary OpenShift Objects and settings to run the agent. Teams should review and modify these YAML files for the specific needs of their clusters. ### Installation Prerequisites @@ -58,9 +58,9 @@ There are two components that can be upgraded independent of each other for each ### Upgrading the Configuration -Not every version update of the agent makes a change to our supplied configuration yamls. These changes will be outlined in our release page to help you determine if you need to update your configuration. +Not every version update of the agent makes a change to our supplied configuration YAML files. These changes will be outlined in our release page to help you determine if you need to update your configuration. -Due to how the agent has evolved over time, certain versions of the agent configuration yamls require different paths to be updated successfully. +Due to how the agent has evolved over time, certain versions of the agent configuration YAML files require different paths to be updated successfully. If you are unsure of what version of the configuration you have, you can always check the `app.kubernetes.io/version` label of the DaemonSet: @@ -90,11 +90,11 @@ Older versions of our configurations do not provide these labels. In that case, 3. Overwrite the DaemonSet as well as create the new OpenShift Objects. 1. Run `oc apply -f k8s/agent-resources-openshift.yaml` -> :warning: Exporting OpenShift Objects with "oc get \ -o yaml" includes extra information about the Object's state. This data does not need to be copied over to the new yaml. +> :warning: Exporting OpenShift Objects with "oc get \ -o yaml" includes extra information about the Object's state. This data does not need to be copied over to the new YAML file. 
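For reference, the `app.kubernetes.io/version` label that the upgrade steps above inspect is ordinary DaemonSet metadata. A trimmed sketch of where it sits (surrounding fields are abbreviated, and the version value here is only an example):

```yaml
# Abbreviated DaemonSet manifest -- only the label relevant to the
# version check is shown; "2.2.0" is an illustrative value.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logdna-agent
  labels:
    app.kubernetes.io/version: "2.2.0"
```

If the label is present, its value is the configuration version to compare against the release notes.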
### Upgrading the Image -The image contains the actual agent code that is run on the Pods created by the DaemonSet. New versions of the agent always strive to be backwards compatibility with old configuration versions. Any breaking changes will be outlined on our release page. We always recommend upgrading to the latest configuration to get the best feature support for the agent. +The image contains the actual agent code that is run on the Pods created by the DaemonSet. New versions of the agent always strive for backwards compatibility with old configuration versions. Any breaking changes will be outlined on our release page. We always recommend upgrading to the latest configuration to get the best feature support for the agent. The upgrade path for the image depends on which image tag you are using in your DaemonSet. @@ -111,11 +111,13 @@ oc patch daemonset logdna-agent --type json -p '[{"op":"replace","path":"/spec/t ``` The specific tag you should use depends on your requirements, we offer a list of tags for varying compatibility: -1. `stable` - Updates with each major, minor, and patch version updates -2. `2` - Updates with each minor and patch version updates under `2.x.x` -3. `2.2` - Updates with each patch version update under `2.2.x` -4. `2.2.0` - Targets a specific version of the agent -5. **Note:** This list isn't exhaustive; for a full list check out the [logdna-agent dockerhub page](https://hub.docker.com/r/logdna/logdna-agent) +1. `latest` - Updates with each new revision including public betas +2. `stable` - Updates with each major, minor, and patch version update +3. `2` - Updates with each minor and patch version update under `2.x.x` +4. `2.2` - Updates with each patch version update under `2.2.x` +5. 
`2.2.0` - Targets a specific version of the agent + +**Note:** This list isn't exhaustive; for a full list check out the [logdna-agent Docker Hub page](https://hub.docker.com/r/logdna/logdna-agent) ## Uninstalling @@ -175,7 +177,7 @@ To enable Journald monitoring in the agent, add a new environment variable, `LOG ```console oc patch daemonset -n logdna-agent logdna-agent --type json -p '[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"LOGDNA_JOURNALD_PATHS","value":"/var/log/journald/-"}}]' ``` -* If you are modifying a yaml: +* If you are modifying a YAML file: 1. Add the new environment variable to the envs section of the DaemonSet Object in `k8s/agent-resources-openshift.yaml` [`spec.template.spec.containers.0.env`] 2. Apply the new configuration file, run `oc apply -f k8s/agent-resources-openshift.yaml` diff --git a/docs/README.md b/docs/README.md index e47226969..71809ec48 100644 --- a/docs/README.md +++ b/docs/README.md @@ -4,15 +4,12 @@ The LogDNA agent is a blazingly fast, resource efficient log collection client, that forwards logs to [LogDNA]. The 2.0+ version of this agent is written in [Rust] to ensure maximum performance, and when coupled with LogDNA's web application, provides a powerful log management tool for distributed systems, including [Kubernetes] clusters. 
-![LogDNA Dashboard] - [Rustc Version 1.42+]: https://img.shields.io/badge/rustc-1.42+-lightgray.svg [rustc]: https://blog.rust-lang.org/2020/03/12/Rust-1.42.html [Join us on LogDNA's Public Slack]: http://chat.logdna.com/ [LogDNA]: https://logdna.com [Rust]: https://www.rust-lang.org/ [Kubernetes]: https://kubernetes.io/ -[LogDNA Dashboard]: https://files.readme.io/ac5200b-Screen_Shot_2019-07-09_at_7.52.28_AM.png ## Table of Contents @@ -20,7 +17,8 @@ The LogDNA agent is a blazingly fast, resource efficient log collection client, * [Installing](#installing) * [Upgrading](#upgrading) * [Uninstalling](#uninstalling) - * [More Information](#more-information) + * [Run as Non-Root](#run-as-non-root) + * [Additional Installation Options](#additional-installation-options) * [Building](#building) * [Building on Linux](#building-on-linux) * [Building on Docker](#building-on-docker) @@ -31,30 +29,42 @@ The LogDNA agent is a blazingly fast, resource efficient log collection client, ## Managing Deployments -The agent has been tested for deployment to Kubernetes 1.9+ and OpenShift 4.6+ environments. +The agent is supported for Kubernetes 1.9+ and OpenShift 4.6+ environments. ### Installing +Environment-specific instructions for installing and deploying the agent to your cluster. + * [Installing Kubernetes](KUBERNETES.md#installing) * [Installing OpenShift](OPENSHIFT.md#installing) ### Upgrading +Environment-specific instructions for upgrading from old versions of the agent. + * [Upgrading Kubernetes](KUBERNETES.md#upgrading) * [Upgrading OpenShift](OPENSHIFT.md#upgrading) ### Uninstalling +Environment-specific instructions for removing the agent entirely. + * [Uninstalling Kubernetes](KUBERNETES.md#uninstalling) * [Uninstalling OpenShift](OPENSHIFT.md#uninstalling) -### More Information +### Run as Non-Root + +By default, the agent runs as root. Below are environment-specific instructions for running the agent as a non-root user. 
+ +* [Running as Non-Root on Kubernetes](KUBERNETES.md#run-as-non-root) +* [Running as Non-Root on OpenShift](OPENSHIFT.md#run-as-non-root) + +### Additional Installation Options More information about managing your deployments is documented for [Kubernetes](KUBERNETES.md) or [OpenShift](OPENSHIFT.md). This includes topics such as * Version specific upgrade paths -* Running the agent as a non-root user -* Collecting node logs through Journald +* Collecting system logs through Journald ## Building @@ -80,15 +90,15 @@ The resulting image can be found by listing the images: ```console foo@bar:~$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -logdna-agent-v2 dcd54a0 e471b3d8a409 22 seconds ago 135MB +REPOSITORY TAG IMAGE ID CREATED SIZE +logdna-agent-v2 dcd54a0 e471b3d8a409 22 seconds ago 135MB ``` ## Configuration ### Options -The agent accepts configuration from two sources, environment variables and a configuration yaml. The default configuration yaml location is `/etc/logdna/config.yaml`. The following options are available: +The agent accepts configuration from two sources: environment variables and a configuration YAML file. The default configuration YAML file is located at `/etc/logdna/config.yaml`. The following options are available: | Variable Name(s) | Description | Default | |-|-|-| @@ -149,4 +159,4 @@ Kubernetes and OpenShift Events are by default automatically captured by the agent * `never` - Never capture Events * __Note:__ The default option is `always` -> :warning: Due to a number of issues with Kubernetes, the agent collects events from the entire cluster including multiple nodes. To prevent duplicate logs when running multiple pods, the agents defer responsibilty of capturing Events to the oldest pod in the cluster. If that pod is killed, the next oldest pod will take over responsibility and continue from where the previous pod left off. 
+> :warning: Due to a ["won't fix" bug in the Kubernetes API](https://github.com/kubernetes/kubernetes/issues/41743), the agent collects events from the entire cluster including multiple nodes. To prevent duplicate logs when running multiple pods, the agents defer responsibility for capturing Events to the oldest pod in the cluster. If that pod is killed, the next oldest pod will take over responsibility and continue from where the previous pod left off.
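As a companion to the `kubectl patch`/`oc patch` Journald commands in the Kubernetes and OpenShift guides above, this is roughly what the appended entry looks like inside the DaemonSet manifest. The env var name and value come from the patch commands; the container name is an assumption (it is not stated in the patch, which addresses the container by index `0`):

```yaml
# Sketch of the env entry the Journald patch appends at
# /spec/template/spec/containers/0/env/- (surrounding fields
# abbreviated; container name assumed to match the DaemonSet).
spec:
  template:
    spec:
      containers:
        - name: logdna-agent
          env:
            - name: LOGDNA_JOURNALD_PATHS
              value: /var/log/journald/-
```

Editing the manifest and running `kubectl apply`/`oc apply` produces the same result as the one-line patch.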