Merge branch 'master' into az/update-readme
yzhan289 authored Nov 8, 2021
2 parents ea543d6 + b7b251f commit 9d05b85
Showing 96 changed files with 3,296 additions and 820 deletions.
10 changes: 5 additions & 5 deletions activemq/README.md
@@ -2,7 +2,7 @@

## Overview

-The ActiveMQ check collects metrics for brokers and queues, producers and consumers, and more.
+The ActiveMQ check collects metrics for brokers, queues, producers, consumers, and more.

**Note:** This check also supports ActiveMQ Artemis (future ActiveMQ version `6`) and reports metrics under the `activemq.artemis` namespace. See [metadata.csv][1] for a list of metrics provided by this integration.

@@ -14,7 +14,7 @@ The ActiveMQ check collects metrics for brokers and queues, producers and consum

The Agent's ActiveMQ check is included in the [Datadog Agent][3] package, so you don't need to install anything else on your ActiveMQ nodes.

-The check collects metrics via JMX, so you need a JVM on each node so the Agent can fork [jmxfetch][4]. We recommend using an Oracle-provided JVM.
+The check collects metrics through JMX, so you need a JVM on each node so the Agent can fork [jmxfetch][4]. Datadog recommends using an Oracle-provided JVM.

### Configuration

@@ -100,7 +100,7 @@ partial -->

_Available for Agent versions >6.0_

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][11].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][11].

| Parameter | Value |
| -------------- | ------------------------------------------------------ |
@@ -117,7 +117,7 @@ Collecting logs is disabled by default in the Datadog Agent. To enable it, see [

### Metrics

-See [metadata.csv][1] for a list of metrics provided by this integration. Metrics associated with ActiveMQ Artemis flavor have `artemis` in their metric name, all others are reported for ActiveMQ "classic".
+See [metadata.csv][1] for a list of metrics provided by this integration. Metrics associated with ActiveMQ Artemis flavor have `artemis` in their metric name, all others are reported for ActiveMQ "classic".

### Events

@@ -148,7 +148,7 @@ Additional helpful documentation, links, and articles:
[8]: https://github.com/DataDog/integrations-core/blob/master/activemq/datadog_checks/activemq/data/metrics.yaml
[9]: https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent
[10]: https://docs.datadoghq.com/agent/kubernetes/integrations/
-[11]: https://docs.datadoghq.com/agent/kubernetes/log/?tab=containerinstallation#setup
+[11]: https://docs.datadoghq.com/agent/kubernetes/log/
[12]: https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information
[13]: https://github.com/DataDog/integrations-core/blob/master/activemq/assets/service_checks.json
[14]: https://docs.datadoghq.com/help/
2 changes: 1 addition & 1 deletion aerospike/README.md
@@ -73,7 +73,7 @@ For containerized environments, see the [Autodiscovery Integration Templates][5]

_Available for Agent versions >6.0_

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][6].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][6].

| Parameter | Value |
| -------------- | --------------------------------------------------- |
20 changes: 10 additions & 10 deletions airflow/README.md
@@ -20,22 +20,22 @@ In addition to metrics, the Datadog Agent also sends service checks related to A
All steps below are needed for the Airflow integration to work properly. Before you begin, [install the Datadog Agent][3] version `>=6.17` or `>=7.17`, which includes the StatsD/DogStatsD mapping feature.

### Configuration
-There are two forms of the Airflow integration. There is the Datadog Agent integration which will make requests to a provided endpoint for Airflow to report whether it can connect and is healthy. Then there is the Airflow StatsD portion where Airflow can be configured to send metrics to the Datadog Agent, which can remap the Airflow notation to a Datadog notation.
+There are two forms of the Airflow integration. There is the Datadog Agent integration which makes requests to a provided endpoint for Airflow to report whether it can connect and is healthy. Then there is the Airflow StatsD portion where Airflow can be configured to send metrics to the Datadog Agent, which can remap the Airflow notation to a Datadog notation.

<!-- xxx tabs xxx -->
<!-- xxx tab "Host" xxx -->

#### Host

-##### Configure Datadog Agent Airflow Integration
+##### Configure Datadog Agent Airflow integration

Configure the Airflow check included in the [Datadog Agent][4] package to collect health metrics and service checks. This can be done by editing the `url` within the `airflow.d/conf.yaml` file, in the `conf.d/` folder at the root of your Agent's configuration directory, to start collecting your Airflow service checks. See the [sample airflow.d/conf.yaml][5] for all available configuration options.

##### Connect Airflow to DogStatsD

Connect Airflow to DogStatsD (included in the Datadog Agent) by using the Airflow `statsd` feature to collect metrics. For more information about the metrics reported by the Airflow version used and the additional configuration options, see the Airflow documentation below:
-- [Airflow Metrics Documentation][6]
-- [Airflow Metrics Configuration Documentation][7]
+- [Airflow Metrics][6]
+- [Airflow Metrics Configuration][7]

**Note**: Presence or absence of StatsD metrics reported by Airflow might vary depending on the Airflow Executor used. For example: `airflow.ti_failures/successes, airflow.operator_failures/successes, airflow.dag.task.duration` are [not reported for `KubernetesExecutor`][8].
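The remapping described above — Airflow's dotted StatsD names rewritten into Datadog notation — can be pictured with a small sketch. This is an illustration of the idea only, not the Agent's `dogstatsd_mapper_profiles` implementation: the `remap` helper, its regex-based rule, and the example metric name are invented here (the Agent's real rules use a glob-like pattern syntax with `$1` capture references).

```python
import re

# Toy model of a mapper rule: match a dotted StatsD name, emit a fixed
# metric name plus tags built from the captured pieces.
def remap(metric_name, pattern, target_name, tag_template):
    match = re.fullmatch(pattern, metric_name)
    if match is None:
        return None  # the rule does not apply to this metric
    return target_name, [tag_template.format(*match.groups())]

result = remap(
    "airflow.operator_failures_BashOperator",
    r"airflow\.operator_failures_(.+)",
    "airflow.operator_failures",
    "operator_name:{0}",
)
# result is ("airflow.operator_failures", ["operator_name:BashOperator"])
```

A metric that does not match the pattern simply falls through to the next rule (here, `remap` returns `None`).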

@@ -243,7 +243,7 @@ _Available for Agent versions >6.0_
pattern: \[\d{4}\-\d{2}\-\d{2}
```

-Caveat: By default Airflow uses this log file template for tasks: `log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log`. The number of log files will grow quickly if not cleaned regularly. This pattern is used by Airflow UI to display logs individually for each executed task.
+Caveat: By default Airflow uses this log file template for tasks: `log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log`. The number of log files grows quickly if not cleaned regularly. This pattern is used by the Airflow UI to display logs individually for each executed task.

If you do not view logs in the Airflow UI, Datadog recommends this configuration in `airflow.cfg`: `log_filename_template = dag_tasks.log`. Then rotate this file with logrotate and use this configuration:

@@ -265,7 +265,7 @@

#### Containerized

-##### Configure Datadog Agent Airflow Integration
+##### Configure Datadog Agent Airflow integration

For containerized environments, see the [Autodiscovery Integration Templates][8] for guidance on applying the parameters below.

@@ -278,8 +278,8 @@ For containerized environments, see the [Autodiscovery Integration Templates][8]
##### Connect Airflow to DogStatsD

Connect Airflow to DogStatsD (included in the Datadog Agent) by using the Airflow `statsd` feature to collect metrics. For more information about the metrics reported by the Airflow version used and the additional configuration options, see the Airflow documentation below:
-- [Airflow Metrics Documentation][6]
-- [Airflow Metrics Configuration Documentation][7]
+- [Airflow Metrics][6]
+- [Airflow Metrics Configuration][7]

**Note**: Presence or absence of StatsD metrics reported by Airflow might vary depending on the Airflow Executor used. For example: `airflow.ti_failures/successes, airflow.operator_failures/successes, airflow.dag.task.duration` are [not reported for `KubernetesExecutor`][8].

@@ -299,7 +299,7 @@ The Airflow StatsD configuration can be enabled with the following environment v
fieldRef:
fieldPath: status.hostIP
```
-The environment variable for the host endpoint `AIRFLOW__SCHEDULER__STATSD_HOST` is supplied with the node's host IP address to route the StatsD data to the Datadog Agent pod on the same node as the Airflow pod. This setup also requires the Agent to have a `hostPort` open for this port `8125` and accepting non-local StatsD traffic. For more information, see [DogStatsD on Kubernetes Setup here][12].
+The environment variable for the host endpoint `AIRFLOW__SCHEDULER__STATSD_HOST` is supplied with the node's host IP address to route the StatsD data to the Datadog Agent pod on the same node as the Airflow pod. This setup also requires the Agent to have a `hostPort` open for this port `8125` and accepting non-local StatsD traffic. For more information, see [DogStatsD on Kubernetes Setup][12].

This should direct the StatsD traffic from the Airflow container to a Datadog Agent ready to accept the incoming data. The last step is to update the Datadog Agent with the corresponding `dogstatsd_mapper_profiles`. This can be done by copying the `dogstatsd_mapper_profiles` provided in the [Host installation][13] into your `datadog.yaml` file, or by deploying your Datadog Agent with the equivalent JSON configuration in the environment variable `DD_DOGSTATSD_MAPPER_PROFILES`. In Kubernetes, the equivalent environment variable notation is:
```yaml
@@ -315,7 +315,7 @@

_Available for Agent versions >6.0_

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][15].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][15].

| Parameter | Value |
|----------------|-------------------------------------------------------|
4 changes: 1 addition & 3 deletions amazon_eks/README.md
@@ -14,14 +14,12 @@ Additionally, [Amazon EKS Managed Node Groups][2] and [Amazon EKS on AWS Outpost

### Metric collection

-Monitoring EKS requires that you set up the Datadog integrations for:
+Monitoring EKS requires that you set up one of the following Datadog integrations along with integrations for any other AWS services you're running with EKS, such as [ELB][7].

- [Kubernetes][4]
- [AWS][5]
- [AWS EC2][6]

-along with integrations for any other AWS services you're running with EKS (e.g., [ELB][7])

### Log collection

_Available for Agent versions >6.0_
4 changes: 2 additions & 2 deletions ambari/README.md
@@ -82,7 +82,7 @@ For containerized environments, see the [Autodiscovery Integration Templates][5]

_Available for Agent versions >6.0_

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][6].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][6].

| Parameter | Value |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -107,7 +107,7 @@ This integration collects for every host in every cluster the following system m
- network
- process

-If service metrics collection is enabled with `collect_service_metrics` this integration collects for each whitelisted service component the metrics with headers in the white list.
+If service metrics collection is enabled with `collect_service_metrics`, this integration collects, for each included service component, the metrics with headers in the inclusion list.

### Metrics

6 changes: 3 additions & 3 deletions apache/README.md
@@ -92,7 +92,7 @@ LABEL "com.datadoghq.ad.instances"='[{"apache_status_url": "http://%%host%%/serv
##### Log collection


-Collecting logs is disabled by default in the Datadog Agent. To enable it, see the [Docker log collection documentation][8].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Docker Log Collection][8].

Then, set [Log Integrations][9] as Docker labels:

@@ -133,7 +133,7 @@ spec:
##### Log collection


-Collecting logs is disabled by default in the Datadog Agent. To enable it, see the [Kubernetes log collection documentation][12].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][12].

Then, set [Log Integrations][9] as pod annotations. This can also be configured with [a file, a configmap, or a key-value store][13].

@@ -178,7 +178,7 @@ Set [Autodiscovery Integrations Templates][7] as Docker labels on your applicati
##### Log collection


-Collecting logs is disabled by default in the Datadog Agent. To enable it, see the [ECS log collection documentation][14].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [ECS Log Collection][14].

Then, set [Log Integrations][9] as Docker labels:

2 changes: 1 addition & 1 deletion aspdotnet/README.md
@@ -22,7 +22,7 @@ The ASP.NET check is included in the [Datadog Agent][1] package, so you don't ne
#### Log collection
ASP.NET uses IIS logging. Follow the [setup instructions for IIS][5] in order to view logs related to ASP.NET requests and failures.

-Unhandled 500 level exceptions and events related to your ASP.NET application can be viewed via the Windows Application EventLog.
+Unhandled 500 level exceptions and events related to your ASP.NET application can be viewed with the Windows Application EventLog.

### Validation

2 changes: 1 addition & 1 deletion azure_iot_edge/README.md
@@ -2,7 +2,7 @@

## Overview

-[Azure IoT Edge][1] is a fully managed service to deploy Cloud workloads to run on Internet of Things (IoT) Edge devices via standard containers.
+[Azure IoT Edge][1] is a fully managed service to deploy Cloud workloads to run on Internet of Things (IoT) Edge devices using standard containers.

Use the Datadog-Azure IoT Edge integration to collect metrics and health status from IoT Edge devices.

2 changes: 1 addition & 1 deletion cacti/README.md
@@ -32,7 +32,7 @@ sudo yum install rrdtool-devel

#### Python bindings

-Now add the `rrdtool` Python package to the Agent with the following command:
+Add the `rrdtool` Python package to the Agent with the following command:

```shell
sudo -u dd-agent /opt/datadog-agent/embedded/bin/pip install rrdtool
4 changes: 2 additions & 2 deletions cassandra/README.md
@@ -15,7 +15,7 @@ Get metrics from Cassandra in real time to:

The Cassandra check is included in the [Datadog Agent][2] package, so you don't need to install anything else on your Cassandra nodes. It's recommended to use Oracle's JDK for this integration.

-**Note**: This check has a limit of 350 metrics per instance. The number of returned metrics is indicated in the info page. You can specify the metrics you are interested in by editing the configuration below. To learn how to customize the metrics to collect visit the [JMX Checks documentation][3] for more detailed instructions. If you need to monitor more metrics, contact [Datadog support][4].
+**Note**: This check has a limit of 350 metrics per instance. The number of returned metrics is indicated in the info page. You can specify the metrics you are interested in by editing the configuration below. To learn how to customize the metrics to collect, see the [JMX documentation][3] for detailed instructions. If you need to monitor more metrics, contact [Datadog support][4].

### Configuration

@@ -80,7 +80,7 @@ For containerized environments, see the [Autodiscovery with JMX][9] guide.

_Available for Agent versions >6.0_

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][10].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][10].

| Parameter | Value |
| -------------- | ------------------------------------------------------ |
2 changes: 1 addition & 1 deletion ceph/README.md
@@ -67,7 +67,7 @@ _Available for Agent versions >6.0_

See [metadata.csv][7] for a list of metrics provided by this integration.

-**Note**: If you are running ceph luminous or later, you will not see the metric `ceph.osd.pct_used`.
+**Note**: If you are running Ceph luminous or later, the `ceph.osd.pct_used` metric is not included.

### Events

8 changes: 4 additions & 4 deletions cilium/README.md
@@ -70,7 +70,7 @@ Cilium contains two types of logs: `cilium-agent` and `cilium-operator`.
# (...)
```

-2. Mount the Docker socket to the Datadog Agent as done in [this manifest][6] or mount the `/var/log/pods` directory if you are not using Docker.
+2. Mount the Docker socket to the Datadog Agent through the manifest, or mount the `/var/log/pods` directory if you are not using Docker. For example manifests, see the [Kubernetes Installation instructions for DaemonSet][6].

3. [Restart the Agent][5].

@@ -91,7 +91,7 @@ For containerized environments, see the [Autodiscovery Integration Templates][2]

##### Log collection

-Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes log collection documentation][7].
+Collecting logs is disabled by default in the Datadog Agent. To enable it, see [Kubernetes Log Collection][7].

| Parameter | Value |
|----------------|-------------------------------------------|
@@ -112,7 +112,7 @@ See [metadata.csv][9] for a list of all metrics provided by this integration.

### Events

-Cilium does not include any events.
+The Cilium integration does not include any events.

### Service Checks

@@ -127,7 +127,7 @@ Need help? Contact [Datadog support][11].
[3]: https://app.datadoghq.com/account/settings#agent
[4]: https://github.com/DataDog/integrations-core/blob/master/cilium/datadog_checks/cilium/data/conf.yaml.example
[5]: https://docs.datadoghq.com/agent/guide/agent-commands/#start-stop-and-restart-the-agent
-[6]: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest
+[6]: https://docs.datadoghq.com/agent/kubernetes/?tab=daemonset#installation
[7]: https://docs.datadoghq.com/agent/kubernetes/log/
[8]: https://docs.datadoghq.com/agent/guide/agent-commands/#agent-status-and-information
[9]: https://github.com/DataDog/integrations-core/blob/master/cilium/metadata.csv
2 changes: 1 addition & 1 deletion clickhouse/tests/conftest.py
@@ -21,7 +21,7 @@ def dd_environment():
'clickhouse-0{}'.format(i + 1), 'Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log'
)
)
-    with docker_run(common.COMPOSE_FILE, conditions=conditions, sleep=10):
+    with docker_run(common.COMPOSE_FILE, conditions=conditions, sleep=10, attempts=2):
yield common.CONFIG
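The `attempts=2` added here lets the test-environment spin-up retry once if a readiness condition fails. A minimal sketch of that retry pattern, under assumed semantics — `docker_run_sketch`, `start`, and `flaky_condition` are invented for illustration and this is not the `datadog_checks.dev` implementation:

```python
import contextlib

# Sketch of "attempts" semantics: retry the whole spin-up (start plus
# readiness conditions), re-raising the last failure only after every
# attempt is exhausted.
@contextlib.contextmanager
def docker_run_sketch(start, conditions, attempts=1):
    last_error = None
    for _ in range(attempts):
        try:
            start()
            for condition in conditions:
                condition()
        except Exception as exc:  # flaky spin-up: remember and retry
            last_error = exc
            continue
        break
    else:
        raise last_error
    yield

checks = []

def flaky_condition():
    # Fails the first time it runs, succeeds afterwards.
    checks.append(1)
    if len(checks) < 2:
        raise RuntimeError("environment not ready yet")

ready = False
with docker_run_sketch(start=lambda: None, conditions=[flaky_condition], attempts=2):
    ready = True
```

With `attempts=1` the first `RuntimeError` would propagate; with `attempts=2` the second pass succeeds and the `with` body runs.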


2 changes: 0 additions & 2 deletions couch/assets/configuration/spec.yaml
@@ -30,7 +30,6 @@ files:
example:
- <DATABASE_1>
- <DATABASE_2>
-display_default: all
- name: db_exclude
description: |
The `db_exclude` should contain the names of any databases meant to be excluded
@@ -43,7 +42,6 @@
example:
- <DATABASE_1>
- <DATABASE_2>
-display_default: null
- name: max_dbs_per_check
description: Number of databases to scan per check.
value:
18 changes: 18 additions & 0 deletions couch/datadog_checks/couch/config_models/__init__.py
@@ -0,0 +1,18 @@
# (C) Datadog, Inc. 2021-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)
from .instance import InstanceConfig
from .shared import SharedConfig


class ConfigMixin:
_config_model_instance: InstanceConfig
_config_model_shared: SharedConfig

@property
def config(self) -> InstanceConfig:
return self._config_model_instance

@property
def shared_config(self) -> SharedConfig:
return self._config_model_shared
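The new mixin assumes the base check populates `_config_model_instance` and `_config_model_shared` during initialization. A self-contained sketch of how a check then reads typed settings — the `@dataclass` stubs, their defaults, and `CouchCheckSketch` are invented for illustration (the generated models are pydantic classes built from the integration's `spec.yaml`):

```python
from dataclasses import dataclass

# Stand-ins for the generated models; field names echo the couch spec
# (max_dbs_per_check appears there) but the defaults are illustrative.
@dataclass
class InstanceConfig:
    server: str
    max_dbs_per_check: int = 50

@dataclass
class SharedConfig:
    service: str = ""

class ConfigMixin:
    @property
    def config(self) -> InstanceConfig:
        return self._config_model_instance

    @property
    def shared_config(self) -> SharedConfig:
        return self._config_model_shared

class CouchCheckSketch(ConfigMixin):
    def __init__(self, instance, shared):
        # The real base class builds and validates these from user
        # configuration; here the stubs are assigned directly.
        self._config_model_instance = InstanceConfig(**instance)
        self._config_model_shared = SharedConfig(**shared)

check = CouchCheckSketch({"server": "http://localhost:5984"}, {})
```

The payoff is that check code reads `check.config.server` as a typed attribute instead of indexing into a raw `instance` dict.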
