fix(readmes): adding code block annotations (#7963)
russorat authored Aug 10, 2020
1 parent 2427142 commit 75e701c
Showing 36 changed files with 64 additions and 62 deletions.
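Every hunk in this commit follows the same pattern: the opening fence of an existing example block gains a language hint (`toml`, `sql`, `sh`, or `json`) so the block is syntax-highlighted when the README is rendered. A minimal sketch of the pattern, condensed from the plugins/inputs/burrow/README.md hunk later in this diff (not a verbatim hunk):

````diff
 ### Configuration

-```
+```toml
 [[inputs.burrow]]
   ## Burrow API endpoints in format "schema://host:port".
   ## Default is "http://localhost:8000".
```
````

The `sql` and `sh` hints are applied the same way to the sample-query and example-output blocks in the other READMEs below.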
2 changes: 1 addition & 1 deletion plugins/common/shim/README.md
@@ -48,7 +48,7 @@ execd plugins:
1. Configure Telegraf to call your new plugin binary. For an input, this would
look something like:

-```
+```toml
[[inputs.execd]]
command = ["/path/to/rand", "-config", "/path/to/plugin.conf"]
signal = "none"
2 changes: 1 addition & 1 deletion plugins/inputs/bcache/README.md
@@ -55,7 +55,7 @@ cache_readaheads

Using this configuration:

-```
+```toml
[bcache]
# Bcache sets path
# If not specified, then default is:
2 changes: 1 addition & 1 deletion plugins/inputs/bind/README.md
@@ -77,7 +77,7 @@ for more information.
These are some useful queries (to generate dashboards or other) to run against data from this
plugin:

-```
+```sql
SELECT non_negative_derivative(mean(/^A$|^PTR$/), 5m) FROM bind_counter \
WHERE "url" = 'localhost:8053' AND "type" = 'qtype' AND time > now() - 1h \
GROUP BY time(5m), "type"
2 changes: 1 addition & 1 deletion plugins/inputs/burrow/README.md
@@ -7,7 +7,7 @@ Supported Burrow version: `1.x`

### Configuration

-```
+```toml
[[inputs.burrow]]
## Burrow API endpoints in format "schema://host:port".
## Default is "http://localhost:8000".
4 changes: 2 additions & 2 deletions plugins/inputs/ceph/README.md
@@ -12,7 +12,7 @@ a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs
The resulting JSON is parsed and grouped into collections, based on top-level key. Top-level keys are
used as collection tags, and all sub-keys are flattened. For example:

-```
+```json
{
"paxos": {
"refresh": 9363435,
@@ -44,7 +44,7 @@ the cluster. The currently supported commands are:

### Configuration:

-```
+```toml
# Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
[[inputs.ceph]]
## This is the recommended interval to poll. Too frequent and you will lose
2 changes: 1 addition & 1 deletion plugins/inputs/couchbase/README.md
@@ -2,7 +2,7 @@

## Configuration:

-```
+```toml
# Read per-node and per-bucket metrics from Couchbase
[[inputs.couchbase]]
## specify servers via a url matching:
2 changes: 1 addition & 1 deletion plugins/inputs/dovecot/README.md
@@ -8,7 +8,7 @@ the [upgrading steps][upgrading].

### Configuration:

-```
+```toml
# Read metrics about dovecot servers
[[inputs.dovecot]]
## specify dovecot servers via an address:port list
2 changes: 1 addition & 1 deletion plugins/inputs/http_response/README.md
@@ -4,7 +4,7 @@ This input plugin checks HTTP/HTTPS connections.

### Configuration:

-```
+```toml
# HTTP/HTTPS request given an address a method and a timeout
[[inputs.http_response]]
## Deprecated in 1.12, use 'urls'
2 changes: 1 addition & 1 deletion plugins/inputs/icinga2/README.md
@@ -51,7 +51,7 @@ services and hosts. You can read Icinga2's documentation for their remote API

### Sample Queries:

-```
+```sql
SELECT * FROM "icinga2_services" WHERE state_code = 0 AND time > now() - 24h // Service with OK status
SELECT * FROM "icinga2_services" WHERE state_code = 1 AND time > now() - 24h // Service with WARNING status
SELECT * FROM "icinga2_services" WHERE state_code = 2 AND time > now() - 24h // Service with CRITICAL status
6 changes: 3 additions & 3 deletions plugins/inputs/lanz/README.md
@@ -62,17 +62,17 @@ For more details on the metrics see https://github.com/aristanetworks/goarista/b
### Sample Queries

Get the max tx_latency for the last hour for all interfaces on all switches.
-```
+```sql
SELECT max("tx_latency") AS "max_tx_latency" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```

Get the max tx_latency for the last hour for all interfaces on all switches.
-```
+```sql
SELECT max("queue_size") AS "max_queue_size" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
```

Get the max buffer_size for over the last hour for all switches.
-```
+```sql
SELECT max("buffer_size") AS "max_buffer_size" FROM "global_buffer_usage_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname"
```

2 changes: 1 addition & 1 deletion plugins/inputs/minecraft/README.md
@@ -67,7 +67,7 @@ View the current scores with a command, substituting your player name:
### Sample Queries:

Get the number of jumps per player in the last hour:
-```
+```sql
SELECT SPREAD("jumps") FROM "minecraft" WHERE time > now() - 1h GROUP BY "player"
```

2 changes: 1 addition & 1 deletion plugins/inputs/modbus/README.md
@@ -129,7 +129,7 @@ from unsigned values).

### Example Output

-```
+```sh
$ ./telegraf -config telegraf.conf -input-filter modbus -test
modbus.InputRegisters,host=orangepizero Current=0,Energy=0,Frecuency=60,Power=0,PowerFactor=0,Voltage=123.9000015258789 1554079521000000000
```
2 changes: 1 addition & 1 deletion plugins/inputs/mysql/README.md
@@ -117,7 +117,7 @@ InfluxDB due to the change of types. For this reason, you should keep the

If preserving your old data is not required you may wish to drop conflicting
measurements:
-```
+```sql
DROP SERIES from mysql
DROP SERIES from mysql_variables
DROP SERIES from mysql_innodb
4 changes: 2 additions & 2 deletions plugins/inputs/neptune_apex/README.md
@@ -71,15 +71,15 @@ programming. These tags are clearly marked in the list below and should be consi


Get the max, mean, and min for the temperature in the last hour:
-```
+```sql
SELECT mean("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time >= now() - 6h GROUP BY time(20s)
```

### Troubleshooting

#### sendRequest failure
This indicates a problem communicating with the local Apex controller. If on Mac/Linux, try curl:
-```
+```sh
$ curl apex.local/cgi-bin/status.xml
```
to isolate the problem.
4 changes: 2 additions & 2 deletions plugins/inputs/net/NET_README.md
@@ -53,7 +53,7 @@ Under Linux the system wide protocol metrics have the interface=all tag.

You can use the following query to get the upload/download traffic rate per second for all interfaces in the last hour. The query uses the [derivative function](https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative) which calculates the rate of change between subsequent field values.

-```
+```sql
SELECT derivative(first(bytes_recv), 1s) as "download bytes/sec", derivative(first(bytes_sent), 1s) as "upload bytes/sec" FROM net WHERE time > now() - 1h AND interface != 'all' GROUP BY time(10s), interface fill(0);
```

@@ -70,4 +70,4 @@ net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packet
$ ./telegraf --config telegraf.conf --input-filter net --test
net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000
net,interface=all,host=HOST ip_reasmfails=0i,icmp_insrcquenchs=0i,icmp_outtimestamps=0i,ip_inhdrerrors=0i,ip_inunknownprotos=0i,icmp_intimeexcds=10i,icmp_outaddrmasks=0i,icmp_indestunreachs=11005i,icmpmsg_outtype0=6i,tcp_retranssegs=14669i,udplite_outdatagrams=0i,ip_reasmtimeout=0i,ip_outnoroutes=2577i,ip_inaddrerrors=186i,icmp_outaddrmaskreps=0i,tcp_incsumerrors=0i,tcp_activeopens=55965i,ip_reasmoks=0i,icmp_inechos=6i,icmp_outdestunreachs=9417i,ip_reasmreqds=0i,icmp_outtimestampreps=0i,tcp_rtoalgorithm=1i,icmpmsg_intype3=11005i,icmpmsg_outtype69=129i,tcp_outsegs=2777459i,udplite_rcvbuferrors=0i,ip_fragoks=0i,icmp_inmsgs=13398i,icmp_outerrors=0i,tcp_outrsts=14951i,udplite_noports=0i,icmp_outmsgs=11517i,icmp_outechoreps=6i,icmpmsg_intype11=10i,icmp_inparmprobs=0i,ip_forwdatagrams=0i,icmp_inechoreps=1909i,icmp_outredirects=0i,icmp_intimestampreps=0i,icmpmsg_intype5=468i,tcp_rtomax=120000i,tcp_maxconn=-1i,ip_fragcreates=0i,ip_fragfails=0i,icmp_inredirects=468i,icmp_outtimeexcds=0i,icmp_outechos=1965i,icmp_inaddrmasks=0i,tcp_inerrs=389i,tcp_rtomin=200i,ip_defaultttl=64i,ip_outrequests=3366408i,ip_forwarding=2i,udp_incsumerrors=0i,udp_indatagrams=522136i,udplite_incsumerrors=0i,ip_outdiscards=871i,icmp_inerrors=958i,icmp_outsrcquenchs=0i,icmpmsg_intype0=1909i,tcp_insegs=3580226i,udp_outdatagrams=577265i,udp_rcvbuferrors=0i,udplite_sndbuferrors=0i,icmp_incsumerrors=0i,icmp_outparmprobs=0i,icmpmsg_outtype3=9417i,tcp_attemptfails=2652i,udplite_inerrors=0i,udplite_indatagrams=0i,ip_inreceives=4172969i,icmpmsg_outtype8=1965i,tcp_currestab=59i,udp_noports=5961i,ip_indelivers=4099279i,ip_indiscards=0i,tcp_estabresets=5818i,udp_sndbuferrors=3i,icmp_intimestamps=0i,icmpmsg_intype8=6i,udp_inerrors=0i,icmp_inaddrmaskreps=0i,tcp_passiveopens=452i 1492831540000000000
-``
+```
6 changes: 3 additions & 3 deletions plugins/inputs/nginx/README.md
@@ -2,7 +2,7 @@

### Configuration:

-```
+```toml
# Read Nginx's basic status information (ngx_http_stub_status_module)
[[inputs.nginx]]
## An array of Nginx stub_status URI to gather stats.
@@ -39,14 +39,14 @@
### Example Output:

Using this configuration:
-```
+```toml
[[inputs.nginx]]
## An array of Nginx stub_status URI to gather stats.
urls = ["http://localhost/status"]
```

When run with:
-```
+```sh
./telegraf --config telegraf.conf --input-filter nginx --test
```

6 changes: 3 additions & 3 deletions plugins/inputs/nginx_plus/README.md
@@ -7,7 +7,7 @@ Structures for Nginx Plus have been built based on history of

### Configuration:

-```
+```toml
# Read Nginx Plus' advanced status information
[[inputs.nginx_plus]]
## An array of Nginx status URIs to gather stats.
@@ -81,14 +81,14 @@ Structures for Nginx Plus have been built based on history of
### Example Output:

Using this configuration:
-```
+```toml
[[inputs.nginx_plus]]
## An array of Nginx Plus status URIs to gather stats.
urls = ["http://localhost/status"]
```

When run with:
-```
+```sh
./telegraf -config telegraf.conf -input-filter nginx_plus -test
```

6 changes: 3 additions & 3 deletions plugins/inputs/nginx_plus_api/README.md
@@ -4,7 +4,7 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use

### Configuration:

-```
+```toml
# Read Nginx Plus API advanced status information
[[inputs.nginx_plus_api]]
## An array of Nginx API URIs to gather stats.
@@ -201,14 +201,14 @@ Nginx Plus is a commercial version of the open source web server Nginx. The use
### Example Output:

Using this configuration:
-```
+```toml
[[inputs.nginx_plus_api]]
## An array of Nginx Plus API URIs to gather stats.
urls = ["http://localhost/api"]
```

When run with:
-```
+```sh
./telegraf -config telegraf.conf -input-filter nginx_plus_api -test
```

4 changes: 2 additions & 2 deletions plugins/inputs/nginx_upstream_check/README.md
@@ -10,7 +10,7 @@ checks. This information can be exported in JSON format and parsed by this input

### Configuration:

-```
+```toml
## An URL where Nginx Upstream check module is enabled
## It should be set to return a JSON formatted response
url = "http://127.0.0.1/status?format=json"
@@ -63,7 +63,7 @@ state of every server and, possible, add some monitoring to watch over it. Influ
### Example Output:

When run with:
-```
+```sh
./telegraf --config telegraf.conf --input-filter nginx_upstream_check --test
```

6 changes: 3 additions & 3 deletions plugins/inputs/nginx_vts/README.md
@@ -5,7 +5,7 @@ For module configuration details please see its [documentation](https://github.c

### Configuration:

-```
+```toml
# Read nginx status information using nginx-module-vts module
[[inputs.nginx_vts]]
## An array of Nginx status URIs to gather stats.
@@ -99,14 +99,14 @@ For module configuration details please see its [documentation](https://github.c
### Example Output:

Using this configuration:
-```
+```toml
[[inputs.nginx_vts]]
## An array of Nginx status URIs to gather stats.
urls = ["http://localhost/status"]
```

When run with:
-```
+```sh
./telegraf -config telegraf.conf -input-filter nginx_vts -test
```

4 changes: 2 additions & 2 deletions plugins/inputs/nvidia_smi/README.md
@@ -57,7 +57,7 @@ You'll need to escape the `\` within the `telegraf.conf` like this: `C:\\Program

The below query could be used to alert on the average temperature of the your GPUs over the last minute

-```
+```sql
SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP BY time(1m), "index", "name", "host"
```

@@ -66,7 +66,7 @@ SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP B
Check the full output by running `nvidia-smi` binary manually.

Linux:
-```
+```sh
sudo -u telegraf -- /usr/bin/nvidia-smi -q -x
```

4 changes: 3 additions & 1 deletion plugins/inputs/openldap/README.md
@@ -35,7 +35,9 @@ To use this plugin you must enable the [slapd monitoring](https://www.openldap.o

All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and **monitorOpCompleted** attributes are gathered based on this LDAP query:

-```(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))```
+```
+(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))
+```

Metric names are based on their entry DN with the cn=Monitor base removed. If `reverse_metric_names` is not set, metrics are based on their DN. If `reverse_metric_names` is set to `true`, the names are reversed. This is recommended as it allows the names to sort more naturally.

2 changes: 1 addition & 1 deletion plugins/inputs/postgresql/README.md
@@ -57,7 +57,7 @@ host=localhost user=pgotest dbname=app_production sslmode=require sslkey=/etc/te
```

### Configuration example
-```
+```toml
[[inputs.postgresql]]
address = "postgres://telegraf@localhost/someDB"
ignored_databases = ["template0", "template1"]
4 changes: 2 additions & 2 deletions plugins/inputs/postgresql_extensible/README.md
@@ -11,7 +11,7 @@ The example below has two queries are specified, with the following parameters:
* The name of the measurement
* A list of the columns to be defined as tags

-```
+```toml
[[inputs.postgresql_extensible]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...
@@ -76,7 +76,7 @@ using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs
# Sample Queries :
- telegraf.conf postgresql_extensible queries (assuming that you have configured
correctly your connection)
-```
+```toml
[[inputs.postgresql_extensible.query]]
sqlquery="SELECT * FROM pg_stat_database"
version=901
2 changes: 1 addition & 1 deletion plugins/inputs/powerdns/README.md
@@ -4,7 +4,7 @@ The powerdns plugin gathers metrics about PowerDNS using unix socket.

### Configuration:

-```
+```toml
# Description
[[inputs.powerdns]]
# An array of sockets to gather stats about.
2 changes: 1 addition & 1 deletion plugins/inputs/prometheus/README.md
@@ -103,7 +103,7 @@ If you want to monitor Caddy, you need to use Caddy with its Prometheus plugin:
* Restart Caddy
* Configure Telegraf to fetch metrics on it:

-```
+```toml
[[inputs.prometheus]]
# ## An array of urls to scrape metrics from.
urls = ["http://localhost:9180/metrics"]
4 changes: 2 additions & 2 deletions plugins/inputs/redis/README.md
@@ -2,7 +2,7 @@

### Configuration:

-```
+```toml
# Read Redis's basic status information
[[inputs.redis]]
## specify servers via a url matching:
@@ -153,7 +153,7 @@ Additionally the plugin also calculates the hit/miss ratio (keyspace\_hitrate) a
### Example Output:

Using this configuration:
-```
+```toml
[[inputs.redis]]
## specify servers via a url matching:
## [protocol://][:password]@address[:port]
2 changes: 1 addition & 1 deletion plugins/inputs/sensors/README.md
@@ -6,7 +6,7 @@ package installed.
This plugin collects sensor metrics with the `sensors` executable from the lm-sensor package.

### Configuration:
-```
+```toml
# Monitor sensors, requires lm-sensors package
[[inputs.sensors]]
## Remove numbers from field names.