Commit b5809f6

Daniel Martins committed
Improve formatting

1 parent d9f3ca1 commit b5809f6

File tree

1 file changed (+15, -14 lines)


README.md

Lines changed: 15 additions & 14 deletions
@@ -15,11 +15,10 @@ me. Pull Requests are welcome!
   - [Cleaning Up](#cleaning-up)
 - [Prometheus Overview](#prometheus-overview)
   - [Push vs Pull](#push-vs-pull)
-    - [Metrics Endpoint](#metrics-endpoint)
-  - [Time Series and Data Points](#time-series-and-data-points)
+  - [Metrics Endpoint](#metrics-endpoint)
   - [Duplicate Metrics Names?](#duplicate-metrics-names)
   - [Monitoring Uptime](#monitoring-uptime)
-      - [A Basic Uptime Alert](#a-basic-uptime-alert)
+    - [A Basic Uptime Alert](#a-basic-uptime-alert)
 - [Instrumenting Your Applications](#instrumenting-your-applications)
   - [Measuring Request Durations](#measuring-request-durations)
   - [Quantile Estimation Errors](#quantile-estimation-errors)
@@ -124,7 +123,7 @@ other tools in the monitoring space regarding scope, data model, and storage.
 Now, if the application doesn't push metrics to the metrics server, how do
 the application's metrics end up in Prometheus?
 
-#### Metrics Endpoint
+### Metrics Endpoint
 
 Applications expose metrics to Prometheus via a _metrics endpoint_. To see how
 this works, let's start everything by running `docker-compose up -d` if you
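
A scrape of such an endpoint returns Prometheus's plain-text exposition format. The sample below is illustrative rather than the README's actual snippet; exact metrics and help strings vary by application:

```
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.530176e+07
```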
@@ -199,8 +198,6 @@ In this snippet alone we can notice a few interesting things:
 But how does this text-based response turn into data points in a time series
 database?
 
-### Time Series and Data Points
-
 The best way to understand this is by running a few simple queries.
 
 Open the Prometheus UI at <http://localhost:9090/graph>, type
@@ -231,7 +228,7 @@ Prometheus UI):
 
 | Element | Value |
 |---------|-------|
-| process_resident_memory_bytes{instance="grafana:3000",job="grafana"} | 40861696@1530461477.446 43298816@1530461482.447 43778048@1530461487.451 44785664@1530461492.447 44785664@1530461497.447 45043712@1530461502.448 45043712@1530461507.448 45301760@1530461512.451 45301760@1530461517.448 45301760@1530461522.448 45895680@1530461527.448 45895680@1530461532.447 |
+| process_resident_memory_bytes{instance="grafana:3000",job="grafana"} | 40861696@1530461477.446<br/>43298816@1530461482.447<br/>43778048@1530461487.451<br/>44785664@1530461492.447<br/>44785664@1530461497.447<br/>45043712@1530461502.448<br/>45043712@1530461507.448<br/>45301760@1530461512.451<br/>45301760@1530461517.448<br/>45301760@1530461522.448<br/>45895680@1530461527.448<br/>45895680@1530461532.447 |
 
 ### Duplicate Metrics Names?
 
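Each `value@timestamp` pair in the Value column is one stored data point; this is the console output of a range-vector selector. Given the roughly 5-second scrape interval implied by the timestamps, a query along these lines (illustrative) produces such a list:

```
process_resident_memory_bytes{instance="grafana:3000",job="grafana"}[1m]
```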
@@ -247,7 +244,8 @@ Prometheus, and our sample application all export a gauge metric under the
 same name. However, did you notice in the previous plot that somehow we were
 able to get a separate time series from each application?
 
-Quoting the [documentation](https://prometheus.io/docs/concepts/jobs_instances/):
+Quoting the
+[documentation](https://prometheus.io/docs/concepts/jobs_instances/):
 
 > In Prometheus terms, an endpoint you can scrape is called an **instance**,
 > usually corresponding to a single process. A collection of instances with
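
Those `instance` and `job` labels are what keep identically named series apart, so isolating one application's series only takes a label matcher, e.g. this illustrative query:

```
process_resident_memory_bytes{job="grafana"}
```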
@@ -264,8 +262,8 @@ exposing this metric, we can see three lines in that plot.
 
 ### Monitoring Uptime
 
-For each instance scrape, Prometheus stores an `up` metric with the value `1` when
-the instance is healthy, i.e. reachable, or `0` if the scrape failed.
+For each instance scrape, Prometheus stores an `up` metric with the value `1`
+when the instance is healthy, i.e. reachable, or `0` if the scrape failed.
 
 Try plotting the query `up` in the Prometheus UI.
 
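Plotting aside, the same metric answers practical questions directly; the following standard PromQL expressions (shown as a sketch) filter down to failing targets and approximate an uptime ratio:

```
up == 0                 # only the targets whose most recent scrape failed
avg_over_time(up[1h])   # fraction of successful scrapes per target over 1h
```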
@@ -291,7 +289,7 @@ handles usage (in %) for all targets? **Tip:** the metric names end with
 
 ---
 
-##### A Basic Uptime Alert
+#### A Basic Uptime Alert
 
 We don't want to keep staring at dashboards on a big TV screen all day
 to be able to quickly detect issues in our applications; after all, we have
@@ -307,7 +305,8 @@ OpsGenie). It also takes care of silencing and inhibition of alerts.
 Configuring Alertmanager to send notifications to PagerDuty, or Slack, or whatever,
 is out of the scope of this workshop, but we can still play around with alerts.
 
-Let's define our first alerting rule in `config/prometheus/prometheus.rules.yml`:
+Let's define our first alerting rule in
+`config/prometheus/prometheus.rules.yml`:
 
 ```yaml
 # Uptime alerting rule
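
The diff shows only the top of the rule file. For orientation, a minimal uptime alerting rule in Prometheus's rule-file format looks something like the sketch below; the alert name, `for` duration, and annotations here are illustrative rather than the repository's actual rule:

```yaml
groups:
  - name: uptime
    rules:
      - alert: InstanceDown        # hypothetical alert name
        expr: up == 0              # fires for any target whose scrape fails
        for: 1m                    # must stay down for 1m before firing
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```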
@@ -368,7 +367,8 @@ We can measure request durations with
 [percentiles](https://en.wikipedia.org/wiki/Quantile) or
 [averages](https://en.wikipedia.org/wiki/Arithmetic_mean). However,
 it's not recommended to rely on averages to track request durations because
-averages can be very misleading (see the [References](#references) for a few posts on the pitfalls of averages and how percentiles can help).
+averages can be very misleading (see the [References](#references) for a few
+posts on the pitfalls of averages and how percentiles can help).
 
 In Prometheus, we can generate percentiles with summaries or histograms.
 
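With histograms, for example, percentiles are estimated at query time from bucket counters; a typical 99th-percentile query looks like the sketch below (the `http_request_duration_seconds` metric name is a hypothetical stand-in for whatever the application exports):

```
histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m]))
```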
@@ -471,7 +471,8 @@ The result of these queries may seem surprising.
 The first thing to notice is how the average response time fails to
 communicate the actual behavior of the response duration distribution
 (avg: 50ms; p99: 1s); the second is how the 99th percentile reported by
-the summary (1s) is quite different from the one estimated by the `histogram_quantile()` function (~2.2s). How can this be?
+the summary (1s) is quite different from the one estimated by the
+`histogram_quantile()` function (~2.2s). How can this be?
 
 #### Quantile Estimation Errors
 