@@ -97,7 +97,7 @@ metrics from our **target** applications.

This means Prometheus took a different approach than other "traditional"
monitoring tools, such as [StatsD](https://github.com/etsy/statsd), in
- which applications _push_ metrics to the metrics server or aggregator,
+ which applications _push_ metrics to the metrics server or aggregator,
instead of having the metrics server _pulling_ metrics from applications.

The consequence of this design is a better separation of concerns; when
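
To make the pull model concrete, a minimal scrape configuration might look like the sketch below; the job name and target address are illustrative assumptions, not values taken from the workshop files:

```yaml
# prometheus.yml (sketch): Prometheus *pulls* metrics from the listed targets
scrape_configs:
  - job_name: 'sample-app'            # hypothetical job name
    scrape_interval: 15s              # how often Prometheus scrapes each target
    static_configs:
      - targets: ['localhost:4000']   # hypothetical address exposing /metrics
```
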
@@ -189,7 +189,7 @@ In this snippet alone we can notice a few interesting things:

1. Each metric has a user friendly description that explains its purpose
2. Each metric may define additional dimensions, also known as **labels**. For
instance, the metric `go_info` has a `version` label
- - Every time series is uniquely identified by its metric name and the set of
+ - Every time series is uniquely identified by its metric name and the set of
label-value pairs
3. Each metric is defined as a specific type, such as `summary`, `gauge`,
`counter`, and `histogram`. More information on each data type can be found
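
As an illustration of these points, the `go_info` metric appears in a scrape payload roughly as shown below, in the Prometheus exposition format; the exact `version` label value will vary by target:

```
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.9.2"} 1
```
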
@@ -252,7 +252,7 @@ Quoting the

> the same purpose, a process replicated for scalability or reliability for
> example, is called a **job**.
>
- > When Prometheus scrapes a target, it attaches some labels automatically to
+ > When Prometheus scrapes a target, it attaches some labels automatically to
> the scraped time series which serve to identify the scraped target:
> - `job` - The configured job name that the target belongs to.
> - `instance` - The `<host>:<port>` part of the target's URL that was scraped.
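
For example, because every scraped series carries these two labels, a PromQL selector like the one below narrows a query down to a single target; the label values shown assume a Prometheus server scraping itself on the default port:

```
up{job="prometheus", instance="localhost:9090"}
```
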
@@ -292,7 +292,7 @@ handles usage (in %) for all targets? **Tip:** the metric names end with

#### A Basic Uptime Alert

We don't want to keep staring at dashboards in a big TV screen all day
- to be able to quickly detect issues in our applications, afterall, we have
+ to be able to quickly detect issues in our applications, after all, we have
better things to do with our time, right?

Luckily, Prometheus provides a facility for defining alerting rules that,
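
A minimal uptime rule in the spirit of this section could look like the sketch below, assuming the YAML rule-file format of Prometheus 2.x; the group name, alert name, and `for` duration are placeholders rather than the workshop's actual rule file:

```yaml
groups:
  - name: uptime
    rules:
      - alert: InstanceDown
        expr: up == 0          # the target failed its most recent scrape
        for: 1m                # only fire if it stays down for a full minute
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} of job {{ $labels.job }} is down"
```
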
@@ -358,7 +358,7 @@ should go back to a green state after a few seconds.

Let's examine a sample Node.js application we created for this workshop.

Open the `./sample-app/index.js` file in your favorite text editor. The
- code is fully commented, so you should not have a hard time understading
+ code is fully commented, so you should not have a hard time understanding
it.

### Measuring Request Durations
@@ -537,6 +537,7 @@ const requestDurationHistogram = new prometheusClient.Histogram({

// Experimenting a different bucket layout
buckets: [0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 0.8, 1, 1.2, 1.5]
+ });
```

Let's start a clean Prometheus server with the modified bucket configuration
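
For context, the full constructor call with the new bucket layout would look roughly like this prom-client sketch; the metric name and help text are assumptions based on typical usage, not copied from `./sample-app/index.js`:

```js
const prometheusClient = require('prom-client');

const requestDurationHistogram = new prometheusClient.Histogram({
  name: 'http_request_duration_seconds',         // assumed metric name
  help: 'Duration of HTTP requests in seconds',  // assumed help text

  // Experimenting a different bucket layout
  buckets: [0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 0.8, 1, 1.2, 1.5]
});

// Record one observation, e.g. a request that took 120 ms
requestDurationHistogram.observe(0.12);
```
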
@@ -655,7 +656,7 @@ comprehensive list of official and third-party exporters for a variety of

systems, such as databases, messaging systems, cloud providers, and so forth.

For a very simplistic example, check out the
- [aws-limits-exporter](https://github.com/danielfm/aws-limits-exporter)
+ [aws-limits-exporter](https://github.com/danielfm/aws-limits-exporter)
project, which is about 200 lines of Go code.

### Final Gotchas