Merged
19 changes: 11 additions & 8 deletions content/docs/alerting/notification_examples.md
@@ -33,15 +33,18 @@ In this example we again customize the text sent to our Slack receiver accessing
Alert

```
-ALERT InstanceDown
-  IF up == 0
-  FOR 5m
-  LABELS { severity = "page" }
-  ANNOTATIONS {
+groups:
+- name: Instances
+  rules:
+  - alert: InstanceDown
+    expr: up == 0
+    for: 5m
+    labels:
+      severity: page
     # Prometheus templates apply here in the annotation and label fields of the alert.
-    summary = "Instance {{ $labels.instance }} down",
-    description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
-  }
+    annotations:
+      description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.'
+      summary: 'Instance {{ $labels.instance }} down'
```

Receiver
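The `summary` annotation set in the rule above is what the Slack receiver's notification templates can reference. A minimal Alertmanager receiver sketch for context — the receiver name and channel are hypothetical, not part of this change:

```
receivers:
- name: 'slack-notifications'
  slack_configs:
  - channel: '#alerts'
    # Render each firing alert's summary annotation into the Slack message body.
    text: '{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}'
```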
15 changes: 10 additions & 5 deletions content/docs/instrumenting/pushing.md
@@ -82,9 +82,14 @@ Set up an alert to fire if the job has not run recently. Add the following to
the rules of a Prometheus server that is scraping the Pushgateway:

```
-ALERT MyBatchJobNotCompleted
-  IF min(time() - my_batch_job_last_success_unixtime{job="my_batch_job"}) > 60 * 60
-  FOR 5m
-  WITH { severity="page" }
-  SUMMARY "MyBatchJob has not completed successfully in over an hour"
+groups:
+- name: MyBatchJob
+  rules:
+  - alert: MyBatchJobNotCompleted
+    expr: min(time() - my_batch_job_last_success_unixtime{job="my_batch_job"}) > 60 * 60
+    for: 5m
+    labels:
+      severity: page
+    annotations:
+      summary: MyBatchJob has not completed successfully in over an hour
```
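The rule above assumes the batch job pushes `my_batch_job_last_success_unixtime` to the Pushgateway after each successful run. A minimal stdlib sketch of that push — the helper names and gateway address are placeholders, and real deployments would typically use an official client library instead:

```
import time
import urllib.request

def success_payload(now=None):
    """Render my_batch_job_last_success_unixtime in the Prometheus
    text exposition format (one sample, newline-terminated)."""
    ts = time.time() if now is None else now
    return f'my_batch_job_last_success_unixtime {ts}\n'.encode()

def push_success(gateway, job='my_batch_job'):
    """PUT the payload to the Pushgateway; PUT replaces all metrics
    previously pushed for this job grouping."""
    req = urllib.request.Request(
        f'http://{gateway}/metrics/job/{job}',
        data=success_payload(),
        method='PUT',
    )
    urllib.request.urlopen(req).close()

# Call after the job's real work succeeds (address is a placeholder):
# push_success('pushgateway.example.org:9091')
```

Pushing only on success is what makes the `time() - ...` expression in the rule meaningful: a failing job leaves the old timestamp in place, so the alert fires once it ages past an hour.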