
Commit

Merge branch 'main' into patch-1
jai authored Oct 5, 2020
2 parents c02686c + 89b1338 commit d8c35a2
Showing 16 changed files with 236 additions and 67 deletions.
32 changes: 16 additions & 16 deletions .github/CODEOWNERS
@@ -7,20 +7,20 @@
* @prometheus-community/helm-charts-admins

/charts/alertmanager/ @monotek @naseemkullah
/charts/kube-prometheus-stack/ @vsliouniaev @bismarck @gianrubio @gkarthiks @scottrigby @Xtigyro
/charts/prometheus/ @gianrubio @zanhsieh @Xtigyro @monotek @naseemkullah
/charts/prometheus-adapter/ @mattiasgees @steven-sheehy @hectorj2f
/charts/kube-prometheus-stack/ @bismarck @gianrubio @gkarthiks @scottrigby @vsliouniaev @Xtigyro
/charts/prometheus/ @gianrubio @monotek @naseemkullah @Xtigyro @zanhsieh
/charts/prometheus-adapter/ @hectorj2f @mattiasgees @steven-sheehy
/charts/prometheus-blackbox-exporter/ @desaintmartin @gianrubio @monotek @rsotnychenko
/charts/prometheus-cloudwatch-exporter/ @gianrubio @torstenwalter @asherf
/charts/prometheus-consul-exporter @timm088 @gkarthiks
/charts/prometheus-couchdb-exporter @gkarthiks
/charts/prometheus-mongodb-exporter @steven-sheehy
/charts/prometheus-mysql-exporter @juanchimienti @monotek
/charts/prometheus-nats-exporter @okgolove @caarlos0
/charts/prometheus-node-exporter @gianrubio @vsliouniaev
/charts/prometheus-postgres-exporter @gianrubio @zanhsieh
/charts/prometheus-pushgateway @gianrubio @cstaud
/charts/prometheus-rabbitmq-exporter @juanchimienti
/charts/prometheus-redis-exporter @acondrat @zanhsieh
/charts/prometheus-snmp-exporter @miouge1
/charts/prometheus-to-sd @acondrat
/charts/prometheus-cloudwatch-exporter/ @asherf @gianrubio @torstenwalter
/charts/prometheus-consul-exporter/ @gkarthiks @timm088
/charts/prometheus-couchdb-exporter/ @gkarthiks
/charts/prometheus-mongodb-exporter/ @steven-sheehy
/charts/prometheus-mysql-exporter/ @juanchimienti @monotek
/charts/prometheus-nats-exporter/ @caarlos0 @okgolove
/charts/prometheus-node-exporter/ @gianrubio @vsliouniaev
/charts/prometheus-postgres-exporter/ @gianrubio @zanhsieh
/charts/prometheus-pushgateway/ @cstaud @gianrubio
/charts/prometheus-rabbitmq-exporter/ @juanchimienti
/charts/prometheus-redis-exporter/ @acondrat @zanhsieh
/charts/prometheus-snmp-exporter/ @miouge1
/charts/prometheus-to-sd/ @acondrat
94 changes: 93 additions & 1 deletion PROCESSES.md
@@ -2,10 +2,102 @@

This document outlines processes and procedures for some common tasks in the charts repository.

## Review Process

One of the chart maintainers should review the PR.
If everything is fine (it passes the [Technical Requirements](https://github.com/prometheus-community/helm-charts/blob/main/CONTRIBUTING.md#technical-requirements), etc.), the PR should be [approved](https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/approving-a-pull-request-with-required-reviews).
Whoever approves the PR should also merge it directly.
If the reviewer wants someone else to have a look at it,
this should be mentioned in a comment so that it is transparent for everyone.

As a chart maintainer cannot approve their own PRs, every chart should have at least two maintainers.
For charts where this is not the case, or where none of the other maintainers reviews the PR within two weeks, the maintainer who created the PR can request a review from a repository admin instead.

## Adding chart maintainers

Chart maintainers are defined within the chart itself.
The procedure for adding maintainers is therefore to add them there.
The pull request which does that should also update the [CODEOWNERS](./github/CODEOWNERS) file so that the new maintainer is able to approve pull requests.
The pull request which does that should also update the [CODEOWNERS](./.github/CODEOWNERS) file so that the new maintainer is able to approve pull requests.
One of the existing chart maintainers needs to approve the PR; in addition, one of the repository admins needs to approve it.
The admins are then also responsible for granting the new maintainer write permissions to this repository.

## GitHub Settings

As not everyone is able to see which settings are configured for this repository, they are also documented here.
Changes to the settings outlined here should only be made once a PR documenting those changes has been approved.

### Merge Settings

Only squash merging is allowed in this repository:

> Allow squash merging
> Combine all commits from the head branch into a single commit in the base branch.
"Allow merge commits" and "Allow rebase merging" are disabled to keep the history simple and clean.

### Repository Access

Repository access and permissions are managed via the GitHub teams.

| GitHub Team | Repository Access |
| ----------- | ---- |
| [helm-charts-maintainers](https://github.com/orgs/prometheus-community/teams/helm-charts-maintainers) | Write |
| [helm-charts-admins](https://github.com/orgs/prometheus-community/teams/helm-charts-admins) | Admin |

Chart maintainers are members of [@prometheus-community/helm-charts-maintainers](https://github.com/orgs/prometheus-community/teams/helm-charts-maintainers).
This allows them to manage issues, review PRs etc according to the rules in [CODEOWNERS](./.github/CODEOWNERS).
To request adding a user to [@prometheus-community/helm-charts-maintainers](https://github.com/orgs/prometheus-community/teams/helm-charts-maintainers), ask [@prometheus-community/helm-charts-admins](https://github.com/orgs/prometheus-community/teams/helm-charts-admins) in the corresponding issue or pull request.

Admin permissions allow you to modify repository settings, which is not something that is needed on a daily basis.
The goal is to limit the number of admins in order to avoid misconfigurations.
At the same time it makes sense to have more than one admin, so that changes made by one admin can be reviewed by another.
At the moment there are three admins.

### Branch Protection Rules

The `main` branch is protected and the following settings are configured:

- Require pull request reviews before merging: 1
> When enabled, all commits must be made to a non-protected branch and submitted via a pull request with the required number of approving reviews and no changes requested before it can be merged into a branch that matches this rule.
As many people rely on the charts hosted in this repository, each PR must be reviewed before it can be merged.

- Dismiss stale pull request approvals when new commits are pushed

> New reviewable commits pushed to a matching branch will dismiss pull request review approvals.
This prevents changes from being made unnoticed to already approved PRs.
As a consequence, every change made to an already approved PR needs another approval.

- Require review from Code Owners

> Require an approved review in pull requests including files with a designated code owner.
This repository hosts multiple Helm charts with different maintainers.
This setting helps us ensure that every change to a chart is approved by at least one of that chart's maintainers.

As a consequence, the CODEOWNERS file and the maintainers of a chart defined in `Chart.yaml` need to be kept in sync.

- Require status checks to pass before merging
> Choose which [status checks](https://docs.github.com/en/free-pro-team@latest/rest/reference/repos#statuses) must pass before branches can be merged into a branch that matches this rule. When enabled, commits must first be pushed to another branch, then merged or pushed directly to a branch that matches this rule after status checks have passed.
- DCO

The [Developer Certificate of Origin](https://developercertificate.org/) (DCO) check is performed by the [DCO GitHub App](https://github.com/apps/dco).

- Lint Code Base

Linting is done using [Super-Linter](https://github.com/github/super-linter).
It is configured in [linter.yml](.github/workflows/linter.yml).

- lint-test

Helm charts are tested using [Chart Testing](https://github.com/helm/chart-testing), which is configured in [lint-test.yaml](.github/workflows/lint-test.yaml).

- Include administrators
> Enforce all configured restrictions above for administrators.
To play fair, all of the settings above are also applied to administrators.

- Force pushes and deletions are disabled

Force pushes and deletions on the `main` branch should never be done.
2 changes: 1 addition & 1 deletion charts/alertmanager/Chart.yaml
@@ -6,7 +6,7 @@ icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/a
sources:
- https://github.com/prometheus/alertmanager
type: application
version: 0.1.0
version: 0.1.1
appVersion: v0.21.0
maintainers:
- name: naseemkullah
38 changes: 19 additions & 19 deletions charts/alertmanager/templates/statefulset.yaml
@@ -52,26 +52,26 @@ spec:
- name: config
configMap:
name: {{ include "alertmanager.fullname" . }}
{{- if (not .Values.persistence.enabled) }}
- name: storage
emptyDir: {}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: storage
spec:
accessModes:
{{- toYaml .Values.persistence.accessModes | nindent 10 }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}
{{- else }}
volumeClaimTemplates:
- metadata:
name: storage
spec:
accessModes:
{{- toYaml .Values.persistence.accessModes | nindent 4 }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}
- name: storage
emptyDir: {}
{{- end -}}
{{- with .Values.nodeSelector }}
nodeSelector:
4 changes: 2 additions & 2 deletions charts/alertmanager/values.yaml
@@ -69,7 +69,7 @@ tolerations: []
affinity: {}

persistence:
enabled: false
enabled: true
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -79,7 +79,7 @@ persistence:
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 1Gi
size: 50Mi

config:
global: {}
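As an illustrative override (the file name is arbitrary; the keys mirror the chart's values.yaml shown above), persistence can be tuned per installation. With `persistence.enabled: true` the statefulset above renders a volumeClaimTemplate instead of an emptyDir volume:

```yaml
# alertmanager-values.yaml -- illustrative only
persistence:
  enabled: true
  # storageClass: "-"   # "-" renders storageClassName: "" and disables dynamic provisioning
  accessModes:
    - ReadWriteOnce
  size: 1Gi             # the chart default is now 50Mi; size it for your retention needs
```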
2 changes: 1 addition & 1 deletion charts/kube-prometheus-stack/Chart.yaml
@@ -17,7 +17,7 @@ name: kube-prometheus-stack
sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
version: 9.4.5
version: 9.4.8
appVersion: 0.38.1
tillerVersion: ">=2.12.0"
home: https://github.com/prometheus-operator/kube-prometheus
13 changes: 10 additions & 3 deletions charts/kube-prometheus-stack/README.md
@@ -196,9 +196,16 @@ For more in-depth documentation of configuration options meanings, please see

## prometheus.io/scrape

The prometheus operator does not support annotation-based discovery of services, using the `serviceMonitor` CRD in its place as it provides far more configuration options. For information on how to use servicemonitors, please see the documentation on the `prometheus-operator/prometheus-operator` documentation here: [Running Exporters](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md)

By default, Prometheus discovers ServiceMonitors within its namespace, that are labeled with the same release tag as the prometheus-operator release. Sometimes, you may need to discover custom ServiceMonitors, for example used to scrape data from third-party applications. An easy way of doing this, without compromising the default ServiceMonitors discovery, is allowing Prometheus to discover all ServiceMonitors within its namespace, without applying label filtering. To do so, you can set `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.
The prometheus operator does not support annotation-based discovery of services; it uses the `PodMonitor` and `ServiceMonitor` CRDs in their place, as they provide far more configuration options.
For information on how to use PodMonitors/ServiceMonitors, please see the `prometheus-operator/prometheus-operator` documentation:
- [ServiceMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-servicemonitors)
- [PodMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md#include-podmonitors)
- [Running Exporters](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/running-exporters.md)

By default, Prometheus discovers PodMonitors and ServiceMonitors within its namespace that are labeled with the same release tag as the prometheus-operator release.
Sometimes, you may need to discover custom PodMonitors/ServiceMonitors, for example ones used to scrape data from third-party applications.
An easy way of doing this, without compromising the default PodMonitor/ServiceMonitor discovery, is to allow Prometheus to discover all PodMonitors/ServiceMonitors within its namespace, without applying label filtering.
To do so, you can set `prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues` and `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.
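
As a minimal sketch of such an override (the key paths follow the values described above; the file name and release name are just placeholders):

```yaml
# my-values.yaml -- let Prometheus discover all PodMonitors/ServiceMonitors
# in its namespace instead of only those carrying the release label
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
```

Applied, for example, with `helm upgrade --install my-release prometheus-community/kube-prometheus-stack -f my-values.yaml`.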

## Migrating from coreos/prometheus-operator chart

@@ -21,6 +21,15 @@ spec:
{{- if .Values.alertmanager.serviceMonitor.interval }}
interval: {{ .Values.alertmanager.serviceMonitor.interval }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.scheme }}
scheme: {{ .Values.alertmanager.serviceMonitor.scheme }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.bearerTokenFile }}
bearerTokenFile: {{ .Values.alertmanager.serviceMonitor.bearerTokenFile }}
{{- end }}
{{- if .Values.alertmanager.serviceMonitor.tlsConfig }}
tlsConfig: {{ toYaml .Values.alertmanager.serviceMonitor.tlsConfig | nindent 6 }}
{{- end }}
path: "{{ trimSuffix "/" .Values.alertmanager.alertmanagerSpec.routePrefix }}/metrics"
{{- if .Values.alertmanager.serviceMonitor.metricRelabelings }}
metricRelabelings:
@@ -24,12 +24,12 @@ spec:
{{- end }}
containers:
- name: kubectl
{{- if .Values.prometheusOperator.hyperkubeImage.sha }}
image: {{ .Values.prometheusOperator.hyperkubeImage.repository }}:{{ .Values.prometheusOperator.hyperkubeImage.tag }}@sha256:{{ .Values.prometheusOperator.hyperkubeImage.sha }}
{{- if .Values.prometheusOperator.kubectlImage.sha }}
image: "{{ .Values.prometheusOperator.kubectlImage.repository }}:{{ .Values.prometheusOperator.kubectlImage.tag }}@sha256:{{ .Values.prometheusOperator.kubectlImage.sha }}"
{{- else }}
image: "{{ .Values.prometheusOperator.hyperkubeImage.repository }}:{{ .Values.prometheusOperator.hyperkubeImage.tag }}"
image: "{{ .Values.prometheusOperator.kubectlImage.repository }}:{{ .Values.prometheusOperator.kubectlImage.tag }}"
{{- end }}
imagePullPolicy: "{{ .Values.prometheusOperator.hyperkubeImage.pullPolicy }}"
imagePullPolicy: "{{ .Values.prometheusOperator.kubectlImage.pullPolicy }}"
command:
- /bin/sh
- -c
17 changes: 13 additions & 4 deletions charts/kube-prometheus-stack/values.yaml
@@ -315,6 +315,15 @@ alertmanager:
interval: ""
selfMonitor: true

## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS.
scheme: ""

## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS.
## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig
tlsConfig: {}

bearerTokenFile:

## metric relabel configs to apply to samples before ingestion.
##
metricRelabelings: []
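
A sketch of how these new fields might be used for scraping through Istio mTLS (the certificate paths are the ones commonly used in Istio setups, not chart defaults, and are assumptions here):

```yaml
alertmanager:
  serviceMonitor:
    scheme: https
    tlsConfig:
      # Assumed locations of the Istio sidecar certificates mounted into Prometheus;
      # adjust to wherever the certificates actually live in your cluster.
      caFile: /etc/prom-certs/root-cert.pem
      certFile: /etc/prom-certs/cert-chain.pem
      keyFile: /etc/prom-certs/key.pem
      insecureSkipVerify: true  # workload certs are not issued for the pod IP/DNS name
```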
@@ -1376,11 +1385,11 @@ prometheusOperator:
##
secretFieldSelector: ""

## Hyperkube image to use when cleaning up
## kubectl image to use when cleaning up
##
hyperkubeImage:
repository: k8s.gcr.io/hyperkube
tag: v1.16.12
kubectlImage:
repository: docker.io/bitnami/kubectl
tag: 1.16.15
sha: ""
pullPolicy: IfNotPresent
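
If the default registry is not reachable from your cluster, the image can be overridden in values; the mirror name below is made up for illustration:

```yaml
prometheusOperator:
  kubectlImage:
    repository: registry.example.com/mirror/kubectl  # hypothetical internal mirror
    tag: 1.16.15
    sha: ""            # set to a bare sha256 digest to pin the image (rendered as ...@sha256:<digest>)
    pullPolicy: IfNotPresent
```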

2 changes: 1 addition & 1 deletion charts/prometheus-consul-exporter/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "0.4.0"
description: A Helm chart for the Prometheus Consul Exporter
name: prometheus-consul-exporter
version: 0.1.7
version: 0.2.0
keywords:
- metrics
- consul
36 changes: 24 additions & 12 deletions charts/prometheus-consul-exporter/templates/deployment.yaml
@@ -20,17 +20,19 @@ spec:
release: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "prometheus-consul-exporter.serviceAccountName" . }}
{{- with .Values.initContainers }}
initContainers: {{ toYaml . | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["consul_exporter"]
args: [
"--consul.server={{ .Values.consulServer }}",
args:
- "--consul.server={{ .Values.consulServer }}"
{{- range $key, $value := .Values.options }}
"--{{ $key }}{{ if $value }}={{ $value }}{{ end }}",
- "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
{{- end }}
]
ports:
- name: http
containerPort: {{ .Values.service.port }}
@@ -47,17 +49,27 @@ spec:
port: http
initialDelaySeconds: 30
timeoutSeconds: 10
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.resources }}
resources: {{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.extraVolumeMounts }}
volumeMounts: {{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.extraEnv }}
env: {{ toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.extraContainers }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
nodeSelector: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
affinity: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
tolerations: {{ toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.extraVolumes }}
volumes: {{ toYaml . | nindent 8 }}
{{- end }}
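
For illustration, a values excerpt exercising the hooks added above (the ACL-token secret, CA volume, and flag names are assumptions, not chart defaults):

```yaml
consulServer: consul-server.service.consul:8500   # assumed address of your Consul server
options:
  consul.timeout: 5s            # rendered as --consul.timeout=5s
  consul.health-summary: ""     # an empty value renders a bare --consul.health-summary flag
extraEnv:
  - name: CONSUL_HTTP_TOKEN
    valueFrom:
      secretKeyRef:
        name: consul-acl        # hypothetical secret holding an ACL token
        key: token
extraVolumes:
  - name: ca-cert
    secret:
      secretName: consul-ca     # hypothetical secret with the Consul CA certificate
extraVolumeMounts:
  - name: ca-cert
    mountPath: /etc/consul/ca
    readOnly: true
```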
