From f4db6a76b811e66a98f5e67c9e2ccec448fa97b1 Mon Sep 17 00:00:00 2001 From: Abdelkrim Date: Fri, 14 Apr 2023 21:12:35 +0200 Subject: [PATCH 01/14] Update _index.md (#9138) Removed the single-binary mention from the scalability section, as it is confusing there; running all of the components in one process should be described as monolithic mode instead. **What this PR does / why we need it**: **Which issue(s) this PR fixes**: Fixes # **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --- docs/sources/fundamentals/overview/_index.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/docs/sources/fundamentals/overview/_index.md b/docs/sources/fundamentals/overview/_index.md index 2a60bb9096ea..f759423699ba 100644 --- a/docs/sources/fundamentals/overview/_index.md +++ b/docs/sources/fundamentals/overview/_index.md @@ -56,9 +56,6 @@ and allows for efficient query execution. - **Scalability** - Loki can be run as a single binary; - all the components run in one process. - Loki is designed for scalability, as each of Loki's components can be run as microservices. Configuration permits scaling the microservices individually, From d107fbe538a6f1942fbb27ea719a6a2dd31b09e5 Mon Sep 17 00:00:00 2001 From: melGL <81323402+melgl@users.noreply.github.com> Date: Fri, 14 Apr 2023 14:12:50 -0500 Subject: [PATCH 02/14] Removed deprecated navigation names (#9133) Changed "integrations and connections" to "connections" and removed the lightning bolt icon reference. 
--------- Co-authored-by: J Stickler --- .../installation/helm/monitor-and-alert/with-grafana-cloud.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/installation/helm/monitor-and-alert/with-grafana-cloud.md b/docs/sources/installation/helm/monitor-and-alert/with-grafana-cloud.md index 1ed3caea15f0..284d48e8492e 100644 --- a/docs/sources/installation/helm/monitor-and-alert/with-grafana-cloud.md +++ b/docs/sources/installation/helm/monitor-and-alert/with-grafana-cloud.md @@ -95,6 +95,6 @@ Walking through this installation will create two Grafana Agent configurations, key: password ``` -1. Install the self-hosted Grafana Loki integration by going to your hosted Grafana instance, clicking the lightning bolt icon labeled **Integrations and Connections**, then search for and install the **Self-hosted Grafana Loki** integration. +1. Install the self-hosted Grafana Loki integration by going to your hosted Grafana instance, selecting **Connections** from the Home menu, then search for and install the **Self-hosted Grafana Loki** integration. 1. Once the self-hosted Grafana Loki integration is installed, click the **View Dashboards** button to see the installed dashboards. From df1b34898bea812b197ca7404b29094f740df26c Mon Sep 17 00:00:00 2001 From: Jack Baldry Date: Fri, 14 Apr 2023 20:13:26 +0100 Subject: [PATCH 03/14] Use centralized make-docs script from Writers' Toolkit (#9126) - Adds ability to build multiple projects simultaneously using, for example, `make docs PROJECTS='grafana grafana-cloud'`. - Adds `make doc-validator` which runs [`doc-validator`](https://github.com/grafana/technical-documentation/tree/main/tools/cmd/doc-validator) on all documentation. Using a centralized script will help ensure consistency in workflow across all projects. 
Signed-off-by: Jack Baldry Signed-off-by: Jack Baldry --- .gitignore | 1 + docs/Makefile | 26 +++++--------- docs/docs.mk | 87 +++++++++++++++++++++++++++++++++++++++++++++++ docs/variables.mk | 2 ++ 4 files changed, 99 insertions(+), 17 deletions(-) create mode 100644 docs/docs.mk create mode 100644 docs/variables.mk diff --git a/.gitignore b/.gitignore index a0cc2550cb20..3926dcad0207 100644 --- a/.gitignore +++ b/.gitignore @@ -51,3 +51,4 @@ pkg/loki/wal # nix result +/docs/make-docs diff --git a/docs/Makefile b/docs/Makefile index e3a60c369815..016b74d291ce 100644 --- a/docs/Makefile +++ b/docs/Makefile @@ -1,22 +1,14 @@ -PODMAN := $(shell if command -v podman >/dev/null 2>&1; then echo podman; else echo docker; fi) -IMAGE := grafana/docs-base:latest -BUILD_IN_CONTAINER ?= true - -.PHONY: pull -pull: - $(PODMAN) pull ${IMAGE} +.ONESHELL: +.DELETE_ON_ERROR: +export SHELL := bash +export SHELLOPTS := pipefail:errexit +MAKEFLAGS += --warn-undefined-variables +MAKEFLAGS += --no-builtin-rule -.PHONY: docs -docs: pull - $(PODMAN) run --rm -it -v ${PWD}/sources:/hugo/content/docs/loki/latest -p 3002:3002 $(IMAGE) +include docs.mk -.PHONY: docs-next -docs-next: pull - $(PODMAN) run --rm -it -v ${PWD}/sources:/hugo/content/docs/loki/next -p 3002:3002 $(IMAGE) - -.PHONY: docs-test -docs-test: pull - $(PODMAN) run --rm -it -v ${PWD}/sources:/hugo/content/docs/loki/latest -p 3002:3002 $(IMAGE) /bin/bash -c 'make prod' +PODMAN := $(shell if command -v podman >/dev/null 2>&1; then echo podman; else echo docker; fi) +BUILD_IN_CONTAINER ?= true sources/installation/helm/reference.md: ../production/helm/loki/reference.md.gotmpl ifeq ($(BUILD_IN_CONTAINER),true) diff --git a/docs/docs.mk b/docs/docs.mk new file mode 100644 index 000000000000..f556b52f47d3 --- /dev/null +++ b/docs/docs.mk @@ -0,0 +1,87 @@ +include variables.mk +-include variables.mk.local + +.ONESHELL: +.DELETE_ON_ERROR: +export SHELL := bash +export SHELLOPTS := pipefail:errexit +MAKEFLAGS += 
--warn-undefined-variables +MAKEFLAGS += --no-builtin-rule + +.DEFAULT_GOAL: help + +# Adapted from https://www.thapaliya.com/en/writings/well-documented-makefiles/ +.PHONY: help +help: ## Display this help. +help: + @awk 'BEGIN {FS = ": ##"; printf "Usage:\n make \n\nTargets:\n"} /^[a-zA-Z0-9_\.\-\/%]+: ##/ { printf " %-45s %s\n", $$1, $$2 }' $(MAKEFILE_LIST) + +GIT_ROOT := $(shell git rev-parse --show-toplevel) + +PODMAN := $(shell if command -v podman >/dev/null 2>&1; then echo podman; else echo docker; fi) + +ifeq ($(PROJECTS),) +$(error "PROJECTS variable must be defined in variables.mk") +endif + +# First project is considered the primary one used for doc-validator. +PRIMARY_PROJECT := $(firstword $(subst /,-,$(PROJECTS))) + +# Name for the container. +export DOCS_CONTAINER := $(PRIMARY_PROJECT)-docs + +# Host port to publish container port to. +export DOCS_HOST_PORT := 3002 + +# Container image used to perform Hugo build. +export DOCS_IMAGE := grafana/docs-base:latest + +# Container image used for doc-validator linting. +export DOC_VALIDATOR_IMAGE := grafana/doc-validator:latest + +# PATH-like list of directories within which to find projects. +# If all projects are checked out into the same directory, ~/repos/ for example, then the default should work. +export REPOS_PATH := $(realpath $(GIT_ROOT)/..) + +# How to treat Hugo relref errors. +export HUGO_REFLINKSERRORLEVEL := WARNING + +.PHONY: docs-rm +docs-rm: ## Remove the docs container. + $(PODMAN) rm -f $(DOCS_CONTAINER) + +.PHONY: docs-pull +docs-pull: ## Pull documentation base image. + $(PODMAN) pull $(DOCS_IMAGE) + +make-docs: ## Fetch the latest make-docs script. +make-docs: + curl -s -LO https://raw.githubusercontent.com/grafana/writers-toolkit/main/scripts/make-docs + chmod +x make-docs + +.PHONY: docs +docs: ## Serve documentation locally. 
+docs: docs-pull make-docs + $(PWD)/make-docs $(PROJECTS) + +.PHONY: docs-no-pull +docs-no-pull: ## Serve documentation locally without pulling the latest docs-base image. +docs-no-pull: make-docs + $(PWD)/make-docs $(PROJECTS) + +.PHONY: docs-debug +docs-debug: ## Run Hugo web server with debugging enabled. TODO: support all SERVER_FLAGS defined in website Makefile. +docs-debug: make-docs + WEBSITE_EXEC='hugo server --debug' $(PWD)/make-docs $(PROJECTS) + +.PHONY: doc-validator +doc-validator: ## Run docs-validator on the entire docs folder. + DOCS_IMAGE=$(DOC_VALIDATOR_IMAGE) $(PWD)/make-docs $(PROJECTS) + +.PHONY: doc-validator/% +doc-validator/%: ## Run doc-validator on a specific path. To lint the path /docs/sources/administration, run 'make doc-validator/administration'. +doc-validator/%: + DOCS_IMAGE=$(DOC_VALIDATOR_IMAGE) DOC_VALIDATOR_INCLUDE=$(subst doc-validator/,,$@) $(PWD)/make-docs $(PROJECTS) + +docs.mk: ## Fetch the latest version of this Makefile from Writers' Toolkit. + curl -s -LO https://raw.githubusercontent.com/grafana/writers-toolkit/main/docs/docs.mk diff --git a/docs/variables.mk b/docs/variables.mk new file mode 100644 index 000000000000..d3bf4fa1cb01 --- /dev/null +++ b/docs/variables.mk @@ -0,0 +1,2 @@ +# List of projects to provide to the make-docs script. +PROJECTS = loki From 46e835fb3cd3e6252322fc8e5deee9ba65c4b496 Mon Sep 17 00:00:00 2001 From: "W.T. Chang" <1546333+wtchangdm@users.noreply.github.com> Date: Sat, 15 Apr 2023 03:14:01 +0800 Subject: [PATCH 04/14] Docs: Replace mimir example config path with a loki one (#9125) **What this PR does / why we need it**: The example config path in Loki's query scheduler documentation is `-config.file=/mimir/config/mimir.yaml`, which is confusing in Loki's documentation, even though the documentation references Mimir in several other places. 
https://github.com/grafana/loki/blob/7bec727c6dff8e6268f9a8f9b64fee725d4195f8/docs/sources/operations/scalability.md?plain=1#L23 This PR replaces the Mimir config path with the one found in query-frontend's document. https://github.com/grafana/loki/blob/7bec727c6dff8e6268f9a8f9b64fee725d4195f8/docs/sources/configuration/query-frontend.md?plain=1#L115 **Which issue(s) this PR fixes**: N/A **Special notes for your reviewer**: I noticed this when checking configuration of 2.6, but this is still the case in the [latest version](https://grafana.com/docs/loki/v2.8.x/operations/scalability/). **Checklist** - [X] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --- docs/sources/operations/scalability.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/operations/scalability.md b/docs/sources/operations/scalability.md index b3b112d7c303..3b7769da9e99 100644 --- a/docs/sources/operations/scalability.md +++ b/docs/sources/operations/scalability.md @@ -20,7 +20,7 @@ To run with the Query Scheduler, the frontend needs to be passed the scheduler's It is not valid to start the querier with both a configured frontend and a scheduler address. -The query scheduler process itself can be started via the `-target=query-scheduler` option of the Loki Docker image. For instance, `docker run grafana/loki:latest -config.file=/mimir/config/mimir.yaml -target=query-scheduler -server.http-listen-port=8009 -server.grpc-listen-port=9009` starts the query scheduler listening on ports `8009` and `9009`. +The query scheduler process itself can be started via the `-target=query-scheduler` option of the Loki Docker image. 
For instance, `docker run grafana/loki:latest -config.file=/etc/loki/config.yaml -target=query-scheduler -server.http-listen-port=8009 -server.grpc-listen-port=9009` starts the query scheduler listening on ports `8009` and `9009`. ## Memory ballast From f42942768e5356b3fe960315d04b87e7eab12235 Mon Sep 17 00:00:00 2001 From: Kaviraj Kanagaraj Date: Fri, 14 Apr 2023 21:14:52 +0200 Subject: [PATCH 05/14] Makefile: Support debug build for `logcli`. (#9093) **What this PR does / why we need it**: Support debug build for `logcli` binary. This writes debug binary under `logcli-debug` name in the same `cmd/logcli` directory. Already faced situation multiple times this is useful to debug some issues with LogQL locally. **Which issue(s) this PR fixes**: Fixes NA **Special notes for your reviewer**: **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` Signed-off-by: Kaviraj --- Makefile | 3 +++ 1 file changed, 3 insertions(+) diff --git a/Makefile b/Makefile index 0667ad5003c3..800346c9e7dd 100644 --- a/Makefile +++ b/Makefile @@ -131,6 +131,7 @@ check-generated-files: yacc ragel fmt-proto protos clients/pkg/promtail/server/u ########## .PHONY: cmd/logcli/logcli logcli: cmd/logcli/logcli +logcli-debug: cmd/logcli/logcli-debug logcli-image: $(SUDO) docker build -t $(IMAGE_PREFIX)/logcli:$(IMAGE_TAG) -f cmd/logcli/Dockerfile . 
@@ -138,6 +139,8 @@ logcli-image: cmd/logcli/logcli: CGO_ENABLED=0 go build $(GO_FLAGS) -o $@ ./cmd/logcli +cmd/logcli/logcli-debug: + CGO_ENABLED=0 go build $(DEBUG_GO_FLAGS) -o ./cmd/logcli/logcli-debug ./cmd/logcli ######## # Loki # ######## From 926a4e148e20346ec7f9d8b9861a2fef64d8c2c0 Mon Sep 17 00:00:00 2001 From: Mitsuru Kariya Date: Sat, 15 Apr 2023 04:15:07 +0900 Subject: [PATCH 06/14] Docs: add missing configuration option (#9092) **What this PR does / why we need it**: Promtail can set the tenant id from a label with PR 6290 (commit a1e0298a5). However, while docs/sources/clients/promtail/stages/tenant.md was appropriately updated, docs/sources/clients/promtail/configuration.md was left untouched. **Which issue(s) this PR fixes**: **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --- docs/sources/clients/promtail/configuration.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md index 7b1dd8638a14..d036ee24afad 100644 --- a/docs/sources/clients/promtail/configuration.md +++ b/docs/sources/clients/promtail/configuration.md @@ -781,9 +781,13 @@ picking it from a field in the extracted data map. ```yaml tenant: - # Name from extracted data to whose value should be set as tenant ID. - # Either source or value config option is required, but not both (they + # Either label, source or value config option is required, but not all (they # are mutually exclusive). + + # Name from labels to whose value should be set as tenant ID. + [ label: ] + + # Name from extracted data to whose value should be set as tenant ID. 
[ source: ] # Value to use to set the tenant ID when this stage is executed. Useful From b5946df30e6fe2675ab35d7095aeb2dc96bacfa2 Mon Sep 17 00:00:00 2001 From: nikuljain <32053514+nikuljain@users.noreply.github.com> Date: Sat, 15 Apr 2023 00:45:19 +0530 Subject: [PATCH 07/14] Update configuration.md (#9085) **What this PR does / why we need it**: **Which issue(s) this PR fixes**: Fixes # **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --- docs/sources/clients/promtail/configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md index d036ee24afad..e997f419ace7 100644 --- a/docs/sources/clients/promtail/configuration.md +++ b/docs/sources/clients/promtail/configuration.md @@ -1,6 +1,6 @@ --- title: Configuration -description: Configuring Promtaim +description: Configuring Promtail --- # Configuration From 9dd2aac9e45989509a9c2cfd2669e52b48d6c355 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 14 Apr 2023 13:15:42 -0600 Subject: [PATCH 08/14] Bump actions/checkout from 2 to 3 (#9084) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
Release notes (sourced from actions/checkout's releases; truncated):

  • v3.0.0: Updated to the node16 runtime by default. This requires a minimum Actions Runner version of v2.285.0 to run, which is by default available in GHES 3.4 or later.
  • v2.4.1: Fixed an issue where checkout failed to run in container jobs due to the new git setting `safe.directory`.
  • v2.4.0: Convert SSH URLs like `org-<ORG_ID>@github.com:` to `https://github.com/`.
  • v2.3.5: Update dependencies.
  • v2.5.0 through v2.7.0: see the full changelogs at https://github.com/actions/checkout/compare/v2...v2.7.0.

The changelog (sourced from actions/checkout's changelog, covering v2.1.1 through v3.4.0, including changes to support GHES) and the commit list were truncated in the original notes.
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .github/workflows/backport.yml | 2 +- .github/workflows/helm-ci.yml | 4 ++-- .github/workflows/issue_commands.yml | 2 +- .github/workflows/metrics-collector.yml | 2 +- .github/workflows/nix-ci.yaml | 2 +- .github/workflows/operator-bundle.yaml | 2 +- .github/workflows/operator-images.yaml | 6 +++--- .github/workflows/operator-scorecard.yaml | 2 +- .github/workflows/operator.yaml | 8 ++++---- .../workflows/publish-technical-documentation-next.yml | 4 ++-- .../workflows/publish-technical-documentation-release.yml | 6 +++--- .github/workflows/syft-sbom-ci.yml | 2 +- 12 files changed, 21 insertions(+), 21 deletions(-) diff --git a/.github/workflows/backport.yml b/.github/workflows/backport.yml index 343ef3c87822..820d2758307e 100644 --- a/.github/workflows/backport.yml +++ b/.github/workflows/backport.yml @@ -10,7 +10,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Actions - uses: actions/checkout@v3.3.0 + uses: actions/checkout@v3 with: repository: "grafana/grafana-github-actions" path: ./actions diff --git a/.github/workflows/helm-ci.yml b/.github/workflows/helm-ci.yml index 9bdfe5b02151..644993775941 100644 --- a/.github/workflows/helm-ci.yml +++ b/.github/workflows/helm-ci.yml @@ -14,7 +14,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Code - uses: actions/checkout@v3.3.0 + uses: actions/checkout@v3 - name: Check Docs run: | @@ -43,7 +43,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout - uses: actions/checkout@v3.3.0 + uses: actions/checkout@v3 with: fetch-depth: 0 diff --git a/.github/workflows/issue_commands.yml b/.github/workflows/issue_commands.yml index c9327356f27d..c82bb56fca8d 100644 --- a/.github/workflows/issue_commands.yml +++ b/.github/workflows/issue_commands.yml @@ -7,7 +7,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Actions - uses: actions/checkout@v3.3.0 + uses: actions/checkout@v3 with: 
repository: "grafana/grafana-github-actions" path: ./actions diff --git a/.github/workflows/metrics-collector.yml b/.github/workflows/metrics-collector.yml index b5ea9e1dd644..5b227db2e8e8 100644 --- a/.github/workflows/metrics-collector.yml +++ b/.github/workflows/metrics-collector.yml @@ -8,7 +8,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Actions - uses: actions/checkout@v3.3.0 + uses: actions/checkout@v3 with: repository: "grafana/grafana-github-actions" path: ./actions diff --git a/.github/workflows/nix-ci.yaml b/.github/workflows/nix-ci.yaml index 38ec33e744c1..3a3e289e78b0 100644 --- a/.github/workflows/nix-ci.yaml +++ b/.github/workflows/nix-ci.yaml @@ -9,7 +9,7 @@ jobs: tests: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - uses: cachix/install-nix-action@v18 with: nix_path: nixpkgs=channel:nixos-unstable diff --git a/.github/workflows/operator-bundle.yaml b/.github/workflows/operator-bundle.yaml index ef448d1b681c..1083fe0bb5d1 100644 --- a/.github/workflows/operator-bundle.yaml +++ b/.github/workflows/operator-bundle.yaml @@ -23,7 +23,7 @@ jobs: with: go-version: ${{ matrix.go }} id: go - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Install make run: sudo apt-get install make - name: make bundle diff --git a/.github/workflows/operator-images.yaml b/.github/workflows/operator-images.yaml index 388d166abc77..7e9a391f81e3 100644 --- a/.github/workflows/operator-images.yaml +++ b/.github/workflows/operator-images.yaml @@ -18,7 +18,7 @@ jobs: publish-manager: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Set up QEMU uses: docker/setup-qemu-action@v1 @@ -56,7 +56,7 @@ jobs: publish-bundle: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Set up QEMU uses: docker/setup-qemu-action@v1 @@ -95,7 +95,7 @@ jobs: publish-size-calculator: runs-on: ubuntu-latest steps: - - 
uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Set up QEMU uses: docker/setup-qemu-action@v1 diff --git a/.github/workflows/operator-scorecard.yaml b/.github/workflows/operator-scorecard.yaml index c3722f52bf39..f732fccde363 100644 --- a/.github/workflows/operator-scorecard.yaml +++ b/.github/workflows/operator-scorecard.yaml @@ -26,7 +26,7 @@ jobs: - uses: engineerd/setup-kind@v0.5.0 with: version: "v0.17.0" - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Install make run: sudo apt-get install make - name: Run scorecard diff --git a/.github/workflows/operator.yaml b/.github/workflows/operator.yaml index a17a779daaa5..690e2c5938a9 100644 --- a/.github/workflows/operator.yaml +++ b/.github/workflows/operator.yaml @@ -25,7 +25,7 @@ jobs: with: go-version: ${{ matrix.go }} id: go - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Lint uses: golangci/golangci-lint-action@v3.4.0 with: @@ -51,7 +51,7 @@ jobs: with: go-version: ${{ matrix.go }} id: go - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Build Manager working-directory: ./operator run: |- @@ -72,7 +72,7 @@ jobs: with: go-version: ${{ matrix.go }} id: go - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Build Broker working-directory: ./operator run: |- @@ -93,7 +93,7 @@ jobs: with: go-version: ${{ matrix.go }} id: go - - uses: actions/checkout@v3.3.0 + - uses: actions/checkout@v3 - name: Run tests working-directory: ./operator run: go test -coverprofile=profile.cov ./... 
diff --git a/.github/workflows/publish-technical-documentation-next.yml b/.github/workflows/publish-technical-documentation-next.yml index d1105729420e..052ec1b07158 100644 --- a/.github/workflows/publish-technical-documentation-next.yml +++ b/.github/workflows/publish-technical-documentation-next.yml @@ -12,7 +12,7 @@ jobs: runs-on: "ubuntu-latest" steps: - name: "Check out code" - uses: "actions/checkout@v3.3.0" + uses: "actions/checkout@v3" - name: "Build website" # -e HUGO_REFLINKSERRORLEVEL=ERROR prevents merging broken refs with the downside # that no refs to external content can be used as these refs will not resolve in the @@ -25,7 +25,7 @@ jobs: needs: "test" steps: - name: "Check out code" - uses: "actions/checkout@v3.3.0" + uses: "actions/checkout@v3" - name: "Clone website-sync Action" # WEBSITE_SYNC_TOKEN is a fine-grained GitHub Personal Access Token that expires. diff --git a/.github/workflows/publish-technical-documentation-release.yml b/.github/workflows/publish-technical-documentation-release.yml index 77ebf01498ce..02e13270b924 100644 --- a/.github/workflows/publish-technical-documentation-release.yml +++ b/.github/workflows/publish-technical-documentation-release.yml @@ -14,7 +14,7 @@ jobs: runs-on: "ubuntu-latest" steps: - name: "Check out code" - uses: "actions/checkout@v3.3.0" + uses: "actions/checkout@v3" - name: "Build website" # -e HUGO_REFLINKSERRORLEVEL=ERROR prevents merging broken refs with the downside @@ -28,12 +28,12 @@ jobs: needs: "test" steps: - name: "Checkout code and tags" - uses: "actions/checkout@v3.3.0" + uses: "actions/checkout@v3" with: fetch-depth: 0 - name: "Checkout Actions library" - uses: "actions/checkout@v3.3.0" + uses: "actions/checkout@v3" with: repository: "grafana/grafana-github-actions" path: "./actions" diff --git a/.github/workflows/syft-sbom-ci.yml b/.github/workflows/syft-sbom-ci.yml index bae173f6ef6f..13250eed80d8 100644 --- a/.github/workflows/syft-sbom-ci.yml +++ b/.github/workflows/syft-sbom-ci.yml @@ 
-11,7 +11,7 @@ jobs: steps: - name: Checkout - uses: actions/checkout@v2 + uses: actions/checkout@v3 - name: Anchore SBOM Action uses: anchore/sbom-action@v0.12.0 From 556af7609ab26de864b10480646d0d29db42b722 Mon Sep 17 00:00:00 2001 From: logan <33067676+logyball@users.noreply.github.com> Date: Fri, 14 Apr 2023 15:16:14 -0400 Subject: [PATCH 09/14] Fixing spelling errors and incorrect units (#9082) **What this PR does / why we need it**: mild grammar changes **Which issue(s) this PR fixes**: None **Special notes for your reviewer**: **Checklist** - [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [x] Documentation added - [ ] ~~Tests updated~~ - [ ] ~~`CHANGELOG.md` updated~~ - [ ] ~~Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md`~~ --- docs/sources/clients/promtail/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sources/clients/promtail/_index.md b/docs/sources/clients/promtail/_index.md index 0edfdad25ea9..d173964b0cae 100644 --- a/docs/sources/clients/promtail/_index.md +++ b/docs/sources/clients/promtail/_index.md @@ -68,7 +68,7 @@ scrape_configs: Important details are: * It relies on the `\n` character to separate the data into different log lines. -* The max expected log line is 2MB bytes within the compressed file. +* The max expected log line is 2MB within the compressed file. * The data is decompressed in blocks of 4096 bytes. i.e: it first fetches a block of 4096 bytes from the compressed file and processes it. After processing this block and pushing the data to Loki, it fetches the following 4096 bytes, and so on. 
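The block-by-block decompression described in the patch above can be sketched as follows. This is a simplified illustration, not Promtail's actual code (the real implementation lives under `clients/pkg/promtail/targets/file/`); only the 4096-byte block size and the `\n` line separator come from the documentation being edited.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// readCompressedLines decompresses r in blocks of 4096 bytes and emits each
// complete line as soon as a '\n' separator is seen, mirroring the behavior
// described in the Promtail docs: fetch a block, process it, fetch the next.
func readCompressedLines(r io.Reader, emit func(line string)) error {
	zr, err := gzip.NewReader(r)
	if err != nil {
		return err
	}
	defer zr.Close()

	var pending bytes.Buffer
	block := make([]byte, 4096)
	for {
		n, rerr := zr.Read(block)
		pending.Write(block[:n])
		// Emit every complete line accumulated so far.
		for {
			i := bytes.IndexByte(pending.Bytes(), '\n')
			if i < 0 {
				break
			}
			emit(string(pending.Next(i + 1)[:i]))
		}
		if rerr == io.EOF {
			if pending.Len() > 0 {
				emit(pending.String()) // trailing line without a '\n'
			}
			return nil
		}
		if rerr != nil {
			return rerr
		}
	}
}

// gzipSample compresses text in memory so the example is self-contained.
func gzipSample(text string) *bytes.Buffer {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte(text))
	zw.Close()
	return &buf
}

func main() {
	var lines []string
	err := readCompressedLines(gzipSample("line one\nline two\nline three\n"), func(l string) {
		lines = append(lines, l)
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(lines) // [line one line two line three]
}
```

Note that, as the docs warn for `.tar.gz`, the first emitted "line" of a tarball would contain tar metadata, since this splitter only understands newlines.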
@@ -77,7 +77,7 @@ Important details are: - `.z`: Data will be decompressed with the native Zlib Golang pkg (`pkg/compress/zlib`) - `.bz2`: Data will be decompressed with the native Bzip2 Golang pkg (`pkg/compress/bzip2`) - `.tar.gz`: Data will be decompressed exactly as the `.gz` extension. - However, because `tar` will add its metadata at the beggining of the + However, because `tar` will add its metadata at the beginning of the compressed file, **the first parsed line will contains metadata together with your log line**. It is illustrated at `./clients/pkg/promtail/targets/file/decompresser_test.go`. From 691b6aaae9f3953d8949c9af160edc936601ef29 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 14 Apr 2023 13:17:35 -0600 Subject: [PATCH 10/14] Bump anchore/sbom-action from 0.12.0 to 0.14.1 (#9083) Bumps [anchore/sbom-action](https://github.com/anchore/sbom-action) from 0.12.0 to 0.14.1.
Release notes (sourced from anchore/sbom-action's releases; truncated): changes in v0.13.0 through v0.14.1. The per-release details and the commit list were truncated in the original notes.
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=anchore/sbom-action&package-manager=github_actions&previous-version=0.12.0&new-version=0.14.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .github/workflows/syft-sbom-ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/syft-sbom-ci.yml b/.github/workflows/syft-sbom-ci.yml index 13250eed80d8..63b965c6b680 100644 --- a/.github/workflows/syft-sbom-ci.yml +++ b/.github/workflows/syft-sbom-ci.yml @@ -14,7 +14,7 @@ jobs: uses: actions/checkout@v3 - name: Anchore SBOM Action - uses: anchore/sbom-action@v0.12.0 + uses: anchore/sbom-action@v0.14.1 with: artifact-name: ${{ github.event.repository.name }}-spdx.json From 8f6d31c34e64309d95828e4af0d2b753bef87aa7 Mon Sep 17 00:00:00 2001 From: Alfredo <109958902+alfredo-d@users.noreply.github.com> Date: Sat, 15 Apr 2023 03:19:29 +0800 Subject: [PATCH 11/14] Docs: update template function (#9037) **What this PR does / why we need it**: - Add `eq` function - Add example for nested if and `AND`/`OR` logic **Which issue(s) this PR fixes**: Fixes # N/A **Special notes for your reviewer**: **Checklist** - [ ] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**) - [ ] Documentation added - [ ] Tests updated - [ ] `CHANGELOG.md` updated - [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md` --- docs/sources/logql/template_functions.md | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/docs/sources/logql/template_functions.md b/docs/sources/logql/template_functions.md index 18abf34587bb..2006cbb74a48 100644 --- a/docs/sources/logql/template_functions.md +++ b/docs/sources/logql/template_functions.md @@ -25,6 +25,16 @@ Example: {{ .path | replace " " "_" | trunc 5 | upper }} ``` +For functions that return a `bool`, such as `contains`, `eq`, `hasPrefix`, and `hasSuffix`, you can apply `AND` / `OR` and nested `if` logic. 
+
+Example:
+
+```template
+{{ if and (contains "he" "hello") (contains "llo" "hello") }} yes {{end}}
+{{ if or (contains "he" "hello") (contains "llo" "hello") }} yes {{end}}
+{{ if contains .err "ErrTimeout" }} timeout {{else if contains "he" "hello"}} yes {{else}} no {{end}}
+```
+
 ## __line__
 
 This function returns the current log line.
 
@@ -273,6 +283,19 @@ Examples:
 
 {{ if contains "he" "hello" }} yes {{end}}
 ```
 
+## eq
+
+Use this function to test whether two strings are exactly equal.
+
+Signature: `eq(s string, src string) bool`
+
+Examples:
+
+```template
+{{ if eq .err "ErrTimeout" }} timeout {{end}}
+{{ if eq "hello" "hello" }} yes {{end}}
+```
+
 ## hasPrefix and hasSuffix
 
 The `hasPrefix` and `hasSuffix` functions test whether a string has a given prefix or suffix.

From 0215a171486eacc3d59c813f9b87d3d54b184a62 Mon Sep 17 00:00:00 2001
From: Labesse Kévin
Date: Fri, 14 Apr 2023 21:22:29 +0200
Subject: [PATCH 12/14] gcplog: improve formatter (#9044)

**What this PR does / why we need it**:
Add two improvements to the `gcplog` target:
* capability to ignore the `textPayload` from the log line -> don't truncate important metadata
* add new source labels (severity and root labels of the log entry) -> more relabel_config possibilities

**Special notes for your reviewer**:
I'm not sure about the naming of the option `useFullLine`, feel free to suggest any name, but I guess this feature must be disabled by default for compatibility?
**Checklist**
- [X] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**)
- [X] Documentation added
- [X] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] Changes that require user attention or interaction to upgrade are documented in `docs/sources/upgrading/_index.md`

---------

Signed-off-by: Kevin Labesse
Co-authored-by: J Stickler
---
 .../pkg/promtail/scrapeconfig/scrapeconfig.go |  4 ++
 .../pkg/promtail/targets/gcplog/formatter.go  | 28 ++++++---
 .../promtail/targets/gcplog/formatter_test.go | 62 ++++++++++++++++++-
 .../promtail/targets/gcplog/pull_target.go    |  2 +-
 .../promtail/targets/gcplog/push_target.go    |  2 +-
 .../targets/gcplog/push_translation.go        |  4 +-
 .../sources/clients/promtail/configuration.md |  8 +++
 docs/sources/clients/promtail/scraping.md     |  5 ++
 8 files changed, 102 insertions(+), 13 deletions(-)

diff --git a/clients/pkg/promtail/scrapeconfig/scrapeconfig.go b/clients/pkg/promtail/scrapeconfig/scrapeconfig.go
index b65a796e0820..d2866d79102b 100644
--- a/clients/pkg/promtail/scrapeconfig/scrapeconfig.go
+++ b/clients/pkg/promtail/scrapeconfig/scrapeconfig.go
@@ -407,6 +407,10 @@ type GcplogTargetConfig struct {
 	// Server is the weaveworks server config for listening connections. Used just for `push` subscription type.
 	Server server.Config `yaml:"server"`
+
+	// UseFullLine forces Promtail to send the full line from Cloud Logging even if `textPayload` is available.
+	// By default, if `textPayload` is present in the line, then it is used as the log line.
+	UseFullLine bool `yaml:"use_full_line"`
 }
 
 // HerokuDrainTargetConfig describes a scrape config to listen and consume heroku logs, in the HTTPS drain manner.
diff --git a/clients/pkg/promtail/targets/gcplog/formatter.go b/clients/pkg/promtail/targets/gcplog/formatter.go index 9954b83bd6cb..f8a13e356a1a 100644 --- a/clients/pkg/promtail/targets/gcplog/formatter.go +++ b/clients/pkg/promtail/targets/gcplog/formatter.go @@ -13,10 +13,9 @@ import ( "github.com/grafana/loki/clients/pkg/promtail/api" "github.com/grafana/loki/pkg/logproto" - "github.com/grafana/loki/pkg/util" ) -// LogEntry that will be written to the pubsub topic. +// GCPLogEntry that will be written to the pubsub topic. // According to the following spec. // https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry // nolint:revive @@ -32,13 +31,22 @@ type GCPLogEntry struct { // Its important that `Timestamp` is optional in GCE log entry. ReceiveTimestamp string `json:"receiveTimestamp"` + // Optional. The severity of the log entry. The default value is DEFAULT. + // DEFAULT, DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL, ALERT, EMERGENCY + // https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity + Severity string `json:"severity"` + + // Optional. A map of key, value pairs that provides additional information about the log entry. + // The labels can be user-defined or system-defined. + Labels map[string]string `json:"labels"` + TextPayload string `json:"textPayload"` // NOTE(kavi): There are other fields on GCPLogEntry. but we need only need above fields for now // anyway we will be sending the entire entry to Loki. 
} -func parseGCPLogsEntry(data []byte, other model.LabelSet, otherInternal labels.Labels, useIncomingTimestamp bool, relabelConfig []*relabel.Config) (api.Entry, error) { +func parseGCPLogsEntry(data []byte, other model.LabelSet, otherInternal labels.Labels, useIncomingTimestamp, useFullLine bool, relabelConfig []*relabel.Config) (api.Entry, error) { var ge GCPLogEntry if err := json.Unmarshal(data, &ge); err != nil { @@ -50,10 +58,16 @@ func parseGCPLogsEntry(data []byte, other model.LabelSet, otherInternal labels.L lbs := labels.NewBuilder(otherInternal) lbs.Set("__gcp_logname", ge.LogName) lbs.Set("__gcp_resource_type", ge.Resource.Type) + lbs.Set("__gcp_severity", ge.Severity) - // labels from gcp log entry. Add it as internal labels + // resource labels from gcp log entry. Add it as internal labels for k, v := range ge.Resource.Labels { - lbs.Set("__gcp_resource_labels_"+util.SnakeCase(k), v) + lbs.Set("__gcp_resource_labels_"+convertToLokiCompatibleLabel(k), v) + } + + // labels from gcp log entry. Add it as internal labels + for k, v := range ge.Labels { + lbs.Set("__gcp_labels_"+convertToLokiCompatibleLabel(k), v) } var processed labels.Labels @@ -101,8 +115,8 @@ func parseGCPLogsEntry(data []byte, other model.LabelSet, otherInternal labels.L } } - // Send only `ge.textPaylload` as log line if its present. - if strings.TrimSpace(ge.TextPayload) != "" { + // Send only `ge.textPayload` as log line if its present and user don't explicitly ask for the whole log. 
+ if !useFullLine && strings.TrimSpace(ge.TextPayload) != "" { line = ge.TextPayload } diff --git a/clients/pkg/promtail/targets/gcplog/formatter_test.go b/clients/pkg/promtail/targets/gcplog/formatter_test.go index 3522189be52e..f70fa1d79d12 100644 --- a/clients/pkg/promtail/targets/gcplog/formatter_test.go +++ b/clients/pkg/promtail/targets/gcplog/formatter_test.go @@ -22,6 +22,7 @@ func TestFormat(t *testing.T) { labels model.LabelSet relabel []*relabel.Config useIncomingTimestamp bool + useFullLine bool expected api.Entry }{ { @@ -49,6 +50,22 @@ func TestFormat(t *testing.T) { Action: "replace", Replacement: "$1", }, + { + SourceLabels: model.LabelNames{"__gcp_severity"}, + Separator: ";", + Regex: relabel.MustNewRegexp("(.*)"), + TargetLabel: "severity", + Action: "replace", + Replacement: "$1", + }, + { + SourceLabels: model.LabelNames{"__gcp_labels_dataflow_googleapis_com_region"}, + Separator: ";", + Regex: relabel.MustNewRegexp("(.*)"), + TargetLabel: "region", + Action: "replace", + Replacement: "$1", + }, }, useIncomingTimestamp: true, expected: api.Entry{ @@ -56,6 +73,8 @@ func TestFormat(t *testing.T) { "jobname": "pubsub-test", "backend_service_name": "http-loki", "bucket_name": "loki-bucket", + "severity": "INFO", + "region": "europe-west1", }, Entry: logproto.Entry{ Timestamp: mustTime(t, "2020-12-22T15:01:23.045123456Z"), @@ -100,11 +119,48 @@ func TestFormat(t *testing.T) { }, }, }, + { + name: "use-full-line", + useFullLine: true, + msg: &pubsub.Message{ + Data: []byte(withTextPayload), + }, + labels: model.LabelSet{ + "jobname": "pubsub-test", + }, + expected: api.Entry{ + Labels: model.LabelSet{ + "jobname": "pubsub-test", + }, + Entry: logproto.Entry{ + Timestamp: time.Now(), + Line: withTextPayload, + }, + }, + }, + { + name: "use-text-payload", + msg: &pubsub.Message{ + Data: []byte(withTextPayload), + }, + labels: model.LabelSet{ + "jobname": "pubsub-test", + }, + expected: api.Entry{ + Labels: model.LabelSet{ + "jobname": "pubsub-test", + 
}, + Entry: logproto.Entry{ + Timestamp: time.Now(), + Line: logTextPayload, + }, + }, + }, } for _, c := range cases { t.Run(c.name, func(t *testing.T) { - got, err := parseGCPLogsEntry(c.msg.Data, c.labels, nil, c.useIncomingTimestamp, c.relabel) + got, err := parseGCPLogsEntry(c.msg.Data, c.labels, nil, c.useIncomingTimestamp, c.useFullLine, c.relabel) require.NoError(t, err) @@ -130,5 +186,7 @@ func mustTime(t *testing.T, v string) time.Time { } const ( - withAllFields = `{"logName": "https://project/gcs", "resource": {"type": "gcs", "labels": {"backendServiceName": "http-loki", "bucketName": "loki-bucket", "instanceId": "344555"}}, "timestamp": "2020-12-22T15:01:23.045123456Z"}` + withAllFields = `{"logName": "https://project/gcs", "severity": "INFO", "resource": {"type": "gcs", "labels": {"backendServiceName": "http-loki", "bucketName": "loki-bucket", "instanceId": "344555"}}, "timestamp": "2020-12-22T15:01:23.045123456Z", "labels": {"dataflow.googleapis.com/region": "europe-west1"}}` + logTextPayload = "text-payload-log" + withTextPayload = `{"logName": "https://project/gcs", "severity": "INFO", "textPayload": "` + logTextPayload + `", "resource": {"type": "gcs", "labels": {"backendServiceName": "http-loki", "bucketName": "loki-bucket", "instanceId": "344555"}}, "timestamp": "2020-12-22T15:01:23.045123456Z", "labels": {"dataflow.googleapis.com/region": "europe-west1"}}` ) diff --git a/clients/pkg/promtail/targets/gcplog/pull_target.go b/clients/pkg/promtail/targets/gcplog/pull_target.go index fba08b804136..38db550bdf73 100644 --- a/clients/pkg/promtail/targets/gcplog/pull_target.go +++ b/clients/pkg/promtail/targets/gcplog/pull_target.go @@ -108,7 +108,7 @@ func (t *pullTarget) run() error { case <-t.ctx.Done(): return t.ctx.Err() case m := <-t.msgs: - entry, err := parseGCPLogsEntry(m.Data, t.config.Labels, nil, t.config.UseIncomingTimestamp, t.relabelConfig) + entry, err := parseGCPLogsEntry(m.Data, t.config.Labels, nil, t.config.UseIncomingTimestamp, 
t.config.UseFullLine, t.relabelConfig) if err != nil { level.Error(t.logger).Log("event", "error formating log entry", "cause", err) m.Ack() diff --git a/clients/pkg/promtail/targets/gcplog/push_target.go b/clients/pkg/promtail/targets/gcplog/push_target.go index e711347d981b..730dd7a33545 100644 --- a/clients/pkg/promtail/targets/gcplog/push_target.go +++ b/clients/pkg/promtail/targets/gcplog/push_target.go @@ -133,7 +133,7 @@ func (h *pushTarget) push(w http.ResponseWriter, r *http.Request) { return } - entry, err := translate(pushMessage, h.config.Labels, h.config.UseIncomingTimestamp, h.relabelConfigs, r.Header.Get("X-Scope-OrgID")) + entry, err := translate(pushMessage, h.config.Labels, h.config.UseIncomingTimestamp, h.config.UseFullLine, h.relabelConfigs, r.Header.Get("X-Scope-OrgID")) if err != nil { h.metrics.gcpPushErrors.WithLabelValues("translation").Inc() level.Warn(h.logger).Log("msg", "failed to translate gcp push request", "err", err.Error()) diff --git a/clients/pkg/promtail/targets/gcplog/push_translation.go b/clients/pkg/promtail/targets/gcplog/push_translation.go index de5fb6cbab8a..19eb9e44a0a4 100644 --- a/clients/pkg/promtail/targets/gcplog/push_translation.go +++ b/clients/pkg/promtail/targets/gcplog/push_translation.go @@ -41,7 +41,7 @@ func (pm PushMessage) Validate() error { } // translate converts a GCP PushMessage into a loki api.Entry. It parses the push-specific labels, and delegates the rest to parseGCPLogsEntry. -func translate(m PushMessage, other model.LabelSet, useIncomingTimestamp bool, relabelConfigs []*relabel.Config, xScopeOrgID string) (api.Entry, error) { +func translate(m PushMessage, other model.LabelSet, useIncomingTimestamp, useFullLine bool, relabelConfigs []*relabel.Config, xScopeOrgID string) (api.Entry, error) { // Collect all push-specific labels. Every one of them is first configured as optional, and the user // can relabel it if needed. The relabeling and internal drop is handled in parseGCPLogsEntry. 
 	lbs := labels.NewBuilder(nil)
@@ -67,7 +67,7 @@ func translate(m PushMessage, other model.LabelSet, useIncomingTimestamp bool, r
 		return api.Entry{}, fmt.Errorf("failed to decode data: %w", err)
 	}
 
-	entry, err := parseGCPLogsEntry(decodedData, fixedLabels, lbs.Labels(nil), useIncomingTimestamp, relabelConfigs)
+	entry, err := parseGCPLogsEntry(decodedData, fixedLabels, lbs.Labels(nil), useIncomingTimestamp, useFullLine, relabelConfigs)
 	if err != nil {
 		return api.Entry{}, fmt.Errorf("failed to parse logs entry: %w", err)
 	}

diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md
index e997f419ace7..619334242a15 100644
--- a/docs/sources/clients/promtail/configuration.md
+++ b/docs/sources/clients/promtail/configuration.md
@@ -1036,6 +1036,10 @@ When using the `push` subscription type, keep in mind:
 # timestamp to the log when it was processed.
 [use_incoming_timestamp: <boolean> | default = false]
 
+# use_full_line forces Promtail to send the full line from Cloud Logging even if `textPayload` is available.
+# By default, if `textPayload` is present in the line, then it is used as the log line.
+[use_full_line: <boolean> | default = false]
+
 # If the subscription_type is push, configures an HTTP handler timeout. If processing the incoming GCP Logs request takes longer
 # than the configured duration, that is processing and then sending the entry down the processing pipeline, the server will abort
 # and respond with a 503 HTTP status code.
@@ -1053,8 +1057,10 @@ When Promtail receives GCP logs, various internal labels are made available for
 
 **Internal labels available for pull**
 
 - `__gcp_logname`
+- `__gcp_severity`
 - `__gcp_resource_type`
 - `__gcp_resource_labels_<name>`
+- `__gcp_labels_<name>`
 
 **Internal labels available for push**
 
 - `__gcp_subscription_name`
 - `__gcp_attributes_<name>`: All attributes read from `.message.attributes` in the incoming push message. Each attribute key is conveniently renamed, since it might contain unsupported characters. For example, `logging.googleapis.com/timestamp` is converted to `__gcp_attributes_logging_googleapis_com_timestamp`.
 - `__gcp_logname`
+- `__gcp_severity`
 - `__gcp_resource_type`
 - `__gcp_resource_labels_<name>`
+- `__gcp_labels_<name>`
 
 ### Azure Event Hubs

diff --git a/docs/sources/clients/promtail/scraping.md b/docs/sources/clients/promtail/scraping.md
index 25bec3b6cc0b..deca998a5609 100644
--- a/docs/sources/clients/promtail/scraping.md
+++ b/docs/sources/clients/promtail/scraping.md
@@ -213,6 +213,7 @@ There are two kind of scraping strategies: `pull` and `push`.
     project_id: "my-gcp-project"
     subscription: "my-pubsub-subscription"
     use_incoming_timestamp: false # default rewrite timestamps.
+    use_full_line: false # default use textPayload as log line.
     labels:
       job: "gcplog"
   relabel_configs:
@@ -232,8 +233,10 @@ It also supports `relabeling` and `pipeline` stages just like other targets.
 When Promtail receives GCP logs, various internal labels are made available for [relabeling](#relabeling):
 
 - `__gcp_logname`
+- `__gcp_severity`
 - `__gcp_resource_type`
 - `__gcp_resource_labels_<name>`
+- `__gcp_labels_<name>`
 
 In the example above, the `project_id` label from a GCP resource was transformed into a label called `project` through `relabel_configs`.
### Push
@@ -270,8 +273,10 @@ When Promtail receives GCP logs, various internal labels are made available for
 
 - `__gcp_subscription_name`
 - `__gcp_attributes_<name>`
 - `__gcp_logname`
+- `__gcp_severity`
 - `__gcp_resource_type`
 - `__gcp_resource_labels_<name>`
+- `__gcp_labels_<name>`
 
 In the example above, the `__gcp_message_id` and the `__gcp_attributes_logging_googleapis_com_timestamp` labels are transformed to `message_id` and `incoming_ts` through `relabel_configs`. All other internal labels, for example some other attribute,

From b26ff7df457b3737e983ed842678e4a43206a291 Mon Sep 17 00:00:00 2001
From: Alfredo <109958902+alfredo-d@users.noreply.github.com>
Date: Sat, 15 Apr 2023 04:26:50 +0800
Subject: [PATCH 13/14] Update replication factor impact to read path (#9061)

**What this PR does / why we need it**:
Update a note to call out that the replication factor also impacts read path behavior.

**Which issue(s) this PR fixes**:
Fixes # N/A

**Special notes for your reviewer**:

**Checklist**
- [x] Reviewed the [`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md) guide (**required**)

---------

Co-authored-by: J Stickler
---
 docs/sources/fundamentals/architecture/components/_index.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/sources/fundamentals/architecture/components/_index.md b/docs/sources/fundamentals/architecture/components/_index.md
index 80ec7f86e186..6fdbdeacea8d 100644
--- a/docs/sources/fundamentals/architecture/components/_index.md
+++ b/docs/sources/fundamentals/architecture/components/_index.md
@@ -233,3 +233,5 @@ factor, it is possible that the querier may receive duplicate data. To resolve
 this, the querier internally **deduplicates** data that has the same nanosecond
 timestamp, label set, and log message.
+
+On the read path, the [replication factor]({{< relref "#replication-factor" >}}) also plays a role. For example, with a `replication-factor` of `3`, we only require two of the three queries to succeed.
+

From 1ff4ad75d397b54f25158459b35c60192d960cc3 Mon Sep 17 00:00:00 2001
From: Alex Close
Date: Fri, 14 Apr 2023 21:28:09 +0100
Subject: [PATCH 14/14] Updated Scalability benefit to call out decoupled read/write paths an… (#9032)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

**What this PR does / why we need it**:
Updated the Scalability benefit to call out decoupled read/write paths and the benefit this brings. This is a huge competitive edge over tools like Splunk and Elastic, and it makes sense to explicitly call it out under our benefits.
---
 docs/sources/fundamentals/overview/_index.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/sources/fundamentals/overview/_index.md b/docs/sources/fundamentals/overview/_index.md
index f759423699ba..d4f73d5b0630 100644
--- a/docs/sources/fundamentals/overview/_index.md
+++ b/docs/sources/fundamentals/overview/_index.md
@@ -57,9 +57,8 @@ and allows for efficient query execution.
 - **Scalability**
 
   Loki is designed for scalability,
-  as each of Loki's components can be run as microservices.
-  Configuration permits scaling the microservices individually,
-  permitting flexible large-scale installations.
+  as each of Loki's components can be run as microservices designed to run statelessly and natively within Kubernetes.
+  Loki's read and write paths are decoupled, meaning that you can independently scale reads or writes, leading to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
 
 - **Flexibility**