Merge pull request grafana#182 from grafana/main
Update from upstream repository
openshift-merge-robot authored Sep 12, 2023
2 parents 53c94b6 + 5f7bde7 commit 5170935
Showing 67 changed files with 666 additions and 324 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -17,6 +17,7 @@
* [10380](https://github.com/grafana/loki/pull/10380) **shantanualsi** Remove `experimental.ruler.enable-api` in favour of `ruler.enable-api`
* [10395](https://github.com/grafana/loki/pull/10395/) **shantanualshi** Remove deprecated `split_queries_by_interval` and `forward_headers_list` configuration options in the `query_range` section
* [10456](https://github.com/grafana/loki/pull/10456) **dannykopping** Add `loki_distributor_ingester_append_timeouts_total` metric, remove `loki_distributor_ingester_append_failures_total` metric
* [10534](https://github.com/grafana/loki/pull/10534) **chaudum** Remove configuration `use_boltdb_shipper_as_backup`

##### Fixes

1 change: 1 addition & 0 deletions clients/pkg/promtail/client/client_test.go
@@ -21,6 +21,7 @@ import (

"github.com/grafana/loki/clients/pkg/promtail/api"
"github.com/grafana/loki/clients/pkg/promtail/utils"

"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/push"
lokiflag "github.com/grafana/loki/pkg/util/flagext"
2 changes: 1 addition & 1 deletion cmd/migrate/main.go
@@ -310,7 +310,7 @@ func (m *chunkMover) moveChunks(ctx context.Context, threadID int, syncRangeCh <
var totalBytes uint64
var totalChunks uint64
//log.Printf("%d processing sync range %d - Start: %v, End: %v\n", threadID, sr.number, time.Unix(0, sr.from).UTC(), time.Unix(0, sr.to).UTC())
schemaGroups, fetchers, err := m.source.GetChunkRefs(m.ctx, m.sourceUser, model.TimeFromUnixNano(sr.from), model.TimeFromUnixNano(sr.to), m.matchers...)
schemaGroups, fetchers, err := m.source.GetChunks(m.ctx, m.sourceUser, model.TimeFromUnixNano(sr.from), model.TimeFromUnixNano(sr.to), m.matchers...)
if err != nil {
log.Println(threadID, "Error querying index for chunk refs:", err)
errCh <- err
16 changes: 7 additions & 9 deletions docs/sources/_index.md
@@ -8,14 +8,12 @@ weight: 100

# Grafana Loki documentation

<p align="center"> <img src="logo_and_name.png" alt="Loki Logo"> <br>
<small>Like Prometheus, but for logs!</small> </p>
<p align="center"> <img src="logo_and_name.png" alt="Loki Logo"> <br>

Grafana Loki is a set of components that can be composed into a fully featured
logging stack.
Grafana Loki is a set of components that can be composed into a fully featured logging stack.

Unlike other logging systems, Loki is built around the idea of only indexing
metadata about your logs: labels (just like Prometheus labels). Log data itself
is then compressed and stored in chunks in object stores such as S3 or GCS, or
even locally on the filesystem. A small index and highly compressed chunks
simplifies the operation and significantly lowers the cost of Loki.
Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.
A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.

For more information, see the [Loki overview]({{< relref "./get-started/overview" >}})
9 changes: 4 additions & 5 deletions docs/sources/community/contributing.md
@@ -20,14 +20,14 @@ as a remote.
$ git clone https://github.com/grafana/loki.git $GOPATH/src/github.com/grafana/loki
$ cd $GOPATH/src/github.com/grafana/loki
$ git remote add fork <FORK_URL>
```

# Make some changes!
Make your changes, add your changes to a commit, and open a pull request (PR).

```bash
$ git add .
$ git commit -m "docs: fix spelling error"
$ git push -u fork HEAD

# Open a PR!
```

Note that if you downloaded Loki using `go get`, the message `package github.com/grafana/loki: no Go files in /go/src/github.com/grafana/loki`
@@ -54,10 +54,9 @@ While `go install ./cmd/loki` works, the preferred way to build is by using
- `make images`: builds all Docker images (optionally suffix the previous binary
commands with `-image`, e.g., `make loki-image`).

These commands can be chained together to build multiple binaries in one go:
These commands can be chained together to build multiple binaries in one go. The following example builds binaries for Loki, Promtail, and LogCLI.

```bash
# Builds binaries for Loki, Promtail, and LogCLI.
$ make loki promtail logcli
```

20 changes: 9 additions & 11 deletions docs/sources/configure/_index.md
@@ -2036,11 +2036,6 @@ boltdb_shipper:
# CLI flag: -boltdb.shipper.index-gateway-client.log-gateway-requests
[log_gateway_requests: <boolean> | default = false]

# Use boltdb-shipper index store as backup for indexing chunks. When enabled,
# boltdb-shipper needs to be configured under storage_config
# CLI flag: -boltdb.shipper.use-boltdb-shipper-as-backup
[use_boltdb_shipper_as_backup: <boolean> | default = false]

[ingestername: <string> | default = ""]

[mode: <string> | default = ""]
@@ -2103,11 +2098,6 @@ tsdb_shipper:
# CLI flag: -tsdb.shipper.index-gateway-client.log-gateway-requests
[log_gateway_requests: <boolean> | default = false]

# Use boltdb-shipper index store as backup for indexing chunks. When enabled,
# boltdb-shipper needs to be configured under storage_config
# CLI flag: -tsdb.shipper.use-boltdb-shipper-as-backup
[use_boltdb_shipper_as_backup: <boolean> | default = false]

[ingestername: <string> | default = ""]

[mode: <string> | default = ""]
@@ -2719,9 +2709,17 @@ shard_streams:
# CLI flag: -index-gateway.shard-size
[index_gateway_shard_size: <int> | default = 0]

# Allow user to send structured metadata (non-indexed labels) in push payload.
# Allow user to send structured metadata in push payload.
# CLI flag: -validation.allow-structured-metadata
[allow_structured_metadata: <boolean> | default = false]

# Maximum size accepted for structured metadata per log line.
# CLI flag: -limits.max-structured-metadata-size
[max_structured_metadata_size: <int> | default = 64KB]

# Maximum number of structured metadata entries per log line.
# CLI flag: -limits.max-structured-metadata-entries-count
[max_structured_metadata_entries_count: <int> | default = 128]
```
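
As a rough illustration, the structured metadata options above might be set together as in the following sketch. This block is not part of the commit; it assumes, based on the surrounding options, that these settings live in the `limits_config` block, and it shows structured metadata enabled with the documented defaults for the size limits.

```yaml
# Hypothetical limits_config sketch; values and placement are assumptions, not from this commit.
limits_config:
  # CLI flag: -validation.allow-structured-metadata
  allow_structured_metadata: true
  # CLI flag: -limits.max-structured-metadata-size
  max_structured_metadata_size: 64KB
  # CLI flag: -limits.max-structured-metadata-entries-count
  max_structured_metadata_entries_count: 128
```
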
### frontend_worker
78 changes: 27 additions & 51 deletions docs/sources/get-started/overview.md
@@ -1,76 +1,52 @@
---
menuTitle: Overview
menuTitle: Loki overview
title: Loki overview
description: Loki product overview and features.
weight: 200
aliases:
- ../overview/
- ../fundamentals/overview/
---
# Loki overview

Grafana Loki is a log aggregation tool,
and it is the core of a fully-featured logging stack.

Loki is a datastore optimized for efficiently holding log data.
The efficient indexing of log data
distinguishes Loki from other logging systems.
Unlike other logging systems, a Loki index is built from labels,
leaving the original log message unindexed.

![Loki overview](../loki-overview-1.png "Loki overview")

An agent (also called a client) acquires logs,
turns the logs into streams,
and pushes the streams to Loki through an HTTP API.
The Promtail agent is designed for Loki installations,
but many other [Agents]({{< relref "../send-data" >}}) seamlessly integrate with Loki.

![Loki agent interaction](../loki-overview-2.png "Loki agent interaction")
# Loki overview

Loki indexes streams.
Each stream identifies a set of logs associated with a unique set of labels.
A quality set of labels is key to the creation of an index that is both compact
and allows for efficient query execution.
Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by [Prometheus](https://prometheus.io/). Loki differs from Prometheus by focusing on logs instead of metrics, and collecting logs via push, instead of pull.

[LogQL]({{< relref "../query" >}}) is the query language for Loki.
Loki is designed to be very cost effective and highly scalable. Unlike other logging systems, Loki does not index the contents of the logs, but only indexes metadata about your logs as a set of labels for each log stream.

## Loki features
A log stream is a set of logs which share the same labels. Labels help Loki to find a log stream within your data store, so having a quality set of labels is key to efficient query execution.

- **Efficient memory usage for indexing the logs**
Log data is then compressed and stored in chunks in an object store such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even, for development or proof of concept, on the filesystem. A small index and highly compressed chunks simplify the operation and significantly lower the cost of Loki.

By indexing on a set of labels, the index can be significantly smaller
than other log aggregation products.
Less memory makes it less expensive to operate.
{{< figure src="../loki-overview-2.png" caption="**Loki logging stack**" >}}

- **Multi-tenancy**
A typical Loki-based logging stack consists of 3 components:

Loki allows multiple tenants to utilize a single Loki instance.
The data of distinct tenants is completely isolated from other tenants.
Multi-tenancy is configured by assigning a tenant ID in the agent.
- **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.

- **LogQL, Loki's query language**
- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations; for more information, see [deployment modes]({{< relref "../get-started/deployment-modes/" >}}).

- **[Grafana](https://github.com/grafana/grafana)** for querying and displaying log data. You can also query logs from the command line, using [LogCLI]({{< relref "../query/logcli" >}}) or using the Loki API directly.

Users of the Prometheus query language, PromQL, will find LogQL familiar
and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.
## Loki features

- **Scalability**
- **Scalability** - Loki is designed for scalability, and can scale from as small as running on a Raspberry Pi to ingesting petabytes a day.
In its most common deployment, “simple scalable mode”, Loki decouples requests into separate read and write paths, so that you can independently scale them, which leads to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
If needed, each of Loki's components can also be run as microservices designed to run natively within Kubernetes.

Loki is designed for scalability,
as each of Loki's components can be run as microservices designed to run statelessly and natively within Kubernetes.
Loki's read and write path are decoupled meaning that you can independently scale read or write leading to flexible large-scale installations that can quickly adapt to meet your workload at any given time.
- **Multi-tenancy** - Loki allows multiple tenants to share a single Loki instance. With multi-tenancy, the data and requests of each tenant are completely isolated from the others.
Multi-tenancy is [configured](../operations/multi-tenancy) by assigning a tenant ID in the agent, as shown in the sketch after this list.

- **Flexibility**
- **Third-party integrations** - Several third-party agents (clients) have support for Loki, via plugins. This lets you keep your existing observability setup while also shipping logs to Loki.

Many agents (clients) have plugin support.
This allows a current observability structure
to add Loki as their log aggregation tool without needing
to switch existing portions of the observability stack.
- **Efficient storage** - Loki stores log data in highly compressed chunks.
Similarly, the Loki index, because it indexes only the set of labels, is significantly smaller than other log aggregation tools.
The compressed chunks, smaller index, and use of low-cost object storage, make Loki less expensive to operate.

- **Grafana integration**
- **LogQL, Loki's query language** - [LogQL]({{< relref "../query" >}}) is the query language for Loki. Users who are already familiar with the Prometheus query language, [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/), will find LogQL familiar and flexible for generating queries against the logs.
The language also facilitates the generation of metrics from log data,
a powerful feature that goes well beyond log aggregation.

Loki seamlessly integrates with Grafana,
providing a complete observability stack.
- **Alerting** - Loki includes a component called the [ruler]({{< relref "../alert" >}}), which can continually evaluate queries against your logs, and perform an action based on the result. This allows you to monitor your logs for anomalies or events. Loki integrates with [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), or the [alert manager](/docs/grafana/latest/alerting) within Grafana.

- **Grafana integration** - Loki integrates with Grafana, Mimir, and Tempo, providing a complete observability stack, and seamless correlation between logs, metrics and traces.
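
As referenced in the multi-tenancy item above, here is a minimal, hypothetical sketch of assigning a tenant ID in the agent, using Promtail's `clients` section. It is not part of this commit; the URL and tenant name are placeholders.

```yaml
# Hypothetical Promtail client configuration; URL and tenant ID are placeholders.
clients:
  - url: http://loki-gateway:3100/loki/api/v1/push
    # All streams pushed by this client are stored under this tenant.
    tenant_id: team-a
```
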
23 changes: 15 additions & 8 deletions docs/sources/operations/loki-canary/_index.md
@@ -260,16 +260,23 @@ spec:
If the other options are not sufficient for your use case, you can compile
`loki-canary` yourself:

```bash
# clone the source tree
$ git clone https://github.com/grafana/loki
1. Clone the source tree.

# build the binary
$ make loki-canary
```bash
$ git clone https://github.com/grafana/loki
```

# (optionally build the container image)
$ make loki-canary-image
```
1. Build the binary.

```bash
$ make loki-canary
```

1. Optional: Build the container image.

```bash
$ make loki-canary-image
```

## Configuration

4 changes: 3 additions & 1 deletion docs/sources/operations/troubleshooting.md
@@ -100,9 +100,11 @@ port (`9080` or `3101` if using Helm) locally:

```bash
$ kubectl port-forward loki-promtail-jrfg7 9080
# Then, in a web browser, visit http://localhost:9080/service-discovery
```

Then, in a web browser, visit [http://localhost:9080/service-discovery](http://localhost:9080/service-discovery)


## Debug output

Both Loki and Promtail support a log level flag with the addition of
3 changes: 1 addition & 2 deletions docs/sources/release-notes/cadence.md
@@ -51,8 +51,7 @@ Once your PR is merged to `main`, you can expect it to become available in the n

`tools/which-release.sh`

For example, [this PR](https://github.com/grafana/loki/pull/7472) was [merged](https://github.com/grafana/loki/pull/7472#event-8431624850)
into the commit named `d434e80`. Using the tool above, we can see that is part of release 2.8 and several weekly releases:
For example, [this PR](https://github.com/grafana/loki/pull/7472) was [merged](https://github.com/grafana/loki/pull/7472#event-8431624850) into the commit named `d434e80`. Using the tool above, we can see that it is part of release 2.8 and several weekly releases:

```bash
$ ./tools/which-release.sh d434e80
8 changes: 6 additions & 2 deletions docs/sources/send-data/docker-driver/_index.md
@@ -32,11 +32,15 @@ Run the following command to install the plugin, updating the release version if
docker plugin install grafana/loki-docker-driver:2.8.2 --alias loki --grant-all-permissions
```

To check installed plugins, use the `docker plugin ls` command. Plugins that
have started successfully are listed as enabled:
To check installed plugins, use the `docker plugin ls` command.
Plugins that have started successfully are listed as enabled:

```bash
$ docker plugin ls
```
You should see output similar to the following:

```bash
ID NAME DESCRIPTION ENABLED
ac720b8fcfdb loki Loki Logging Driver true
```
36 changes: 31 additions & 5 deletions docs/sources/send-data/promtail/cloud/eks/_index.md
@@ -34,11 +34,14 @@ In this tutorial we'll use [eksctl][eksctl], a simple command line utility for c
eksctl create cluster --name loki-promtail --managed
```

You have time for a coffee ☕, this usually take 15minutes. When this is finished you should have `kubectl context` configured to communicate with your newly created cluster. Let's verify everything is fine:
This usually takes about 15 minutes. When this is finished you should have `kubectl context` configured to communicate with your newly created cluster. To verify, run the following command:

```bash
kubectl version
```
You should see output similar to the following:

```bash
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-07-04T15:01:15Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1ea7", GitCommit:"fd1ea7c64d0e3ccbf04b124431c659f65330562a", GitTreeState:"clean", BuildDate:"2020-05-28T19:06:00Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```
@@ -49,14 +49,22 @@ To ship all your pods logs we're going to set up [Promtail]({{< relref "../../..

What's nice about Promtail is that it uses the same [service discovery as Prometheus][prometheus conf]; you should make sure the `scrape_configs` of Promtail matches the Prometheus one. Not only is this simpler to configure, but it also means metrics and logs will have the same metadata (labels) attached by the Prometheus service discovery. When querying Grafana, you will be able to correlate metrics and logs very quickly; you can read more about this in our [blogpost][correlate].
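
As a rough illustration (not part of this commit), a Promtail `scrape_configs` entry that mirrors a typical Prometheus Kubernetes pod scrape configuration might look like the following sketch; the job name and relabel rules are assumptions.

```yaml
# Hypothetical Promtail scrape configuration; job name and relabel rules are assumptions.
scrape_configs:
  - job_name: kubernetes-pods
    # Reuse the same Kubernetes service discovery mechanism that Prometheus uses.
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Carry over the pod's `app` label and namespace so metrics and logs share metadata.
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: app
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
```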

Let's add the Loki repository and list all available charts.
Let's add the Loki repository and list all available charts. To add the repo, run the following command:

```bash
helm repo add loki https://grafana.github.io/loki/charts
```
You should see the following message.
```bash
"loki" has been added to your repositories
```
To list the available charts, run the following command:

```bash
helm search repo
```
You should see output similar to the following:
```bash
NAME CHART VERSION APP VERSION DESCRIPTION
loki/fluent-bit 0.3.0 v1.6.0 Uses fluent-bit Loki go plugin for gathering lo...
loki/loki 0.31.0 v1.6.0 Loki: like Prometheus, but for logs.
Expand All @@ -81,14 +92,24 @@ loki:
password: <grafancloud apikey>
```
Once you're ready let's create a new namespace monitoring and add Promtail to it:
Once you're ready, let's create a new `monitoring` namespace and add Promtail to it. To create the namespace, run the following command:
```bash
kubectl create namespace monitoring
```

You should see the following message.
```bash
namespace/monitoring created
```

To add Promtail, run the following command:
```bash
helm install promtail --namespace monitoring loki/promtail -f values.yaml
```

You should see output similar to the following:
```bash
NAME: promtail
LAST DEPLOYED: Fri Jul 10 14:41:37 2020
NAMESPACE: default
@@ -105,7 +126,10 @@ Verify that Promtail pods are running. You should see only two since we're runni

```bash
kubectl get -n monitoring pods
```

You should see output similar to the following:
```bash
NAME READY STATUS RESTARTS AGE
promtail-87t62 1/1 Running 0 35s
promtail-8c2r4 1/1 Running 0 35s
@@ -210,7 +234,9 @@ And deploy the `eventrouter` using:

```bash
kubectl create -f https://raw.githubusercontent.com/grafana/loki/main/docs/sources/clients/aws/eks/eventrouter.yaml
```
You should see output similar to the following:
```bash
serviceaccount/eventrouter created
clusterrole.rbac.authorization.k8s.io/eventrouter created
clusterrolebinding.rbac.authorization.k8s.io/eventrouter created