docs/README.md: 7 additions & 1 deletion
@@ -41,12 +41,18 @@ Note that in Hugo the structure of the documentation is based on the folder stru
## Shared content
-**NOTE:** As of Loki/GEL 3.0, there will be shared files between the Loki docs and the GEL docs. The Grafana Enterprise Logs documentation will pull in content from the Loki repo when publishing the GEL docs. Files that are shared between the two doc sets will contain a comment indicating that the content is shared.
+**NOTE:** As of Loki/GEL 3.0, there are shared files between the Loki docs and the GEL docs. The Grafana Enterprise Logs documentation will pull in content from the Loki repo when publishing the GEL docs. Files that are shared between the two doc sets will contain a comment indicating that the content is shared.
For more information about shared content, see the [reuse content](https://grafana.com/docs/writers-toolkit/write/reuse-content/) section of the Writers' Toolkit.
For more information about building and testing documentation, see the [build and review](https://grafana.com/docs/writers-toolkit/review/) section of the Writers' Toolkit.
+
+### Lambda-Promtail documentation
+
+As of June 2025, the code for the Lambda-promtail client has moved from the Loki repository to a separate [lambda-promtail repository](https://github.com/grafana/lambda-promtail).
+
+As of October 2025, the documentation for the Lambda-promtail client has also moved to the lambda-promtail repository. You can find it under [docs/sources](https://github.com/grafana/lambda-promtail/tree/main/docs/sources).
## Testing documentation
Loki uses the static site generator [Hugo](https://gohugo.io/) to generate the documentation. The Loki repository uses a continuous integration (CI) action to sync documentation to the [Grafana website](https://grafana.com/docs/loki/latest). The CI is triggered on every merge to main in the `docs` subfolder.
docs/sources/community/maintaining/release/patch-vulnerabilities.md: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ Before start patching vulnerabilities, know what are you patching. It can be one
1. Check if [dependabot already patched the dependency](https://github.com/grafana/loki/pulls?q=is%3Apr+label%3Adependencies+is%3Aclosed) or [have a PR opened to patch](https://github.com/grafana/loki/pulls?q=is%3Apr+is%3Aopen+label%3Adependencies) . If not, manually upgrade the package on the `main` branch as follows.
docs/sources/get-started/_index.md: 1 addition & 0 deletions
@@ -14,6 +14,7 @@ aliases:
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus.
It's designed to be very cost-effective and easy to operate.
It doesn't index the contents of the logs, but rather a set of labels for each log stream.
+Note that the entire content of the log line is searchable, using labels just makes searching more efficient by narrowing the number of logs retrieved during querying.
Because all Loki implementations are unique, the installation process is different for every customer.
But there are some steps in the process that are common to every installation.
@@ -24,13 +24,13 @@ For more information see [Deployment modes](../deployment-modes/).
|[Index Gateway](#index-gateway)| x |||| x |
|[Compactor](#compactor)| x | x ||| x |
|[Ruler](#ruler)| x | x ||| x |
+|[Pattern ingester](#pattern-ingester)| x | x || x ||
|[Bloom Planner (Experimental)](#bloom-planner)| x |||| x |
|[Bloom Builder (Experimental)](#bloom-builder)| x |||| x |
|[Bloom Gateway (Experimental)](#bloom-gateway)| x |||| x |
This page describes the responsibilities of each of these components.
-
## Distributor
The **distributor** service is responsible for handling incoming push requests from
@@ -118,7 +118,6 @@ quorum consistency on reads and writes. This means that the distributor will wai
for a positive response of at least one half plus one of the ingesters to send
the sample to before responding to the client that initiated the send.
-
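
As context for the quorum arithmetic above, a minimal configuration sketch; the `common` block and value here are editorial assumptions for illustration, not content from this diff:

```yaml
# Editorial sketch, not part of this diff. With a replication factor
# of 3, quorum is floor(3/2) + 1 = 2, so the distributor waits for
# two successful ingester writes before acknowledging the client.
common:
  replication_factor: 3
```
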
## Ingester
The **ingester** service is responsible for persisting data and shipping it to long-term
@@ -133,7 +132,7 @@ the hash ring. Each ingester has a state of either `PENDING`, `JOINING`,
another ingester that is `LEAVING`. This only applies for legacy deployment modes.
{{< admonition type="note" >}}
-Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended using a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
+Handoff is a deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended using a stateful deployment model together with the [write ahead log](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/wal/).
{{< /admonition >}}
1. `JOINING` is an Ingester's state when it is currently inserting its tokens
@@ -179,7 +178,7 @@ Loki is configured to [accept out-of-order writes](https://grafana.com/docs/loki
When not configured to accept out-of-order writes, the ingester validates that ingested log lines are in order. When an
181
180
ingester receives a log line that doesn't follow the expected order, the line
-is rejected and an error is returned to the user.
+is rejected and an error is returned to the user.
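
As context for the ordering behavior described in this hunk, a minimal sketch of the per-tenant setting that governs it; this block is an editorial assumption, not part of the diff:

```yaml
# Editorial sketch. unordered_writes defaults to true in recent Loki
# releases; setting it to false enforces the strict timestamp
# ordering described above.
limits_config:
  unordered_writes: true
```
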
The ingester validates that log lines are received in
timestamp-ascending order. Each log has a timestamp that occurs at a later
@@ -190,7 +189,7 @@ Logs from each unique set of labels are built up into "chunks" in memory and
then flushed to the backing storage backend.
If an ingester process crashes or exits abruptly, all the data that has not yet
-been flushed could be lost. Loki is usually configured with a [Write Ahead Log](../../operations/storage/wal/) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
+been flushed could be lost. Loki is usually configured with a [Write Ahead Log](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/wal/) which can be _replayed_ on restart as well as with a `replication_factor` (usually 3) of each log to mitigate this risk.
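
For readers following the WAL links being updated here, a minimal enabling sketch; the directory path is an illustrative assumption:

```yaml
# Editorial sketch. Enables the write ahead log so unflushed chunks
# can be replayed after a crash; the directory should live on
# persistent storage so it survives restarts.
ingester:
  wal:
    enabled: true
    dir: /loki/wal
```
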
When not configured to accept out-of-order writes,
all lines pushed to Loki for a given stream (unique combination of
@@ -209,7 +208,7 @@ nanosecond timestamps:
### Handoff
{{< admonition type="warning" >}}
-Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended using a stateful deployment model together with the [write ahead log](../../operations/storage/wal/).
+Handoff is deprecated behavior mainly used in stateless deployments of ingesters, which is discouraged. Instead, it's recommended using a stateful deployment model together with the [write ahead log](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/wal/).
{{< /admonition >}}
By default, when an ingester is shutting down and tries to leave the hash ring,
@@ -232,7 +231,6 @@ works in single-process mode as [queriers](#querier) need access to the same
back-end store and BoltDB only allows one process to have a lock on the DB at a
given time.
-
## Query frontend
The **query frontend** is an **optional service** providing the querier's API endpoints and can be used to accelerate the read path. When the query frontend is in place, incoming query requests should be directed to the query frontend instead of the queriers. The querier service will be still required within the cluster, in order to execute the actual queries.
@@ -277,20 +275,18 @@ This cache is only applicable when using single store TSDB.
The query frontend caches log volume query results similar to the [metric query](#metric-queries) results.
This cache is only applicable when using single store TSDB.
-
## Query scheduler
-The **query scheduler** is an **optional service** providing more [advanced queuing functionality](../../operations/query-fairness/) than the [query frontend](#query-frontend).
+The **query scheduler** is an **optional service** providing more [advanced queuing functionality](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/query-fairness/) than the [query frontend](#query-frontend).
When using this component in the Loki deployment, query frontend pushes split up queries to the query scheduler which enqueues them in an internal in-memory queue.
There is a queue for each tenant to guarantee the query fairness across all tenants.
The queriers that connect to the query scheduler act as workers that pull their jobs from the queue, execute them, and return them to the query frontend for aggregation. Queriers therefore need to be configured with the query scheduler address (via the `-querier.scheduler-address` CLI flag) in order to allow them to connect to the query scheduler.
Query schedulers are **stateless**. However, due to the in-memory queue, it's recommended to run more than one replica to keep the benefit of high availability. Two replicas should suffice in most cases.
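
To make the wiring concrete, a minimal sketch connecting the frontend and the querier workers to a dedicated scheduler; the hostname and port are illustrative assumptions:

```yaml
# Editorial sketch. The frontend pushes split-up queries to the
# scheduler; querier workers pull from it. The worker setting is the
# YAML equivalent of the -querier.scheduler-address CLI flag.
frontend:
  scheduler_address: query-scheduler:9095
frontend_worker:
  scheduler_address: query-scheduler:9095
```
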
-
## Querier
-The **querier** service is responsible for executing [Log Query Language (LogQL)](../../query/) queries.
+The **querier** service is responsible for executing [Log Query Language (LogQL)](https://grafana.com/docs/loki/<LOKI_VERSION>/query/) queries.
The querier can handle HTTP requests from the client directly (in "single binary" mode, or as part of the read path in "simple scalable deployment")
or pull subqueries from the query frontend or query scheduler (in "microservice" mode).
@@ -301,30 +297,28 @@ factor, it is possible that the querier may receive duplicate data. To resolve
this, the querier internally **deduplicates** data that has the same nanosecond
timestamp, label set, and log message.
-
## Index Gateway
The **index gateway** service is responsible for handling and serving metadata queries.
Metadata queries are queries that look up data from the index. The index gateway is only used by "shipper stores",
-such as [single store TSDB](../../operations/storage/tsdb/) or [single store BoltDB](../../operations/storage/boltdb-shipper/).
+such as [single store TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) or [single store BoltDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/).
The query frontend queries the index gateway for the log volume of queries so it can make a decision on how to shard the queries.
The queriers query the index gateway for chunk references for a given query so they know which chunks to fetch and query.
The index gateway can run in `simple` or `ring` mode. In `simple` mode, each index gateway instance serves all indexes from all tenants.
In `ring` mode, index gateways use a consistent hash ring to distribute and shard the indexes per tenant amongst available instances.
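
A minimal sketch of selecting the mode described above; this block is an editorial assumption (`simple` is the default):

```yaml
# Editorial sketch. Ring mode shards indexes per tenant across index
# gateway instances using a consistent hash ring.
index_gateway:
  mode: ring
```
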
-
## Compactor
-The **compactor** service is used by "shipper stores", such as [single store TSDB](../../operations/storage/tsdb/)
-or [single store BoltDB](../../operations/storage/boltdb-shipper/), to compact the multiple index files produced by the ingesters
+The **compactor** service is used by "shipper stores", such as [single store TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/)
+or [single store BoltDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/boltdb-shipper/), to compact the multiple index files produced by the ingesters
and shipped to object storage into single index files per day and tenant. This makes index lookups more efficient.
To do so, the compactor downloads the files from object storage in a regular interval, merges them into a single one,
uploads the newly created index, and cleans up the old files.
-Additionally, the compactor is also responsible for [log retention](../../operations/storage/retention/) and [log deletion](../../operations/storage/logs-deletion/).
+Additionally, the compactor is also responsible for [log retention](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/retention/) and [log deletion](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/logs-deletion/).
In a Loki deployment, the compactor service is usually run as a single instance.
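
Since this hunk covers the compactor's retention and deletion duties, a minimal enabling sketch; the path and store choice are illustrative assumptions:

```yaml
# Editorial sketch. retention_enabled turns on compactor-driven log
# retention; delete_request_store backs the log deletion API.
compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  delete_request_store: filesystem
```
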
@@ -340,7 +334,18 @@ from the query frontend.
When running multiple rulers, they use a consistent hash ring to distribute rule groups amongst available ruler instances.
+
+## Pattern ingester
+
+The optional **pattern ingester** component receives log data from the ingesters and scans the logs to detect and aggregate patterns. This can be useful for understanding the structure of your logs at scale. The pattern ingester is used by the pattern feature in Logs Drilldown, which lets you detect similar log lines and add or exclude them from your search.
+
+The ingester uses a drain algorithm to identify related logs that share the same pattern, and maintain their counts over time. Patterns consist of a number, a string, and a Loki series identifier.
+
+The pattern ingester exposes a query API, so you can fetch detected patterns. This API is used by the Patterns tab in the Grafana Logs Drilldown plugin.
+
+This component is disabled by default and must be enabled in your [Loki config file](https://grafana.com/docs/loki/latest/configure/#supported-contents-and-default-values-of-lokiyaml).
+
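
Since the added text notes the component must be enabled in the Loki config file, a minimal sketch of that enablement; this block is an editorial assumption based on the block name in the linked configuration reference:

```yaml
# Editorial sketch. Enables the optional pattern ingester used by
# the Patterns tab in Grafana Logs Drilldown.
pattern_ingester:
  enabled: true
```
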
## Bloom Planner
{{< admonition type="warning" >}}
This feature is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available.
No SLA is provided.
@@ -353,27 +358,29 @@ been built for a given day and tenant and what series need to be newly added.
This service is also used to apply blooms retention.
## Bloom Builder
+
{{< admonition type="warning" >}}
This feature is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available.
No SLA is provided.
{{< /admonition >}}
The Bloom Builder service is responsible for processing the tasks created by the Bloom Planner.
The Bloom Builder creates bloom blocks from structured metadata of log entries.
-The resulting blooms are grouped in bloom blocks spanning multiple series and chunks from a given day.
+The resulting blooms are grouped in bloom blocks spanning multiple series and chunks from a given day.
This component also builds metadata files to track which blocks are available for each series and TSDB index file.
The service is stateless and horizontally scalable.
## Bloom Gateway
+
{{< admonition type="warning" >}}
This feature is an [experimental feature](/docs/release-life-cycle/). Engineering and on-call support is not available.
No SLA is provided.
{{< /admonition >}}
-The Bloom Gateway service is responsible for handling and serving chunks filtering requests.
+The Bloom Gateway service is responsible for handling and serving chunks filtering requests.
The index gateway queries the Bloom Gateway when computing chunk references, or when computing shards for a given query.
-The gateway service takes a list of chunks and a filtering expression and matches them against the blooms,
+The gateway service takes a list of chunks and a filtering expression and matches them against the blooms,
filtering out any chunks that do not match the given label filter expression.
The service is horizontally scalable. When running multiple instances, the client (Index Gateway) shards requests
docs/sources/get-started/labels/_index.md: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ Loki automatically tries to populate a default `service_name` label while ingest
- Grafana Cloud Application Observability
{{< admonition type="note" >}}
-If you are already applying a `service_name`, Loki will use that value.
+If you are already applying a `service_name`, Loki will use that value. For example, if you are using the Kubernetes monitoring Helm Chart, the Alloy configuration applies a `service_name` by default.
{{< /admonition >}}
Loki will attempt to create the `service_name` label by looking for the following labels in this order: