Published on: 10/05/2026 at 10:00

Discover how to configure Grafana Alloy to read log files, journald, or network streams, process this data, and forward it securely to Grafana Loki.

1. Log sources

Reading files, journald, and network receivers.

Capture information at the source

Alloy is an excellent replacement for Promtail. It can collect logs from multiple sources: reading local files with loki.source.file, querying system logs via loki.source.journal, or even acting as a syslog server or an OTLP receiver. Discovery components (e.g., discovery.kubernetes) can be coupled with sources to automatically target logs from the relevant containers.
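As a minimal sketch, file collection typically pairs a discovery component with loki.source.file. The path below and the component labels (app_logs, app, default) are illustrative assumptions, not values from this article:

local.file_match "app_logs" {
  // Hypothetical path: adjust to your application's log directory
  path_targets = [{ "__path__" = "/var/log/myapp/*.log" }]
}

loki.source.file "app" {
  // Tail every file matched above and forward the lines downstream
  targets    = local.file_match.app_logs.targets
  forward_to = [loki.write.default.receiver]
}

In practice, forward_to would point at a loki.process pipeline (as in the next section) rather than directly at the writer.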

2. Processing pipelines

Filtering, parsing (JSON, Regex), and relabeling.

Enrich and filter logs

Raw logs often lack context and consume too much storage. The loki.process component allows you to define sophisticated processing pipelines. You can embed stages to extract fields via JSON or Regex parsing, use these fields to create new dynamic labels, filter (drop) irrelevant lines (like HTTP healthchecks), or even rewrite the log content itself.


Best Practice: Drastically limit the number of labels sent to Loki. Only promote global, static fields as labels (e.g., env, app, level). Leave other fields (dynamic business values) in the log line and use LogQL filters (e.g., | json) at query time.
Common Mistake: Promoting a highly variable field (like an IP address, a trace_id, or a transaction UUID) as a label. Each distinct value creates a new stream, and this cardinality explosion will crush the performance of your Loki instance.
loki.process "filter_logs" {
  forward_to = [loki.write.default.receiver]

  // JSON Parsing
  stage.json {
    expressions = { level = "log_level", app = "app_name" }
  }
  // Secure label creation
  stage.labels {
    values = { level = null, app = null }
  }
  // Filter out useless logs
  stage.drop {
    source = "level"
    value  = "debug"
  }
}

3. Forwarding to Loki

Secure forwarding to Grafana Loki.

The loki.write component

Once logs are processed and properly labeled, the loki.write component groups the lines (batching) and sends them via HTTP(S) to the Grafana Loki (or Grafana Cloud) API. It supports authentication, TLS configuration, and retry strategies to ensure log delivery even during transient network disruptions.
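A minimal sketch of such a writer is shown below. The endpoint URL, tenant name, external label, and the LOKI_API_KEY environment variable are assumptions for illustration; basic_auth is one of several supported authentication schemes:

loki.write "default" {
  endpoint {
    // Hypothetical Loki push endpoint; served over HTTPS
    url = "https://loki.example.com/loki/api/v1/push"

    basic_auth {
      username = "tenant-id"
      // Read the secret from the environment rather than hard-coding it
      password = sys.env("LOKI_API_KEY")
    }
  }

  // Static labels attached to every stream sent by this writer
  external_labels = { cluster = "prod" }
}

Batching and retries are handled automatically; their behavior can be tuned via the component's endpoint options if the defaults do not fit your network conditions.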
