Published on 10/05/2026 at 10:00

Discover how to configure continuous profiling in your environments using Grafana Alloy and Pyroscope to identify bottlenecks in your applications.

1. Introduction to continuous profiling

Continuous profiling with Pyroscope.

Going beyond metrics

Where metrics tell you that an application is consuming too much CPU or memory, continuous profiling tells you exactly *which lines of code* are responsible. Grafana Pyroscope is the database dedicated to this need. Grafana Alloy includes the components needed to scrape the profiling endpoints of your applications (such as /debug/pprof in Go) and forward this data to Pyroscope.
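
To make this concrete, here is a minimal sketch of an Alloy pipeline for a single service, assuming a hypothetical Go application exposing /debug/pprof on localhost:8080 and a local Pyroscope server on localhost:4040 (both endpoints are illustrative):

pyroscope.scrape "local_app" {
  // Target map keys follow the usual Prometheus conventions;
  // service_name becomes the application name shown in Pyroscope.
  targets = [
    {"__address__" = "localhost:8080", "service_name" = "my-go-app"},
  ]
  forward_to = [pyroscope.write.local.receiver]
}

pyroscope.write "local" {
  // Points at a locally running Pyroscope instance.
  endpoint {
    url = "http://localhost:4040"
  }
}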

2. Setting up data collection

Configuration of scraping and forwarding for Pyroscope.

The Pyroscope pipeline

The profiling pipeline looks very similar to a Prometheus pipeline. It usually starts with a discovery mechanism (e.g., discovery.kubernetes) to find pods. Next, the pyroscope.scrape component periodically queries these targets to retrieve CPU, memory, or goroutine profiles. Finally, the pyroscope.write component formats these profiles and sends them to your Pyroscope backend (local or Grafana Cloud), where they can be visualized as Flame Graphs.
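
The scrape block shown further down references two components it does not define itself. As a sketch, they could look like this (the Grafana Cloud URL and the credential environment variables are placeholders to adapt):

// Discover every pod in the cluster; Alloy needs RBAC permissions
// to list pods.
discovery.kubernetes "pods" {
  role = "pod"
}

// Ship the collected profiles to a Pyroscope backend; here a
// hypothetical Grafana Cloud stack authenticated via environment
// variables.
pyroscope.write "cloud" {
  endpoint {
    url = "https://profiles-prod-001.grafana.net"
    basic_auth {
      username = env("GRAFANA_CLOUD_USER")
      password = env("GRAFANA_CLOUD_API_KEY")
    }
  }
}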


Best Practice: Ensure that the labels applied in pyroscope.scrape (e.g., app, env, cluster) exactly match those used in your metrics and logs. This is essential to enable automatic Flame Graph correlation in Grafana.
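One way to enforce that consistency is a discovery.relabel stage between discovery and scraping, deriving the shared labels from pod metadata. A sketch, where the label names and the static cluster value are illustrative:

discovery.relabel "pods_labeled" {
  targets = discovery.kubernetes.pods.targets

  // Reuse the pod's "app" label as-is.
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app"]
    target_label  = "app"
  }
  // Derive "env" from the namespace.
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "env"
  }
  // Stamp a static cluster name on every profile.
  rule {
    target_label = "cluster"
    replacement  = "prod-eu-west"
  }
}

You would then point the targets of pyroscope.scrape at discovery.relabel.pods_labeled.output instead of the raw discovery output.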
Common Mistake: Leaving continuous profiling enabled in a production environment with untuned default pprof settings, which can add significant CPU overhead to your target processes (see the tuned variant after the example below).
pyroscope.scrape "prod_apps" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [pyroscope.write.cloud.receiver]

  profiling_config {
    // CPU profiles: the usual starting point for Flame Graphs.
    profile.process_cpu {
      enabled = true
    }
    // Goroutine dumps, useful for spotting leaks in Go services.
    profile.goroutine {
      enabled = true
    }
  }
}
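
To address the overhead pitfall above, a tuned variant might scrape less frequently and keep only the CPU profile. The 60-second interval below is an illustrative value, not a recommendation:

pyroscope.scrape "prod_apps_light" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [pyroscope.write.cloud.receiver]

  // Collect profiles less often to reduce pressure on the targets.
  scrape_interval = "60s"

  profiling_config {
    profile.process_cpu {
      enabled = true
    }
    // Explicitly turn off the profiles you do not need.
    profile.goroutine {
      enabled = false
    }
  }
}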