**Is your feature request related to a problem? Please describe.**
Since Loki is quite memory-intensive, it makes sense to set `GOMEMLIMIT` based on the available memory. See https://weaviate.io/blog/gomemlimit-a-game-changer-for-high-memory-applications
Setting `GOMEMLIMIT` is also a common suggestion for OOMs and other memory issues with Loki, see e.g. #6501
**Describe the solution you'd like**
I propose adding https://github.com/KimMachineGun/automemlimit, which automatically configures `GOMEMLIMIT` from the cgroup memory limit and also keeps some headroom, as is best practice.
There are many examples / PRs for the proposed mechanism / library in other popular tools:
- Many other Grafana Labs tools https://github.com/search?q=org%3Agrafana+automemlimit&type=code
- Prometheus prometheus/prometheus@2c0f9d1
...
While I found `GOMEMLIMIT` in some values file, this is not really documented; it is static, and in the end it is up to the user to configure it properly in relation to the memory limits of the Pods.
Grafana Alloy already has exactly this automatic configuration, implemented via grafana/alloy#651 // grafana/alloy#655 (which has been refined multiple times in the meantime).
**Describe alternatives you've considered**
- One could use the Kubernetes downwardAPI to expose the container memory limit as the `GOMEMLIMIT` env variable. But this does not follow the best practice of leaving 10-15% of headroom.
- Apply Helm templating to statically set `GOMEMLIMIT` as an env variable to some percentage of the container resource limit.
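For reference, the downwardAPI alternative would look roughly like the following container spec fragment (a sketch, not a tested manifest; `resourceFieldRef` on `limits.memory` yields the limit in bytes, which `GOMEMLIMIT` accepts, but with no headroom applied):

```yaml
env:
  - name: GOMEMLIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
```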
**Additional context**
In the context of auto-tuning the Go runtime, `GOMAXPROCS` is also often mentioned. But with Go >= 1.25 the default value of `GOMAXPROCS` automatically respects the cgroup CPU quota, see https://pkg.go.dev/runtime@master#hdr-Default-GOMAXPROCS.