Description
Which component are you using?:
- recommender
- updater
- admission-controller
/area vertical-pod-autoscaler
What version of the component are you using?:
Component version: VPA 1.4.1
What k8s version are you using (kubectl version)?:
kubectl version Output
$ kubectl version
Client Version: v1.34.0
Kustomize Version: v5.7.1
Server Version: v1.33.0
What environment is this in?:
On-prem VMs with Talos
What did you expect to happen?:
We expected VPA to increase the memory recommendation according to our OOM bump settings:
recommender:
  extraArgs:
    oom-bump-up-ratio: 1.5
    oom-min-bump-up-bytes: 104857600 # 100Mi
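For reference, and assuming the Helm chart simply renders extraArgs as command-line flags on the recommender container (an assumption about the chart, not something shown above), the settings should end up roughly like this on the recommender Deployment:

# Sketch of how the extraArgs above are expected to reach the recommender;
# the surrounding container spec is illustrative, only the two flags matter.
containers:
- name: recommender
  image: registry.k8s.io/autoscaling/vpa-recommender:1.4.1
  args:
  - --oom-bump-up-ratio=1.5
  - --oom-min-bump-up-bytes=104857600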
What happened instead?:
The OOM events are discarded with "OOM event will be discarded - it is too old":
Here is an example (the first timestamp is from Loki and is converted to my timezone):
2025-09-18 10:50:22.358 I0918 08:50:22.357719 1 cluster_feeder.go:528] "OOM detected" oomInfo={"Timestamp":"2025-09-18T08:49:46Z","Memory":195989341,"ContainerID":{"Namespace":"authentik","PodName":"authentik-server-c4544c967-ftqx6","ContainerName":"server"}}
2025-09-18 10:50:22.358 I0918 08:50:22.357757 1 cluster_feeder.go:530] "Failed to record OOM" oomInfo={"Timestamp":"2025-09-18T08:49:46Z","Memory":195989341,"ContainerID":{"Namespace":"authentik","PodName":"authentik-server-c4544c967-ftqx6","ContainerName":"server"}} error="error while recording OOM for {{authentik authentik-server-c4544c967-ftqx6} server}, Reason: OOM event will be discarded - it is too old (2025-09-18 08:49:46 +0000 UTC)"
Note that the OOM timestamp (08:49:46Z) is only about 36 seconds older than the log entry that rejects it (08:50:22Z), yet the event is still considered too old.
How to reproduce it (as minimally and precisely as possible):
Trigger OOM events for a pod that is managed by VPA; a minimal sketch of such a setup follows below.
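A minimal reproduction sketch, assuming any image that ships a stress binary; all names, the image, and the memory sizes are placeholders and not taken from the report above:

# Deployment whose container allocates more memory than its limit, so it gets
# OOM-killed repeatedly, plus a VPA object that should react to those OOMs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oom-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oom-demo
  template:
    metadata:
      labels:
        app: oom-demo
    spec:
      containers:
      - name: stress
        image: polinux/stress        # placeholder image providing a `stress` binary
        command: ["stress", "--vm", "1", "--vm-bytes", "300M", "--vm-hang", "0"]
        resources:
          requests:
            memory: 100Mi
          limits:
            memory: 200Mi            # 300M allocation exceeds the 200Mi limit -> OOMKilled
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: oom-demo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: oom-demo
  updatePolicy:
    updateMode: "Auto"

With this in place, the recommender logs (as above) show whether each OOM is recorded or discarded as too old.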
Anything else we need to know?:
Similar issue: #4152