What version of the component are you using?:
Component version: cluster-autoscaler 1.34
Helm chart version: cluster-autoscaler-9.35.0
What k8s version are you using (kubectl version)?: EKS 1.34
kubectl version output:
```
$ kubectl version
Client Version: v1.34.0
Kustomize Version: v5.7.1
Server Version: v1.34.1-eks-d96d92f
```
What environment is this in?: AWS EKS 1.34
What did you expect to happen?:
We expected the autoscaler to keep adding nodes as before. Instead, cluster-autoscaler 1.34 on EKS 1.34 logs this error:
```
failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
```
so the autoscaler is not adding new nodes. The missing permission can be confirmed as shown below.
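A quick way to check the missing permission from the service account's point of view (a sketch using kubectl impersonation; adjust the namespace and service account name if yours differ):
```sh
# Returns "no" with the stock ClusterRole from chart 9.35.0, "yes" after adding the rule
kubectl auth can-i list volumeattachments.storage.k8s.io \
  --as=system:serviceaccount:kube-system:cluster-autoscaler
```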
To get it working, we had to add the volumeattachments permission to the ClusterRole ourselves. The resulting ClusterRole looks like this (the patch we applied is sketched after the output):
```
$ kubectl describe clusterrole cluster-autoscaler-aws-cluster-autoscaler
Name:         cluster-autoscaler-aws-cluster-autoscaler
Labels:       app.kubernetes.io/instance=cluster-autoscaler
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=aws-cluster-autoscaler
              app.kubernetes.io/version=1.29.0
              helm.sh/chart=cluster-autoscaler-9.35.0
              k8slens-edit-resource-version=v1
Annotations:  meta.helm.sh/release-name: cluster-autoscaler
              meta.helm.sh/release-namespace: kube-system
PolicyRule:
  Resources                            Non-Resource URLs  Resource Names        Verbs
  ---------                            -----------------  --------------        -----
  endpoints                            []                 []                    [create patch]
  events                               []                 []                    [create patch]
  pods/eviction                        []                 []                    [create]
  leases.coordination.k8s.io           []                 []                    [create]
  jobs.extensions                      []                 []                    [get list patch watch]
  endpoints                            []                 [cluster-autoscaler]  [get update]
  leases.coordination.k8s.io           []                 [cluster-autoscaler]  [get update]
  configmaps                           []                 []                    [list watch get]
  pods/status                          []                 []                    [update]
  nodes                                []                 []                    [watch list create delete get update]
  jobs.batch                           []                 []                    [watch list get patch]
  namespaces                           []                 []                    [watch list get]
  persistentvolumeclaims               []                 []                    [watch list get]
  persistentvolumes                    []                 []                    [watch list get]
  pods                                 []                 []                    [watch list get]
  replicationcontrollers               []                 []                    [watch list get]
  services                             []                 []                    [watch list get]
  daemonsets.apps                      []                 []                    [watch list get]
  replicasets.apps                     []                 []                    [watch list get]
  statefulsets.apps                    []                 []                    [watch list get]
  cronjobs.batch                       []                 []                    [watch list get]
  daemonsets.extensions                []                 []                    [watch list get]
  replicasets.extensions               []                 []                    [watch list get]
  csidrivers.storage.k8s.io            []                 []                    [watch list get]
  csinodes.storage.k8s.io              []                 []                    [watch list get]
  csistoragecapacities.storage.k8s.io  []                 []                    [watch list get]
  storageclasses.storage.k8s.io        []                 []                    [watch list get]
  volumeattachments.storage.k8s.io     []                 []                    [watch list get]
  poddisruptionbudgets.policy          []                 []                    [watch list]
```
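For reference, this is roughly equivalent to the change we made by hand (a sketch as a JSON patch; the ClusterRole name depends on your Helm release name, and Helm will likely revert it on the next upgrade, so it is only a temporary workaround):
```sh
# Append a rule allowing the autoscaler to read VolumeAttachments
kubectl patch clusterrole cluster-autoscaler-aws-cluster-autoscaler --type='json' -p='[
  {
    "op": "add",
    "path": "/rules/-",
    "value": {
      "apiGroups": ["storage.k8s.io"],
      "resources": ["volumeattachments"],
      "verbs": ["get", "list", "watch"]
    }
  }
]'
```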
What happened instead?:
The autoscaler kept logging the forbidden error shown above, and no new nodes were added until we patched the ClusterRole.
How to reproduce it (as minimally and precisely as possible):
We install the autoscaler with Terraform using the aws-ia eks_blueprints_addons module:
module "eks_blueprints_addons" {
aws_load_balancer_controller = {
set = [
{
name = "vpcId"
value = data.aws_vpc.main.id
},
]
}
cluster_endpoint = data.aws_eks_cluster.this.endpoint
cluster_name = data.aws_eks_cluster.this.id
cluster_version = data.aws_eks_cluster.this.version
eks_addons = {
aws-ebs-csi-driver = {
addon_version = "v1.43.0-eksbuild.1"
most_recent = false
service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
}
}
enable_aws_efs_csi_driver = true
enable_aws_load_balancer_controller = true
enable_cert_manager = true
enable_cluster_autoscaler = true
enable_cluster_proportional_autoscaler = false
enable_external_dns = true
enable_external_secrets = true
enable_karpenter = false
enable_kube_prometheus_stack = false
enable_metrics_server = true
enable_secrets_store_csi_driver = false
enable_secrets_store_csi_driver_provider_aws = false
external_dns_route53_zone_arns = [
data.aws_route53_zone.this.arn,
]
oidc_provider_arn = data.aws_iam_openid_connect_provider.this.arn
source = "aws-ia/eks-blueprints-addons/aws"
tags = {}
version = "~> 1.0"
}
Anything else we need to know?: