asrin andirin // devops & cloud infrastructure

EKS Cluster Upgrade Remarks

Version Lifecycle (as of Feb 2026)

Version | EKS Release  | End of Standard Support | End of Extended Support | Status
1.35    | Jan 27, 2026 | Mar 27, 2027            | Mar 27, 2028            | Standard
1.34    | Oct 2, 2025  | Dec 2, 2026             | Dec 2, 2027             | Standard
1.33    | May 29, 2025 | Jul 29, 2026            | Jul 29, 2027            | Standard
1.32    | Jan 23, 2025 | Mar 23, 2026            | Mar 23, 2027            | Standard
1.31    | Sep 26, 2024 | Nov 26, 2025            | Nov 26, 2026            | Extended
1.30    | May 23, 2024 | Jul 23, 2025            | Jul 23, 2026            | Extended
1.29    | Jan 23, 2024 | Mar 23, 2025            | Mar 23, 2026            | Extended

Extended support is enabled by default for all clusters and incurs an additional hourly charge. If extended support ends before you upgrade, AWS auto-upgrades the cluster.
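For budgeting, the extended-support premium is easy to quantify. A rough sketch, assuming the published per-cluster rates at the time of writing ($0.10/hr standard, $0.60/hr extended; verify against the current EKS pricing page):

```shell
# Extra cost per cluster for a 30-day month on extended support.
# Rates are assumptions -- check the current EKS pricing page before budgeting.
awk 'BEGIN { printf "%.2f\n", (0.60 - 0.10) * 24 * 30 }'
# → 360.00
```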

Reference Links

Check these before every upgrade.

Upgrade Workflow

EKS only allows one minor version upgrade at a time (e.g., 1.31 → 1.32, not 1.31 → 1.33).

1. Pre-upgrade checks (deprecated APIs, add-on compatibility, EKS Insights)
2. Upgrade the control plane
3. Update add-ons
4. Upgrade node groups
5. Post-upgrade verification
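Because each version hop is a full pass through these five steps, it helps to enumerate the path before starting. A minimal pure-bash sketch (assumes 1.x version strings):

```shell
# Print every intermediate minor version needed to go from CURRENT to TARGET,
# one control-plane upgrade per line.
upgrade_path() {
  local current_minor="${1#1.}" target_minor="${2#1.}"
  local m
  for ((m = current_minor + 1; m <= target_minor; m++)); do
    printf '1.%s\n' "$m"
  done
}

upgrade_path 1.29 1.32   # prints 1.30, 1.31, 1.32: three separate upgrades
```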

Step 1 — Pre-Upgrade Checks

# Check current cluster version
aws eks describe-cluster --name my-cluster \
  --query 'cluster.version' --output text

# EKS Upgrade Insights — run this first, it surfaces deprecated API usage and blockers
aws eks list-insights --cluster-name my-cluster
aws eks describe-insight --cluster-name my-cluster --id <insight-id>

# Scan live cluster for removed API usage (kubent)
kubent --target-version 1.32

# Cross-check with pluto
pluto detect-all-in-cluster --target-versions k8s=v1.32.0

# Scan Helm chart sources in Git
pluto detect-files -d ./helm-charts/

# Check add-on compatibility for target version
aws eks describe-addon-versions --kubernetes-version 1.32

Step 2 — Upgrade Control Plane

# Via eksctl (recommended)
eksctl upgrade cluster --name my-cluster --version 1.32 --approve

# Via AWS CLI
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.32

# Wait for completion
aws eks wait cluster-active --name my-cluster

Step 3 — Update Add-ons

Update add-ons after the control plane and before upgrading nodes.

# Check what add-ons are installed
aws eks list-addons --cluster-name my-cluster

# Update a managed add-on (use PRESERVE to keep custom config)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --addon-version v1.13.2-eksbuild.1 \
  --resolve-conflicts PRESERVE

# Script to auto-update all add-ons to latest for the new K8s version
for ADDON in vpc-cni coredns kube-proxy aws-ebs-csi-driver; do
  LATEST=$(aws eks describe-addon-versions \
    --kubernetes-version 1.32 \
    --addon-name $ADDON \
    --query 'addons[0].addonVersions[0].addonVersion' \
    --output text)

  aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name $ADDON \
    --addon-version $LATEST \
    --resolve-conflicts PRESERVE
done
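After each update, it is worth blocking until the add-on reports ACTIVE before moving on to nodes (newer CLI versions also ship aws eks wait addon-active). A hedged sketch; probe below is a stand-in for aws eks describe-addon --query addon.status:

```shell
# Poll a status probe until it reports ACTIVE, with a bounded retry count.
wait_active() {
  local retries="$1"; shift
  local i status
  for ((i = 0; i < retries; i++)); do
    status="$("$@")"                  # run the probe command
    [ "$status" = "ACTIVE" ] && return 0
    sleep 5
  done
  return 1
}

# Stand-in probe for illustration; replace with the real describe-addon call.
probe() { echo "ACTIVE"; }
wait_active 12 probe && echo "add-on is ACTIVE"
```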

Step 4 — Upgrade Node Groups

# Managed node group — AWS CLI
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

# Managed node group — eksctl
eksctl upgrade nodegroup \
  --cluster my-cluster \
  --name my-nodegroup \
  --kubernetes-version 1.32

# Self-managed: update launch template AMI, then cordon/drain manually

Step 5 — Post-Upgrade Verification

kubectl version   # the --short flag was removed in kubectl 1.28
kubectl get nodes -o wide
kubectl get pods -n kube-system
kubent

Pre-Upgrade Commands

# Check for deprecated seccomp annotations (removed in 1.27)
kubectl get pods --all-namespaces -o json | \
  grep -E 'seccomp.security.alpha.kubernetes.io/pod|container.seccomp.security.alpha.kubernetes.io'

# Check VPC CNI plugin version
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Check CoreDNS version
kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3

# Check kube-proxy version
kubectl get daemonset kube-proxy -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Check Cluster Autoscaler version
kubectl get deployment clusterautoscaler-aws-cluster-autoscaler -n kube-system \
  -o=jsonpath='{.spec.template.spec.containers[0].image}'

# Check all API versions currently in use (useful before any upgrade)
kubectl api-versions

# Find legacy service account tokens (1.29+)
kubectl get cm kube-apiserver-legacy-service-account-token-tracking -n kube-system
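Most of the checks above end the same way: pulling the version tag off an image reference. A small pure-bash helper for that step (assumes the usual repo:tag form, no @sha256 digest):

```shell
# Extract the version tag from a container image reference.
image_tag() { printf '%s\n' "${1##*:}"; }

image_tag "602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/coredns:v1.11.4-eksbuild.28"
# → v1.11.4-eksbuild.28
```

Feed it any of the jsonpath outputs above, e.g. image_tag "$(kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}')".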

Tools

Kubent (kube-no-trouble)

  • Scans your live cluster and Helm release histories for deprecated/removed API versions.
  • If the cluster is heavily loaded with Helm charts it may error out — fall back to kubectl api-versions in that case.
# Install
sh -c "$(curl -sSL https://git.io/install-kubent)"
# or: brew install kubent

# Scan live cluster against a target version
kubent --target-version 1.32

# Scan Helm 3 releases
kubent --helm3

# JSON output for CI/CD pipelines
kubent --output json --exit-error

Pluto

  • Detects deprecated apiVersions in code repositories, Helm releases, and live clusters. Strong for scanning chart sources in Git.
# Install
brew install FairwindsOps/tap/pluto

# Scan live cluster
pluto detect-all-in-cluster --target-versions k8s=v1.32.0

# Scan Helm releases in cluster
pluto detect-helm --helm-version 3

# Scan manifests on disk
pluto detect-files -d ./manifests/

# List all known removed API versions
pluto list-versions
Feature                | kubent        | Pluto
Live cluster scan      | Yes           | Yes
Helm release scan      | Yes (secrets) | Yes
Static file scan       | Yes           | Yes (stronger)
CI/CD integration      | Good          | Excellent
Helm chart source scan | No            | Yes

Add-on Version Reference (Feb 2026)

Add-on             | K8s 1.29–1.32                 | K8s 1.33–1.35
Amazon VPC CNI     | v1.21.1-eksbuild.3            | v1.21.1-eksbuild.3
CoreDNS            | v1.11.4-eksbuild.28           | v1.13.2-eksbuild.1
kube-proxy         | matches cluster minor version | matches cluster minor version
Cluster Autoscaler | matches cluster minor version | matches cluster minor version

Recommendation: Use EKS managed add-ons where possible — they simplify version management during upgrades.

Amazon VPC CNI

Managing VPC CNI (latest release: v1.21.1)

  • Latest version: v1.21.1-eksbuild.3, compatible with Kubernetes 1.29–1.35
  • If upgrading to v1.12.0 or later from an older version, you must first upgrade to v1.7.0 and then increment one minor version at a time
# Check current version
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

kube-proxy

Managing kube-proxy

  • kube-proxy version must match the cluster’s Kubernetes minor version
  • After 1.25, use the minimal EKS build image (no shell, minimal packages)
  • Only deployed to EC2 nodes — not Fargate
  • If using EKS Auto Mode, kube-proxy is managed automatically
Kubernetes Version | kube-proxy Version
1.35               | v1.35.0-eksbuild.2
1.34               | v1.34.3-eksbuild.2
1.33               | v1.33.7-eksbuild.2
1.32               | v1.32.11-eksbuild.2
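Since the tag must track the cluster's minor version, the mapping can be encoded as a lookup for scripts. The tags below are copied from the table; re-check them against the EKS docs before use:

```shell
# Map a cluster minor version to its kube-proxy add-on tag.
# Tags mirror the version table in these notes -- verify against current EKS docs.
kube_proxy_tag() {
  case "$1" in
    1.35) echo "v1.35.0-eksbuild.2" ;;
    1.34) echo "v1.34.3-eksbuild.2" ;;
    1.33) echo "v1.33.7-eksbuild.2" ;;
    1.32) echo "v1.32.11-eksbuild.2" ;;
    *)    echo "no tag recorded for $1" >&2; return 1 ;;
  esac
}

kube_proxy_tag 1.32   # → v1.32.11-eksbuild.2
```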

CoreDNS

CoreDNS self-managed update

  • If updating to CoreDNS 1.8.3 or later, add endpointslices permission to system:coredns clusterrole
  • AWS recommends migrating to the managed add-on type
# Check if managed or self-managed
aws eks describe-addon --cluster-name my-cluster --addon-name coredns \
  --query addon.addonVersion --output text
# (Error = self-managed, version string = managed)

# Check current image
kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3

# Update image (replace REGION and VERSION as appropriate)
kubectl set image deployment.apps/coredns -n kube-system \
  coredns=602401143452.dkr.ecr.REGION.amazonaws.com/eks/coredns:VERSION

Cluster Autoscaler

Autoscaler releases

  • The Cluster Autoscaler version must match the Kubernetes minor version of the cluster
Kubernetes Version | Cluster Autoscaler
1.35               | v1.35.0
1.34               | v1.34.3
1.33               | v1.33.4
1.32               | v1.32.7
# Check current version
kubectl get deployment clusterautoscaler-aws-cluster-autoscaler -n kube-system \
  -o=jsonpath='{.spec.template.spec.containers[0].image}'

Migrate Workers to New AMIs

Self-managed workers: Update the launch template, update the ASG, then manually cordon and drain each node.

Managed node groups (node pools): Trigger the update from the console or with aws eks update-nodegroup-version; cordon and drain are handled automatically.

All worker groups (every nodepool and every self-managed group) must be on the same minor version as the control plane, or within the supported skew. Since Kubernetes 1.28, nodes can be up to 3 minor versions behind the control plane (expanded from n-2).
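The skew rule can be checked mechanically before an upgrade. A pure-bash sketch comparing minor versions (assumes 1.x version strings; node versions come from kubectl get nodes):

```shell
# Return success if a node's minor version is within the allowed skew
# of the control plane (n-3 since Kubernetes 1.28).
skew_ok() {
  local cp_minor="${1#1.}" node_minor="${2#1.}"
  (( cp_minor - node_minor <= 3 && cp_minor - node_minor >= 0 ))
}

skew_ok 1.32 1.29 && echo "within skew"      # 3 behind: OK
skew_ok 1.32 1.28 || echo "too far behind"   # 4 behind: upgrade nodes first
```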

Pod Security

Namespace labeling for Pod Security Admission (replacement for the removed PodSecurityPolicy):

# MODE: enforce | audit | warn
# LEVEL: privileged | baseline | restricted
# VERSION: valid Kubernetes minor version or `latest`

# Example: label the 'default' namespace with audit mode at baseline level
kubectl label namespace default \
  pod-security.kubernetes.io/audit=baseline \
  pod-security.kubernetes.io/audit-version=latest \
  --overwrite

Version-Specific Notes

1.35

  • Cgroup v1 removed by default. Kubelet refuses to start on cgroup v1 nodes. AL2023 and Bottlerocket use cgroup v2. Fargate still uses cgroup v1. Custom AMIs must migrate.
  • containerd 1.x end of support. Upgrade to containerd 2.0+ before going beyond 1.35.
  • --pod-infra-container-image kubelet flag removed. Remove from all node bootstrap scripts and launch templates.
  • IPVS mode (kube-proxy) deprecated — will be removed in 1.36.
  • Ingress NGINX retiring March 2026. Plan migration to Gateway API or another controller.

1.34

  • No more AL2 AMIs from AWS. Migrate to Amazon Linux 2023 before upgrading.
  • VolumeAttributesClass graduated to GA (storage.k8s.io/v1). Self-managed CSI sidecars may need pinning on older clusters.
  • --cgroup-driver kubelet flag deprecated. Remove it from node bootstrap scripts before upgrading to 1.34+.
  • AppArmor deprecated — migrate to seccomp or Pod Security Standards.

1.33

  • No more AL2 AMIs from AWS. Migrate to AL2023.
  • Endpoints API deprecated — migrate to EndpointSlices for dual-stack and modern features.
  • Sidecar containers graduated to stable (restartPolicy: Always).

1.32

  • flowcontrol.apiserver.k8s.io/v1beta3 removed. Update all FlowSchema and PriorityLevelConfiguration manifests to flowcontrol.apiserver.k8s.io/v1 before upgrading.
  • Anonymous authentication restricted. Only /healthz, /livez, /readyz endpoints accept unauthenticated requests. Any tooling relying on unauthenticated API access will break.
  • Last version with AL2 AMIs. Plan migration to AL2023.

1.31

  • --keep-terminated-pod-volumes kubelet flag removed. Remove from bootstrap scripts and launch templates.
  • AppArmor graduated to stable. Migrate from annotation-based config to appArmorProfile.type field in securityContext.
  • Amazon EBS CSI Driver: Upgrade to v1.35.0+ to enable VolumeAttributesClass support.

1.30

  • Default node OS changed to Amazon Linux 2023 (AL2023) for newly created managed node groups.
  • gp2 StorageClass no longer set as default on new clusters. If you rely on a default StorageClass, set defaultStorageClass.enabled: true in AWS EBS CSI Driver 1.31.0+ or reference gp2 explicitly.
  • New IAM requirement: Add ec2:DescribeAvailabilityZones to the EKS cluster IAM role.
  • New node label: topology.k8s.aws/zone-id added to worker nodes.

1.29

  • flowcontrol.apiserver.k8s.io/v1beta2 removed. Migrate FlowSchema and PriorityLevelConfiguration to v1.
  • .status.kubeProxyVersion field deprecated (unreliable — kubelet doesn’t know the actual kube-proxy version). Remove usage from client software.
  • LegacyServiceAccountTokenCleanUp enabled — tokens unused for 1 year are marked invalid; after another year, automatically removed.
  • AWS Load Balancer Controller: Must be on v2.4.7+ before upgrading to 1.25 if using EndpointSlices.
  • HPA migration: HPAs should be on autoscaling/v2 (v2beta2 removed in 1.26).
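The two-stage token-cleanup timeline is easy to misread; a worked example with GNU date (the last-used date is illustrative):

```shell
# A legacy token last used on 2025-03-01:
LAST_USED="2025-03-01"
date -d "$LAST_USED + 1 year" +%F    # marked invalid one year later: 2026-03-01
date -d "$LAST_USED + 2 years" +%F   # removed a year after that:    2027-03-01
```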

1.27 and Earlier

  • --container-runtime kubelet argument ignored. Remove from all node creation workflows and build scripts. You must be running containerd.
  • Alpha seccomp annotations removed. Use securityContext.seccompProfile instead of the deprecated annotations.