Version Lifecycle (as of Feb 2026)
| Version | EKS Release | End of Standard Support | End of Extended Support | Status |
|---|---|---|---|---|
| 1.35 | Jan 27, 2026 | Mar 27, 2027 | Mar 27, 2028 | Standard |
| 1.34 | Oct 2, 2025 | Dec 2, 2026 | Dec 2, 2027 | Standard |
| 1.33 | May 29, 2025 | Jul 29, 2026 | Jul 29, 2027 | Standard |
| 1.32 | Jan 23, 2025 | Mar 23, 2026 | Mar 23, 2027 | Standard |
| 1.31 | Sep 26, 2024 | Nov 26, 2025 | Nov 26, 2026 | Extended |
| 1.30 | May 23, 2024 | Jul 23, 2025 | Jul 23, 2026 | Extended |
| 1.29 | Jan 23, 2024 | Mar 23, 2025 | Mar 23, 2026 | Extended |
Extended support incurs additional hourly charges and is enabled by default for all clusters. If you let extended support expire without upgrading, AWS will auto-upgrade your cluster.
Reference Links
Check these before every upgrade.
- EKS Standard Support Versions
- EKS Extended Support Versions
- Kubernetes Deprecated API Migration Guide
- EKS Cluster Upgrade Guide
- EKS Add-on Management
Upgrade Workflow
EKS only allows one minor version upgrade at a time (e.g., 1.31 → 1.32, not 1.31 → 1.33).
1. Pre-upgrade checks (deprecated APIs, add-on compatibility, EKS Insights)
2. Upgrade the control plane
3. Update add-ons
4. Upgrade node groups
5. Post-upgrade verification
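The five phases can be condensed into one hedged sketch of the happy path (cluster and node group names are placeholders; each phase is detailed in the steps that follow):

```bash
#!/usr/bin/env bash
# One-pass sketch of the workflow for a single minor-version hop.
set -euo pipefail

CLUSTER=my-cluster
TARGET=1.32

# 1. Pre-upgrade checks
aws eks list-insights --cluster-name "$CLUSTER"
kubent --target-version "$TARGET"

# 2. Control plane
eksctl upgrade cluster --name "$CLUSTER" --version "$TARGET" --approve
aws eks wait cluster-active --name "$CLUSTER"

# 3. Add-ons (Step 3 shows per-add-on version resolution)
aws eks list-addons --cluster-name "$CLUSTER"

# 4. Node groups
eksctl upgrade nodegroup --cluster "$CLUSTER" --name my-nodegroup \
  --kubernetes-version "$TARGET"

# 5. Post-upgrade verification
kubectl get nodes -o wide
```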
Step 1 — Pre-Upgrade Checks
```bash
# Check current cluster version
aws eks describe-cluster --name my-cluster \
  --query 'cluster.version' --output text

# EKS Upgrade Insights — run this first, it surfaces deprecated API usage and blockers
aws eks list-insights --cluster-name my-cluster
aws eks describe-insight --cluster-name my-cluster --id <insight-id>

# Scan live cluster for removed API usage (kubent)
kubent --target-version 1.32

# Cross-check with pluto
pluto detect-all-in-cluster --target-versions k8s=v1.32.0

# Scan Helm chart sources in Git
pluto detect-files -d ./helm-charts/

# Check add-on compatibility for target version
aws eks describe-addon-versions --kubernetes-version 1.32
```
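In CI, the two scanners can be combined into a single gate that blocks the pipeline when either one reports findings. A sketch, assuming both tools are on the PATH; the `api_gate` function name and the pass/fail logic are ours:

```bash
# Fails (returns 1) if either scanner reports deprecated/removed APIs.
api_gate() {
  local target=$1 fail=0
  kubent --target-version "$target" --exit-error || fail=1
  pluto detect-all-in-cluster --target-versions "k8s=v${target}.0" || fail=1
  if [ "$fail" -ne 0 ]; then
    echo "deprecated API usage found; blocking upgrade to $target" >&2
  fi
  return "$fail"
}

# api_gate 1.32 || exit 1
```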
Step 2 — Upgrade Control Plane
```bash
# Via eksctl (recommended)
eksctl upgrade cluster --name my-cluster --version 1.32 --approve

# Via AWS CLI
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.32

# Wait for completion
aws eks wait cluster-active --name my-cluster
```
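Because each hop is limited to one minor version, multi-version upgrades can be chained. A hedged sketch (the `next_minor` and `upgrade_to` helper names are ours, and it assumes the target version is ahead of the current one):

```bash
# Compute the next minor version: 1.31 -> 1.32
next_minor() {
  local major=${1%%.*} minor=${1##*.}
  echo "${major}.$((minor + 1))"
}

# Upgrade one minor at a time until the cluster reaches the target
upgrade_to() {
  local cluster=$1 target=$2 current
  current=$(aws eks describe-cluster --name "$cluster" \
    --query 'cluster.version' --output text)
  while [ "$current" != "$target" ]; do
    current=$(next_minor "$current")
    eksctl upgrade cluster --name "$cluster" --version "$current" --approve
    aws eks wait cluster-active --name "$cluster"
  done
}

# upgrade_to my-cluster 1.33   # 1.31 -> 1.32 -> 1.33
```

Remember to re-run the pre-upgrade checks and add-on/node updates at every hop, not just once at the end.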
Step 3 — Update Add-ons
Update add-ons after the control plane and before upgrading nodes.
```bash
# Check what add-ons are installed
aws eks list-addons --cluster-name my-cluster

# Update a managed add-on (use PRESERVE to keep custom config)
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --addon-version v1.13.2-eksbuild.1 \
  --resolve-conflicts PRESERVE

# Script to auto-update all add-ons to latest for the new K8s version
for ADDON in vpc-cni coredns kube-proxy aws-ebs-csi-driver; do
  LATEST=$(aws eks describe-addon-versions \
    --kubernetes-version 1.32 \
    --addon-name $ADDON \
    --query 'addons[0].addonVersions[0].addonVersion' \
    --output text)
  aws eks update-addon \
    --cluster-name my-cluster \
    --addon-name $ADDON \
    --addon-version $LATEST \
    --resolve-conflicts PRESERVE
done
```
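`update-addon` returns as soon as the update is accepted, so it is worth blocking until each add-on settles before moving on to node groups. A sketch using the same example add-on list:

```bash
# Block until each updated add-on reports ACTIVE
for ADDON in vpc-cni coredns kube-proxy aws-ebs-csi-driver; do
  aws eks wait addon-active \
    --cluster-name my-cluster \
    --addon-name $ADDON
done
```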
Step 4 — Upgrade Node Groups
```bash
# Managed node group — AWS CLI
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

# Managed node group — eksctl
eksctl upgrade nodegroup \
  --cluster my-cluster \
  --name my-nodegroup \
  --kubernetes-version 1.32

# Self-managed: update launch template AMI, then cordon/drain manually
```
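The manual cordon/drain for self-managed groups can be sketched as a small loop. The `node-lifecycle=old` selector is an assumption; use whatever label identifies nodes still on the old AMI:

```bash
# Cordon, then drain, every node matching the selector, one at a time.
drain_nodes() {
  local selector=$1 node
  for node in $(kubectl get nodes -l "$selector" -o name); do
    kubectl cordon "$node"
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data \
      --timeout=300s
    # terminate the instance (or let the ASG replace it) once drained
  done
}

# drain_nodes node-lifecycle=old
```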
Step 5 — Post-Upgrade Verification
```bash
kubectl version   # note: --short was removed in kubectl 1.28; short output is now the default
kubectl get nodes -o wide
kubectl get pods -n kube-system
kubent
```
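Beyond eyeballing `kubectl get nodes`, the check can be made mechanical. A sketch that asserts every kubelet is on the target minor version (`node_minor` and `verify_nodes` are our helper names):

```bash
# Strip a kubelet version like "v1.32.1-eks-abc123" down to "1.32"
node_minor() {
  local v=${1#v}
  echo "${v%.*}"
}

# Return non-zero if any node is not on the target minor version
verify_nodes() {
  local target=$1 rc=0 v
  for v in $(kubectl get nodes \
      -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'); do
    if [ "$(node_minor "$v")" != "$target" ]; then
      echo "node still on $v (want ${target}.x)" >&2
      rc=1
    fi
  done
  return "$rc"
}

# verify_nodes 1.32
```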
Pre-Upgrade Commands
# Check for deprecated seccomp annotations (removed in 1.27)
kubectl get pods --all-namespaces -o json | \
grep -E 'seccomp.security.alpha.kubernetes.io/pod|container.seccomp.security.alpha.kubernetes.io'
# Check VPC CNI plugin version
kubectl get daemonset aws-node -n kube-system \
-o jsonpath='{.spec.template.spec.containers[0].image}'
# Check CoreDNS version
kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3
# Check kube-proxy version
kubectl get daemonset kube-proxy -n kube-system \
-o jsonpath='{.spec.template.spec.containers[0].image}'
# Check Cluster Autoscaler version
kubectl get deployment clusterautoscaler-aws-cluster-autoscaler -n kube-system \
-o=jsonpath='{.spec.template.spec.containers[0].image}'
# Check all API versions currently in use (useful before any upgrade)
kubectl api-versions
# Find legacy service account tokens (1.29+)
kubectl get cm kube-apiserver-legacy-service-account-token-tracking -n kube-system
Tools
kubent
- Scans your live cluster and Helm release histories for deprecated/removed API versions.
- If the cluster is heavily loaded with Helm charts it may error out — fall back to `kubectl api-versions` in that case.
```bash
# Install
sh -c "$(curl -sSL https://git.io/install-kubent)"
# or: brew install kubent

# Scan live cluster against a target version
kubent --target-version 1.32

# Scan Helm 3 releases
kubent --helm3

# JSON output for CI/CD pipelines
kubent --output json --exit-error
```
Pluto
- Detects deprecated `apiVersion`s in code repositories, Helm releases, and live clusters. Strong for scanning chart sources in Git.
```bash
# Install
brew install FairwindsOps/tap/pluto

# Scan live cluster
pluto detect-all-in-cluster --target-versions k8s=v1.32.0

# Scan Helm releases in cluster
pluto detect-helm --helm-version 3

# Scan manifests on disk
pluto detect-files -d ./manifests/

# List all known removed API versions
pluto list-versions
```
| Feature | kubent | Pluto |
|---|---|---|
| Live cluster scan | Yes | Yes |
| Helm release scan | Yes (secrets) | Yes |
| Static file scan | Yes | Yes (stronger) |
| CI/CD integration | Good | Excellent |
| Helm chart source scan | No | Yes |
Add-on Version Reference (Feb 2026)
| Add-on | K8s 1.29–1.32 | K8s 1.33–1.35 |
|---|---|---|
| Amazon VPC CNI | v1.21.1-eksbuild.3 | v1.21.1-eksbuild.3 |
| CoreDNS | v1.11.4-eksbuild.28 | v1.13.2-eksbuild.1 |
| kube-proxy | matches cluster minor version | matches cluster minor version |
| Cluster Autoscaler | match to cluster minor version | match to cluster minor version |
Recommendation: Use EKS managed add-ons where possible — they simplify version management during upgrades.
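Migrating a self-managed add-on to the managed type is a single `create-addon` call. A sketch for CoreDNS, using a version from the table above; `OVERWRITE` lets EKS take ownership of the fields it manages:

```bash
# Adopt a self-managed CoreDNS deployment as an EKS managed add-on
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --addon-version v1.11.4-eksbuild.28 \
  --resolve-conflicts OVERWRITE

# Block until the add-on reports ACTIVE
aws eks wait addon-active --cluster-name my-cluster --addon-name coredns
```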
Amazon VPC CNI
- Managing VPC CNI
- Latest release (v1.21.1)
- Latest version: v1.21.1-eksbuild.3, compatible with Kubernetes 1.29–1.35
- If upgrading to v1.12.0 or later from an older version, you must first upgrade to v1.7.0 and then increment one minor version at a time
```bash
# Check current version
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```
kube-proxy
- kube-proxy version must match the cluster’s Kubernetes minor version
- After 1.25, use the minimal EKS build image (no shell, minimal packages)
- Only deployed to EC2 nodes — not Fargate
- If using EKS Auto Mode, kube-proxy is managed automatically
| Kubernetes Version | kube-proxy Version |
|---|---|
| 1.35 | v1.35.0-eksbuild.2 |
| 1.34 | v1.34.3-eksbuild.2 |
| 1.33 | v1.33.7-eksbuild.2 |
| 1.32 | v1.32.11-eksbuild.2 |
CoreDNS
- If updating to CoreDNS `1.8.3` or later, add the `endpointslices` permission to the `system:coredns` clusterrole
- AWS recommends migrating to the managed add-on type
```bash
# Check if managed or self-managed
aws eks describe-addon --cluster-name my-cluster --addon-name coredns \
  --query addon.addonVersion --output text
# (Error = self-managed, version string = managed)

# Check current image
kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3

# Update image (replace REGION and VERSION as appropriate)
kubectl set image deployment.apps/coredns -n kube-system \
  coredns=602401143452.dkr.ecr.REGION.amazonaws.com/eks/coredns:VERSION
```
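The `endpointslices` permission noted above can be granted with a JSON patch; the rule shape follows upstream CoreDNS RBAC:

```bash
# Allow CoreDNS to list/watch EndpointSlices (needed from CoreDNS 1.8.3)
kubectl patch clusterrole system:coredns --type=json -p='[
  {
    "op": "add",
    "path": "/rules/-",
    "value": {
      "apiGroups": ["discovery.k8s.io"],
      "resources": ["endpointslices"],
      "verbs": ["list", "watch"]
    }
  }
]'
```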
Cluster Autoscaler
- The Cluster Autoscaler version must match the Kubernetes minor version of the cluster
| Kubernetes Version | Cluster Autoscaler |
|---|---|
| 1.35 | v1.35.0 |
| 1.34 | v1.34.3 |
| 1.33 | v1.33.4 |
| 1.32 | v1.32.7 |
```bash
# Check current version
kubectl get deployment clusterautoscaler-aws-cluster-autoscaler -n kube-system \
  -o=jsonpath='{.spec.template.spec.containers[0].image}'
```
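Bumping the image to the matching release is one `kubectl set image`. The deployment and container names below are typical of a Helm install and are assumptions; verify yours first. The upstream registry path is real:

```bash
# Confirm the deployment and container names for your install
kubectl get deploy -n kube-system -o name | grep -i autoscaler

# Bump to the release matching a 1.32 cluster (version from the table above)
kubectl set image deployment/clusterautoscaler-aws-cluster-autoscaler \
  -n kube-system \
  aws-cluster-autoscaler=registry.k8s.io/autoscaling/cluster-autoscaler:v1.32.7
```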
Migrate Workers to New AMIs
Self-managed workers: Update the launch template, update the ASG, then manually cordon and drain each node.
Managed node groups: start the update from the console, eksctl, or the AWS CLI — cordoning and draining are handled automatically.
All worker groups (every nodepool and every self-managed group) must be on the same minor version as the control plane, or within the supported skew. Since Kubernetes 1.28, nodes can be up to 3 minor versions behind the control plane (expanded from n-2).
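The skew rule can be checked mechanically before and after node migrations. A sketch with hypothetical helper names (`minor_of`, `skew_ok`):

```bash
# Extract the minor number: "1.32" -> 32, "v1.29.3-eks-abc" -> 29
minor_of() {
  local v=${1#v}
  v=${v#*.}
  echo "${v%%.*}"
}

# True when the node is 0-3 minors behind the control plane
skew_ok() {
  local cp node
  cp=$(minor_of "$1")
  node=$(minor_of "$2")
  [ $((cp - node)) -ge 0 ] && [ $((cp - node)) -le 3 ]
}

# skew_ok 1.32 v1.29.3-eks-abc && echo "within skew"
```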
Pod Security
Namespace labeling for Pod Security Admission (replacement for the removed PodSecurityPolicy):
```bash
# MODE: enforce | audit | warn
# LEVEL: privileged | baseline | restricted
# VERSION: valid Kubernetes minor version or `latest`

# Example: label the 'default' namespace with audit mode at baseline level
kubectl label namespace default \
  pod-security.kubernetes.io/audit=baseline \
  pod-security.kubernetes.io/audit-version=latest \
  --overwrite
```
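Before enforcing a level, a server-side dry run reports which pods would violate it without changing anything:

```bash
# Preview violations across all namespaces at the baseline level
kubectl label --dry-run=server --overwrite namespace --all \
  pod-security.kubernetes.io/enforce=baseline
```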
Version-Specific Notes
1.35
- Cgroup v1 removed by default. Kubelet refuses to start on cgroup v1 nodes. AL2023 and Bottlerocket use cgroup v2. Fargate still uses cgroup v1. Custom AMIs must migrate.
- containerd 1.x end of support. Upgrade to containerd 2.0+ before going beyond 1.35.
- `--pod-infra-container-image` kubelet flag removed. Remove it from all node bootstrap scripts and launch templates.
- IPVS mode (kube-proxy) deprecated — will be removed in 1.36.
- Ingress NGINX retiring March 2026. Plan migration to Gateway API or another controller.
1.34
- No more AL2 AMIs from AWS. Migrate to Amazon Linux 2023 before upgrading.
- VolumeAttributesClass graduated to GA (`storage.k8s.io/v1`). Self-managed CSI sidecars may need pinning on older clusters.
- `--cgroup-driver` kubelet flag deprecated. Remove it from node bootstrap scripts before upgrading to 1.34+.
- AppArmor deprecated — migrate to seccomp or Pod Security Standards.
1.33
- No more AL2 AMIs from AWS. Migrate to AL2023.
- Endpoints API deprecated — migrate to EndpointSlices for dual-stack and modern features.
- Sidecar containers graduated to stable (`restartPolicy: Always`).
1.32
- `flowcontrol.apiserver.k8s.io/v1beta3` removed. Update all FlowSchema and PriorityLevelConfiguration manifests to `flowcontrol.apiserver.k8s.io/v1` before upgrading.
- Anonymous authentication restricted. Only the `/healthz`, `/livez`, and `/readyz` endpoints accept unauthenticated requests. Any tooling relying on unauthenticated API access will break.
- Last version with AL2 AMIs. Plan migration to AL2023.
1.31
- `--keep-terminated-pod-volumes` kubelet flag removed. Remove it from bootstrap scripts and launch templates.
- AppArmor graduated to stable. Migrate from annotation-based config to the `appArmorProfile.type` field in `securityContext`.
- Amazon EBS CSI Driver: Upgrade to v1.35.0+ to enable VolumeAttributesClass support.
1.30
- Default node OS changed to Amazon Linux 2023 (AL2023) for newly created managed node groups.
- `gp2` StorageClass no longer set as default on new clusters. If you rely on a default StorageClass, set `defaultStorageClass.enabled: true` in AWS EBS CSI Driver `1.31.0`+ or reference `gp2` explicitly.
- New IAM requirement: Add `ec2:DescribeAvailabilityZones` to the EKS cluster IAM role.
- New node label: `topology.k8s.aws/zone-id` added to worker nodes.
1.29
- `flowcontrol.apiserver.k8s.io/v1beta2` removed. Migrate `FlowSchema` and `PriorityLevelConfiguration` to `v1`.
- `.status.kubeProxyVersion` field deprecated (unreliable — kubelet doesn’t know the actual kube-proxy version). Remove usage from client software.
- LegacyServiceAccountTokenCleanUp enabled — tokens unused for 1 year are marked invalid; after another year, automatically removed.
- AWS Load Balancer Controller: Must be on v2.4.7+ before upgrading to 1.25 if using EndpointSlices.
- HPA migration: HPAs should be on `autoscaling/v2` (`v2beta2` removed in 1.26).
1.27 and Earlier
- `--container-runtime` kubelet argument ignored. Remove it from all node creation workflows and build scripts. You must be running containerd.
- Alpha `seccomp` annotations removed. Use `securityContext.seccompProfile` instead of the deprecated annotations.