Editors: Agustina Barbetta, Aakanksha Bhende, Udi Hofesh, Ryota Sawada, Sneha Yadav
Similar to previous releases, the release of Kubernetes v1.33 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 64 enhancements. Of those enhancements, 18 have graduated to Stable, 20 are entering Beta, 24 have entered Alpha, and 2 are deprecated or withdrawn.
There are also several notable deprecations and removals in this release; make sure to read about those if you already run an older version of Kubernetes.
Release theme and logo
The theme for Kubernetes v1.33 is Octarine: The Color of Magic¹, inspired by Terry Pratchett’s Discworld series. This release highlights the open source magic² that Kubernetes enables across the ecosystem.
If you’re familiar with the world of Discworld, you might recognize a small swamp dragon perched atop the tower of the Unseen University, gazing up at the Kubernetes moon above the city of Ankh-Morpork with 64 stars³ in the background.
As Kubernetes moves into its second decade, we celebrate the wizardry of its maintainers, the curiosity of new contributors, and the collaborative spirit that fuels the project. The v1.33 release is a reminder that, as Pratchett wrote, “It’s still magic even if you know how it’s done.” Even if you know the ins and outs of the Kubernetes code base, stepping back at the end of the release cycle, you’ll realize that Kubernetes remains magical.
Kubernetes v1.33 is a testament to the enduring power of open source innovation, where hundreds of contributors⁴ from around the world work together to create something truly extraordinary. Behind every new feature, the Kubernetes community works to maintain and improve the project, ensuring it remains secure, reliable, and released on time. Each release builds upon the other, creating something greater than we could achieve alone.
1. Octarine is the mythical eighth color, visible only to those attuned to the arcane—wizards, witches, and, of course, cats. And occasionally, someone who’s stared at iptables rules for too long.
2. Any sufficiently advanced technology is indistinguishable from magic…?
3. It’s not a coincidence that 64 KEPs (Kubernetes Enhancement Proposals) are also included in v1.33.
4. See the Project Velocity section for v1.33 🚀
Spotlight on key updates
Kubernetes v1.33 is packed with new features and improvements. Here are a few select updates the Release Team would like to highlight!
Stable: Sidecar containers
The sidecar pattern involves deploying separate auxiliary container(s) to handle extra capabilities in areas such as networking, logging, and metrics gathering. Sidecar containers graduate to stable in v1.33.
Kubernetes implements sidecars as a special class of init containers with restartPolicy: Always, ensuring that sidecars start before application containers, remain running throughout the pod's lifecycle, and terminate automatically after the main containers exit.
Additionally, sidecars can utilize probes (startup, readiness, liveness) to signal their operational state, and their Out-Of-Memory (OOM) score adjustments are aligned with primary containers to prevent premature termination under memory pressure.
To learn more, read Sidecar Containers.
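As a rough illustration of the pattern (image names are placeholders), a sidecar is simply an init container that declares restartPolicy: Always:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper                        # sidecar: starts first and keeps running
      image: registry.example/log-shipper:1.0  # placeholder image
      restartPolicy: Always                    # this marks the init container as a sidecar
  containers:
    - name: app
      image: registry.example/app:1.0          # placeholder image
```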
This work was done as part of KEP-753: Sidecar Containers led by SIG Node.
Beta: In-place resource resize for vertical scaling of Pods
Workloads can be defined using APIs like Deployment, StatefulSet, etc. These describe the template for the Pods that should run, including memory and CPU resources, as well as the number of replicas. Workloads can be scaled horizontally by updating the Pod replica count, or vertically by updating the resources required in the Pod's container(s). Before this enhancement, container resources defined in a Pod's spec were immutable, and updating any of these details within a Pod template would trigger Pod replacement.
But what if you could dynamically update the resource configuration for your existing Pods without restarting them?
KEP-1287 enables precisely such in-place Pod updates. It was released as alpha in v1.27, and has graduated to beta in v1.33. This opens up various possibilities for vertical scale-up of stateful processes without any downtime, seamless scale-down when traffic is low, and even allocating larger resources during startup, which can then be reduced once the initial setup is complete.
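For illustration, a Pod can declare per-resource resize behavior via resizePolicy; the manifest below is a minimal sketch with placeholder names, allowing CPU changes without a restart while memory changes restart only the affected container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app
spec:
  containers:
    - name: app
      image: registry.example/app:1.0      # placeholder image
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU can change in place
        - resourceName: memory
          restartPolicy: RestartContainer  # memory changes restart this container only
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
```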
This work was done as part of KEP-1287: In-Place Update of Pod Resources led by SIG Node and SIG Autoscaling.
Alpha: New configuration option for kubectl with .kuberc for user preferences
In v1.33, kubectl introduces a new alpha feature with an opt-in configuration file, .kuberc, for user preferences. This file can contain kubectl aliases and overrides (e.g. defaulting to use server-side apply), while leaving cluster credentials and host information in kubeconfig. This separation allows sharing the same user preferences for kubectl interaction, regardless of target cluster and kubeconfig used.
To enable this alpha feature, users can set the environment variable KUBECTL_KUBERC=true and create a .kuberc configuration file. By default, kubectl looks for this file in ~/.kube/kuberc. You can also specify an alternative location using the --kuberc flag, for example: kubectl --kuberc /var/kube/rc.
This work was done as part of KEP-3104: Separate kubectl user preferences from cluster configs led by SIG CLI.
Features graduating to Stable
This is a selection of some of the improvements that are now stable following the v1.33 release.
Backoff limits per index for indexed Jobs
This release graduates a feature that allows setting backoff limits on a per-index basis for Indexed Jobs. Traditionally, the backoffLimit parameter in Kubernetes Jobs specifies the number of retries before considering the entire Job as failed. This enhancement allows each index within an Indexed Job to have its own backoff limit, providing more granular control over retry behavior for individual tasks. This ensures that the failure of specific indices does not prematurely terminate the entire Job, allowing the other indices to continue processing independently.
This work was done as part of KEP-3850: Backoff Limit Per Index For Indexed Jobs led by SIG Apps.
Job success policy
Using .spec.successPolicy, users can specify which pod indexes must succeed (succeededIndexes), how many pods must succeed (succeededCount), or a combination of both. This feature benefits various workloads, including simulations where partial completion is sufficient, and leader-worker patterns where only the leader's success determines the Job's overall outcome.
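For example, a leader-worker style Indexed Job can be marked successful as soon as index 0 (the leader) completes; the image below is a placeholder:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: leader-worker
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed
  successPolicy:
    rules:
      - succeededIndexes: "0"   # index 0 acts as the leader
        succeededCount: 1       # the Job succeeds once that index succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example/worker:1.0   # placeholder image
```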
This work was done as part of KEP-3998: Job success/completion policy led by SIG Apps.
Bound ServiceAccount token security improvements
This enhancement introduced features such as including a unique token identifier (i.e. JWT ID Claim, also known as JTI) and node information within the tokens, enabling more precise validation and auditing. Additionally, it supports node-specific restrictions, ensuring that tokens are only usable on designated nodes, thereby reducing the risk of token misuse and potential security breaches. These improvements, now generally available, aim to enhance the overall security posture of service account tokens within Kubernetes clusters.
This work was done as part of KEP-4193: Bound service account token improvements led by SIG Auth.
Subresource support in kubectl
The --subresource argument is now generally available for kubectl subcommands such as get, patch, edit, apply and replace, allowing users to fetch and update subresources for all resources that support them. To learn more about the subresources supported, visit the kubectl reference.
This work was done as part of KEP-2590: Add subresource support to kubectl led by SIG CLI.
Multiple Service CIDRs
This enhancement introduced a new implementation of allocation logic for Service IPs. Across the whole cluster, every Service of type: ClusterIP must have a unique IP address assigned to it. Trying to create a Service with a specific cluster IP that has already been allocated will return an error. The updated IP address allocator logic uses two newly stable API objects: ServiceCIDR and IPAddress. Now generally available, these APIs allow cluster administrators to dynamically increase the number of IP addresses available for type: ClusterIP Services (by creating new ServiceCIDR objects).
This work was done as part of KEP-1880: Multiple Service CIDRs led by SIG Network.
nftables backend for kube-proxy
The nftables backend for kube-proxy is now stable, adding a new implementation that significantly improves performance and scalability for the implementation of Services within Kubernetes clusters. For compatibility reasons, iptables remains the default on Linux nodes. Check the migration guide if you want to try it out.
This work was done as part of KEP-3866: nftables kube-proxy backend led by SIG Network.
Topology aware routing with trafficDistribution: PreferClose
This release graduates topology-aware routing and traffic distribution to GA, allowing you to optimize service traffic in multi-zone clusters. The topology-aware hints in EndpointSlices enable components like kube-proxy to prioritize routing traffic to endpoints within the same zone, thereby reducing latency and cross-zone data transfer costs. Building upon this, the trafficDistribution field is added to the Service specification, with the PreferClose option directing traffic to the nearest available endpoints based on network topology. This configuration enhances performance and cost-efficiency by minimizing inter-zone communication.
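For example, a Service can opt in with a single field (the selector and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferClose   # prefer endpoints topologically close to the client
```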
This work was done as part of KEP-4444: Traffic Distribution for Services and KEP-2433: Topology Aware Routing led by SIG Network.
Options to reject non SMT-aligned workload
This feature added policy options to the CPU Manager, enabling it to reject workloads that do not align with Simultaneous Multithreading (SMT) configurations. This enhancement, now generally available, ensures that when a pod requests exclusive use of CPU cores, the CPU Manager can enforce allocation of entire core pairs (comprising primary and sibling threads) on SMT-enabled systems, thereby preventing scenarios where workloads share CPU resources in unintended ways.
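As a sketch, this behavior is typically enabled through the static CPU Manager policy together with the full-pcpus-only policy option in the kubelet configuration (verify the option name against your kubelet version):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  full-pcpus-only: "true"   # only allocate whole physical cores; reject non-SMT-aligned requests
```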
This work was done as part of KEP-2625: node: cpumanager: add options to reject non SMT-aligned workload led by SIG Node.
Defining Pod affinity or anti-affinity using matchLabelKeys and mismatchLabelKeys
The matchLabelKeys and mismatchLabelKeys fields are available in Pod affinity terms, enabling users to finely control the scope where Pods are expected to co-exist (Affinity) or not (AntiAffinity). These newly stable options complement the existing labelSelector mechanism. The affinity fields facilitate enhanced scheduling for versatile rolling updates, as well as isolation of services managed by tools or controllers based on global configurations.
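A sketch of the common rolling-update use case (image is a placeholder): the Deployment-managed pod-template-hash label is folded into the selector via matchLabelKeys, so co-location only applies to Pods from the same rollout:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: topology.kubernetes.io/zone
              labelSelector:
                matchLabels:
                  app: web
              matchLabelKeys:
                - pod-template-hash   # set by the Deployment controller; scopes affinity to this revision
      containers:
        - name: app
          image: registry.example/app:1.0   # placeholder image
```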
This work was done as part of KEP-3633: Introduce MatchLabelKeys to Pod Affinity and Pod Anti Affinity led by SIG Scheduling.
Considering taints and tolerations when calculating Pod topology spread skew
This enhancement extends PodTopologySpread by introducing two fields: nodeAffinityPolicy and nodeTaintsPolicy. These fields allow users to specify whether node affinity rules and node taints should be considered when calculating pod distribution across nodes. By default, nodeAffinityPolicy is set to Honor, meaning only nodes matching the pod's node affinity or selector are included in the distribution calculation. The nodeTaintsPolicy defaults to Ignore, indicating that node taints are not considered unless specified. This enhancement provides finer control over pod placement, ensuring that pods are scheduled on nodes that meet both affinity and taint toleration requirements, thereby preventing scenarios where pods remain pending due to unsatisfied constraints.
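A sketch of both policies in a topology spread constraint (image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
      nodeAffinityPolicy: Honor   # default: count only nodes matching the Pod's node affinity/selector
      nodeTaintsPolicy: Honor     # non-default: also exclude tainted nodes this Pod does not tolerate
  containers:
    - name: app
      image: registry.example/app:1.0   # placeholder image
```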
This work was done as part of KEP-3094: Take taints/tolerations into consideration when calculating PodTopologySpread skew led by SIG Scheduling.
Volume populators
After being released as beta in v1.24, volume populators have graduated to GA in v1.33. This newly stable feature provides a way for users to pre-populate volumes with data from various sources, not just from PersistentVolumeClaim (PVC) clones or volume snapshots. The mechanism relies on the dataSourceRef field within a PersistentVolumeClaim. This field offers more flexibility than the existing dataSource field, and allows custom resources to be used as data sources.
A special controller, volume-data-source-validator, validates these data source references, alongside a newly stable CustomResourceDefinition (CRD) for an API kind named VolumePopulator. The VolumePopulator API allows volume populator controllers to register the types of data sources they support. You need to set up your cluster with the appropriate CRD in order to use volume populators.
This work was done as part of KEP-1495: Generic data populators led by SIG Storage.
Always honor PersistentVolume reclaim policy
This enhancement addressed an issue where the Persistent Volume (PV) reclaim policy was not consistently honored, leading to potential storage resource leaks. Specifically, if a PV was deleted before its associated Persistent Volume Claim (PVC), the "Delete" reclaim policy might not be executed, leaving the underlying storage assets intact. To mitigate this, Kubernetes now sets finalizers on relevant PVs, ensuring that the reclaim policy is enforced regardless of the deletion sequence. This enhancement prevents unintended retention of storage resources and maintains consistency in PV lifecycle management.
This work was done as part of KEP-2644: Always Honor PersistentVolume Reclaim Policy led by SIG Storage.
New features in Beta
This is a selection of some of the improvements that are now beta following the v1.33 release.
Support for Direct Service Return (DSR) in Windows kube-proxy
DSR provides performance optimizations by allowing return traffic routed through load balancers to bypass the load balancer and respond directly to the client, reducing load on the load balancer and also reducing overall latency. For information on DSR on Windows, read Direct Server Return (DSR) in a nutshell.
Initially introduced in v1.14, support for DSR has been promoted to beta by SIG Windows as part of KEP-5100: Support for Direct Service Return (DSR) and overlay networking in Windows kube-proxy.
Structured parameter support
While structured parameter support continues as a beta feature in Kubernetes v1.33, this core part of Dynamic Resource Allocation (DRA) has seen significant improvements. A new v1beta2 version simplifies the resource.k8s.io API, and regular users with the namespaced cluster edit role can now use DRA.
The kubelet now includes seamless upgrade support, enabling drivers deployed as DaemonSets to use a rolling update mechanism. For DRA implementations, this prevents the deletion and re-creation of ResourceSlices, allowing them to remain unchanged during upgrades. Additionally, a 30-second grace period has been introduced before the kubelet cleans up after unregistering a driver, providing better support for drivers that do not use rolling updates.
This work was done as part of KEP-4381: DRA: structured parameters by WG Device Management, a cross-functional team including SIG Node, SIG Scheduling, and SIG Autoscaling.
Dynamic Resource Allocation (DRA) for network interfaces
The standardized reporting of network interface data via DRA, introduced in v1.32, has graduated to beta in v1.33. This enables more native Kubernetes network integrations, simplifying the development and management of networking devices. This was covered previously in the v1.32 release announcement blog.
This work was done as part of KEP-4817: DRA: Resource Claim Status with possible standardized network interface data led by SIG Network, SIG Node, and WG Device Management.
Handle unscheduled pods early when scheduler does not have any pod on activeQ
This feature improves queue scheduling behavior. Behind the scenes, the scheduler achieves this by popping pods from the backoffQ, which are not backed off due to errors, when the activeQ is empty. Previously, the scheduler would become idle even when the activeQ was empty; this enhancement improves scheduling efficiency by preventing that.
This work was done as part of KEP-5142: Pop pod from backoffQ when activeQ is empty led by SIG Scheduling.
Asynchronous preemption in the Kubernetes Scheduler
Preemption ensures higher-priority pods get the resources they need by evicting lower-priority ones. Asynchronous Preemption, introduced in v1.32 as alpha, has graduated to beta in v1.33. With this enhancement, heavy operations such as API calls to delete pods are processed in parallel, allowing the scheduler to continue scheduling other pods without delays. This improvement is particularly beneficial in clusters with high Pod churn or frequent scheduling failures, ensuring a more efficient and resilient scheduling process.
This work was done as part of KEP-4832: Asynchronous preemption in the scheduler led by SIG Scheduling.
ClusterTrustBundles
ClusterTrustBundle, a cluster-scoped resource designed for holding X.509 trust anchors (root certificates), has graduated to beta in v1.33. This API makes it easier for in-cluster certificate signers to publish and communicate X.509 trust anchors to cluster workloads.
This work was done as part of KEP-3257: ClusterTrustBundles (previously Trust Anchor Sets) led by SIG Auth.
Fine-grained SupplementalGroups control
Introduced in v1.31, this feature graduates to beta in v1.33 and is now enabled by default. Provided that your cluster has the SupplementalGroupsPolicy feature gate enabled, the supplementalGroupsPolicy field within a Pod's securityContext supports two policies: the default Merge policy maintains backward compatibility by combining specified groups with those from the container image's /etc/group file, whereas the new Strict policy applies only to explicitly defined groups.
This enhancement helps to address security concerns where implicit group memberships from container images could lead to unintended file access permissions and bypass policy controls.
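A minimal sketch of the Strict policy (image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict   # only the listed groups; ignore groups from the image's /etc/group
  containers:
    - name: app
      image: registry.example/app:1.0  # placeholder image
```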
This work was done as part of KEP-3619: Fine-grained SupplementalGroups control led by SIG Node.
Support for mounting images as volumes
Support for using Open Container Initiative (OCI) images as volumes in Pods, introduced in v1.31, has graduated to beta. This feature allows users to specify an image reference as a volume in a Pod and reuse it as a volume mount within containers. It opens up the possibility of packaging volume data separately and sharing it among containers in a Pod without including it in the main image, thereby reducing vulnerabilities and simplifying image creation.
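A sketch of mounting an OCI artifact as a volume (both image references are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-example
spec:
  containers:
    - name: app
      image: registry.example/app:1.0                  # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      image:
        reference: registry.example/data-artifact:1.0  # placeholder OCI artifact
        pullPolicy: IfNotPresent
```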
This work was done as part of KEP-4639: VolumeSource: OCI Artifact and/or Image led by SIG Node and SIG Storage.
Support for user namespaces within Linux Pods
One of the oldest open KEPs as of writing is KEP-127, Pod security improvement by using Linux User namespaces for Pods. This KEP was first opened in late 2016, and after multiple iterations, had its alpha release in v1.25, initial beta in v1.30 (where it was disabled by default), and has moved to on-by-default beta as part of v1.33.
This support will not impact existing Pods unless you manually specify pod.spec.hostUsers to opt in. As highlighted in the v1.30 sneak peek blog, this is an important milestone for mitigating vulnerabilities.
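Opting in is a one-line change in the Pod spec (image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-pod
spec:
  hostUsers: false   # run this Pod in its own user namespace
  containers:
    - name: app
      image: registry.example/app:1.0   # placeholder image
```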
This work was done as part of KEP-127: Support User Namespaces in pods led by SIG Node.
Pod procMount option
The procMount option, introduced as alpha in v1.12 and as off-by-default beta in v1.31, has moved to an on-by-default beta in v1.33. This enhancement improves Pod isolation by allowing users to fine-tune access to the /proc filesystem. Specifically, it adds a field to the Pod securityContext that lets you override the default behavior of masking and marking certain /proc paths as read-only. This is particularly useful for scenarios where users want to run unprivileged containers inside the Kubernetes Pod using user namespaces. Normally, the container runtime (via the CRI implementation) starts the outer container with strict /proc mount settings. However, to successfully run nested containers with an unprivileged Pod, users need a mechanism to relax those defaults, and this feature provides exactly that.
This work was done as part of KEP-4265: add ProcMount option led by SIG Node.
CPUManager policy to distribute CPUs across NUMA nodes
This feature adds a new policy option for the CPU Manager to distribute CPUs across Non-Uniform Memory Access (NUMA) nodes, rather than concentrating them on a single node. It optimizes CPU resource allocation by balancing workloads across multiple NUMA nodes, thereby improving performance and resource utilization in multi-NUMA systems.
This work was done as part of KEP-2902: Add CPUManager policy option to distribute CPUs across NUMA nodes instead of packing them led by SIG Node.
Zero-second sleeps for container PreStop hooks
Kubernetes 1.29 introduced a Sleep action for the preStop lifecycle hook in Pods, allowing containers to pause for a specified duration before termination. This provides a straightforward method to delay container shutdown, facilitating tasks such as connection draining or cleanup operations.
The Sleep action in a preStop hook can now accept a zero-second duration as a beta feature. This allows defining a no-op preStop hook, which is useful when a preStop hook is required but no delay is desired.
This work was done as part of KEP-3960: Introducing Sleep Action for PreStop Hook and KEP-4818: Allow zero value for Sleep Action of PreStop Hook led by SIG Node.
Internal tooling for declarative validation of Kubernetes-native types
Behind the scenes, the internals of Kubernetes are starting to use a new mechanism for validating objects and changes to objects. Kubernetes v1.33 introduces validation-gen, an internal tool that Kubernetes contributors use to generate declarative validation rules. The overall goal is to improve the robustness and maintainability of API validations by enabling developers to specify validation constraints declaratively, reducing manual coding errors and ensuring consistency across the codebase.
This work was done as part of KEP-5073: Declarative Validation Of Kubernetes Native Types With validation-gen led by SIG API Machinery.
New features in Alpha
This is a selection of some of the improvements that are now alpha following the v1.33 release.
Configurable tolerance for HorizontalPodAutoscalers
This feature introduces configurable tolerance for HorizontalPodAutoscalers, which dampens scaling reactions to small metric variations.
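As a rough sketch of the alpha API shape (the tolerance field under spec.behavior is an assumption based on the KEP and requires the corresponding alpha feature gate; check the KEP for the exact schema):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      tolerance: "0.05"   # assumed alpha field: ignore metric deviations below 5% when scaling down
```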
This work was done as part of KEP-4951: Configurable tolerance for Horizontal Pod Autoscalers led by SIG Autoscaling.
Configurable container restart delay
Introduced as alpha in v1.32, this feature provides a set of kubelet-level configurations to fine-tune how CrashLoopBackOff is handled.
This work was done as part of KEP-4603: Tune CrashLoopBackOff led by SIG Node.
Custom container stop signals
Before Kubernetes v1.33, stop signals could only be set in container image definitions (for example, via the StopSignal configuration field in the image metadata). If you wanted to modify termination behavior, you needed to build a custom container image. By enabling the (alpha) ContainerStopSignals feature gate in Kubernetes v1.33, you can now define custom stop signals directly within Pod specifications. This is defined in the container's lifecycle.stopSignal field and requires the Pod's spec.os.name field to be present. If unspecified, containers fall back to the image-defined stop signal (if present), or the container runtime default (typically SIGTERM for Linux).
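With the ContainerStopSignals feature gate enabled, a sketch looks like this (image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-stop-signal
spec:
  os:
    name: linux                        # spec.os.name must be set for stopSignal to be accepted
  containers:
    - name: app
      image: registry.example/app:1.0  # placeholder image
      lifecycle:
        stopSignal: SIGUSR1            # sent at termination instead of the image or runtime default
```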
This work was done as part of KEP-4960: Container Stop Signals led by SIG Node.
DRA enhancements galore!
Kubernetes v1.33 continues to develop Dynamic Resource Allocation (DRA) with features designed for today’s complex infrastructures. DRA is an API for requesting and sharing resources between pods and containers inside a pod. Typically those resources are devices such as GPUs, FPGAs, and network adapters.
The following are all the alpha DRA feature gates introduced in v1.33:
- Similar to Node taints, by enabling the DRADeviceTaints feature gate, devices support taints and tolerations. An admin or a control plane component can taint devices to limit their usage. Scheduling of pods which depend on those devices can be paused while a taint exists, and/or pods using a tainted device can be evicted.
- By enabling the DRAPrioritizedList feature gate, DeviceRequests get a new field named firstAvailable. This field is an ordered list that allows the user to specify that a request may be satisfied in different ways, including allocating nothing at all if some specific hardware is not available.
- With the DRAAdminAccess feature gate enabled, only users authorized to create ResourceClaim or ResourceClaimTemplate objects in namespaces labeled with resource.k8s.io/admin-access: "true" can use the adminAccess field. This ensures that non-admin users cannot misuse the adminAccess feature.
- While it has been possible to consume device partitions since v1.31, vendors had to pre-partition devices and advertise them accordingly. By enabling the DRAPartitionableDevices feature gate in v1.33, device vendors can advertise multiple partitions, including overlapping ones. The Kubernetes scheduler will choose the partition based on workload requests, and prevent the allocation of conflicting partitions simultaneously. This feature gives vendors the ability to dynamically create partitions at allocation time. The allocation and dynamic partitioning are automatic and transparent to users, enabling improved resource utilization.
These feature gates have no effect unless you also enable the DynamicResourceAllocation feature gate.
This work was done as part of KEP-5055: DRA: device taints and tolerations, KEP-4816: DRA: Prioritized Alternatives in Device Requests, KEP-5018: DRA: AdminAccess for ResourceClaims and ResourceClaimTemplates, and KEP-4815: DRA: Add support for partitionable devices, led by SIG Node, SIG Scheduling and SIG Auth.
Robust image pull policy to authenticate images for IfNotPresent and Never
This feature allows users to ensure that the kubelet requires an image pull authentication check for each new set of credentials, regardless of whether the image is already present on the node.
This work was done as part of KEP-2535: Ensure secret pulled images led by SIG Auth.
Node topology labels are available via downward API
This feature enables Node topology labels to be exposed via the downward API. Prior to Kubernetes v1.33, a workaround involved using an init container to query the Kubernetes API for the underlying node; this alpha feature simplifies how workloads can access Node topology information.
This work was done as part of KEP-4742: Expose Node labels via downward API led by SIG Node.
Better pod status with generation and observed generation
Prior to this change, the metadata.generation field was unused in pods. Along with extending support for metadata.generation, this feature introduces status.observedGeneration to provide clearer pod status.
This work was done as part of KEP-5067: Pod Generation led by SIG Node.
Support for split level 3 cache architecture with kubelet’s CPU Manager
The kubelet’s CPU Manager was previously unaware of split L3 cache architecture (also known as Last Level Cache, or LLC), and could potentially distribute CPU assignments without considering the split L3 cache, causing a noisy neighbor problem. This alpha feature improves the CPU Manager to assign CPU cores with better awareness of cache topology, improving performance.
This work was done as part of KEP-5109: Split L3 Cache Topology Awareness in CPU Manager led by SIG Node.
PSI (Pressure Stall Information) metrics for scheduling improvements
This feature adds support on Linux nodes for providing PSI stats and metrics using cgroupv2. It can detect resource shortages and provide nodes with more granular control for pod scheduling.
This work was done as part of KEP-4205: Support PSI based on cgroupv2 led by SIG Node.
Secret-less image pulls with kubelet
The kubelet's on-disk credential provider now supports optional Kubernetes ServiceAccount (SA) token fetching. This simplifies authentication with image registries by allowing cloud providers to better integrate with OIDC-compatible identity solutions.
This work was done as part of KEP-4412: Projected service account tokens for Kubelet image credential providers led by SIG Auth.
Graduations, deprecations, and removals in v1.33
Graduations to stable
This lists all the features that have graduated to stable (also known as general availability). For a full list of updates including new features and graduations from alpha to beta, see the release notes.
This release includes a total of 18 enhancements promoted to stable:
- Take taints/tolerations into consideration when calculating PodTopologySpread skew
- Introduce MatchLabelKeys to Pod Affinity and Pod Anti Affinity
- Bound service account token improvements
- Generic data populators
- Multiple Service CIDRs
- Topology Aware Routing
- Portworx file in-tree to CSI driver migration
- Always Honor PersistentVolume Reclaim Policy
- nftables kube-proxy backend
- Deprecate status.nodeInfo.kubeProxyVersion field
- Add subresource support to kubectl
- Backoff Limit Per Index For Indexed Jobs
- Job success/completion policy
- Sidecar Containers
- CRD Validation Ratcheting
- node: cpumanager: add options to reject non SMT-aligned workload
- Traffic Distribution for Services
- Recursive Read-only (RRO) mounts
Deprecations and removals
As Kubernetes develops and matures, features may be deprecated, removed, or replaced with better ones to improve the project's overall health. See the Kubernetes deprecation and removal policy for more details on this process. Many of these deprecations and removals were announced in the Deprecations and Removals blog post.
Deprecation of the stable Endpoints API
The EndpointSlices API has been stable since v1.21, and it effectively replaced the original Endpoints API. While the original Endpoints API was simple and straightforward, it also posed some challenges when scaling to large numbers of network endpoints. The EndpointSlices API introduced new capabilities such as dual-stack networking, making the original Endpoints API ready for deprecation.
This deprecation affects only those who use the Endpoints API directly from workloads or scripts; these users should migrate to use EndpointSlices instead. There will be a dedicated blog post with more details on the deprecation implications and migration plans.
You can find more in KEP-4974: Deprecate v1.Endpoints.
Removal of kube-proxy version information in node status
Following its deprecation in v1.31, as highlighted in the v1.31 release announcement, the .status.nodeInfo.kubeProxyVersion field for Nodes was removed in v1.33.
This field was set by the kubelet, but its value was not consistently accurate. As it has been disabled by default since v1.31, this field has been removed entirely in v1.33.
You can find more in KEP-4004: Deprecate status.nodeInfo.kubeProxyVersion field.
Removal of in-tree gitRepo volume driver
The gitRepo volume type has been deprecated since v1.11, nearly 7 years ago. Since its deprecation, there have been security concerns, including how gitRepo volumes can be exploited to gain remote code execution as root on the nodes. In v1.33, the in-tree driver code has been removed.
There are alternatives such as git-sync and initContainers. The gitRepo volume type in the Kubernetes API is not removed, and thus pods with gitRepo volumes will still be admitted by kube-apiserver, but kubelets with the GitRepoVolumeDriver feature gate set to false will not run them and will return an appropriate error to the user. This allows users to opt in to re-enabling the driver for 3 versions, giving them enough time to fix workloads.
The kubelet feature gate and the in-tree plugin code are planned to be removed in the v1.39 release.
You can find more in KEP-5040: Remove gitRepo volume driver.
Removal of host network support for Windows pods
Windows Pod networking aimed to achieve feature parity with Linux and provide better cluster density by allowing containers to use the Node’s networking namespace. The original implementation landed as alpha with v1.26, but because it faced unexpected containerd behaviours and alternative solutions were available, the Kubernetes project has decided to withdraw the associated KEP. Support was fully removed in v1.33.
Please note that this does not affect HostProcess containers, which provide host network as well as host-level access. The KEP withdrawn in v1.33 was about providing the host network only, which was never stable due to technical limitations with Windows networking logic.
You can find more in KEP-3503: Host network support for Windows pods.
Release notes
Check out the full details of the Kubernetes v1.33 release in our release notes.
Availability
Kubernetes v1.33 is available for download on GitHub or on the Kubernetes download page.
To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using minikube. You can also easily install v1.33 using kubeadm.
Release Team
Kubernetes is only possible with the support, commitment, and hard work of its community. The Release Team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.
We would like to thank the entire Release Team for the hours spent hard at work to deliver the Kubernetes v1.33 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles. A new team structure was adopted in this release cycle, combining the Release Notes and Docs subteams into a unified Docs subteam. Thanks to the meticulous effort of the new Docs team in organizing the relevant information and resources, both Release Notes and Docs tracking have seen a smooth and successful transition. Finally, a very special thanks goes out to our release lead, Nina Polshakova, for her support throughout a successful release cycle, her advocacy, her efforts to ensure that everyone could contribute effectively, and her challenges to improve the release process.
Project velocity
The CNCF K8s DevStats project aggregates several interesting data points related to the velocity of Kubernetes and various subprojects. This includes everything from individual contributions to the number of companies contributing, and illustrates the depth and breadth of effort that goes into evolving this ecosystem.
During the v1.33 release cycle, which spanned 15 weeks from January 13 to April 23, 2025, Kubernetes received contributions from as many as 121 different companies and 570 individuals (as of writing, a few weeks before the release date). In the wider cloud native ecosystem, the figure goes up to 435 companies and 2400 total contributors. You can find the data source in this dashboard. Compared to the velocity data from the previous release, v1.32, we see a similar level of contribution from companies and individuals, indicating strong community interest and engagement.
Note that a “contribution” counts when someone makes a commit, code review, comment, creates an issue or PR, reviews a PR (including blogs and documentation), or comments on issues and PRs. If you are interested in contributing, visit Getting Started on our contributor website.
Check out DevStats to learn more about the overall velocity of the Kubernetes project and community.
Event update
Explore upcoming Kubernetes and cloud native events, including KubeCon + CloudNativeCon, KCD, andother notable conferences worldwide. Stay informed and get involved with the Kubernetes community!
May 2025
- KCD - Kubernetes Community Days: Costa Rica: May 3, 2025 | Heredia, Costa Rica
- KCD - Kubernetes Community Days: Helsinki: May 6, 2025 | Helsinki, Finland
- KCD - Kubernetes Community Days: Texas Austin: May 15, 2025 | Austin, USA
- KCD - Kubernetes Community Days: Seoul: May 22, 2025 | Seoul, South Korea
- KCD - Kubernetes Community Days: Istanbul, Turkey: May 23, 2025 | Istanbul, Turkey
- KCD - Kubernetes Community Days: San Francisco Bay Area: May 28, 2025 | San Francisco, USA
June 2025
- KCD - Kubernetes Community Days: New York: June 4, 2025 | New York, USA
- KCD - Kubernetes Community Days: Czech & Slovak: June 5, 2025 | Bratislava, Slovakia
- KCD - Kubernetes Community Days: Bengaluru: June 6, 2025 | Bangalore, India
- KubeCon + CloudNativeCon China 2025: June 10-11, 2025 | Hong Kong
- KCD - Kubernetes Community Days: Antigua Guatemala: June 14, 2025 | Antigua Guatemala, Guatemala
- KubeCon + CloudNativeCon Japan 2025: June 16-17, 2025 | Tokyo, Japan
- KCD - Kubernetes Community Days: Nigeria, Africa: June 19, 2025 | Nigeria, Africa
July 2025
- KCD - Kubernetes Community Days: Utrecht: July 4, 2025 | Utrecht, Netherlands
- KCD - Kubernetes Community Days: Taipei: July 5, 2025 | Taipei, Taiwan
- KCD - Kubernetes Community Days: Lima, Peru: July 19, 2025 | Lima, Peru
August 2025
- KubeCon + CloudNativeCon India 2025: August 6-7, 2025 | Hyderabad, India
- KCD - Kubernetes Community Days: Colombia: August 29, 2025 | Bogotá, Colombia
You can find the latest KCD details here.
Upcoming release webinar
Join members of the Kubernetes v1.33 Release Team on Friday, May 16th 2025 at 4:00 PM (UTC), to learn about the highlights of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the event page on the CNCF Online Programs site.
Get involved
The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.
- Follow us on Bluesky @kubernetes.io for the latest updates
- Join the community discussion on Discuss
- Join the community on Slack
- Post questions (or answer questions) on Server Fault or Stack Overflow
- Share your Kubernetes story
- Read more about what’s happening with Kubernetes on the blog
- Learn more about the Kubernetes Release Team