
What Problem Does This Solve?

When you deploy an application in Kubernetes, your pods usually run on multiple nodes, often spread across different availability zones. When traffic comes in through a Service, Kubernetes has to decide which pod should handle that request.

By default, Kubernetes distributes traffic across all healthy endpoints. That works in many cases, but it can cause a couple of real problems once clusters grow, making Kubernetes management and optimization harder.

Problem 1: Unnecessary Network Latency. If a request arrives at Node A but gets routed to a pod running on Node B in a different zone, the traffic has to travel across the network. That extra hop adds latency to every request, even when a pod on the receiving node could have served it locally.

Problem 2: Cross-Zone Data Transfer Costs. Most cloud providers charge for traffic that crosses availability zones. At scale, this adds up quickly. Services handling large volumes of traffic can end up paying more than expected simply because traffic is bouncing between zones.

Beyond cross-zone data transfer costs, several pitfalls can lead to an unexpected Amazon EKS bill shock at the end of your billing cycle. Here’s your guide to optimizing Amazon EKS costs.

The Evolution of Traffic Distribution in Kubernetes

Kubernetes didn’t solve this problem overnight. The current approach is the result of a few iterations.

a) Earlier Approaches

externalTrafficPolicy and internalTrafficPolicy

Before trafficDistribution existed, Kubernetes provided two related options, both set directly on the Service spec.
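As a minimal sketch (the Service name, selector, and ports are illustrative; externalTrafficPolicy only applies to NodePort and LoadBalancer Services):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalTrafficPolicy: Local      # external traffic must stay on the receiving node
  internalTrafficPolicy: Local      # in-cluster traffic must stay on the receiving node
```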

Setting these to Local forced traffic to stay on the same node. If no local endpoints were available, traffic would fail instead of falling back to remote pods.

This behavior was sometimes useful, but risky. A node without a local pod would simply drop traffic, which made these settings hard to use safely in production.

b) The Introduction of trafficDistribution

The trafficDistribution field was introduced to provide a softer, safer approach. Instead of enforcing strict rules, it lets you express preferences, while still allowing fallback when local endpoints are unavailable.

This small change makes a big difference operationally.

What Changes in Kubernetes 1.35

In Kubernetes 1.35, the trafficDistribution feature becomes stable, and two important updates come with it. For background on the previous release, read about Kubernetes 1.34.

1. New Option: PreferSameNode

PreferSameNode tells Kubernetes to keep traffic on the same node whenever possible.
If traffic arrives on a node and a healthy pod exists on that same node, Kubernetes will send traffic there. If not, it simply falls back to any other healthy pod.
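A minimal sketch of a Service using the new value (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                           # illustrative name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferSameNode    # prefer endpoints on the receiving node, fall back otherwise
```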

The key thing to remember is that this is a preference, not a hard rule. Traffic will not fail if local pods are missing.

Note: trafficDistribution expresses a preference, not a guarantee.
The exact behavior depends on the cluster’s networking implementation (iptables, IPVS, or eBPF-based proxies).

2. Renamed Option: PreferClose Becomes PreferSameZone

The older option, PreferClose, has been renamed to PreferSameZone. The behavior is the same, but the new name makes it much clearer what Kubernetes is actually doing.
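If a Service already uses PreferClose, the switch is a one-line change; the surrounding fields below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                            # illustrative name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  # trafficDistribution: PreferClose      # old name, still accepted
  trafficDistribution: PreferSameZone     # new, clearer name with identical behavior
```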

PreferClose still works for backward compatibility, but PreferSameZone is the recommended option going forward.

How These Options Differ

The difference between these two options becomes clearer with an example.

Assume a cluster with:

  • 3 availability zones (zone-a, zone-b, zone-c)
  • 2 nodes per zone
  • 6 application pods spread across the cluster

Scenario: Traffic arrives at Node-1 in Zone-A

a) With PreferSameNode

  • Kubernetes first looks for a healthy pod on Node-1.
  • If it finds one, traffic goes there.
  • If not, traffic falls back to any healthy pod in the cluster.

b) With PreferSameZone

  • Kubernetes looks for healthy pods anywhere in Zone-A.
  • If it finds one, traffic goes to one of those pods (not necessarily on the same node).
  • If no pods exist in that zone, traffic falls back to the rest of the cluster.

Which One Should You Choose?

Choose PreferSameNode when:

  • Lowest possible latency matters
  • Pods are spread across most or all nodes
  • You’re running DaemonSets or very high replica counts

Choose PreferSameZone when:

  • Reducing cross-zone costs is the main goal
  • Pods are not present on every node
  • Zone-level locality is “good enough” for latency

Technical Details: How It Works Under the Hood

When you set trafficDistribution, the control plane (for AWS, the EKS Provisioned Control Plane) passes this preference to the network proxy (kube-proxy or an equivalent dataplane).

The proxy uses EndpointSlices to understand where endpoints live. EndpointSlices include topology information such as node and zone. Based on this data, the proxy programs routing rules that prefer local endpoints when possible.

You normally don’t need to interact with EndpointSlices directly, but knowing what’s inside them helps a lot when debugging traffic behavior.

EndpointSlice Example
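The manifest below is an illustrative EndpointSlice of the kind the control plane generates for a Service named my-app; the addresses, node names, and zones are made up, but the fields are the ones the proxy reads:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-app-abc12                # generated name, illustrative
  labels:
    kubernetes.io/service-name: my-app
addressType: IPv4
endpoints:
  - addresses:
      - 10.0.1.15
    conditions:
      ready: true
    nodeName: node-1                # used by PreferSameNode
    zone: zone-a                    # used by PreferSameZone
  - addresses:
      - 10.0.2.23
    conditions:
      ready: true
    nodeName: node-3
    zone: zone-b
ports:
  - name: http
    port: 8080
    protocol: TCP
```

You can list the EndpointSlices backing a Service with kubectl get endpointslices -l kubernetes.io/service-name=my-app.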

The proxy reads the nodeName and zone fields to decide which endpoints to prefer based on your configuration.

Comparison with Older Traffic Policies

The older externalTrafficPolicy: Local and internalTrafficPolicy: Local settings are strict rules that operate only at the node level: if the receiving node has no local endpoint, traffic is dropped. trafficDistribution expresses the same kind of locality as a preference, at either node or zone level, and falls back to other healthy endpoints when no local ones exist.

Because of the fallback behavior, trafficDistribution is much safer for production workloads.

Things to Consider Before Using This Feature

a) Pod Distribution Matters

PreferSameNode only helps if pods actually exist on most nodes. If pods are concentrated on a few nodes, traffic will frequently fall back to remote endpoints and you won’t see much benefit.

DaemonSets work especially well here. Deployments can also work, but they need enough replicas and proper scheduling.
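One way to nudge a Deployment toward node-level spread is a soft topology spread constraint in the pod template; this is a sketch, and the app: my-app label and skew settings are illustrative:

```yaml
# Fragment of a Deployment's pod template (spec.template.spec)
topologySpreadConstraints:
  - maxSkew: 1                            # keep per-node pod counts within 1 of each other
    topologyKey: kubernetes.io/hostname   # spread across individual nodes
    whenUnsatisfiable: ScheduleAnyway     # soft constraint: never block scheduling
    labelSelector:
      matchLabels:
        app: my-app                       # illustrative label
```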

b) Health Checks Still Apply

Preferences only apply to healthy endpoints. If a local pod is failing readiness checks, traffic will be routed elsewhere. Make sure readiness probes are accurate.
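For reference, a minimal readiness probe sketch; the path and port are illustrative and should match your application's actual health endpoint:

```yaml
# Fragment of a container spec
readinessProbe:
  httpGet:
    path: /healthz          # illustrative health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```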

c) Load Distribution Can Become Uneven

With PreferSameNode, traffic depends on where requests arrive. Some nodes may receive more traffic than others, which can lead to uneven load.

HPA can help, but it won’t guarantee pods land on the busiest nodes. This is something to keep in mind for latency-sensitive services.

Real-World Use Cases

  • Sidecar proxies running as DaemonSets
  • Local caching layers with fallback to remote cache nodes
  • Log collection agents running per node
  • Multi-zone services looking to reduce cross-zone costs

These patterns already exist today. trafficDistribution simply makes them easier and safer to implement.

What This Means for Amazon EKS Users

At the time of writing, Amazon EKS does not yet support Kubernetes 1.35.

Once Amazon EKS adds support, the trafficDistribution field with PreferSameNode and PreferSameZone will be available without any API changes.

Until then, you can:

  • Identify services that would benefit from traffic locality
  • Plan migration from PreferClose to PreferSameZone
  • Review pod placement strategies to prepare for PreferSameNode

Looking Ahead

With trafficDistribution now marked stable, Kubernetes considers this feature ready for production use. The API is stable and not expected to change in future releases.

The addition of PreferSameNode fills an important gap. Earlier releases allowed zone-level preferences, but node-level locality with safe fallback was missing. Kubernetes 1.35 finally addresses that.

This work was delivered as part of KEP 3015 by the SIG Network team, after several release cycles of iteration and feedback.

Summary

Kubernetes 1.35 improves service traffic routing with a stable trafficDistribution API:

  • PreferSameNode prefers same-node endpoints with safe fallback
  • PreferSameZone prefers same-zone endpoints with safe fallback (formerly PreferClose)

These options reduce latency and cross-zone costs without the failure risks of older policies.

When managed Kubernetes platforms like Amazon EKS roll out support for 1.35, these features can be used to fine-tune traffic behavior in real production clusters.

For more tips, check out this conversation with our expert on smarter Kubernetes management, optimized for performance and cost.

Meet the Author
  • Gourav Kumar Pandey
    Senior DevOps Engineer

    Gourav specializes in helping organizations design secure and scalable Kubernetes infrastructures on AWS.
