
Gateway API v1.5: Transitioning Features to Stability

Apr 21, 2026


The Kubernetes SIG Network community has released Gateway API version 1.5, a significant upgrade for users. Launched on March 14, 2026, this iteration is particularly notable for graduating several Experimental features to the Standard (Stable) channel, reinforcing the project's commitment to stability and reliability.

A patch release for this version, v1.5.1, is already available as well.

Version 1.5 promotes six features that the community has eagerly anticipated to the Standard channel of the Gateway API:

  • ListenerSet
  • TLSRoute promoted to Stable
  • HTTPRoute CORS Filter
  • Client Certificate Validation
  • Certificate Selection for Gateway TLS Origination
  • ReferenceGrant promoted to Stable

This progress wouldn't have been possible without the hard work from our Gateway API Contributors.

New Release Model

With Gateway API v1.5, the team has adopted a new release train model that changes how features are rolled out. This system accelerates delivery and makes updates more predictable. Under this model, when the feature freeze date arrives, any feature that is ready can be included in the release, regardless of its previous phase, whether Experimental or Standard. Documentation is part of that readiness bar: if a feature's accompanying documentation is not finalized, the feature will not ship.

This strategic shift aims to enhance the reliability and frequency of releases, taking cues from the successful processes of the SIG Release team within Kubernetes. Alongside this update, new roles like Release Manager and Release Shadow have been introduced to streamline coordination. Kudos are due to Flynn (from Buoyant) and Beka Modebadze (from Google) for their efforts in refining our release process. They’ll be vital in guiding the next release as well.

Features of Note

ListenerSet

Project leads Dave Protasowski and David Jumani introduced ListenerSet, detailed in GEP-1713. Prior to this feature, every listener had to be defined directly on the Gateway object, which suited simple use cases but posed significant hurdles for more intricate multi-tenant structures.

Challenges included:

  • Need for coordination between platform and application teams modifying the same Gateway
  • Difficulty in safely delegating control over individual listeners
  • Hurdles in extending existing Gateways without altering the original resource

ListenerSet addresses these pain points by letting listeners be defined independently and merged onto a target Gateway. You can now attach more than 64 listeners to a single shared Gateway—an essential capability for expansive deployments that serve many hostnames.

Despite these advancements, note that the listeners field in Gateway is still mandatory: there must always be at least one valid listener defined on the Gateway itself.

Listener Contribution Explained

A ListenerSet attaches to a Gateway and adds one or more listeners to it. Merging the listeners—combining those defined on the Gateway resource with those from any attached ListenerSet resources—is the Gateway controller's job. To illustrate, consider a central infrastructure team that sets up a Gateway with a default HTTP listener, while application teams create their own ListenerSet resources in their own namespaces. Both ListenerSets attach to the same Gateway and contribute additional HTTPS listeners:

---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class
  allowedListeners:
    namespaces:
      from: All # A selector lets you fine tune this
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-a-listeners
  namespace: team-a
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-a
    protocol: HTTPS
    port: 443
    hostname: a.example.com
    tls:
      certificateRefs:
      - name: a-cert
---
apiVersion: gateway.networking.k8s.io/v1
kind: ListenerSet
metadata:
  name: team-b-listeners
  namespace: team-b
spec:
  parentRef:
    name: example-gateway
    namespace: infra
  listeners:
  - name: https-b
    protocol: HTTPS
    port: 443
    hostname: b.example.com
    tls:
      certificateRefs:
      - name: b-cert
TLSRoute

The TLSRoute resource, led by Rostislav Bobrovsky and Ricardo Pchevuzinske Katz, introduces enhanced routing capabilities. It allows requests to be routed based on the Server Name Indication (SNI) that clients send during the TLS handshake, directing traffic to the appropriate Kubernetes backends.

The TLS listener on the Gateway that a TLSRoute attaches to can be configured in one of two modes: Passthrough offers strict security for situations requiring end-to-end encryption, while Terminate lets the Gateway handle decryption. It's important to note that if you've been using Experimental versions of TLSRoute, your existing configurations won't be compatible with v1.5. You'll need to either continue with the Experimental channel or migrate to v1 resources to align with the Standard YAMLs.

Passthrough Functionality

Designed with strict security use cases in mind, Passthrough mode is ideal when traffic needs to stay encrypted until it reaches its intended backend. This is particularly relevant when the external client and backend must authenticate directly with one another, or when storing certificates on the Gateway is not feasible. Essentially, this mode proxies the encrypted byte stream straight to its destination without the Gateway gaining access to any private keys or unencrypted data. A TLSRoute attached to a Passthrough-configured listener can, for example, match TLS handshakes for the hostname foo.example.com and forward the still-encrypted TCP stream to the backend.
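As an illustrative sketch of that setup (the listener sectionName tls-passthrough, route name, and backend service foo-svc are assumed names for this example, not taken from the release notes):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: foo-route
spec:
  parentRefs:
  - name: example-gateway
    # Assumed name of a listener configured with tls.mode: Passthrough
    sectionName: tls-passthrough
  hostnames:
  - "foo.example.com"   # Matched against the SNI in the client's TLS handshake
  rules:
  - backendRefs:
    - name: foo-svc     # The encrypted byte stream is proxied here untouched
      port: 8443
```

Because the Gateway never terminates the session, the backend must present its own certificate for foo.example.com and handle decryption itself.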

Understanding Terminate Mode

Terminate mode simplifies TLS certificate management by handling it at the Gateway level. With this approach, the Gateway decrypts the TLS sessions, allowing the plain text traffic to flow through to the backend services.

Take a look at the TLSRoute example designed for a listener configured to operate in Terminate mode. This configuration specifically targets TLS handshakes that occur with the bar.example.com SNI hostname, ensuring that routing rules are only applied to the appropriate traffic:

apiVersion: gateway.networking.k8s.io/v1
kind: TLSRoute
metadata:
  name: bar-route
spec:
  parentRefs:
  - name: example-gateway
    sectionName: tls-terminate
  hostnames:
  - "bar.example.com"
  rules:
  - backendRefs:
    - name: bar-svc
      port: 8080

Implementing CORS with HTTPRoute

This feature was a collaborative effort, with contributions from developers including Damian Sawicki and Ricardo Pchevuzinske Katz. For further details, refer to GEP-1767.

Cross-Origin Resource Sharing (CORS) is essential for managing which domains can interact with resources served from your server. The HTTPRoute resource simplifies this configuration, allowing precise control over access. The example below shows an HTTPRoute set up to permit requests from the specified origin https://app.example:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /cors-behavior-creds-false
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - type: CORS
      cors:
        allowOrigins:
        - https://app.example

If you prefer not to enumerate explicit origins, a wildcard ("*") permits access from any origin. Partially wildcarded origins are also valid, such as https://*.bar.com, which admits any subdomain while maintaining some level of specificity.
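A wildcarded-origin filter might be sketched like this (the route name, path, and backend here are assumptions for illustration, mirroring the earlier example):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cors-wildcard        # Assumed name for this sketch
spec:
  parentRefs:
  - name: same-namespace
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /wildcard-origins
    backendRefs:
    - name: infra-backend-v1
      port: 8080
    filters:
    - type: CORS
      cors:
        allowOrigins:
        - "https://*.bar.com"   # Any subdomain of bar.com over HTTPS
```

Keep wildcard patterns as narrow as your use case allows; every subdomain matched by the pattern gains cross-origin access to the route.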

Final Insights into Gateway API Configurations

As we wrap up our exploration of the Gateway API, it's clear that its approach to configuration—especially concerning CORS and client certificate validation—marks a significant shift toward better security and flexibility in microservices architectures. This is more than a technical update; it reflects a growing recognition of the intricate challenges posed by modern web interactions.

The fine-grained control available through HTTPRoute filters for CORS highlights how the API enables precise handling of cross-origin requests. The documentation covers a range of options, from `allowCredentials` to `maxAge`, but each setting carries implications for usability and security. If you're managing these configurations, consider the specific needs of your applications and the exposure each setting might incur; it's not enough to apply a configuration without a solid understanding of its operational context. Enabling credentials in CORS, for instance, opens up a new dimension of security considerations: you must rigorously validate any origins you allow, as a misconfiguration can lead to significant vulnerabilities.

Meanwhile, the introduction of mutual TLS (mTLS) in the Gateway API is a game-changer for environments with high-security needs. By requiring client certificates, it strengthens assurance in data exchange between services, and the shift toward frontend mTLS enables a level of trust that traditional TLS alone can't provide. The flexibility of the validation modes does introduce a potential pitfall, though: while `AllowInsecureFallback` may seem convenient, it can lead to lax security practices if not managed judiciously. The Gateway API developers are paving the way for a more secure microservices framework, but it relies heavily on practitioners embracing these changes thoughtfully.
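To make the frontend mTLS idea concrete, here is a rough sketch of a Gateway listener that validates client certificates. The field layout follows the Gateway API's frontend validation design (GEP-91); the Gateway name, Secret, and ConfigMap names are assumptions for this example:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: mtls-gateway             # Assumed name for this sketch
spec:
  gatewayClassName: example-gateway-class
  listeners:
  - name: https-mtls
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: server-cert        # Assumed Secret holding the server certificate
      # Frontend (client) certificate validation, per GEP-91:
      # clients must present a certificate signed by the trusted CA bundle.
      frontendValidation:
        caCertificateRefs:
        - kind: ConfigMap
          name: client-ca        # Assumed ConfigMap with the trusted client CA bundle
```

With a configuration along these lines, connections without a valid client certificate are rejected at the Gateway, before any traffic reaches the backend.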
As you move ahead in implementing these configurations, weigh the benefits against the risks to ensure robust architecture that not only meets functional requirements but also sustains a high bar for security. The future of the Gateway API appears promising, with ongoing updates likely to refine these capabilities further. Keeping an eye on community feedback and evolving best practices will be essential. It’s an exciting time for developers navigating these new waters.