
Envoy proxy as a Gateway API implementation in Azure Kubernetes Service (AKS)

This guide explains how to set up Envoy proxy as a Gateway API implementation in an Azure Kubernetes Service (AKS) cluster that runs in an "airtight" environment with OPA policies enforced. Since we are running in Azure and already have Load Balancers deployed for AKS, we will use them to expose the Envoy Gateway to users.

Introduction to Gateway API

The Gateway API is a set of resources for configuring networking in Kubernetes clusters, providing advanced routing capabilities. To learn more about Gateway API, visit the official documentation.

As can be seen in the Gateway API implementations list, Envoy proxy is one of the supported implementations of the Gateway API. Since the Envoy proxy implementation is a CNCF project, I decided to evaluate it among others.

Prerequisites

  • An AKS cluster up and running. Please check the official Microsoft documentation for creating one: Create an AKS cluster.
  • kubectl installed and configured to access the AKS cluster.
  • Helm installed for deploying Envoy proxy.
  • Since we are running an "airtight" AKS cluster, make sure the required container images are available in your private container registry (the list of images to import is given in Step 1).

Steps to deploy Envoy proxy as Gateway API implementation

Step 1: Prepare your private container registry

Make sure to pull the following container images into your private container registry (commands are given for Azure Container Registry - ACR):

az login
az account set --subscription <SUBSCRIPTION_ID>
az acr import --name <ACR_NAME> --image envoyproxy/envoy:distroless-v1.36.2 --source docker.io/envoyproxy/envoy:distroless-v1.36.2
az acr import --name <ACR_NAME> --image envoyproxy/gateway:v1.6.0 --source docker.io/envoyproxy/gateway:v1.6.0
az acr import --name <ACR_NAME> --image envoyproxy/ratelimit:99d85510 --source docker.io/envoyproxy/ratelimit:99d85510
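
To double-check that the imports succeeded, you can list the repositories and tags now present in the registry (same <ACR_NAME> placeholder as above):

az acr repository list --name <ACR_NAME> --output table
az acr repository show-tags --name <ACR_NAME> --repository envoyproxy/envoy --output table
az acr repository show-tags --name <ACR_NAME> --repository envoyproxy/gateway --output table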

Step 2: Install Gateway API CRDs

The Helm chart of Envoy proxy as a Gateway API implementation includes the Gateway API CRDs, but those CRDs come from the experimental channel of the Gateway API; since we want to run in production, it is better to install them from the standard channel. Apply the Gateway API Custom Resource Definitions (CRDs) to your AKS cluster:

helm template eg oci://docker.io/envoyproxy/gateway-crds-helm \
    --version v1.6.0 \
    --set crds.gatewayAPI.enabled=true \
    --set crds.gatewayAPI.channel=standard \
    --set crds.envoyGateway.enabled=true \
    | kubectl apply --server-side -f -

If you plan to download this chart and push it to your private container registry, note that it is not the same chart that installs Envoy proxy itself (gateway-crds-helm vs. gateway-helm).

One more remark: the command above installs both the Gateway API CRDs and the Envoy Gateway CRDs, so we have to skip CRD installation when installing Envoy proxy itself (hence the --skip-crds flag in Step 3.3).
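
To confirm the CRDs are in place, you can check that both the Gateway API and the Envoy Gateway API groups are registered in the cluster (the exact list of CRDs depends on the chart version):

kubectl get crds | grep -E 'gateway.networking.k8s.io|gateway.envoyproxy.io'
kubectl api-resources --api-group=gateway.networking.k8s.io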

Step 3: Deploy Envoy proxy as Gateway API implementation

Step 3.1: Prepare `values-envoy.yaml` file for Helm chart

cat <<EOF > values-envoy.yaml
global:
  imageRegistry: <ACR_NAME>.azurecr.io

podDisruptionBudget:
  minAvailable: 0
  maxUnavailable: 1

deployment:
  annotations: {}
  envoyGateway:
    resources:
      limits:
        cpu: 1
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 256Mi
  replicas: 1
  pod:
    affinity: {}
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "19001"
    labels: {}
    topologySpreadConstraints: []
    tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
    nodeSelector:
      fantastic-team-nodes/purpose: critical-addons

service:
  type: "ClusterIP"

config:
  # -- EnvoyGateway configuration. Visit https://gateway.envoyproxy.io/docs/api/extension_types/#envoygateway to view all options.
  envoyGateway:
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    provider:
      type: Kubernetes
    logging:
      level:
        default: debug
    extensionApis: {}

certgen:
  job:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: "1"
        memory: 1Gi
    tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
    nodeSelector:
      fantastic-team-nodes/purpose: critical-addons
EOF
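
If you want to see which other knobs the chart exposes before deploying, you can dump its default values and compare them with the file above. This assumes the public OCI registry is reachable from your workstation; otherwise point the command at the copy of the chart in your private registry:

helm show values oci://docker.io/envoyproxy/gateway-helm --version v1.6.0 > values-default.yaml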

Step 3.2: Create a namespace for Envoy proxy and deploy it

I had some issues with Helm's --create-namespace argument, so I suggest creating a namespace for Envoy proxy in advance:

kubectl create namespace envoy-gateway

Step 3.3: Deploy Envoy proxy using Helm

Now, deploy Envoy proxy using Helm:

helm upgrade --install envoy-gateway gateway-helm/ -n envoy-gateway --skip-crds -f values-envoy.yaml 
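
The command above points Helm at a local gateway-helm/ directory rather than at the OCI registry, which fits an "airtight" setup where the chart has been downloaded in advance. A minimal sketch of fetching and unpacking it, to be run wherever docker.io (or your private registry mirror) is reachable:

# Creates a local gateway-helm/ directory next to values-envoy.yaml
helm pull oci://docker.io/envoyproxy/gateway-helm --version v1.6.0 --untar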

Step 4: Verify the installation

kubectl get pods -n envoy-gateway
kubectl get deployments -n envoy-gateway
kubectl get svc -n envoy-gateway
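
If you prefer to block until the controller is up instead of re-running the commands above, kubectl wait can do that (assuming everything runs in the envoy-gateway namespace created earlier):

kubectl wait deployment --all -n envoy-gateway --for=condition=Available --timeout=120s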

If you see a log message similar to the one below, Envoy proxy is successfully deployed but still missing a GatewayClass:

# kubectl logs -l app.kubernetes.io/name=envoy-gateway -n envoy-gateway
...
2025-12-03T13:07:20.218Z    INFO    provider    kubernetes/controller.go:326    no accepted gatewayclass    {"runner": "provider"}
...

Step 5: Create GatewayClass, EnvoyProxy

These resources are required for Envoy proxy to start working and allow us to fine-tune the Envoy Proxy instances.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: config
    namespace: envoy-gateway
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: config
  namespace: envoy-gateway
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        patch:
          type: StrategicMerge
          value:
            spec:
              template:
                spec:
                  containers:
                    - name: shutdown-manager
                      resources:
                        limits:
                          cpu: 100m
                          memory: 1024Mi
                        requests:
                          cpu: 100m
                          memory: 1024Mi
        replicas: 1
        container:
          image: <ACR_NAME>.azurecr.io/envoyproxy/envoy:distroless-v1.36.2
          resources:
            requests:
              cpu: 150m
              memory: 640Mi
            limits:
              cpu: 500m
              memory: 1Gi
        pod:
          tolerations:
            - key: CriticalAddonsOnly
              operator: Exists
          nodeSelector:
            fantastic-team-nodes/purpose: critical-addons
      envoyService:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
          service.beta.kubernetes.io/azure-load-balancer-internal-subnet: <SUBNET_NAME>
        type: LoadBalancer
EOF

Verify that the GatewayClass and EnvoyProxy resources are created:

# kubectl get gatewayclass envoy
NAME    CONTROLLER                                      ACCEPTED   AGE
envoy   gateway.envoyproxy.io/gatewayclass-controller   True       4h30m
# kubectl get envoyproxy -n envoy-gateway
NAME     AGE
config   3h56m
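
If ACCEPTED ever shows False, the status conditions usually explain why, for example a parametersRef pointing at a namespace where the EnvoyProxy resource does not exist:

kubectl describe gatewayclass envoy
kubectl get gatewayclass envoy -o jsonpath='{.status.conditions}'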

Step 6: Create a Gateway resource

So far we have deployed Envoy proxy as a Gateway API implementation. Now it's time to create a Gateway resource that will allow us to expose our services. If you check the EnvoyProxy resource created in the previous step, you will see that the envoyService is of type LoadBalancer and has an annotation for creating an internal Load Balancer in Azure. This means that Envoy proxy will be exposed via an internal Load Balancer.

cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main
  namespace: envoy-gateway
spec:
  gatewayClassName: envoy
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All
    - name: https
      port: 443
      protocol: HTTPS
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: envoy-tls-cert # Replace with your TLS secret name
      allowedRoutes:
        namespaces:
          from: All
  infrastructure:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true" # Since these are already set in EnvoyProxy, they can be omitted here
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: <SUBNET_NAME>  # Since these are already set in EnvoyProxy, they can be omitted here
EOF

It is worth mentioning that the Gateway above will accept HTTPRoute resources from all namespaces. You can restrict this to specific namespaces if needed. Another point to consider is that the TLS certificate used for HTTPS termination must be created in advance as a Kubernetes Secret in the same namespace where the Gateway resource is created and must be of type kubernetes.io/tls.
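
For reference, here is a sketch of both points. The TLS Secret can be created from existing PEM files (tls.crt and tls.key are placeholders for your own certificate and key), and the https listener can be narrowed down from All namespaces to namespaces carrying a label of your choosing (gateway-access: allowed is just an example label):

kubectl create secret tls envoy-tls-cert -n envoy-gateway --cert=tls.crt --key=tls.key

The namespace restriction would replace the allowedRoutes block of the corresponding listener in the Gateway above:

      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: allowed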

Verify that the Gateway resource is successfully created:

# kubectl get gateway -n envoy-gateway
NAME   CLASS   ADDRESS        PROGRAMMED   AGE
main   envoy   10.80.126.32   True         4h32m

Step 7: Create HTTPRoute resources to expose services

Now that we have the Gateway resource set up, we can create some HTTPRoute resources to expose our services through Envoy proxy. Below is an example of HTTPRoute resources that redirect HTTP traffic to HTTPS and route traffic to the backend service (I tested it with ArgoCD).

cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: system-argocd-server-redirect-ga
  namespace: gitops
  annotations:
    external-dns.alpha.kubernetes.io/hostname: argocd-long-ga-envoy.company.com
spec:
  hostnames:
    - argocd-long-ga-envoy.company.com
  parentRefs:
    - name: main
      namespace: envoy-gateway
      sectionName: http
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      filters:
        - type: RequestRedirect
          requestRedirect:
            hostname: argocd-long-ga-envoy.company.com
            scheme: "https"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: system-argocd-server-ga-envoy
  namespace: gitops
  annotations:
    external-dns.alpha.kubernetes.io/hostname: argocd-long-ga-envoy.company.com
spec:
  hostnames:
    - argocd-long-ga-envoy.company.com
  parentRefs:
    - name: main
      namespace: envoy-gateway
      sectionName: https
  rules:
    - backendRefs:
        - name: system-argocd-server
          port: 80
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Check that the HTTPRoute resources are created successfully:

# kubectl get httproute system-argocd-server-ga-envoy -n gitops -o yaml
...
status:
  parents:
  - conditions:
    - lastTransitionTime: "2025-12-03T15:00:43Z"
      message: Route is accepted
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2025-12-03T15:00:43Z"
      message: Resolved all the Object references for the Route
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    controllerName: gateway.envoyproxy.io/gatewayclass-controller
    parentRef:
      group: gateway.networking.k8s.io
      kind: Gateway
      name: main
      namespace: envoy-gateway
      sectionName: https
...
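
For a quicker overview across routes, the plain listing is usually enough; the HOSTNAMES column comes from the Gateway API CRD printer columns:

kubectl get httproute -n gitops
kubectl get httproute -A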

Final testing

For testing purposes, you can use the curl command with the `--connect-to` argument to point the hostname at the internal Load Balancer IP address assigned to the Envoy proxy Gateway service.

# curl -vLk --connect-to argocd-long-ga-envoy.company.com:80:<IP_FROM_GATEWAY>:80  http://argocd-long-ga-envoy.company.com
* Connecting to hostname: <IP_FROM_GATEWAY>
* Connecting to port: 80
*   Trying <IP_FROM_GATEWAY>:80...
* Connected to <IP_FROM_GATEWAY> (<IP_FROM_GATEWAY>) port 80
> GET / HTTP/1.1
> Host: argocd-long-ga-envoy.company.com
> User-Agent: curl/8.7.1
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 302 Found
< location: https://argocd-long-ga-envoy.company.com/
< date: Wed, 03 Dec 2025 18:42:52 GMT
< content-length: 0
< 
* Ignoring the response-body
* Connection #0 to host <IP_FROM_GATEWAY> left intact
* Clear auth, redirects to port from 80 to 443
* Issue another request to this URL: 'https://argocd-long-ga-envoy.company.com/'
* Host argocd-long-ga-envoy.company.com:443 was resolved.
* IPv6: (none)
* IPv4: <IP_FROM_GATEWAY>
*   Trying <IP_FROM_GATEWAY>:443...
* Connected to argocd-long-ga-envoy.company.com (<IP_FROM_GATEWAY>) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
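
The same --connect-to trick works against the HTTPS listener directly, which is handy before the DNS records exist. Again, <IP_FROM_GATEWAY> is the internal Load Balancer address shown in the Gateway status:

curl -vk --connect-to argocd-long-ga-envoy.company.com:443:<IP_FROM_GATEWAY>:443 https://argocd-long-ga-envoy.company.com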

Conclusion

In this guide, we have successfully deployed Envoy proxy as a Gateway API implementation in an Azure Kubernetes Service (AKS) cluster running in an "airtight" environment. We have also created Gateway and HTTPRoute resources to expose services through Envoy proxy. You can now leverage the advanced routing capabilities of Gateway API with Envoy proxy in your AKS cluster. For production deployments, make sure to fine-tune the configurations and security settings like TLS, namespace filtering, etc., according to your requirements.

Some remarks regarding the Helm chart used for deploying Envoy proxy: I would like to see the Gateway API CRDs installed from the standard channel instead of the experimental channel, as well as the GatewayClass and EnvoyProxy resources deployed as part of the Helm chart installation. This would simplify the deployment process, ensure that the latest stable versions of these resources are used, and make it easier to get started. I also have to compliment the chart maintainers for their very fast responses to issues on GitHub.