
I have a Kubernetes cluster with an external load balancer running NGINX on a self-hosted server. I tried to activate proxy_protocol in order to get the real IP of clients, but the NGINX logs now show errors like:

2020/05/11 14:57:54 [error] 29614#29614: *1325 broken header: "▒▒▒▒▒▒▒Ωߑa"5▒li<c▒*▒ ▒▒▒s▒       ▒6▒▒▒▒▒X▒▒o▒▒▒E▒▒i▒{ ▒/▒0▒+▒,̨̩▒▒ ▒▒
▒▒/5▒" while reading PROXY protocol, client: 51.178.168.233, server: 0.0.0.0:443

Here is my NGINX configuration file:

worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {

    upstream rancher_servers_http {
        least_conn;
        server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
        server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
        server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     80;
        proxy_protocol on;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
        server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
        server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen     443 ssl proxy_protocol;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;
        proxy_pass rancher_servers_https;
        proxy_protocol on;
    }
}

Here is my configmap for the ingress-controller:

apiVersion: v1
data:
  compute-full-forwarded-for: "true"
  proxy-body-size: 500M
  proxy-protocol: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"ingress-nginx"},"name":"nginx-configuration","namespace":"ingress-nginx"}}'
  creationTimestamp: "2019-12-09T13:26:59Z"
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx

Everything was working fine before I added the proxy_protocol directive, but now I get all these broken header errors and I can't reach any service behind the ingresses without getting a connection reset error.

What could be wrong with my config?

Should I use an HTTP reverse proxy instead of a TCP reverse proxy?

Thank you.


Edit:

I should also say that I don't have any Service of type LoadBalancer in my cluster. Should I have one? I'm thinking of MetalLB, but I'm not sure what it would add to my configuration since I'm already load balancing to the nodes with NGINX.

MHogge

1 Answer


Nginx allows you to specify whether to use the proxy protocol on incoming or on outgoing connections, and you're confusing the two.

To use proxy_protocol in incoming connections, you have to add proxy_protocol to the listen line like this:

listen     443 ssl proxy_protocol;

To use proxy_protocol in outgoing connections, you have to use the standalone proxy_protocol directive, like this:

proxy_protocol on;

They are not the same. On a load balancer, incoming connections come from browsers, which do not speak the PROXY protocol. You want the proxy protocol only on your outgoing connections, to the nginx-ingress in your Kubernetes cluster.

Therefore, remove the proxy_protocol argument from the listen directive, and it should work.
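Applied to your posted config, the HTTPS server block would become the following (the HTTP block on port 80 gets the same treatment: keep the standalone proxy_protocol directive, don't put proxy_protocol on the listen line):

```nginx
server {
    # no proxy_protocol here: browsers don't send a PROXY header
    listen     443 ssl;
    ssl_certificate /certs/fullchain.pem;
    ssl_certificate_key /certs/privkey.pem;
    proxy_pass rancher_servers_https;
    # send the PROXY header on the outgoing connection to nginx-ingress
    proxy_protocol on;
}
```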

Additionally, you want use-forwarded-headers: "false" in your nginx-ingress config. That setting controls whether nginx-ingress trusts the X-Forwarded-For and related headers on incoming connections (incoming from the point of view of nginx-ingress, i.e. outgoing from your load balancer), and you're already passing the client IP via the proxy protocol instead of those headers. With it enabled, your users may be able to spoof IPs by setting X-Forwarded-For themselves, which can be a security issue (though only if nginx-ingress gives the headers priority over the proxy protocol, which I'm not sure about).
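A sketch of the resulting ConfigMap, based on the one you posted (only use-forwarded-headers changes):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  proxy-body-size: 500M
  proxy-protocol: "true"
  # trust only the PROXY protocol, not client-supplied headers
  use-forwarded-headers: "false"
```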

An aside: nginx-ingress itself already load-balances traffic between all pods, so with your architecture you're running two "layers" of load balancers, which is probably unnecessary. If you want to simplify, force nginx-ingress to run on a single node (with a nodeSelector, for example) and simply send all your traffic to that node. If you want to keep the load balancer on a dedicated machine, you can join that fourth machine to the cluster and make sure it runs only nginx-ingress (with taints and tolerations).
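As a sketch of the dedicated-node approach, the nginx-ingress pod spec could carry something like this (the label key/value and taint name are assumptions, adjust them to whatever you label and taint the node with):

```yaml
# In the nginx-ingress controller's pod template spec (names are illustrative):
spec:
  nodeSelector:
    role: ingress          # schedule only on the node labeled role=ingress
  tolerations:
    - key: dedicated       # tolerate the taint keeping other pods off that node
      value: ingress
      effect: NoSchedule
```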

Also, make sure you're running nginx-ingress with hostNetwork: true, otherwise you may have yet another layer of balancing in the path (kube-proxy, the Kubernetes service proxy).
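In the controller's Deployment or DaemonSet, that looks roughly like this:

```yaml
# Pod template spec of the nginx-ingress controller (sketch):
spec:
  template:
    spec:
      hostNetwork: true
      # with hostNetwork, this keeps cluster DNS resolution working in the pod
      dnsPolicy: ClusterFirstWithHostNet
```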

Dirbaio