
I have an Ingress that routes to a custom endpoint external to the Kubernetes cluster. The backend service listens only on HTTPS, on port 8006.

apiVersion: v1
kind: Service
metadata:
  name: pve
spec:
  ports:
    - protocol: TCP
      port: 8006
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pve
subsets:
  - addresses:
      - ip: 10.0.1.2
    ports:
      - port: 8006
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: pve
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16"
spec:
  tls:
    - hosts:
        - pve.example.com
      secretName: pve-tls
  rules:
    - host: pve.example.com
      http:
        paths:
          - backend:
              serviceName: pve
              servicePort: 8006
            path: /

This gives the following error in the nginx pod:

10.0.0.25 - - [28/Aug/2020:01:17:58 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0" "-"

2020/08/28 01:17:58 [error] 2609#2609: *569 upstream prematurely closed connection while reading response header from upstream, client: 10.0.0.25, server: pve.example.com, request: "GET / HTTP/1.1", upstream: "http://10.0.1.2:8006/", host: "pve.example.com"

Edit:

After removing the proxy protocol, I get this error:

10.0.10.1 - - [28/Aug/2020:02:19:18 +0000] "GET / HTTP/1.1" 400 59 "-" "curl/7.58.0" "-"

2020/08/28 02:19:26 [error] 2504#2504: *521 upstream prematurely closed connection while reading response header from upstream, client: 10.0.10.1, server: pve.example.com, request: "GET / HTTP/1.1", upstream: "http://10.0.1.2:8006/", host: "pve.example.com"

10.0.10.1 - - [28/Aug/2020:02:19:26 +0000] "GET / HTTP/1.1" 502 157 "-" "curl/7.58.0" "-"


And in case it is relevant, here is my nginx configuration, deployed through the Helm chart nginx-stable/nginx-ingress:

  ## nginx configuration
  ## Ref: https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md
  ##
  controller:
    config:
      entries:
        hsts-include-subdomains: "false"
        ssl-ciphers: "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4"
        ssl-protocols: "TLSv1.1 TLSv1.2"
    ingressClass: nginx
    service:
      externalTrafficPolicy: Local
      annotations:
        metallb.universe.tf/address-pool: default
  defaultBackend:
    enabled: true
  tcp:
    22: "gitlab/gitlab-gitlab-shell:22"
cclloyd
  • This appears to be two separate and distinct problems, unrelated to each other. It would be best to post them as separate posts. I've answered _one_ of them below. I have no idea about the other. You can just edit it out of here and paste it into a new question. – Michael Hampton Aug 28 '20 at 01:50
  • @MichaelHampton I split up the questions, second one here https://serverfault.com/questions/1031810/400-error-with-nginx-ingress-to-kubernetes-dashboard – cclloyd Aug 28 '20 at 02:02
  • @cclloyd Could you share some more details regarding that `custom endpoint external to the kubernetes cluster`? – Wytrzymały Wiktor Sep 01 '20 at 11:03

2 Answers


This annotation is probably the cause of the problem.

    nginx.ingress.kubernetes.io/use-proxy-protocol: "true"

The docs state:

Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

If you don't have a load balancer in front of your Ingress which is passing connections in using the PROXY protocol, then this is not what you want, and this annotation should not be present (or should be "false").
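For example, a minimal sketch of turning this off controller-wide through the Helm values, mirroring the values layout shown in the question (`use-proxy-protocol` is the ConfigMap key documented for the community ingress-nginx controller; verify the equivalent key for the chart you are actually using):

  controller:
    config:
      entries:
        # Accept plain client connections directly; do not expect a PROXY
        # protocol header from a fronting load balancer. Key name assumed
        # from the community ingress-nginx ConfigMap options.
        use-proxy-protocol: "false"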

Michael Hampton

This is a community wiki answer. Feel free to expand it.

The error you see, `upstream prematurely closed connection while reading response header from upstream`, comes from Nginx and means that the connection was closed by your "upstream".

It's hard to say what the exact cause of this issue might be without the necessary details, but what you can do regardless is try increasing the timeout values described in the documentation, specifically:

  • proxy_read_timeout

  • proxy_connect_timeout

You can also adjust code/configuration on your "upstream", whatever it is in your use case.
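As a concrete sketch, the corresponding per-Ingress annotations for the community ingress-nginx controller would go under `metadata.annotations` of the `pve` Ingress from the question (values are in seconds; 30 and 120 are arbitrary starting points, not recommendations):

    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"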