
I've configured a Kubernetes cluster as follows:

  • Webapp pod (with a Vue.js front-end and an API, each in its own container)
  • Nginx ingress config (with default-http-backend)
  • Database pod (which doesn't seem to be the problem here)
  • Kube lego (for SSL, in a separate namespace)

Anyway, after I finished the setup, the front-end app (i.e. Vue.js) didn't load any styles, only plain HTML + JS. In Firefox's Network tab I saw a "502" error for the CSS file.

Just for context, this is my Vue.js app's Dockerfile:

FROM node:lts-alpine

RUN npm install http-server -g

WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
RUN npm run build

EXPOSE 8000
CMD [ "http-server", "dist", "-c-1", "-p", "8000" ]

And here is the Nginx controller's log (from kubectl logs [nginx-controller-pod]): https://pastebin.com/tBfPXJns (I couldn't post it here because it was flagged as spam).

Most of the time, only the CSS and .png requests return 502, while all JS requests reach the front-end's server.

My Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"    
    nginx.ingress.kubernetes.io/proxy-body-size: 200m                        
    nginx.ingress.kubernetes.io/rewrite-target: /    
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header 'Access-Control-Expose-Headers' 'access-token,expiry,token-type,uid,client,Access-Token,Expiry,Token-Type,Uid,Client';                                                                

spec:
  tls:
    - hosts:
        ~all-hosts~
      secretName: birthplace-ssl        
  rules:
    - host: api.example.com.br
      http:
        paths:         
         - path: /
           backend:
             serviceName: example-backend-service
             servicePort: 9292         
    - host: example.com.br
      http: &default
        paths:      
          - path: /
            backend:
              serviceName: example-frontend-service
              servicePort: 8000             
    - host: painel.example.com
      http: *default        
    - host: admin.example.com
      http: *default

My deployment YAML is properly configured for both services (i.e. using ports 8000 and 9292).

Weirdly though, I can access any of these assets with a normal (external) GET request.

P.S. In the log:

10.24.0.40 is default-http-backend's cluster IP.

10.24.1.3 is my webapp's IP.

  • maybe the rewrite is messing it up? – 4c74356b41 Feb 19 '19 at 07:47
  • No. With or without, same result. – jefersonhuan Feb 19 '19 at 08:23
  • ok, define "Most of the times, only the CSS and .png requests return 502". so its working sometimes? – 4c74356b41 Feb 19 '19 at 08:25
  • Yes, and that boggles my mind; but I found [this comment](https://github.com/kubernetes/ingress-nginx/issues/1120#issuecomment-418206748), and using the nginx.ingress.kubernetes.io/service-upstream: "true" annotation seems to solve the problem. I'll keep testing and update the question if it actually works after some use. – jefersonhuan Feb 19 '19 at 18:20
  • Same issue for me. Small React app with 1 route packaged with nginx deployed on minikube. / request is always successful, assets sometimes not (~40%). Sometimes css, sometimes js resources fail causing the app to not start correctly. Already tried a custom health check. The solution from the linked thread didn't work for me. – Can May 06 '19 at 11:50
  • @jefersonhuan the mentioned annotation definitely resolved your issue? – Mr.KoopaKiller Nov 03 '20 at 08:50
  • @KoopaKiller, I didn't keep this cluster for too long, but yes, this annotation did solve the problem. – jefersonhuan Nov 04 '20 at 20:31

1 Answer


As mentioned in comments, you should use the annotation nginx.ingress.kubernetes.io/service-upstream: "true":

From the nginx-ingress documentation:

Service Upstream

By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257.
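Applied to the manifest in the question, the annotation goes alongside the existing ones in the Ingress metadata. This is a sketch based on the question's own manifest (the name, annotations, and behavior of `service-upstream` are from the ingress-nginx documentation; everything else is copied from the question):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
    # Proxy to the Service's ClusterIP instead of individual Pod endpoints,
    # letting kube-proxy handle the Pod-level routing.
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    nginx.ingress.kubernetes.io/rewrite-target: /
```

After applying it, the NGINX upstream block should contain a single server (the Service's ClusterIP) rather than one entry per Pod, so a stale or flapping Pod endpoint can no longer produce intermittent 502s.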

Here is a GitHub issue with a valid, working config.