
I have this YAML for an Ingress:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: app
  namespace: ingress-controller
... omitted for brevity ...
spec:
  rules:
    - host: ifs-alpha-kube-001.example.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: service-nodeport
              servicePort: 80
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: service-nodeport
              servicePort: 443
status:
  loadBalancer:
    ingress:
      - {}

In the above I set ...

    - host: ifs-alpha-kube-001.example.com

That host just happens to be one of my nodes. I have three nodes. I am pretty certain this is incorrect. The ingress works, but if I shut down ifs-alpha-kube-001 the ingress stops working. What should I set host to if I want a high-availability cluster?

Thanks

Update: I tried out duct_tape_coder's suggestion but I still must be doing something wrong.

I need to be able to access the web servers on both port 80 and 443, so I created two "single service" ingresses.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: port-80-ingress
  namespace: ingress-controller
spec:
  backend:
    serviceName: port-80-service
    servicePort: 80

... and ...

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: port-443-ingress
  namespace: ingress-controller
spec:
  backend:
    serviceName: port-443-service
    servicePort: 443

And I deleted my old ingress. But I am still only able to access the web server on my first node, ifs-alpha-kube-001, and not on ifs-alpha-kube-002 or ifs-alpha-kube-003. I verified that my web server is running on the pods.
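
For reference, a command like this lists each pod together with the node it is running on (a Service's selector only matches pods in its own namespace, so the httpd pods must be in ingress-controller too):

$ kubectl get pods --namespace=ingress-controller -o wide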

Update II:

OK, I tried this instead:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: app2
  namespace: ingress-controller

... omitted ...

spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: service-nodeport
              servicePort: 80
          - path: /
            pathType: ImplementationSpecific
            backend:
              serviceName: service-nodeport
              servicePort: 443
status:
  loadBalancer:
    ingress:
      - {}
$ kubectl describe ingress app2 --namespace=ingress-controller
Name:             app2
Namespace:        ingress-controller
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /   service-nodeport:80 (10.233.119.22:80,10.233.123.33:80,10.233.125.29:80)
              /   service-nodeport:443 (10.233.119.22:443,10.233.123.33:443,10.233.125.29:443)
Annotations:
Events:
  Type        Reason  Age   From                Message
  ----        ------  ----  ----                -------
  Normal      CREATE  13m   ingress-controller  Ingress ingress-controller/app2
  Normal      CREATE  13m   ingress-controller  Ingress ingress-controller/app2
  Normal      UPDATE  12m   ingress-controller  Ingress ingress-controller/app2
  Normal      UPDATE  12m   ingress-controller  Ingress ingress-controller/app2

And I deleted all other ingresses. But I can still only access HTTP on host ifs-alpha-kube-001, with a weird twist: if I execute:

curl -L --insecure https://ifs-alpha-kube-001.example.com -vvvv

I get a ton of output about redirection.

> GET / HTTP/2
> Host: ifs-alpha-kube-001
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/2 302
< date: Tue, 23 Jun 2020 15:52:28 GMT
< server: Apache/2.4.39 (Unix) OpenSSL/1.0.2k-fips mod_wsgi/4.7.1 Python/3.6
< location: https://ifs-alpha-kube-001/
< content-length: 211
< content-type: text/html; charset=iso-8859-1
< strict-transport-security: max-age=15768000
<
* Ignoring the response-body
* Connection #1 to host ifs-alpha-kube-001 left intact
* Maximum (50) redirects followed
curl: (47) Maximum (50) redirects followed
* Closing connection 0
* Closing connection 1

What is going on here?

Update III:

Here are the services I have set up:

$ kubectl get service --namespace=ingress-controller  -o wide
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                     AGE     SELECTOR
haproxy-ingress           NodePort    10.233.23.21   <none>        80:30032/TCP,443:30643/TCP,1936:30302/TCP   6d4h    run=haproxy-ingress
ingress-default-backend   ClusterIP   10.233.5.224   <none>        8080/TCP                                    6d5h    run=ingress-default-backend
service-nodeport          NodePort    10.233.3.139   <none>        80:30080/TCP,443:30443/TCP                  5d18h   k8s-app=test-caasa-httpd,pod-template-hash=7d79794567

I believe I have tied the service-nodeport service to my ingress app2.
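
A command like the following should confirm that; if the wiring is right, the endpoints listed for service-nodeport will be the same pod IPs that kubectl describe ingress showed above:

$ kubectl get endpoints service-nodeport --namespace=ingress-controller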

Red Cricket

1 Answer


Don't set a host and the rule will apply to all hosts; see https://kubernetes.io/docs/concepts/services-networking/ingress/. You can also designate a hostPort for it to be available on a specific port on all nodes. I would recommend doing that, then using an external load balancer/proxy to hit the ingress hostPort on all nodes.
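
As a rough sketch of the hostPort approach (illustrative only; the DaemonSet name, labels, and image below are hypothetical placeholders, not taken from the question):

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: ingress-controller
  namespace: ingress-controller
spec:
  selector:
    matchLabels:
      run: ingress-controller
  template:
    metadata:
      labels:
        run: ingress-controller
    spec:
      containers:
        - name: ingress-controller
          image: example/ingress-controller:latest   # hypothetical image
          ports:
            - name: http
              containerPort: 80
              hostPort: 80    # bound on every node's port 80
            - name: https
              containerPort: 443
              hostPort: 443   # bound on every node's port 443

Running the controller as a DaemonSet puts one copy on every node, so an external load balancer can target ports 80/443 on all three nodes and survive any one of them going down.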

duct_tape_coder
  • Would my cluster need to have a "cloud provider" set up? – Red Cricket Jun 23 '20 at 02:48
  • @RedCricket No, but if you are running Kubernetes on your own infrastructure you will need to provide your own ingress controller. See the link given in the answer for information. If you aren't getting the routing you desire, check `kubectl describe ingress` and see if everything looks like it should. – Michael Hampton Jun 23 '20 at 15:20
  • @duct_tape_coder I have looked at the link and tried the examples in there. I have updated my question could you please take a look. I am still doing something wrong. – Red Cricket Jun 23 '20 at 16:06
  • Do you have a `service` set up for your ingress? You should be accessing the service for the ingress. Also, `LoadBalancer` only works in cloud or if you have MetalLB set up. You'll want to use a NodePort instead with a high port (the default range for K8s is 30000-32767). Try to avoid well-known ports. Use an LB/RP external to the cluster to perform the port conversion. – duct_tape_coder Jun 23 '20 at 17:18
  • I think I do have a service for my ingress. Please see my updated post with the service information. – Red Cricket Jun 23 '20 at 20:55
  • So if you try to access `https://ifs-alpha-kube-002:30643` or `https://ifs-alpha-kube-001:30443` do you get a response? – duct_tape_coder Jun 24 '20 at 21:47
  • Thanks! I do get a response from `curl -L --insecure http://ifs-alpha-kube-002.cisco.com:30080` and `curl -L --insecure https://ifs-alpha-kube-001.cisco.com:30443`. Hmmm what does that mean? – Red Cricket Jun 25 '20 at 01:11
  • That's literally what you've set up. The node ports selected are 30080 and 30443. That means the nodes will respond on those ports for you. As I said earlier, what you should do now is set up a Reverse Proxy/Load Balancer external to your cluster which will redirect 80/443 requests to those NodePorts. – duct_tape_coder Jun 25 '20 at 01:23
  • The alternative to an external LB would be to switch from NodePort to the LoadBalancer type and use MetalLB on-prem, or if you're in a cloud they provide one natively (at cost). A sketch of that Service follows at the end of this thread. – duct_tape_coder Jun 25 '20 at 01:28
  • Another thing I have noticed is the `http://ifs-alpha-kube-001.example.com` seems to be acting as a load balancer as I get responses from the web servers like `It works! 001` or `It works! 002` and `It works! 003` where 001 comes from the pod running on ifs-alpha-kube-001 and 002 comes from the pod running on ifs-alpha-kube-002 etc. – Red Cricket Jun 25 '20 at 01:35
  • Oh boy, I think you need to read up more on how Ingress and Services are supposed to work. A Service is designed to redirect traffic through the cluster like a load balancer. When you hit the service, it'll balance/redirect traffic to nodes that have the pods, even if the node you hit doesn't have the pod locally. Every pod in the deployment should be available when you hit the service. – duct_tape_coder Jun 25 '20 at 01:43
  • So I don't need an ingress at all? According to the docs, an ingress is only useful for ports 80 and 443, and only useful for HA if the cluster has a cloud provider set up. Right? – Red Cricket Jun 25 '20 at 03:16
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/109849/discussion-between-duct-tape-coder-and-red-cricket). – duct_tape_coder Jun 25 '20 at 23:23
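
For reference, the MetalLB alternative mentioned in the last comments would amount to switching the controller's Service from NodePort to type LoadBalancer. A minimal sketch, reusing the run=haproxy-ingress selector and ports from the service listing in Update III (MetalLB itself, including its address pool, must be installed and configured separately):

kind: Service
apiVersion: v1
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  type: LoadBalancer   # MetalLB assigns an external IP on-prem
  selector:
    run: haproxy-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443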