
I have a Kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:

  • Kubernetes service address range: --service-cluster-ip-range=172.16.0.1/16

  • flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}

  • docker subnet setting: --bip=10.0.0.1/24

  • Host node IP: 192.168.4.57

I've got the nginx service running and I've tried to expose it like so:

[root@kubemaster ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-px6uy   1/1       Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m

and then I exposed the service like this:

kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME      LABELS      SELECTOR    IP(S)     PORT(S)    AGE
nginx     run=nginx   run=nginx             9000/TCP   292y

I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(

Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?

jaxxstorm

3 Answers


You don't have to use NodePort, and you don't have to use an external load balancer. Just dedicate some of your cluster nodes to be load balancer nodes: put them in a separate node group, give them a label such as mynodelabel/ingress: nginx, and then host an nginx ingress DaemonSet on that node group.

The most important options are:

spec:
  restartPolicy: Always
  dnsPolicy: ClusterFirst
  hostNetwork: true
  nodeSelector:
    mynodelabel/ingress: nginx

and

      ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443

Optionally, you can taint your load balancer nodes so that regular pods aren't scheduled on them and don't slow nginx down.
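For illustration, a minimal sketch of a DaemonSet pulling those options together might look like the following; the names, taint key, and image are assumptions (substitute your actual nginx ingress controller image and its arguments):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      hostNetwork: true              # bind directly to the node's network stack
      nodeSelector:
        mynodelabel/ingress: nginx   # only run on the dedicated load balancer nodes
      tolerations:                   # assumed taint; pairs with the kubectl taint below
        - key: dedicated
          value: ingress
          effect: NoSchedule
      containers:
        - name: nginx-ingress
          image: nginx               # placeholder; use your ingress controller image
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443

The optional taint could then be applied with, for example:

kubectl taint nodes <node-name> dedicated=ingress:NoSchedule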

cohadar

I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(

Expect to reach the pod on hostIP:NodePort; you can find the node port of a service with:

kubectl get svc nginx --template '{{range .spec.ports}}{{.nodePort}}{{end}}'
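For example, with the nginx service and host node IP from the question, that might look like this (the allocated port below is made up; by default NodePorts come from the 30000-32767 range):

[root@kubemaster ~]# kubectl get svc nginx --template '{{range .spec.ports}}{{.nodePort}}{{end}}'
30090
[root@kubemaster ~]# curl http://192.168.4.57:30090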

Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?

You can deploy an ingress controller such as: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx or https://github.com/kubernetes/contrib/tree/master/service-loadbalancer

beeps

A NodePort service is the most common solution for a small/local bare-metal cluster. The same port is opened on every node that runs kube-proxy (i.e. probably not your master, but all of the worker nodes where your pods run).
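Written out as a manifest, the equivalent of the kubectl expose command from the question might look like this sketch (the explicit nodePort value is an assumption; leave it out and Kubernetes will allocate one from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
    - port: 9000        # cluster-internal service port
      targetPort: 9000  # port on the pod's container
      nodePort: 30090   # assumed value; must fall within the NodePort range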

There is also some contrib (and not entirely obvious) code that acts like a LoadBalancer for smaller networks, so if you want to use type: LoadBalancer locally as well as in the cloud, you can get roughly equivalent mechanics if that's important to you.

Ingress controllers become significantly more useful than NodePorts when you want to mix and match services (specifically HTTP services) exposed from your cluster on port 80 or 443. They are built specifically to support more than one service through a single endpoint (and potentially a single port, mapped to separate URI paths or the like). Ingress controllers don't help so much when the access you want isn't HTTP-based (for example, a socket-based service such as Redis or MongoDB, or maybe something custom you are doing).
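As an illustration of that path-based fan-out, a minimal Ingress resource routing two HTTP services through one endpoint might look like the sketch below; the hostname, the second (api) service, and its port are made up for the example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
    - host: apps.example.com        # assumed hostname
      http:
        paths:
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: nginx         # the service from the question
                port:
                  number: 9000
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api           # hypothetical second service
                port:
                  number: 8080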

If you're integrating this into an internal IT project, many commercial load balancer vendors recommend fronting the NodePort configuration with their own load balancer technology, referencing the pool of all worker nodes in that setup. F5 has a reasonable example of this in their documentation.

Joe Heck