
I have set up a working Kubernetes cluster using Rancher, which defines two networks:

  • 10.42.0.0/16 for pod IP addresses
  • 10.43.0.0/16 for service endpoints

I want to use my existing Caddy reverse proxy to access those service endpoints, so I defined a route (10.10.10.172 is one of my Kubernetes nodes):

sudo route add -net 10.43.0.0 netmask 255.255.0.0 gw 10.10.10.172

My routing table on the Caddy web server:

arturh@web:~$ sudo route
[sudo] password for arturh:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         DD-WRT.local    0.0.0.0         UG    0      0        0 eth0
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.43.0.0       rancherkube1.lo 255.255.0.0     UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Using this setup I can access and use 10.43.0.1:443 without any issues (it is the main Kubernetes API endpoint):

arturh@web:~$ nmap 10.43.0.1 -p 443 | grep 443
443/tcp open  https
arturh@web:~$ curl -k https://10.43.0.1
Unauthorized

But accessing any other IP address in the 10.43.0.0/16 network fails and I cannot figure out why:

arturh@web:~$ kubectl get svc | grep prometheus-server
prometheus-prometheus-server               10.43.115.122   <none>         80/TCP              1d
arturh@web:~$ curl 10.43.115.122
curl: (7) Failed to connect to 10.43.115.122 port 80: No route to host
arturh@web:~$ traceroute 10.43.115.122
traceroute to 10.43.115.122 (10.43.115.122), 30 hops max, 60 byte packets
 1  rancherkube1.local (10.10.10.172)  0.348 ms  0.341 ms  0.332 ms
 2  rancherkube1.local (10.10.10.172)  3060.710 ms !H  3060.722 ms !H  3060.716 ms !H

I can access everything from the Kubernetes node itself:

[rancher@rancherkube1 ~]$ wget -qO- 10.43.115.122
<!DOCTYPE html>
<html lang="en">...

which works because of the iptables NAT rules:

[rancher@rancherkube1 ~]$ sudo iptables -t nat -L -n  | grep 10.43
KUBE-SVC-NGLRF5PTGH2R7LSO  tcp  --  0.0.0.0/0            10.43.115.122        /* default/prometheus-prometheus-server:http cluster IP */ tcp dpt:80
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.43.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443

I'm confused because the entry for 10.43.0.1, which works, looks identical to the ones that do not... I figure I need to add an iptables rule to allow access to the 10.43.0.0/16 subnet, but I'm not familiar with iptables.
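For reference, here is one way to dig further into those chains on the node and compare how the two cluster IPs are handled (a diagnostic sketch; the chain name is copied from the output above and will differ per cluster):

# List every service entry in the KUBE-SERVICES chain that matches 10.43.x.x
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.43

# Follow the per-service chain for prometheus-server down to its KUBE-SEP-*
# rules, which DNAT the traffic to the actual pod IP in 10.42.0.0/16
sudo iptables -t nat -L KUBE-SVC-NGLRF5PTGH2R7LSO -n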

I'm quite new to the whole Kubernetes business. Is this the correct way to go about accessing service endpoints? If so, can someone please help me with the correct iptables command?

arturh

2 Answers


I am looking for an easier way to make Kubernetes services accessible to external hosts without using LoadBalancer or NodePort.

I also deployed Kubernetes using Rancher; one of the nodes is 192.168.1.4. By default, only the Kubernetes nodes can reach the virtual IP ranges 10.42.0.0/16 and 10.43.0.0/16.

Thanks @arturh for the example; I added a route rule on a non-Kubernetes host and it worked:

[root@CentOS ~]# curl 10.43.1.15:27017


^C
[root@CentOS ~]# ip route add 10.43.0.0/16 via 192.168.1.4
[root@CentOS ~]# curl 10.43.1.15:27017
It looks like you are trying to access MongoDB over HTTP on the native driver port.

Possible pitfalls:

  • net.ipv4.ip_forward is disabled on the Kubernetes node you route through (see the check below).
  • The non-Kubernetes host and the Kubernetes nodes are on different subnets, so they cannot reach each other directly.
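A quick way to check the first point on the Kubernetes node that acts as the gateway (a sketch; the sysctl.d file name is just an example):

# 1 means the node forwards packets between interfaces, 0 means it drops them
sysctl net.ipv4.ip_forward

# Enable forwarding at runtime
sudo sysctl -w net.ipv4.ip_forward=1

# Keep it enabled across reboots (file name is an example)
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf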
LeoHsiao

You can access things from a host that's running a Kubernetes workload because it has the iptables rules (and possibly route table rules) needed to route the traffic.

If you want to access Kubernetes services from outside your cluster, then you want to use an ingress controller with an Ingress resource.

https://kubernetes.io/docs/concepts/services-networking/ingress/
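A minimal sketch of what that looks like, assuming an ingress controller (e.g. nginx) is already running in the cluster; the hostname is a placeholder and the backend is the prometheus-server service from the question:

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
spec:
  rules:
  - host: prometheus.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-prometheus-server
          servicePort: 80
EOF

The ingress controller typically listens on the node's own IP, so the external host only needs to reach the node, not 10.43.0.0/16.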

Mike
  • Okay, thanks, that makes sense: the other things I got working with Kubernetes are either served by my ingress controller or are NodePorts, which are available on the node IP directly. But I thought there must be a way to access service endpoints directly when you want to expose, for example, non-HTTP MySQL ports to other hosts – arturh Sep 27 '17 at 21:51
  • You can use something like HAProxy as an ingress controller; through the service controller you can say you want node port 33060 opened and it will open it on all hosts. Then, if you are not on a cloud provider, you'd have something like a hardware LB map a new VIP 3306->33060 and point all your services at that VIP. On AWS, for example, the service controller would create an ELB (see the sketch after these comments) – Mike Sep 27 '17 at 21:55
  • Ya, good luck. Kubernetes is really a great platform once you wrap your head around the concepts and learn a new way of doing ops – Mike Sep 27 '17 at 22:17
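As a sketch of the NodePort approach mentioned in the comments (the mysql deployment name and port are placeholders):

# Expose an existing deployment as a NodePort service; Kubernetes allocates
# a port from the default 30000-32767 range on every node
kubectl expose deployment mysql --port=3306 --type=NodePort

# Show which node port was allocated
kubectl get svc mysql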