
A little bit of an introduction: I'm pretty new to Kubernetes, so I'm still a bit rough on it. Let me sketch my problem.

TL;DR: After installing Cilium on Kubernetes, my nodes cannot access any other machine on my local networks, and those machines can't reach the nodes either.

I have two subnets:

10.0.0.0/24 > for all my other virtual machines (DNS, Ansible)

10.8.0.0/24 > for my K8s cluster

I have 3 nodes, 1 controller and 2 workers, and it's mostly for testing and playing around.

I initialized the cluster using:

kubeadm init --skip-phases=addon/kube-proxy

(I skipped that phase because I also wanted to use Cilium as the kube-proxy replacement.)
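Just as a sanity check, I assume skipping that phase means no kube-proxy DaemonSet gets created at all, which should be easy to confirm:

kubectl -n kube-system get daemonset kube-proxy
# expected: Error from server (NotFound): daemonsets.apps "kube-proxy" not found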

After that, the only other thing I set up is Helm so I can easily install the Cilium chart.

In this setup, without Cilium, I am able to connect to everything fine: DNS, Ansible, no problem.

I then install Cilium through Helm (via Ansible) using the following values:

- name: helm install cilium
  kubernetes.core.helm:
    name: cilium
    chart_ref: cilium/cilium
    chart_version: 1.11.5
    release_namespace: kube-system
    values:
      # API server endpoint Cilium talks to directly in kube-proxy-free mode
      k8sServiceHost: 10.8.0.1
      k8sServicePort: 6443
      # let Cilium fully replace kube-proxy
      kubeProxyReplacement: strict
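(For reference, outside of Ansible this should be roughly equivalent to the following plain Helm commands, with the same chart version and values:)

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.11.5 \
  --namespace kube-system \
  --set k8sServiceHost=10.8.0.1 \
  --set k8sServicePort=6443 \
  --set kubeProxyReplacement=strict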

After that, I'm no longer able to connect to my nodes from any other machine, and my nodes are not able to reach anything within my local subnet 10.0.0.0/24.

For example, when I try an nslookup using my local DNS server (10.0.0.1):

nslookup google.com 10.0.0.1
;; connection timed out; no servers could be reached

However, when I use a DNS server outside the 10.0.0.0/8 range:

nslookup google.com 8.8.8.8
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
Name:   google.com
Address: 142.250.179.174
Name:   google.com
Address: 2a00:1450:400e:802::200e

it works instantly.

All services seem to be running fine when I check with cilium status:

    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 3, Ready: 3/3, Available: 3/3
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium             Running: 3
                  cilium-operator    Running: 2
Cluster Pods:     3/3 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.11.5@sha256:79e66c3c2677e9ecc3fd5b2ed8e4ea7e49cf99ed6ee181f2ef43400c4db5eef0: 3
                  cilium-operator    quay.io/cilium/operator-generic:v1.11.5@sha256:8ace281328b27d4216218c604d720b9a63a8aec2bd1996057c79ab0168f9d6d8: 2

When I look at all the pods, they seem to be fine as well:

kube-system   cilium-2xhvn                                       1/1     Running   0               78m   10.8.0.3     kube-worker02   <none>           <none>
kube-system   cilium-hk8f7                                       1/1     Running   1 (2m23s ago)   78m   10.8.0.1     kube-master00   <none>           <none>
kube-system   cilium-m26jx                                       1/1     Running   0               78m   10.8.0.2     kube-worker01   <none>           <none>
kube-system   cilium-operator-5484444455-4g7pz                   1/1     Running   1 (2m29s ago)   78m   10.8.0.3     kube-worker02   <none>           <none>
kube-system   cilium-operator-5484444455-9v5dv                   1/1     Running   1 (2m24s ago)   78m   10.8.0.2     kube-worker01   <none>           <none>
kube-system   coredns-6d4b75cb6d-v6gzl                           1/1     Running   1 (2m23s ago)   80m   10.0.0.106   kube-master00   <none>           <none>
kube-system   coredns-6d4b75cb6d-w42pk                           1/1     Running   1 (2m23s ago)   80m   10.0.0.28    kube-master00   <none>           <none>
kube-system   etcd-kube-master00                                 1/1     Running   1 (2m23s ago)   80m   10.8.0.1     kube-master00   <none>           <none>
kube-system   kube-apiserver-kube-master00                       1/1     Running   1 (2m23s ago)   80m   10.8.0.1     kube-master00   <none>           <none>
kube-system   kube-controller-manager-kube-master00              1/1     Running   1 (2m23s ago)   80m   10.8.0.1     kube-master00   <none>           <none>
kube-system   kube-scheduler-kube-master00                       1/1     Running   1 (2m23s ago)   80m   10.8.0.1     kube-master00   <none>           <none>

I don't know why the CoreDNS pods got those IPs. I think they just come from the pod network automatically; I don't know the exact setting to specify a different subnet.
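If I understand it correctly, the per-node pod CIDRs that Cilium hands out should be visible on the CiliumNode objects, something like this (I'm not completely sure about the exact field path):

kubectl get ciliumnodes -o custom-columns=NODE:.metadata.name,PODCIDRS:.spec.ipam.podCIDRs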

I have a feeling it has something to do with routes as well, that traffic gets routed weirdly internally since Cilium also uses ranges within 10.0.0.0/8, but I have no clue how to verify or fix it. I have been messing with it, trying again and again, but the same thing keeps happening, so I'm asking for help!
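I guess the place to look would be the routing table on a node, comparing it before and after Cilium comes up, roughly along these lines (not sure this is the right way to debug it):

# on a node: look for routes Cilium installed that cover (parts of) 10.0.0.0/8
ip route show
# which route would traffic to my DNS VM actually take?
ip route get 10.0.0.1

# routing / IPAM details from the Cilium agent itself
kubectl -n kube-system exec ds/cilium -- cilium status --verbose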

Marc Hoog

1 Answer


Coming back to this fresh-minded a few days later, it was kinda easy...

The default setting is that the pod network and internal Kubernetes networking use 10.0.0.0/8. The routes Cilium installs for that range basically clobber my own routes to the 10.0.0.x networks, causing the loss of connectivity.

To fix this, I gave the ipam section the following values when installing the Helm chart, making it use 172.16.0.0/12 instead of 10.0.0.0/8:

      ipam:
        mode: "cluster-pool"
        operator:
          # pod CIDR moved out of 10.0.0.0/8 so it no longer overlaps my local subnets
          clusterPoolIPv4PodCIDR: "172.16.0.0/12"
          clusterPoolIPv4PodCIDRList: ["172.16.0.0/12"]
          # each node gets a /24 carved out of this pool
          clusterPoolIPv4MaskSize: 24
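For completeness, the whole Ansible task with this ipam block merged into the values from the question should look roughly like this:

- name: helm install cilium
  kubernetes.core.helm:
    name: cilium
    chart_ref: cilium/cilium
    chart_version: 1.11.5
    release_namespace: kube-system
    values:
      k8sServiceHost: 10.8.0.1
      k8sServicePort: 6443
      kubeProxyReplacement: strict
      ipam:
        mode: "cluster-pool"
        operator:
          clusterPoolIPv4PodCIDR: "172.16.0.0/12"
          clusterPoolIPv4PodCIDRList: ["172.16.0.0/12"]
          clusterPoolIPv4MaskSize: 24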
Marc Hoog