
I have installed a kubeadm-based Kubernetes cluster (v1.24.2) on CentOS 7.

I have attempted to install the Calico CNI per the instructions at "https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart".

The "/etc/cni/net.d/" and "/var/lib/calico" directories are still empty (or do not exist) on both the control node and the worker nodes after installing Calico via the commands below.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/tigera-operator.yaml
kubectl create -f /tmp/custom-resources.yaml
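After applying both manifests, it can help to confirm that the operator has actually rolled Calico out before checking the CNI directories. A minimal check, assuming kubectl is pointed at this cluster:

```shell
# Watch the operator bring up the calico-system pods (Ctrl-C to stop)
watch kubectl get pods -n calico-system

# Once the operator is running, the tigerastatus resource summarizes
# whether each Calico component is Available/Progressing/Degraded
kubectl get tigerastatus
```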

Below is the content of /tmp/custom-resources.yaml:

---

  # This section includes base Calico installation configuration.
  # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
  apiVersion: operator.tigera.io/v1
  kind: Installation
  metadata:
    name: default
  spec:
    # Configures Calico networking.
    calicoNetwork:
      # Note: The ipPools section cannot be modified post-install.
      ipPools:
        -
          blockSize: 26
          cidr: 172.22.0.0/16
          encapsulation: VXLANCrossSubnet
          natOutgoing: Enabled
          nodeSelector: all()
  
---
  
  # This section configures the Calico API server.
  # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
  apiVersion: operator.tigera.io/v1
  kind: APIServer 
  metadata: 
    name: default 
  spec: {}
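One thing worth double-checking is that the ipPools cidr above matches the podSubnet the cluster was initialized with. Assuming the operator has accepted the Installation resource, the effective value can be read back; the jsonpath below is a sketch:

```shell
# Print the CIDR of the first IP pool in the applied Installation resource
kubectl get installation default \
  -o jsonpath='{.spec.calicoNetwork.ipPools[0].cidr}'
```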
  
  

The config file I supplied to the kubeadm init --config argument contains the following section (this is an abbreviated version of the file):

  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  networking:
    dnsDomain: cluster.local
    serviceSubnet: 172.21.0.0/16
    podSubnet: 172.22.0.0/16
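The podSubnet here (172.22.0.0/16) matches the Calico ipPools cidr, which is required. If in doubt, the subnets kubeadm recorded at init time can be read back from the kubeadm-config ConfigMap; a sketch:

```shell
# Show the ClusterConfiguration kubeadm stored at init time,
# including networking.podSubnet and networking.serviceSubnet
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}'
```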

Are there additional commands to run or objects to create?

Allan K

1 Answer


After re-installing kubeadm and Kubernetes on fresh VMs and then installing Calico via the commands outlined in the question, the "/etc/cni/net.d/" and "/var/lib/calico" directories on the control node and the worker nodes are no longer empty.

The issue now is that the worker nodes still have a status of "NotReady", as shown below. I will be posting a new question on this shortly.

kube_apiserver_node_01="192.168.12.17"
kubectl \
  --kubeconfig=/home/somebody/kubernetes-via-kubeadm/kubeadm/${kube_apiserver_node_01}/admin.conf \
  get nodes,pods,services -A \
  -o wide
NAME                 STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node/centos7-03-05   Ready      control-plane   3h42m   v1.24.2   192.168.12.17   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-08   NotReady   <none>          12m     v1.24.2   192.168.12.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2
node/centos7-03-09   NotReady   <none>          12m     v1.24.2   192.168.12.21   <none>        CentOS Linux 7 (Core)   3.10.0-1160.76.1.el7.x86_64   cri-o://1.24.2

NAMESPACE          NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
calico-apiserver   pod/calico-apiserver-b7d6cbb78-76zvd           1/1     Running   0          36m     172.22.147.134   centos7-03-05   <none>           <none>
calico-apiserver   pod/calico-apiserver-b7d6cbb78-twsrl           1/1     Running   0          36m     172.22.147.133   centos7-03-05   <none>           <none>
calico-system      pod/calico-kube-controllers-5f44c7d7d7-sq55x   1/1     Running   0          38m     172.22.147.132   centos7-03-05   <none>           <none>
calico-system      pod/calico-node-69shp                          1/1     Running   0          12m     192.168.12.20    centos7-03-08   <none>           <none>
calico-system      pod/calico-node-prhh9                          1/1     Running   0          12m     192.168.12.21    centos7-03-09   <none>           <none>
calico-system      pod/calico-node-t4tqf                          1/1     Running   0          38m     192.168.12.17    centos7-03-05   <none>           <none>
calico-system      pod/calico-typha-6779b9584c-gr24b              1/1     Running   0          38m     192.168.12.17    centos7-03-05   <none>           <none>
calico-system      pod/calico-typha-6779b9584c-ngwxj              1/1     Running   0          12m     192.168.12.20    centos7-03-08   <none>           <none>
calico-system      pod/csi-node-driver-8fn7b                      2/2     Running   0          37m     172.22.147.129   centos7-03-05   <none>           <none>
kube-system        pod/coredns-6d4b75cb6d-bwjhn                   1/1     Running   0          3h42m   172.22.147.131   centos7-03-05   <none>           <none>
kube-system        pod/coredns-6d4b75cb6d-wj4j8                   1/1     Running   0          3h42m   172.22.147.130   centos7-03-05   <none>           <none>
kube-system        pod/kube-apiserver-centos7-03-05               1/1     Running   0          3h42m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-controller-manager-centos7-03-05      1/1     Running   0          3h42m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-proxy-j9dpq                           1/1     Running   0          12m     192.168.12.20    centos7-03-08   <none>           <none>
kube-system        pod/kube-proxy-mtxlb                           1/1     Running   0          12m     192.168.12.21    centos7-03-09   <none>           <none>
kube-system        pod/kube-proxy-xnwnv                           1/1     Running   0          3h42m   192.168.12.17    centos7-03-05   <none>           <none>
kube-system        pod/kube-scheduler-centos7-03-05               1/1     Running   0          3h42m   192.168.12.17    centos7-03-05   <none>           <none>
ns-test-02         pod/my-nginx                                   0/1     Pending   0          26s     <none>           <none>          <none>           <none>
tigera-operator    pod/tigera-operator-7ff575f7f7-5t4hx           1/1     Running   0          38m     192.168.12.17    centos7-03-05   <none>           <none>

NAMESPACE          NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
calico-apiserver   service/calico-api                        ClusterIP   172.21.180.71    <none>        443/TCP                  36m     apiserver=true
calico-system      service/calico-kube-controllers-metrics   ClusterIP   172.21.185.110   <none>        9094/TCP                 36m     k8s-app=calico-kube-controllers
calico-system      service/calico-typha                      ClusterIP   172.21.8.216     <none>        5473/TCP                 38m     k8s-app=calico-typha
default            service/kubernetes                        ClusterIP   172.21.0.1       <none>        443/TCP                  3h42m   <none>
kube-system        service/kube-dns                          ClusterIP   172.21.0.10      <none>        53/UDP,53/TCP,9153/TCP   3h42m   k8s-app=kube-dns
ns-test-02         service/my-nginx                          ClusterIP   172.21.69.84     <none>        80/TCP                   26s     app=nginx,purpose=learning
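The usual first steps for diagnosing the NotReady workers above are to read the node conditions and the kubelet logs on the affected host; a sketch (node name taken from the output above):

```shell
# On the control plane: the Ready condition's message usually names the cause
kubectl describe node centos7-03-08

# On the NotReady worker itself: kubelet logs often show CNI or runtime errors
journalctl -u kubelet --no-pager | tail -n 50
```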
