
I have a fresh install of Ubuntu, a fresh install of k3s, and a fresh download of calicoctl. I installed everything the following way.

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" \
        INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=192.168.0.0/16 \
        --disable-network-policy --disable=traefik" sh -

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
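
For what it's worth (not part of the original steps, just a sanity check I tend to run): the operator install creates the tigera-operator and calico-system namespaces, so you can watch those to confirm Calico has actually rolled out before going further.

# Optional: confirm the operator and Calico pods come up before continuing
kubectl get pods -n tigera-operator
kubectl get pods -n calico-system -w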

curl -L "https://github.com/projectcalico/calicoctl/releases/download/v3.20.2/calicoctl" -o calicoctl
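
The downloaded file still needs to be made executable and put somewhere on the PATH; roughly the following (the install location is just what I use, adjust to taste):

chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
calicoctl version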

When I run kubectl, everything works fine. When I run calicoctl, I get certificate errors.

# calicoctl apply -f V000_000-host-policy.yaml 
Unable to get Cluster Information to verify version mismatch: Get "https://127.0.0.1:6443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority
Use --allow-version-mismatch to override.

I have copied request-header-ca.crt, client-ca.crt and server-ca.crt certificates from /var/lib/rancher/k3s/server/tls to /usr/local/share/ca-certificates and applied them with update-ca-certificates. I can confirm the certs are listed in /etc/ssl/certs/ca-certificates.crt.
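
Roughly what I ran for that step (assuming the default k3s data directory; update-ca-certificates only picks up files ending in .crt under /usr/local/share/ca-certificates, which these already do):

sudo cp /var/lib/rancher/k3s/server/tls/server-ca.crt \
        /var/lib/rancher/k3s/server/tls/client-ca.crt \
        /var/lib/rancher/k3s/server/tls/request-header-ca.crt \
        /usr/local/share/ca-certificates/
sudo update-ca-certificates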

Additionally, my ~/.kube/config file has the following contents (I do regular reinstalls, so none of this is confidential, I should hope; correct me if I'm wrong):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...LS0K
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0t...LS0K
    client-key-data: LS0t...LQo=

And I have the following configuration in /etc/cni/net.d/calico-kubeconfig

# Kubeconfig file for Calico CNI plugin. Installed by calico/node.
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://10.43.0.1:443
    certificate-authority-data: "LS0t...tLS0K"
users:
- name: calico
  user:
    token: eyJhb...tk4Q
contexts:
- name: calico-context
  context:
    cluster: local
    user: calico
current-context: calico-context

I have changed the address in calico-kubeconfig from 10.43.0.1:443 to 127.0.0.1:6443 but that made no difference.

Does anyone know how to work around this? Is the certificate error I am seeing a CA problem or a token problem? curl against the same address also complains about the CA, so I suspect it is not token related.
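
For reference, this is the kind of curl check I mean (the /version path is just an arbitrary endpoint; the point is whether TLS verification succeeds, not what the HTTP response is):

# Fails TLS verification unless the k3s CA is trusted
curl https://127.0.0.1:6443/version
# Should get past TLS verification when the k3s server CA is passed explicitly
curl --cacert /var/lib/rancher/k3s/server/tls/server-ca.crt https://127.0.0.1:6443/version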

2 Answers

1

By setting the calicoctl log level to debug (e.g. calicoctl -l debug get nodes) I discovered what was happening.

By default calicoctl reads /etc/calico/calicoctl.cfg. That file won't exist if you installed calicoctl the way I did, so the client falls back to using ~/.kube/config, which contains some of the required information, but not all of it.

The debug output also prints the loaded configuration, and from that I could see that the config properties it was using were slightly different from those in the documentation.

I created the following /etc/calico/calicoctl.cfg file (YAML format):

apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/user/.kube/config"
  K8sAPIToken: "eyJh...xQHA"
  K8sCAFile: "/var/lib/rancher/k3s/server/tls/server-ca.crt"

I took the K8sAPIToken value from /etc/cni/net.d/calico-kubeconfig. It should be the same token as the one shown in the question; I am not sure why it changed (a refresh, perhaps?). Either way, the above method solves the problem (at least temporarily).
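
In case it helps, this is roughly how I pulled the current token back out of the CNI kubeconfig (a simple awk over the file shown in the question; field layout as in that snippet):

# Print the service-account token that calico/node wrote into the CNI kubeconfig,
# for pasting into /etc/calico/calicoctl.cfg as K8sAPIToken
sudo awk '/token:/ {print $2}' /etc/cni/net.d/calico-kubeconfig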

0

I have a similar setup (except that k3s runs inside an unprivileged Ubuntu LXD container), with the k3s.service started using:

ExecStart=/usr/local/bin/k3s \
    server --snapshotter=native \
    --kubelet-arg=feature-gates=KubeletInUserNamespace=true \
    --kube-controller-manager-arg=feature-gates=KubeletInUserNamespace=true \
    --kube-apiserver-arg=feature-gates=KubeletInUserNamespace=true,RemoveSelfLink=false \
    --disable=servicelb --disable=traefik --flannel-backend=none --disable-network-policy \
    --cluster-cidr=192.168.0.0/16 --cluster-init

I didn't need to copy any certificates - it was enough to just:

ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
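
If I remember right, calicoctl can also pick the same settings up from environment variables instead of a config file or symlink, so something like the following should work too (per the Calico docs; adjust the kubeconfig path to your setup):

# calicoctl honours DATASTORE_TYPE and KUBECONFIG
export DATASTORE_TYPE=kubernetes
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
calicoctl get nodes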
