
I need to deploy an application that works as a CCM (cloud controller manager), so it needs access to the master servers.

I have a K8s cluster that was set up by Kubespray; all my nodes run a kubelet that takes its configuration from /etc/kubernetes/kubelet.conf. The kubelet.conf is shown below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://localhost:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

This configuration file and the certificates are provided to the CCM service; I added the following volumes and mount points to the deployment YAML:

      containers:
      - name: cloud-controller-manager
        image: swisstxt/cloudstack-cloud-controller-manager:v0.0.1
        # Command line arguments: https://kubernetes.io/docs/reference/command-line-tools-reference/cloud-controller-manager/
        command:
        - /root/cloudstack-ccm
        - --cloud-provider=external-cloudstack
        - --cloud-config=/config/cloud-config
        - --kubeconfig=/var/lib/kubelet/kubelet.conf # Connection Params
        - --v=4
        volumeMounts:
        - name: config-volume
          mountPath: /config
        - name: kubeconfig-config-file
          mountPath: /var/lib/kubelet/kubelet.conf
        - name: kubernetes-pki-volume
          mountPath: /var/lib/kubelet/pki
        - name: kubernetes-config-volume
          mountPath: /var/lib/kubernetes
      volumes:
      - name: config-volume
        configMap:
          name: cloud-controller-manager-config
      - name: kubeconfig-config-file
        hostPath:
          path: /etc/kubernetes/kubelet.conf
      - name: kubernetes-pki-volume
        hostPath:
          path: /var/lib/kubelet/pki
      - name: kubernetes-config-volume
        hostPath:
          path: /var/lib/kubernetes

So far, so good.

My problem is that my kubelet.conf contains the following setting: .clusters[0].cluster.server: https://localhost:6443. In other words, the kubelet is configured to talk to the master servers via a proxy server that Kubespray set up to distribute connections between the masters.

So, when the CCM application reads the kubelet.conf, it concludes that it should communicate with the master servers via https://localhost:6443. But inside the application's pod nothing is listening on localhost:6443 (the proxy listens on the node's loopback interface, not the pod's), so the CCM can't use that address: localhost:6443 is reachable only from the node itself.

Here's the question: is there a way to make the node's localhost:6443 accessible from the pod? The only idea I have at the moment is to set up an SSH tunnel between the pod and the node it's running on, but I don't like it, because (1) it requires propagating an RSA key to all the nodes and adding it to every new node, and (2) I have no idea how to find out the node's IP address from inside a container.
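
For what it's worth, here is a rough, untested sketch of two standard Kubernetes features that look relevant: hostNetwork: true, which should make the container share the node's network namespace (so that localhost:6443 inside the pod is the node's proxy), and the Downward API's status.hostIP field, which hands the node's IP to the container (which would at least cover point (2) of the SSH-tunnel idea). I haven't verified either against this CCM deployment, and the NODE_IP variable name is only an example:

      spec:
        hostNetwork: true                  # pod shares the node's network namespace,
        dnsPolicy: ClusterFirstWithHostNet # so localhost:6443 is the node's proxy
        containers:
        - name: cloud-controller-manager
          env:
          - name: NODE_IP                  # example name: the node's IP via the Downward API
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP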

Thanks for reading this rant. I'll be very grateful for any ideas and clues.

Volodymyr Melnyk
  • Found a workaround for the CCM application: it accepts a `--master` parameter that overrides `.clusters[0].cluster.server`, so I added `--master=https://kubernetes.default.svc/` to the application's arguments and now it works fine (roughly sketched after these comments). This workaround still doesn't answer my question about how to implement a tunnel between a pod and the node it's running on. – Volodymyr Melnyk Jun 13 '19 at 10:11
  • Are you using your own CCM or the Kubernetes one? – Crou Jun 25 '19 at 12:21
  • @Crou, this one, to be exact: https://github.com/swisstxt/cloudstack-cloud-controller-manager – Volodymyr Melnyk Jun 26 '19 at 13:05
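
For completeness, the workaround from the first comment would look roughly like this in the deployment's command section (only the --master flag is new; everything else matches the snippet above, and the kubeconfig is still used for the client certificates):

        command:
        - /root/cloudstack-ccm
        - --cloud-provider=external-cloudstack
        - --cloud-config=/config/cloud-config
        - --master=https://kubernetes.default.svc/ # overrides .clusters[0].cluster.server from kubelet.conf
        - --kubeconfig=/var/lib/kubelet/kubelet.conf
        - --v=4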
