
I can install Kubernetes successfully on an Ubuntu 16 server and get the master node into a Ready status. But if I reboot/restart, I get the error message in the title when I try to use kubectl.

Do I need to put the following commands, which were given when I originally ran kubeadm init, into my profile so they persist?

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

I was following the recipe in the O'Reilly Kubernetes Cookbook, along with Google's K8s docs and Server Fault.

I can successfully get all the system pods running, and both a master and a single worker report Ready. But this doesn't persist over a reboot.

NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                      1/1       Running   0          1m
kube-system   kube-apiserver-k8s-master            1/1       Running   0          2m
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-phphd            3/3       Running   0          3m
kube-system   kube-proxy-25mtq                     1/1       Running   0          3m
kube-system   kube-scheduler-k8s-master            1/1       Running   0          2m
kube-system   weave-net-rfb6z                      2/2       Running   0          50s

NAME          STATUS     ROLES     AGE       VERSION
k8s-master    Ready      master    9m        v1.10.3
k8s-worker1   Ready      <none>    14s       v1.10.3
2 Answers


First of all check if your cluster is up and running:

$ ps aux | grep apiserver
$ ps aux | grep etcd
$ docker ps

You should see all the necessary Kubernetes processes running: kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd.
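
Since kubeadm runs the kubelet as a systemd service, it is also worth confirming that it came back up after the reboot. A quick check, assuming a systemd-based host like Ubuntu 16.04:

$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet --no-pager | tail -n 20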

If the cluster is up, you can try to access the kube-apiserver using the kubectl tool.
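
For example, a quick connectivity check (assuming the kubectl configuration described below is already in place):

$ kubectl cluster-info
$ kubectl get nodes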

By default, kubectl takes its configuration from the file $HOME/.kube/config.

This default behavior can be changed by setting the KUBECONFIG environment variable.
kubectl reads all the configuration files listed in this variable and merges their settings.
This way you can keep the configuration for several clusters in separate files for easy maintenance.
In that case, set the KUBECONFIG variable in your user profile to be sure it has the correct value after the next login.
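
As a minimal sketch, assuming two hypothetical config files named config-cluster1 and config-cluster2, you would add a colon-separated list to ~/.bashrc or ~/.profile:

# merge kubectl settings from both files (hypothetical file names)
export KUBECONFIG=$HOME/.kube/config-cluster1:$HOME/.kube/config-cluster2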

Another way to use a specific config file is to pass the --kubeconfig argument on the command line:

$ kubectl --kubeconfig config_file_path <other_command_line_arguments>

You can check the available configuration (sensitive information is suppressed) by running the following command:

$ kubectl config view

It is also possible to configure multiple contexts in one config file and switch between them by specifying a context name on the command line. This is useful if you manage several clusters (or different service accounts for the same cluster) from the same user account.

$ kubectl config --kubeconfig=config-demo use-context exp-scratch
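
To see which contexts are defined and which one is currently active:

$ kubectl config get-contexts
$ kubectl config current-context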

Check out the documentation for details.

When you create a cluster with kubeadm init, it writes the connection configuration for kubectl to the file /etc/kubernetes/admin.conf:

-rw-------   1 root root 5446 May 1 11:11 admin.conf

You can use it as the kubectl config file directly if you are working as root.
To use it with a different user account, you need to copy it to that user's $HOME/.kube/config and make it accessible to the user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

It's enough to do this once for each user account you plan to use for cluster management. This is a regular file and it doesn't disappear after a reboot, so you should still be able to access the Kubernetes cluster with kubectl afterwards.
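
Alternatively, when working as root, pointing kubectl at admin.conf directly via the KUBECONFIG variable should work without copying the file:

$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes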

If you reset the cluster with kubeadm reset and create it again, you need to update the kubectl connection configuration in your user profile, because the previous cluster's credentials will not work with the new cluster.


I hadn't checked the logs. Ubuntu 16.04 logs via systemd, so when I remembered to check the logs via journalctl, it was immediately clear that Kubernetes was complaining that swap was enabled. On the original install I'd disabled swap on each server, but I hadn't amended /etc/fstab to disable swap permanently.
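
For anyone hitting the same issue, a sketch of disabling swap permanently (the exact /etc/fstab entry varies per server, so check the file before editing it):

$ sudo swapoff -a                                  # turn swap off for the running system
$ sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab     # comment out swap entries so it stays off after reboot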
