
Currently my k8s cluster is on v1.16.x and I want to upgrade it to v1.17.x, for which etcd has to be upgraded to 3.4 (it's currently on 3.3). My setup is a bit complex: I'm running etcd outside the master nodes, as a 3-node etcd cluster running as containers on three individual EC2 instances.

I'm aware that there is neat documentation about upgrading etcd from 3.3 to 3.4, but it doesn't describe how to do it when etcd is running inside containers. I've spent a considerable amount of time googling it, but no luck. kubeadm isn't much help either, as kubeadm upgrade plan doesn't show an etcd upgrade to 3.4.

I presume taking a backup and then changing the image version in the manifest would work, but I'm not quite sure about it.
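
Something like this is what I have in mind, assuming the members run as plain Docker containers (the container name, cert paths, data dir and image tag below are placeholders, not my actual setup):

    # Snapshot the member before touching anything (v3 API)
    docker exec -e ETCDCTL_API=3 etcd etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
      snapshot save /var/lib/etcd/backup.db

    # Recreate the container from the 3.4 image, reusing the same data dir
    docker stop etcd && docker rm etcd
    docker run -d --name etcd \
      -v /var/lib/etcd:/var/lib/etcd \
      quay.io/coreos/etcd:v3.4.13 \
      etcd --data-dir /var/lib/etcd   # ...plus the existing cluster and TLS flags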

Can anyone please guide me on this?

jagatjyoti

2 Answers


Your own suggestion is actually the answer: stop the container, change the image to 3.4 and start it again. Wait for the node to reconnect, and you're done. Only once all etcd nodes in the cluster are running 3.4 will the cluster actually "upgrade" to 3.4. Per member, that looks roughly like the sketch below.
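
A sketch of one round, with assumed endpoints, container name and image tag (keep your existing TLS and cluster flags throughout):

    # Check cluster health before touching a member
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
      endpoint health

    # Stop the 3.3 container and start the same configuration on the 3.4 image
    docker stop etcd && docker rm etcd
    docker run -d --name etcd -v /var/lib/etcd:/var/lib/etcd \
      quay.io/coreos/etcd:v3.4.13 \
      etcd --data-dir /var/lib/etcd   # same flags as before

    # Wait for the member to report healthy, then check the reported versions;
    # the cluster version only bumps to 3.4 once every member runs it
    ETCDCTL_API=3 etcdctl --endpoints=... endpoint status -w table

    # Repeat on the next node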

Note: I haven't actually done this particular upgrade myself, but I've done previous upgrades like this, even from 2.x to 3.x, so I would not expect an issue here. If you're unsure, simply rebuild the 3.3 cluster locally on your desktop using Docker and try it out! That's the beauty of running containers: they can run anywhere.
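
For example, a throwaway single-node test along these lines (the image tags are just examples):

    # Start a disposable 3.3 member with a host-mounted data dir
    docker run -d --name etcd-test -v /tmp/etcd-test:/etcd-data \
      quay.io/coreos/etcd:v3.3.25 \
      etcd --data-dir /etcd-data

    # Write some data under 3.3
    docker exec -e ETCDCTL_API=3 etcd-test etcdctl put foo bar

    # Swap the image, keeping the data dir
    docker rm -f etcd-test
    docker run -d --name etcd-test -v /tmp/etcd-test:/etcd-data \
      quay.io/coreos/etcd:v3.4.13 \
      etcd --data-dir /etcd-data

    # The key written under 3.3 should survive the upgrade
    docker exec -e ETCDCTL_API=3 etcd-test etcdctl get foo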

Tim Stoop
  • The cluster was built using kubeadm and etcd also comes under its control. Now, if I change the image version manually, will kubeadm be aware of it when I do kubeadm upgrade plan? If it doesn't recognize the change, it will be a blunder. – jagatjyoti Jan 18 '21 at 11:44

After much searching, the solution I found was to upgrade kubeadm and the kubelet to v1.17.x, after which kubeadm upgrade plan showed an etcd upgrade to 3.4.3. I'm going with this approach.
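
For anyone landing here later, the rough sequence on a Debian/Ubuntu control plane looks like this (1.17.17 is just an example patch release, adjust to your target version and package manager):

    # Upgrade kubeadm first and re-check the plan
    apt-get update && apt-get install -y kubeadm=1.17.17-00
    kubeadm upgrade plan              # now lists the etcd upgrade to 3.4.3
    kubeadm upgrade apply v1.17.17

    # Then upgrade the kubelet on each node
    apt-get install -y kubelet=1.17.17-00
    systemctl daemon-reload && systemctl restart kubelet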

jagatjyoti