For a while now we've been experiencing regular errors from operations against the kube API in AKS, failing with an "etcdserver: leader changed" message. From what we've learned, AKS performs an etcd snapshot every 2h, and the leader changes we see line up with it. This 2h window is consistent with the cadence of the disruptions we experience.
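
To illustrate the failure mode from the client side, here is a minimal sketch of retrying only on this specific error using client-go's retry helper; the kubeconfig path, namespace, and error-string match are illustrative assumptions, not our exact code:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// isLeaderChanged reports whether an apiserver error wraps the
// transient etcd "leader changed" failure described above.
func isLeaderChanged(err error) bool {
	return err != nil && strings.Contains(err.Error(), "etcdserver: leader changed")
}

func main() {
	// Hypothetical kubeconfig path; in-cluster config would work the same way.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Retry the List with backoff, but only when the failure is the
	// transient leader-change error, not on every error.
	err = retry.OnError(retry.DefaultBackoff, isLeaderChanged, func() error {
		_, listErr := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
		return listErr
	})
	if err != nil {
		fmt.Println("list pods failed even after retries:", err)
	}
}
```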

I was under the impression that etcd snapshots do not, directly or indirectly, cause a change of etcd cluster leadership. The only explanation I can see is that the snapshot loads the leader heavily enough that it stops sending heartbeats in time and loses a subsequent leader election. Am I missing something here? Is it normal to experience etcd leader changes during snapshots?
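
To confirm whether leadership actually moves during the snapshot window, something like the sketch below could poll a member's view of the leader via the etcd clientv3 Maintenance API. It assumes a directly reachable etcd endpoint (the address is a placeholder, and managed AKS does not expose etcd to customers, so this would only apply on a cluster where etcd is accessible):

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder endpoint; replace with a reachable etcd member.
	endpoint := "https://127.0.0.1:2379"

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	var lastLeader uint64
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		// Status reports this member's view of the current leader ID.
		resp, err := cli.Status(ctx, endpoint)
		cancel()
		if err != nil {
			log.Println("status error:", err)
		} else if resp.Leader != lastLeader {
			log.Printf("leader changed: %x -> %x", lastLeader, resp.Leader)
			lastLeader = resp.Leader
		}
		time.Sleep(10 * time.Second)
	}
}
```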
