I've got master-slave replication working with Kubernetes, but would now like to implement failover. My pods run with the labels service=postgresql and either role=master or role=slave. When the master fails, I want to select a new master and change that pod's role label to master, so that the postgresql-master service points to the new master.
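For context, the postgresql-master service selects the current master purely by label, along these lines (a simplified sketch in current manifest syntax; my real definition may differ):

# Sketch: the postgresql-master service targets whichever pod currently
# carries role=master, so relabelling a pod repoints the service.
./kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: postgresql-master
spec:
  selector:
    service: postgresql
    role: master
  ports:
  - port: 5432   # standard PostgreSQL port
EOF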
Two questions:
- Can I connect from a pod to the Kubernetes API to get notified when the master dies and to see which pod has to become the new master? (See the sketch after this list for what I have in mind.)
- When I try to change a label with 'kubectl label pod postgresql-slave role=master', I get a message saying I can only change the image of a running pod, although the command help gives me the impression that I should be able to change the labels of pods. What am I doing wrong?
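For the first question, this is the kind of thing I have in mind, run from inside a pod (a minimal sketch assuming the default service-account token mount and permission to watch pods):

# Token that Kubernetes mounts into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Stream watch events (ADDED/MODIFIED/DELETED) for the master pod;
# a DELETED event would be my trigger to promote a slave
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://kubernetes.default.svc/api/v1/namespaces/default/pods?watch=true&labelSelector=role%3Dmaster"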
UPDATE: Exact error when updating a label
$ ./kubectl get pod -l type=postgresql
POD                       IP           CONTAINER(S)   IMAGE(S)                      HOST                  LABELS                        STATUS    CREATED     MESSAGE
postgresql-master-y868v   172.17.0.2                                                127.0.0.1/127.0.0.1   role=master,type=postgresql   Running   3 minutes
                                       postgresql     genericsites/postgresql:0.1                                                      Running   3 minutes
$ ./kubectl label pod postgresql-master-y868v test=foo
Error from server: Pod "postgresql-master-y868v" is invalid: spec: invalid value '* pod definition in JSON *': may not update fields other than container.image
UPDATE: It does seem to be possible to change the labels of a service, which might let me hack my way around the problem, but that would of course not be optimal.
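If label updates on pods did work, the failover itself would just be the following (a sketch; the slave pod name is made up, and --overwrite plus the trailing-minus removal syntax are taken from the kubectl help):

# Remove the role label from the dead master so the service drops it
./kubectl label pod postgresql-master-y868v role-

# Promote a slave (hypothetical pod name); --overwrite replaces role=slave
./kubectl label --overwrite pod postgresql-slave-abcde role=master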