
I've got master-slave replication working with Kubernetes, but would now like to implement failover. My pods run with the labels service=postgresql and either role=master or role=slave. When the master fails, I want to select a slave as the new master and change its role label to master, so that the postgresql-master service points to the new master.

Two questions:

  • Can I connect from a pod to the Kubernetes API to get notified when the master dies and see which pod has to become the new master?
  • When I try to change a label with 'kubectl label pod postgresql-slave role=master', I get a message saying I can only change the image of a running pod, although the command help gives me the impression that I should be able to change the labels of pods. What am I doing wrong?
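On the first question: yes, a pod can reach the API server and watch for the master pod's deletion. Below is a minimal sketch built on the $KUBERNETES_SERVICE_HOST/$KUBERNETES_SERVICE_PORT environment variables; the fallback address, namespace, and service-account token path are assumptions based on standard in-cluster defaults, not anything confirmed in this question:

```shell
# Sketch: watch the API from inside a pod for deletion of the master.
# The fallback values and the token path are assumed in-cluster defaults.
APISERVER="https://${KUBERNETES_SERVICE_HOST:-10.0.0.1}:${KUBERNETES_SERVICE_PORT:-443}"
WATCH_URL="${APISERVER}/api/v1/namespaces/default/pods?labelSelector=role%3Dmaster&watch=true"
echo "watching: ${WATCH_URL}"

# In a real pod you would then stream the events; a DELETED event for the
# master pod is the signal to promote a slave (not run here):
#   TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
#   curl -sk -H "Authorization: Bearer ${TOKEN}" "${WATCH_URL}"
```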

UPDATE: Exact error when updating label

$ ./kubectl get pod -l type=postgresql
POD                       IP           CONTAINER(S)   IMAGE(S)                      HOST                  LABELS                        STATUS    CREATED     MESSAGE
postgresql-master-y868v   172.17.0.2                                                127.0.0.1/127.0.0.1   role=master,type=postgresql   Running   3 minutes   
                                    postgresql     genericsites/postgresql:0.1                                                       Running   3 minutes   

vincent@vincent-netbook-e11:~/Documents/Develop/Web/websites-system/installation/kubernetes$ ./kubectl label pod postgresql-master-y868v test=foo
Error from server: Pod "postgresql-master-y868v" is invalid: spec: invalid value '* pod definition in JSON *': may not update fields other than container.image

UPDATE: It does seem to be possible to change the label of a service, which might let me hack my way around the problem, but that would of course not be optimal
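One hedged sketch of that workaround: instead of relabelling the pod, repoint the postgresql-master service's selector at the promoted pod. The selector label value below is a hypothetical example, and kubectl patch may require a newer client than the one shown in this question:

```shell
# Workaround sketch: switch the service's selector to the promoted pod
# rather than relabelling the pod itself. The selector value is a
# hypothetical example, not a real pod from this cluster.
PATCH='{"spec":{"selector":{"type":"postgresql","name":"promoted-slave"}}}'
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch is well-formed JSON"

# Against a live cluster (not run here):
#   ./kubectl patch service postgresql-master -p "$PATCH"
```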

  • Someone (from Google, if I'm correct) replied that you should be able to reach the Kubernetes API via the DNS hostname kubernetes and that he'd look into the second issue later, but his answer was deleted. Although the service doesn't seem reachable through the kubernetes hostname, I did notice the Kubernetes service environment variables ($KUBERNETES_SERVICE_HOST and _PORT), which do seem to work. Thanks for the pointer! – Vincent den Boer Jul 25 '15 at 07:03
  • I'd still appreciate being able to update a pod's label, as advertised, though :) – Vincent den Boer Jul 25 '15 at 07:13
  • > Can I connect from a pod to the Kubernetes API to get notified when the master dies and see which pod has to become the new master? http://kubernetes.io/v1.0/docs/user-guide/accessing-the-cluster.html#programmatic-access-to-the-api –  Jul 27 '15 at 17:23

1 Answer


@vincent It's possible that the server's version is too old. Please run 'kubectl version' to verify that. If it is too old, try upgrading the server.
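A sketch of that check, assuming ./kubectl in the working directory as in the question; note that --overwrite is needed once the role label already exists on the pod:

```shell
# Compare the client and server versions; an old server rejects label
# updates on pods with the "may not update fields other than
# container.image" error seen above. Against a live cluster (not run here):
#   ./kubectl version
# Once the server is upgraded, relabelling should work:
#   ./kubectl label pod postgresql-master-y868v role=master --overwrite
MSG="if Server Version is older than Client Version, upgrade the server"
echo "$MSG"
```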

janetkuo