
I've created a Kubernetes v1.12.0 cluster on GCP consisting of 3 controller VMs and 3 node VMs, all running Ubuntu 18.04 LTS (kernel 4.15.0-1025-gcp).

I'm using Weave Net v2.5.0 for networking, and everything works fine except that no load balancer is created when I expose a simple web server. I suspect the problem is an omission in my GCP configuration rather than a bug in Kubernetes.

I've placed all the VMs in an unmanaged GCP instance group named kubernetes, like this:

gcloud compute instance-groups unmanaged create kubernetes
gcloud compute instance-groups unmanaged add-instances kubernetes --instances gke-controller-0,gke-controller-1,gke-controller-2,gke-worker-0,gke-worker-1,gke-worker-2

The network is named kubernetes-the-hard-way and the subnet is named kubernetes. I deployed nginx and exposed it like this:

kubectl run nginx --image=nginx --port=80 
kubectl expose deploy nginx --port=80 --name=nginx --type=LoadBalancer
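For reference, the same Service can be written declaratively; this is a minimal manifest equivalent to the expose command above, with field values matching the service output shown in this question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx        # matches the pods created by `kubectl run nginx`
  ports:
  - port: 80
    targetPort: 80
```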

The External IP stays in Pending state:

kubectl get svc nginx
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.32.0.198   <pending>     80:31756/TCP   33m
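When the external IP stays in Pending, a useful first step is to check the kube-controller-manager logs, since its service controller is the component that calls the GCP API to provision the forwarding rule. In a kubernetes-the-hard-way style cluster it typically runs as a systemd unit; the unit name below is an assumption and may differ in your setup:

```shell
# On a controller node: look for cloud-provider / service-controller errors
# (unit name assumed; adjust to your systemd setup)
sudo journalctl -u kube-controller-manager | grep -i -E 'loadbalancer|gce|cloud'

# Confirm the controller-manager was actually started with the GCE cloud provider flags
ps aux | grep '[k]ube-controller-manager' | tr ' ' '\n' | grep -- --cloud
```

If the controller-manager is not running with `--cloud-provider=gce` (and a matching `--cloud-config`), LoadBalancer services will stay in Pending forever with no events.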

kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   run=nginx
Annotations:              <none>
Selector:                 run=nginx
Type:                     LoadBalancer
IP:                       10.32.0.198
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31756/TCP
Endpoints:                10.200.1.15:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "39375"
  selfLink: /api/v1/namespaces/default/services/nginx
spec:
  clusterIP: 10.32.0.198
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31756
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

kubectl get ep
NAME         ENDPOINTS                                            AGE
kubernetes   10.240.0.10:6443,10.240.0.11:6443,10.240.0.12:6443   23h
nginx        10.200.1.15:80                                       42m

I've checked the event log and all the pod logs, and no issues are reported.

The kube-apiserver.service unit contains, among other flags:

--cloud-provider=gce \
--cloud-config=/var/lib/kubernetes/gce.conf \
--cloud-provider-gce-lb-src-cidrs=35.204.0.0/16,107.178.0.0/16 \

/var/lib/kubernetes/gce.conf:

[global] 
token-url = nil 
project-id = first-outlet-221910 
network = kubernetes-the-hard-way 
subnetwork = kubernetes 
node-instance-prefix = gke- 
node-tags = controller, kubernetes-the-hard-way, worker

Could someone please explain the correct value for token-url if my entry is incorrect? Also, have I made any other errors or omissions that are causing this problem?

TIA

1 Answer


Generally, when you want to expose a web resource outside the cluster via an external IP, Kubernetes provides the Ingress mechanism to establish HTTP and HTTPS routes to your internal services.

However, when the cluster runs on GCP, an HTTP(S) Load Balancer is created automatically once an Ingress resource has been implemented successfully, and it takes care of routing all external HTTP/S traffic to the backing Kubernetes services. For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
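Note that the GCE ingress controller expects the backend Service to be of type NodePort (or LoadBalancer) so the Google load balancer can reach it through the nodes. A minimal sketch of such a backing Service, assuming a Deployment whose pods carry the label app: web and listen on 8080 (both names are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort      # required for the GCE ingress controller to attach backends
  selector:
    app: web          # assumed pod label
  ports:
  - port: 8080
    targetPort: 8080
```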

You can then check the external IP assigned to the Ingress resource:

kubectl get ingress basic-ingress

See this link for more information about exposing a web application through Ingress on GCP.

And here is an example gce.conf file.

Nick_Kh
  • How can you debug if the Ingress is not getting associated with its external IP? I've been waiting a few hours and the load balancer is not getting created – perrohunter Sep 24 '19 at 17:17