28

Currently I'm working on a small hobby project which I'll make open source once it's ready. This service is running on Google Container Engine. I chose GCE to avoid configuration hassle, to keep costs affordable, and to learn new stuff.

My pods are running fine and I created a service with type LoadBalancer to expose the service on port 80 and 443. This works perfectly.

However, I discovered that for each LoadBalancer service, a new Google Compute Engine load balancer is created. This load balancer is pretty expensive and really overdone for a hobby project running on a single instance.

To cut the costs I'm looking for a way to expose the ports without the load balancer.

What I've tried so far:

Is there a way to expose port 80 and 443 for a single instance on Google Container Engine without a load balancer?

Ruben Ernst
  • 383
  • 3
  • 6

5 Answers

12

Yep, through externalIPs on the service. Example service I've used:

apiVersion: v1
kind: Service
metadata:
  name: bind
  labels:
    app: bind
    version: 3.0.0
spec:
  ports:
    - port: 53
      protocol: UDP
  selector:
    app: bind
    version: 3.0.0
  externalIPs:
    - a.b.c.d
    - a.b.c.e

Please be aware that the IPs listed in the config file must be the internal IPs of your nodes on GCE.
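If you're not sure which IP that is, it's the INTERNAL-IP column for your node(s) — a quick sketch of two ways to look it up (instance names are placeholders):

```shell
# List cluster nodes with their internal and external IPs.
# The value to put under externalIPs is the INTERNAL-IP column.
kubectl get nodes -o wide

# Equivalent view from the gcloud side:
gcloud compute instances list \
  --format="table(name, networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)"
```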

ConnorJC
  • 921
  • 1
  • 7
  • 19
  • Thanks! But I think I missed something. The service is deployed but unreachable from the internet. I set the correct firewall rules. The service is displaying the correct `externalIp` – Ruben Ernst Sep 16 '16 at 15:28
  • Sorry for late reply, forgot that I spent time on the exact same issue. The IPs listed need to be the **internal** IP, not external (At least on GCE). – ConnorJC Sep 18 '16 at 20:25
  • Thanks, that was the solution! Unfortunately I'm not allowed to upvote yet... I dropped this comment to let you know that this answer combined with the comment above (which was the key) solved my issue! – Ruben Ernst Sep 19 '16 at 06:14
  • 2
    Would you (or @RubenErnst) mind expanding on the answer a bit? In particular, "the IPs listed on GCE must be the internal IP." Which IP do you mean? Are you able to get this working with a static IP assigned to your single node cluster? – Brett Apr 18 '17 at 00:02
  • @Brett: Sorry for my late response. Is your question already answered in the meantime? – Ruben Ernst Oct 19 '17 at 08:28
  • @RubenErnst Thank you for following up. I wound up needing a LoadBalancer anyway so I didn't pursue it much. – Brett Nov 04 '17 at 17:33
  • How do I know the `internalIP` at this stage? Is it node IP or pod IP? If it's node then what if I have multiple nodes? If it's pod IP, is there a chance my pod IP will change in time? – NeverEndingQueue Jan 29 '19 at 09:52
  • It is internal IP of your instance in GCE under https://console.cloud.google.com/compute/instances – honzajde Apr 27 '19 at 11:05
  • 1
    I agree with @Brett. This is not a full-fledged solution, yet. The internal node IPs are ephemeral. Eventually, the website/webservice will go down because the service will point to a node IP that no longer exists. Until this can be made to work with a *true, static* external IP, it is a ticking bomb. – Cameron Hudson Dec 27 '19 at 02:49
4

In addition to ConnorJC's great and working solution: The same solution is also described in this question: Kubernetes - can I avoid using the GCE Load Balancer to reduce cost?

The "internalIp" refers to the compute instance's (a.k.a. the node's) internal ip (as seen on Google Cloud Platform -> Google Compute Engine -> VM Instances)

This comment gives a hint at why the internal and not the external ip should be configured.

Furthermore, after having configured the service for ports 80 and 443, I had to create a firewall rule allowing traffic to my instance node:

gcloud compute firewall-rules create your-name-for-this-fw-rule --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0

After this setup, I could access my service through http(s)://externalIp.
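A quick end-to-end check, assuming the firewall rule name from above and a hypothetical external IP (substitute your node's actual external address):

```shell
# Confirm the firewall rule exists and allows 80/443.
gcloud compute firewall-rules describe your-name-for-this-fw-rule

# Hit the node's *external* IP. The service is bound to the internal IP,
# but GCE's one-to-one NAT maps the external address onto it.
curl -I http://203.0.113.10/
```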

derMikey
  • 41
  • 2
2

If you only have exactly one pod, you can use hostNetwork: true to achieve this:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
      - name: caddy
        image: your_image
        env:
        - name: STATIC_BACKEND # example env in my custom image
          value: $(STATIC_SERVICE_HOST):80

Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected as environment variables.
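For instance, those injected variables can be inspected from inside the running pod (the `static` service name comes from the example above; the IP will vary per cluster):

```shell
# Print the Kubernetes-injected service discovery variables.
# With hostNetwork: true, DNS names like "static" won't resolve,
# but these env vars still carry the cluster IPs, e.g.
# STATIC_SERVICE_HOST=<cluster IP of the "static" service>
kubectl exec deploy/caddy -- env | grep SERVICE
```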

This solution is better than using a service's externalIP, as it bypasses kube-proxy and you will receive the correct source IP.

willwill
  • 141
  • 4
2

To synthesize @ConnorJC's and @derMikey's answers into exactly what worked for me:

Given a cluster pool running on the Compute Engine Instance:

# gcloud compute instances list
gce vm name: gke-my-app-cluster-pool-blah
internal ip: 10.123.0.1
external ip: 34.56.7.001 # will be publicly exposed

I made the service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app-service
spec:
  clusterIP: 10.22.222.222
  externalIPs:
  - 10.123.0.1 # the instance internal ip
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-app
  type: ClusterIP

and then opened the firewall for all(?) IPs in the project:

gcloud compute firewall-rules create open-my-app --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0

and then my-app was accessible via the GCE instance public IP 34.56.7.001 (not the cluster IP).
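As the comments on the accepted answer point out, the node's internal IP is ephemeral; if the node is recreated, the service has to be re-pointed. A hypothetical one-liner to do that (service name from the example above, assumes a single-node cluster):

```shell
# Re-read the node's current internal IP and patch it into the service.
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
kubectl patch service my-app-service \
  -p "{\"spec\":{\"externalIPs\":[\"$NODE_IP\"]}}"
```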

micimize
  • 121
  • 2
0

I prefer not to use the cloud load balancers, until necessary, because of cost and vendor lock-in.

Instead I use this: https://kubernetes.github.io/ingress-nginx/deploy/

It's a pod that runs a load balancer for you. That page has GKE-specific installation notes.

Michael Cole
  • 452
  • 4
  • 13
  • 1
    I have some bad news for you. `nginx-ingress` creates a load balancer by default when you install it. I'm here because I did that, and want to cut cost. – Cameron Hudson Dec 27 '19 at 02:46