
I have a pod manifest (test-api-pod.yml) that looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-api
  labels:
    app: web
spec:
  containers:
    - name: test-api-container
      image: cmgvieira/test-api:latest
  imagePullSecrets:
    - name: regsecret
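
For reference, the same manifest with the container port declared explicitly would look like this (a sketch; port 80 is an assumption based on the port-forward mapping used below):

apiVersion: v1
kind: Pod
metadata:
  name: test-api
  labels:
    app: web
spec:
  containers:
    - name: test-api-container
      image: cmgvieira/test-api:latest
      ports:
        - containerPort: 80  # assumption: the app listens on 80
  imagePullSecrets:
    - name: regsecret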

It's running properly:

$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
test-api      1/1       Running   0          10d

I can access it by running kubectl port-forward test-api 3000:80 and then wget localhost:3000.
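
Spelled out, that check looks like this (two separate shells; the port-forward maps local port 3000 to container port 80):

$ kubectl port-forward test-api 3000:80    # shell 1: stays in the foreground
$ wget localhost:3000                      # shell 2: fetches through the tunnel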

When I expose it using kubectl expose -f test-api-pod.yml --port=80 --target-port=80 --type=LoadBalancer, the service gets created successfully:

$ kubectl describe service test-api
Name:                   test-api
Namespace:              default
Labels:                 app=web
Selector:               app=web
Type:                   LoadBalancer
IP:                     100.XXX.XXX.XXX
LoadBalancer Ingress:   XYZ-ABC.us-east-1.elb.amazonaws.com
Port:                   <unset> 80/TCP
NodePort:               <unset> 32310/TCP
Endpoints:              <none>
Session Affinity:       None
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  7m            7m              1       {service-controller }                   Normal          CreatingLoadBalancer    Creating load balancer
  7m            7m              1       {service-controller }                   Normal          CreatedLoadBalancer     Created load balancer

But I can't access it with wget XYZ-ABC.us-east-1.elb.amazonaws.com; the request times out.
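
For what it's worth, Endpoints: <none> in the describe output above means the service found no ready pods matching its selector; that can be checked directly (standard kubectl commands, output omitted):

$ kubectl get endpoints test-api
$ kubectl get pods -l app=web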

To rule out a problem with the hostname, I added XYZ-ABC.us-east-1.elb.amazonaws.com to my local hosts file, making it resolve to 127.0.0.1; resolved that way, the server responds just fine.
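
That is, with this single line in the hosts file, wget against the ELB hostname gets a response:

127.0.0.1    XYZ-ABC.us-east-1.elb.amazonaws.com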

Does anyone know what could be causing this issue?

Also note that when I use the regular nginx image instead of my own app server, both port-forward and expose work just fine.

FWIW:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:52:34Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
– Gui Prá
  • I don't have enough reputation to comment, otherwise I would. Have you checked to make sure this isn't a security group issue? What do your security group rules look like on the ELB and the K8s cluster nodes? Another way you can troubleshoot is that LoadBalancer services also expose a NodePort, so you might try curling that port on a node and seeing if you can get a response (see the sketch after these comments). If that succeeds and your ELB times out, it is most likely an SG issue between your ELB and the K8s cluster. – erstaples Mar 03 '18 at 00:07
  • Hi, Eric! Thanks for your comment. I gave up on Kubernetes months ago, and honestly I would only think of coming back to it some 5 years from now. In my opinion it has a very, very long way to go to be even barely usable. It has a lot of potential, but in my experience right now it's just a way to set up an unstable cluster, and the only way to make it reasonably fail-safe is to allocate a lot of nodes, which is very wasteful. This has nothing to do with the question; I just wanted to leave this here. – Gui Prá Mar 03 '18 at 04:13
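
Following erstaples's suggestion, the NodePort check would look something like this (a sketch; the node address is a placeholder, and 32310 is the NodePort from the service description above):

$ curl http://<node-public-ip>:32310    # if this responds while the ELB hostname times out, suspect the security groups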
