
As per Link, we can create Pods with multiple networks. If the application opens a port on a non-default network (eth2, eth3, etc.), how can we expose it as a Service? In the Service YAML, I do not find any way to expose anything other than the default network.

Nasir

1 Answer


Multus allows you to attach additional network devices to your Pods, though note that these interfaces are not part of your Kubernetes cluster SDN.

With Multus-based devices, you allocate IPs to your Pods yourself, using DHCP, static addresses, ... whatever suits you, bearing in mind those addresses are independent from your Kubernetes SDN.
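For context, such an attachment usually goes through a NetworkAttachmentDefinition referenced from a Pod annotation. A minimal sketch, assuming a macvlan attachment on the node interface eth1 with a statically assigned address (the names macvlan-conf and multus-pod, the namespace ns, and the 10.0.0.0/24 subnet are placeholders):

---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
  namespace: ns
spec:
  # CNI configuration: macvlan on the node's eth1, static IPAM so the
  # Pod annotation below can pick the address (assumed setup)
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "static" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-pod
  namespace: ns
  annotations:
    # attach the secondary interface and assign 10.0.0.1/24 to it
    k8s.v1.cni.cncf.io/networks: |
      [{ "name": "macvlan-conf", "ips": ["10.0.0.1/24"] }]
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80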

If other containers in that cluster need to reach those endpoints, then their traffic would have to leave your SDN, going through their usual default gateway, which in turn should have some knowledge of (a route to) your Multus addresses subnet(s).

However, you may still create Services pointing at addresses outside your SDN. This is done by creating an Endpoints object alongside your Service, such as:

---
apiVersion: v1
kind: Endpoints
metadata:
  # must bear the same name as the Service it backs
  name: svc-out-sdn
  namespace: ns
subsets:
- addresses:
  # Multus-side Pod address, outside the SDN
  - ip: 10.0.0.1
  ports:
  - name: tcp-80
    port: 80
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: svc-out-sdn
  namespace: ns
spec:
  # no selector: endpoints are managed manually, by the object above;
  # headless (clusterIP: None): DNS resolves straight to the endpoint addresses
  clusterIP: None
  ports:
  - name: tcp-80
    port: 80
SYN
  • If we are explicitly specifying Endpoints, then if one Pod goes down, the Service will still return its IP until we manually update it, correct? – Nasir Aug 17 '21 at 02:24
  • Also, when the Service is created, will it still have a clusterIP on the main interface? – Nasir Aug 17 '21 at 02:35
  • 1
    1/ correct. up to client to iterate over IPs returned by DNS resolution, should one of them fail. 2/ it could have, and then your nodes firewall would have redirected that Service IP to the actual backends addresses: but here, `spec.clusterIP=None` ensures that no Service IP would be allocated => internal DNS would return with your endpoint backend addresses instead. – SYN Aug 17 '21 at 05:58
  • I do not think this can be the solution, though it is still a possible workaround. It cannot be the solution because Pods can go up/down, and the application would need to handle errors when the downed IPs are used. – Nasir Aug 17 '21 at 07:37
  • Would you consider setting up a load balancer in between? Maybe in Kubernetes as well? Some HAProxy that does TCP health checks (or whatever; HAProxy checks are pretty modular) to your actual endpoint addresses, and make your Service/clients point to that LB (see the sketch below)? – SYN Aug 17 '21 at 09:34
  • An LB is what I have now, but the issue is that it needs manual intervention when new nodes are added, so auto-scaling is the problem. – Nasir Aug 20 '21 at 01:52
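To make the LB suggestion from the comments concrete: a minimal sketch of an in-cluster HAProxy running TCP health checks against the Multus addresses, fronted by a regular Service. Everything here is an assumption for illustration (the multus-lb names, the ns namespace, the haproxy:2.4 image, and the 10.0.0.1/10.0.0.2 backend addresses), and the backend list is still static, so it does not by itself solve the auto-scaling issue Nasir raises:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: multus-lb-config
  namespace: ns
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  30s
      timeout server  30s
    frontend fe_tcp80
      bind *:80
      default_backend be_multus
    backend be_multus
      # "check" enables health checks; in tcp mode these are plain
      # TCP connect checks, so dead endpoints leave the rotation
      server pod-a 10.0.0.1:80 check
      server pod-b 10.0.0.2:80 check
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multus-lb
  namespace: ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multus-lb
  template:
    metadata:
      labels:
        app: multus-lb
    spec:
      containers:
      - name: haproxy
        image: haproxy:2.4
        ports:
        - containerPort: 80
        volumeMounts:
        # the official haproxy image reads /usr/local/etc/haproxy/haproxy.cfg
        - name: config
          mountPath: /usr/local/etc/haproxy
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: multus-lb-config
---
apiVersion: v1
kind: Service
metadata:
  name: multus-lb
  namespace: ns
spec:
  selector:
    app: multus-lb
  ports:
  - name: tcp-80
    port: 80
    targetPort: 80

As the answer notes, the HAProxy Pod's traffic to those backend addresses still has to be routed out of the SDN through the nodes' usual default gateway.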