
Problem

I have set up a two-node bare-metal Kubernetes cluster configured with Weave Net and MetalLB. I would like services hosted on this cluster to discover and interact with UPnP devices on my home network. I believe that for this to work, the UPnP discovery packets need to be re-multicast from the overlay onto my home network. What is the right way to configure re-multicasting between a virtual Weave Net network and a local network?

My network

  1. My home network is 192.168.1.0/24; both the master and the worker node are on it.
  2. Kubernetes deploys pods using the default Weave Net setup, which places all pods somewhere in the 10.32.0.0/12 overlay network.
  3. I can deploy services of type LoadBalancer; MetalLB assigns them an IP from 192.168.2.192/26.

What I've tried

From any computer on my home network I can run a test discovery script that finds my UPnP devices via multicast (239.255.255.250:1900). Once I deploy the same thing to the cluster (like this), the UPnP devices are no longer detected. Other pods in the cluster can see the discovery packets, but machines attached directly to my home network cannot.
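
For context, the discovery is just a standard SSDP M-SEARCH against 239.255.255.250:1900, along these lines (a minimal sketch, not the exact linked script; the MX and timeout values are illustrative):

import socket

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900

# Standard SSDP search asking every UPnP device to answer within 2 seconds
msearch = (
    "M-SEARCH * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.settimeout(3)
sock.sendto(msearch, (SSDP_ADDR, SSDP_PORT))

# Each device answers with a unicast HTTP-over-UDP response to our source port
try:
    while True:
        data, addr = sock.recvfrom(65507)
        print(addr, data.split(b"\r\n")[0])
except socket.timeout:
    pass

Run from a host on 192.168.1.0/24 this prints one response line per device; run from a pod it prints nothing.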

I believe the solution will involve re-broadcasting the UPnP packets from the Weave Net overlay onto my home network and reverse-proxying the responses back, but I don't know how to do something like that with Weave Net. How can I configure a service/deployment/pod/network so that it interacts with my UPnP devices the way my test script does when it uses the host's network?

  • Could you share the configuration of Pods or Deployments and Services related to them? Also, could you share the scripts you are using for multicast requests? – Artem Golenyaev Oct 09 '18 at 10:18
  • @ArtemGolenyaev I added links to both the suggested scripts. I also confirmed that UPnP is working _inside_ the cluster, but the packets are stopping between weavenet and my home network. – rileymcdowell Oct 10 '18 at 00:55

1 Answer


Problem: the UPnP UDP broadcasts come from the pod's internal address and are dropped by the node before they egress onto your home network.

i.e. the packet looks like IP 10.32.0.x.45196 > 239.255.255.250.1900: UDP, length 215

According to the docs at https://kubernetes.io/docs/tutorials/services/source-ip/

type: LoadBalancer - will automatically source NAT to the node IP.

type: NodePort - will automatically source NAT to the node IP.

Using a NodePort Service together with hostNetwork binds the pod's ports directly to the node IP, so the UDP broadcast will come from a legal (routable) source address.

Limitations to this setup:

  • Only one instance of your UPnP pod can run at a time, because the pod maps directly onto the host network. For a home network this should suffice.
  • NodePorts can only expose unprivileged ports in the 30000 - 32767 range.

Solution:

See terrarium-service-udp.yaml for NodePort allocation.

See terrarium-deployment.yaml for hostNetwork declaration.

terrarium-service-udp.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/allow-shared-ip: terrarium
  creationTimestamp: null
  labels:
    io.kompose.service: terrarium
  name: terrarium-udp
spec:
  ports:
  - name: '32767'
    port: 32767
    protocol: UDP
    targetPort: 54321
  - name: '31900'
    port: 31900
    protocol: UDP
    targetPort: 1900
  selector:
    io.kompose.service: terrarium
  type: NodePort

terrarium-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  creationTimestamp: null
  labels:
    io.kompose.service: terrarium
  name: terrarium
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: terrarium
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: terrarium
    spec:
      # Bind the container's sockets directly in the node's network namespace
      hostNetwork: true
      containers:
      - image: docker.lan/terrarium
        name: terrarium
        ports:
        - containerPort: 80
        - containerPort: 32767
          protocol: UDP
        - containerPort: 1900
          protocol: UDP
        resources: {}
      restartPolicy: Always

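With hostNetwork: true the container's sockets are bound directly in the node's network namespace, so the application only has to send its M-SEARCH from the port the Service targets. A minimal sketch of that application side, assuming it uses UDP port 54321 (the targetPort above) and also listens on 1900 for NOTIFY announcements; the actual terrarium image may do this differently:

import socket
import struct

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
).encode()

# With hostNetwork: true this socket lives on the node itself, so the
# M-SEARCH below leaves with the node's 192.168.1.x address instead of a
# 10.32.0.x pod address. Port 54321 is the targetPort from the Service.
search = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
search.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
search.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
search.bind(("", 54321))
search.sendto(MSEARCH, (SSDP_ADDR, SSDP_PORT))

# Optionally join the SSDP group on 1900 (the second Service port) to hear
# unsolicited NOTIFY announcements from devices on the home network.
notify = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
notify.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
notify.bind(("", SSDP_PORT))
mreq = struct.pack("4sl", socket.inet_aton(SSDP_ADDR), socket.INADDR_ANY)
notify.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Unicast replies to the M-SEARCH come straight back to node_ip:54321.
search.settimeout(3)
try:
    while True:
        data, addr = search.recvfrom(65507)
        print("response from", addr, data.split(b"\r\n")[0])
except socket.timeout:
    pass

Because the socket is bound on the node, the replies from devices on 192.168.1.0/24 reach the pod directly, which is exactly what the test script sees when run on the host network.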
I have a working configuration for minidlna. If you need it for comparison, let me know and I will upload it to GitHub.