
Let's say I've deployed the following manifest to my cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        use_db: "true"
        backend: "true"
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: my-app
        image: <...>
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        - containerPort: <...>
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: my-app

I've enabled the network plugin with network policy support, and it's working well. Now I want to set up some network policies. I've found many examples of how to manage traffic between pods, how to allow external traffic, and how to allow traffic from all sources, but I can't work out what to do in my case. This is what I want:

  • Deny all traffic between pods inside the cluster by default (I know how to do this).
  • Allow external traffic from some foreign subnet to port 80 on the pods labeled backend (but not from internal sources).
  • Allow pods with the label use_db to exchange traffic with an external database (I know its DNS name and port).

Can somebody please give an example of a network policy YAML file for this case?

1 Answer


If you leave the spec.podSelector field empty, the network policy matches all pods in the namespace, blocking all traffic between pods by default. In that case, you must explicitly create network policies whitelisting every communication path between the pods.

You can enable a policy like this by applying the following manifest in your Kubernetes cluster (it comes from the Kubernetes documentation):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
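
Note that this policy denies only ingress; egress from the pods stays open, which the rest of this answer relies on. If you also wanted to deny all outgoing traffic by default, a minimal sketch following the same pattern would simply add Egress to policyTypes (keep in mind the database example below assumes egress stays open):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}   # empty selector: matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress          # with this line, outgoing traffic is denied by default too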

To allow external traffic from some external subnet to port 80 on the pods labeled backend: "true" (but not from internal pods), your NetworkPolicy might look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend
spec:
  podSelector:
    matchLabels:
      backend: "true"
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.4.0.0/16
    ports:
    - protocol: TCP
      port: 80

Briefly, what happens here: we allow connections to pods carrying the backend: "true" label, accept connections coming from 0.0.0.0/0 (you may change this to a narrower range), and block connections from 10.4.0.0/16 (this is my internal network; replace it with whatever yours is). We also allow connections to port 80 only.

To allow pods with the label use_db to exchange traffic with an external database (known by its DNS name and port), you just follow the same logic as in the previous example.

Set up this way, your pods can communicate with any server outside your Kubernetes cluster, because we are blocking only ingress, not egress. You just point your application at your DB server as you normally do.
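
For completeness: NetworkPolicies match IP ranges, not DNS names (see the comments below). If you later decide to lock down egress as well, a minimal sketch of an egress policy for the use_db pods could look like the following, assuming, purely for illustration, that the database currently resolves to 203.0.113.10 and listens on TCP port 5432; you would have to keep the ipBlock in sync with the DNS record yourself:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
spec:
  podSelector:
    matchLabels:
      use_db: "true"            # the pods from the question's Deployment template
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.10/32   # hypothetical resolved address of the database
    ports:
    - protocol: TCP
      port: 5432                # hypothetical database port
  - ports:                      # also allow DNS lookups, or the pods cannot resolve names
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53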

You can always refer to the Kubernetes network policy documentation for more details on this matter.

Mark Watney
  • The problem is that "I know the DNS name" is not the same as "I know the IP address". The IP address can change; the DNS name is static. – Роман Коптев Nov 13 '19 at 17:38
  • I've edited my answer with further explanation on this matter. If this is not what you need, please update your question with more details and practical examples. – Mark Watney Nov 14 '19 at 08:31
  • The problem is that if anybody can access my backend, they will try to hack me, and I pay for the traffic when somebody scans my server for phpMyAdmin folders and so on. But the backend is for my internal use only. I want it to communicate only with certain services whose DNS names I know. – Роман Коптев Nov 14 '19 at 14:03
  • It's not possible to handle DNS names within NetworkPolicies. This is out of the scope of Kubernetes, and you need to use a firewall for this purpose. If you are running Kubernetes on a cloud provider, you can use its firewall solution; if you are on bare metal, you can use iptables or any external firewall solution to prevent attacks. – Mark Watney Nov 14 '19 at 15:06
  • If this answer helped you, please don't forget to mark it as accepted and/or upvote it. Thank you. – Mark Watney Nov 20 '19 at 07:58