Let's say I've created a cluster with a manifest like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        use_db: "true"
        backend: "true"
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: my-app
          image: <...>
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
            - containerPort: <...>
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: my-app
I've enabled a network plugin that supports network policies, and the cluster is working well. Now I want to set up some network policies. I've found many examples of how to manage traffic between pods, how to allow external traffic, and how to allow traffic from all internal pods, but I can't work out how to handle my case. This is what I want:
- Deny traffic between all pods inside Kubernetes by default (I can do this).
- Allow external traffic from some external subnet to port 80 of the pods labeled backend (but not traffic from internal pods).
- Allow traffic exchange with an external database (I know its DNS name and port) for pods with the label use_db.
Please, can somebody give an example of the network policy YAML for this case?
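Here is my rough attempt, to show what I mean. The CIDRs and the database port below are placeholders, not my real values, and I know NetworkPolicy selects by ipBlock rather than DNS name, so I'm unsure how to express the database's DNS name here:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  # Empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-to-backend
spec:
  podSelector:
    matchLabels:
      backend: "true"
  policyTypes:
    - Ingress
  ingress:
    - from:
        # 203.0.113.0/24 is a placeholder for the external subnet
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
spec:
  podSelector:
    matchLabels:
      use_db: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        # Placeholder: the database's resolved IP, since ipBlock
        # cannot take a DNS name
        - ipBlock:
            cidr: 198.51.100.7/32
      ports:
        - protocol: TCP
          port: 5432   # placeholder for the database port

Is this the right direction, and is there a better way to handle the database's DNS name than hard-coding its resolved IP?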