
I intend to deploy a k8s + Rancher cluster on my local network, but my environment has several VLANs, with pfSense acting as a firewall and router between those VLANs.

My cluster runs on XCP-ng as the hypervisor, and I will configure which VLANs it should pass on to the cluster nodes.

I intend to have some services in different VLANs, since I have VLANs for development, DMZ, production, management, etc. Given that, do I have to take a different approach when deploying K8s + Rancher because of my environment?

To deploy a cluster whose pods work across multiple VLANs, must the cluster nodes have multiple NICs, each on one of the VLANs I intend to use?

For example, if my cluster has 6 nodes, 3 masters and 3 workers, must they all be in the same VLAN, or can they be in different VLANs as long as they can communicate with each other?

If I want to deploy a pod on the development VLAN, and my cluster resides on the management VLAN, would that be possible?

Thanks in advance for your help.

user562397
    Typically, you wouldn't implement that type of access restriction external to k8s. K8s itself has many options for network policies and ACLs, depending on which network subsystem you choose. – jordanm Mar 02 '20 at 21:53

1 Answer


If I want to deploy a pod on the development VLAN, and my cluster resides on the management VLAN, would that be possible?

This is not possible; Kubernetes clusters have their own internal network, and this network is completely segregated from your local network.

When deploying your Kubernetes cluster (it doesn't matter if it's Rancher or any other on-premises Kubernetes distribution), you can define which CIDRs your cluster will use.
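As a minimal sketch of that step, assuming a kubeadm-based install (the CIDR values below are only illustrative, adjust them to your own addressing plan):

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12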

You may be thinking: if Kubernetes has its own network, how can I talk to the applications I deployed in my cluster?

You can expose your resources by using a Service or an Ingress. For example, when you create a Service with type: LoadBalancer, it will be allocated an external or public IP address (endpoint) that can be accessed from your internal network.
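For illustration, a minimal Service manifest of type LoadBalancer could look like the sketch below (the name, selector and port are placeholders for your own application):

apiVersion: v1
kind: Service
metadata:
  name: custom-nginx-svc      # placeholder name
spec:
  type: LoadBalancer          # asks for an external IP from the load balancer
  selector:
    app: custom-nginx         # must match the labels on your pods
  ports:
  - port: 80                  # port exposed on the external IP
    targetPort: 80            # port the pods listen on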

$ kubectl get svc
NAME                                      TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                      AGE
custom-nginx-svc                          LoadBalancer   10.0.10.18   104.155.87.232   80:31549/TCP                 11d
echo-svc                                  LoadBalancer   10.0.10.14   23.251.138.185   80:30668/TCP                 11d
kubernetes                                ClusterIP      10.0.0.1     <none>           443/TCP                      11d
nginx-ing-nginx-ingress-controller        NodePort       10.0.9.184   <none>           80:31745/TCP,443:31748/TCP   25h
nginx-ing-nginx-ingress-default-backend   ClusterIP      10.0.1.169   <none>           80/TCP                       25h

As can be seen in the output above, there are two services with external IPs defined.

In your scenario you need these External IPs to be IPs from your local network. This can be achieved using MetalLB.

In MetalLB you can define which IPs from your local network will be used. For example, the following configuration gives MetalLB control over IPs from 192.168.1.240 to 192.168.1.250:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

This ties MetalLB to only one range, and that's not what you need. So please take a look at this article, where it's explained how you can create IPPools and use them.
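As a rough sketch of that idea (pool names and address ranges are placeholders), you could define one address pool per VLAN subnet in the same ConfigMap format and then pin a Service to a specific pool with the metallb.universe.tf/address-pool annotation:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: development            # placeholder pool for the development VLAN
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250
    - name: production             # placeholder pool for the production VLAN
      protocol: layer2
      addresses:
      - 192.168.20.240-192.168.20.250
---
apiVersion: v1
kind: Service
metadata:
  name: custom-nginx-svc
  annotations:
    metallb.universe.tf/address-pool: development   # request an IP from this pool
spec:
  type: LoadBalancer
  selector:
    app: custom-nginx
  ports:
  - port: 80
    targetPort: 80

Note that in layer2 mode the nodes still need to be reachable on the subnets you hand to MetalLB, so the VLAN trunking on the XCP-ng/pfSense side has to allow that.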

Mark Watney