
I'm trying to understand a bit more about Kubernetes networking, so I deployed a cluster in Google Cloud and checked the networking:

gcloud container clusters describe cluster0 | grep -i cidr

clusterIpv4Cidr: 10.20.0.0/14        # --cluster-cidr
nodeIpv4CidrSize: 24
servicesIpv4Cidr: 10.23.240.0/20     # --service-cluster-ip-range

So the first is the pod range:

First IP: 10.20.0.1
Last IP: 10.23.255.254

And the service range:

First IP: 10.23.240.1
Last IP: 10.23.255.254
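Those first/last addresses are easy to double-check with Python's standard ipaddress module (the CIDRs below are the ones from the gcloud output above):

```python
import ipaddress

pods = ipaddress.ip_network("10.20.0.0/14")        # clusterIpv4Cidr
services = ipaddress.ip_network("10.23.240.0/20")  # servicesIpv4Cidr

print(pods[1], pods[-2])          # 10.20.0.1 10.23.255.254
print(services[1], services[-2])  # 10.23.240.1 10.23.255.254

# The service range is entirely contained in the pod range:
print(services.subnet_of(pods))   # True
```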

Is it always like this that the pod range contains the service IP range? Are they using the same network layer?

rcgeorge23
DenCowboy
2 Answers


How the Kubernetes network is put together is a long story...

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own IP address so you do not need to explicitly create links between pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

Kubernetes uses both private and publicly accessible IP addresses. Public IP addresses are out of scope here.

Kubernetes uses a private pool of addresses to provide communication inside a cluster. Every pod and service has a private IP address. Services in Kubernetes are virtual: they are implemented with NAT, and iptables rules redirect traffic addressed to a service to the backing pods.
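As a rough sketch of what that NAT step does (all IPs and the backend list below are made up, and real kube-proxy programs kernel iptables rules rather than running code like this), traffic to a service's virtual cluster IP gets rewritten to one of the backing pod IPs:

```python
import random

# Hypothetical service cluster IP and backing pod endpoints
SERVICE = ("10.23.240.10", 80)
ENDPOINTS = [("10.20.1.5", 8080), ("10.20.2.7", 8080)]

def dnat(dst):
    """Toy model of the DNAT redirection kube-proxy sets up:
    traffic to the virtual service IP goes to a real pod."""
    if dst == SERVICE:
        return random.choice(ENDPOINTS)  # iptables picks a backend probabilistically
    return dst  # everything else passes through untouched

print(dnat(("10.23.240.10", 80)))  # one of the pod endpoints
print(dnat(("10.20.1.5", 8080)))   # direct pod-to-pod traffic is not rewritten
```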

Basic rules of communication inside the cluster:

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice versa) without NAT
  • the IP that a container sees itself as is the same IP that others see it as

Regarding your question, the official Kubernetes networking documentation states:

--service-cluster-ip-range ipNet -  A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.

So the service IP range must not overlap with the range used by pods.

I highly recommend watching a video about Kubernetes networking or reading an illustrated guide.

d0bry
  • I still don't get it. Isn't the pod IP address space virtual as well? Why not just use the same address space for services, instead of some weird ip-tables routing rules? – spinkus Jul 04 '21 at 13:40
clusterIpv4Cidr: 10.20.0.0/14        # IP range from which pod IPs are assigned
nodeIpv4CidrSize: 24                 # prefix size of the pod CIDR carved out for each node
servicesIpv4Cidr: 10.23.240.0/20     # IP range from which service cluster IPs are assigned

From a pod's point of view, all other pods and services are on the same subnet, so they are reachable directly; no gateway configuration or routing is involved. By contrast, if servicesIpv4Cidr were outside the clusterIpv4Cidr range, gateway configuration and routing would be required, which is a different setup.
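That "no routing needed" observation can be illustrated with the standard ipaddress module: a service IP from the GKE ranges above falls inside the pod CIDR, so from a pod's perspective it looks like a neighbour on the same network, while an address outside the cluster CIDR would need a gateway hop:

```python
import ipaddress

cluster = ipaddress.ip_network("10.20.0.0/14")  # clusterIpv4Cidr
svc_ip = ipaddress.ip_address("10.23.240.1")    # a service IP from servicesIpv4Cidr

# On-link check: is the destination inside the pod subnet?
print(svc_ip in cluster)  # True, so no gateway hop is needed

# An address just past the /14 boundary is off-link and would need routing:
print(ipaddress.ip_address("10.24.0.1") in cluster)  # False
```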

So I'm just guessing that the warning below means the service-cluster-ip-range must not be handed out to pods, but that it is fine for it to overlap with the clusterIpv4Cidr range. On GKE this appears to be intended.

"This must not overlap with any IP ranges assigned to nodes for pods."

Jenny D