
I have a small Kubernetes cluster that I set up using kubeadm. My servers are connected via a VLAN that my provider lets me add them to.

Joining the VLAN adds a network adapter (ens6), on which I created a virtual adapter (veth0) that assigns each server an IP on the VLAN.

The relevant netplan configuration looks like this:

vlans:
  veth0:
    id: 0
    link: ens6
    addresses: [10.96.0.1/24]

Server 01 has the IP 10.96.0.1 and Server 02 has the IP 10.96.0.2.
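
To confirm the VLAN interface came up as configured, the standard iproute2 tools are enough (a quick sketch; the interface names are the ones from the config above):

Show the VLAN id and the parent link (ens6):

# ip -d link show veth0

Show the address assigned on the VLAN:

# ip addr show veth0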

Inspecting traffic with bmon, I see some traffic on the veth0 interface between the servers in the cluster.

            RX           TX
S01      4.77GiB     60.28GiB
S02     59.70GiB      5.48GiB
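
To cross-check what bmon reports, the kernel's own per-interface byte counters can be read directly (a sketch using iproute2):

# ip -s link show veth0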

Looking at my public interface (ens3), I see a suspiciously high amount of traffic for a cluster that has hardly any applications deployed on it:

            RX           TX
S01     46.60GiB    304.84GiB
S02    309.69GiB     40.86GiB

I bootstrapped the cluster with this command:

# kubeadm init --pod-network-cidr 10.98.0.0/16 --apiserver-advertise-address=10.96.0.1 --apiserver-cert-extra-sans=<public ip for remote kubeadm>

The other server joined using a VLAN IP as well.
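
To see which addresses the cluster actually registered, standard kubectl queries help (a sketch; run from any machine with cluster access):

Show each node's InternalIP, which node-to-node kubelet traffic should use:

# kubectl get nodes -o wide

Show the address the API server endpoint advertises:

# kubectl get endpoints kubernetes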

Inspecting ens3 with nethogs on both servers, I am seeing external traffic from kubelet and kube-apiserver.
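
To see which peers those processes are actually connected to, their established sockets can be listed with ss (a sketch; ss is part of iproute2):

# ss -tnp | grep -E 'kubelet|kube-apiserver'

If the peer addresses shown are public IPs rather than 10.96.0.x, the nodes are not using the VLAN for that traffic.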

How can I verify that both nodes are talking over the VLAN, how can I best debug where this excess network traffic is coming from, and how can I fully restrict cluster traffic to the VLAN?

1 Answer

You can verify whether the incoming traffic carries VLAN tags by using tcpdump with the -e flag and the vlan filter, which shows the details of the VLAN header. Note that the tags are only visible on the parent link (ens6 in your setup), so replace eno1 below with that interface:

# tcpdump -i eno1 -nn -e vlan

to capture the traffic live, or

# tcpdump -i eno1 -nn -e vlan -w /tmp/vlan.pcap

to write the capture to a file.
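
To narrow a capture down to the inter-node traffic in question, the same invocation can be combined with ordinary pcap host filters (a sketch, using the node IPs from the question; the public IP is a placeholder to fill in):

Tagged frames exchanged between the two nodes on the parent link:

# tcpdump -i ens6 -nn -e vlan and host 10.96.0.2

Traffic leaving over the public interface toward the other node:

# tcpdump -i ens3 -nn host <other node public ip>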