
I have installed a K8S cluster using 3 VMs (1 master, 2 workers).

VM1: Eth0: IPv4-A1, Eth1: IPv4-B1, IPv6-C1

VM2: Eth0: IPv4-A2, Eth1: IPv4-B2, IPv6-C2

VM3: Eth0: IPv4-A3, Eth1: IPv4-B3, IPv6-C3

My K8S cluster is all IPv4 - pod IPs, service IPs, everything is IPv4 - and it works fine.

I followed https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example to deploy an example application and expose it through an Ingress using NGINX.
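Roughly, the deployment steps looked like the following (the file names are my assumption of what that example provides; verify against the linked repo):

    # Deploy the sample app and expose it through the NGINX Ingress controller
    # (file names assumed from the linked complete-example)
    kubectl apply -f cafe.yaml          # demo application (Deployments + Services)
    kubectl apply -f cafe-secret.yaml   # TLS secret referenced by the Ingress
    kubectl apply -f cafe-ingress.yaml  # Ingress resource routing to the app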

I'm able to reach the service using the IPv4 addresses (both A and B), but I'm not able to reach it using IPv6.

I then created a NodePort service to expose the ingress service, and now I see the following on the worker nodes.

netstat -anlp | grep -w LISTEN | grep 32407
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::32407                :::*                    LISTEN      -               
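The "-" in the PID/Program column is only because netstat wasn't run as root; a quick sketch of confirming which process owns that tcp6 socket (my assumption is that it is kube-proxy):

    # Run as root so the owning process is shown; 32407 is the NodePort from above
    sudo ss -lntp | grep 32407
    # or, equivalently
    sudo netstat -anlp | grep -w LISTEN | grep 32407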

Now, when I try to reach the service using IPv6, it just times out.
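For reference, the requests look roughly like this (the addresses follow the naming above and the Host header is whatever the Ingress expects; both are placeholders):

    # IPv4 against the NodePort: works
    curl -H "Host: cafe.example.com" http://IPv4-A2:32407/
    # IPv6 against the NodePort: note the brackets around the literal address; this times out
    curl -g -6 -H "Host: cafe.example.com" "http://[IPv6-C2]:32407/"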

I tried to see what's happening using Wireshark.

When IPv4 is used to reach the service, the TCP connection is established, the HTTP GET is sent, and we get a response.

When IPv6 is used, the TCP connection is established and TCP keep-alives are even exchanged while curl is waiting, but I don't see a response to the HTTP GET that was sent.

I'm not sure what's happening within the worker node :-( I don't see anything in Wireshark.

A bit of searching on Google gave a hint that K8S uses Netfilter to make packets reach the correct destination. Is it not capable of doing that for IPv6 packets?
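One way to check this (a sketch, assuming kube-proxy runs in its default iptables mode) is to compare what has been programmed for the NodePort in the IPv4 and IPv6 netfilter tables:

    # IPv4: kube-proxy normally adds a rule for each NodePort in this chain
    sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 32407
    # IPv6: on an IPv4-only cluster this chain is typically absent or empty,
    # so the connection is accepted by the listening socket but never DNATed to a pod
    sudo ip6tables -t nat -L KUBE-NODEPORTS -n | grep 32407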

Kindly help.

1 Answer


According to the official Google Cloud documentation:

VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network: VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources. It is possible to create an IPv6 address for a global load balancer

Please read this article about IPv6 support and dual-stack configurations.

In Azure:

IPv6 for Azure Virtual Network is currently in public preview. This preview is provided without a service level agreement and is not recommended for production workloads. You can find more information here

There is also a discussion about IPv6 support on GitHub.

In addition, for the cluster to work with IPv6, it needs a dual-stack implementation supporting IPv4 and IPv6 for both pods and services. As an example, please take a look here, here, and at kubeadm-dind-cluster; a rough sketch of what such a configuration might look like is below.
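Very roughly, a dual-stack setup would be bootstrapped with something like the following kubeadm configuration (a sketch only: the CIDRs are placeholders, the IPv6DualStack feature gate was still alpha at the time of writing, and the CNI plugin must be configured with matching dual-stack pod CIDRs as well):

    # Hypothetical dual-stack kubeadm config -- exact fields and feature gates depend on the Kubernetes version
    cat <<'EOF' > kubeadm-dual-stack.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    featureGates:
      IPv6DualStack: true
    networking:
      podSubnet: 10.244.0.0/16,fd00:10:244::/56
      serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
    EOF

    sudo kubeadm init --config kubeadm-dual-stack.yaml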

At the moment, Amazon probably provides the broadest IPv6 support.

Mark
  • Thanks for your response, but I'm trying to do this in VMs created on my bare-metal servers, not in Google Cloud or AWS. – R Kaja Mohideen May 30 '19 at 14:37
  • As we can see [here](https://github.com/leblancd/community/blob/fc4d40bac4ca76d9c46fa7335ea74a186388c313/keps/sig-network/0013-20180612-ipv4-ipv6-dual-stack.md#configuration-of-endpoint-ip-family-in-service-definitions) and [here](https://github.com/kubernetes/enhancements/pull/808), the community is working on IPv6. Even if you are working with a bare-metal server, all components should support dual stack in order to support IPv4 and IPv6. – Mark May 30 '19 at 15:47
  • @R Kaja Mohideen did you check Mark's last comment? Did this answer your question? – PjoterS Feb 22 '21 at 08:08