
I'm playing around with a bare-metal Kubernetes cluster (using the calico CNI plugin) and having trouble redirecting external traffic into the node. I've set up the nginx kubernetes ingress controller in order to expose an HTTPS service via a NodePort (on port 30528). I can access the service just fine on port 30528, so traffic is being redirected there properly within kubernetes.
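A quick way to double-check the NodePort mapping before involving iptables (the namespace and service names below are assumptions based on a stock ingress-nginx install; adjust to match yours):

kubectl -n ingress-nginx get svc
# the PORT(S) column should show the HTTPS NodePort, e.g. 443:30528/TCP
curl -kv https://<server-ip>:30528/
# succeeds, confirming traffic reaches the ingress controller via the NodePort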

Of course, I'd like for this to be exposed on port 443 instead of 30528, so I went to the tool I would usually use for servers to do port remapping --- a port-redirect iptables rule. This particular system is using shorewall to manage iptables rules, which I'm used to, so fine. I started with a test shorewall rule to redirect port 1443 to 30528. The shorewall rule looks like this:

REDIRECT        net             30528           tcp     1443

For those not familiar with shorewall, this generates an entry in the PREROUTING chain of the nat table like:

$ iptables -t nat -L -v -n
Chain PREROUTING (policy ACCEPT 160 packets, 8642 bytes)
 pkts bytes target     prot opt in     out     source               destination         
        <snip k8s/calico rules>
    0     0 REDIRECT   tcp  --  eth0   *       192.168.1.0/24       0.0.0.0/0            tcp dpt:1443 redir ports 30528
        <snip following rules>
<snip following chains>
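For reference, the raw iptables equivalent of what shorewall adds here would be roughly (a sketch, using the interface and source subnet from the rule above):

iptables -t nat -A PREROUTING -i eth0 -s 192.168.1.0/24 -p tcp --dport 1443 \
    -j REDIRECT --to-ports 30528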

When I attempt to connect to this port from outside the server, something odd happens.

$ curl -v -v https://<server-ip>:1443/
*   Trying 192.168.3.1:1443...
* Connected to <hostname> (<server-ip>) port 1443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):

The TCP connection then hangs until I kill it. Wireshark shows the TLS Client Hello being sent, the server ACKing the packet, plus TCP keep-alives being sent every 60s, but nothing ever gets through to kubernetes. If I use redir to bounce connections from 1443 to 30528, it all works perfectly fine. (And, of course, if I redirect 1443 to, say, port 2000 and listen on port 2000 with netcat, everything works as expected, so I don't think there's any weirdness going on with other parts of the setup.)
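The userspace workarounds above look roughly like this (a sketch; socat is shown as an equivalent of redir, since redir's command-line flags vary between versions, and nc flags differ between netcat variants):

# userspace relay from 1443 to the NodePort, analogous to the redir test
socat TCP-LISTEN:1443,fork,reuseaddr TCP:127.0.0.1:30528

# sanity check: with the REDIRECT rule pointed at port 2000 instead,
# a plain listener on 2000 receives the redirected traffic
nc -l -p 2000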

Anyone have ideas about what might be going wrong here? The explanation that makes the most sense to me is that since calico/kubernetes inserts its own redirection rules into the PREROUTING chain ahead of my REDIRECT rule, the handling for port 30528 never sees the redirected packets because of the ordering. In that case, though, I'm really confused as to why the connection is actually established --- I would have expected it to simply fail!
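If it is an ordering/conntrack problem, the rule positions and per-rule packet counters should show it; a couple of things worth checking (a sketch):

# show PREROUTING rule order and packet counters; during a test connection, watch
# whether the REDIRECT rule and the cali-*/KUBE-* chains both see the packets
iptables -t nat -L PREROUTING -v -n --line-numbers

# inspect conntrack state for the test connection (requires conntrack-tools)
conntrack -L | grep -E 'dport=(1443|30528)'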

(Since this machine sits behind another firewall box doing NAT, I can just tweak that firewall to redirect traffic going to port 443 over to 30528, so this problem is soluble, but I'd prefer to figure out what's going on for future reference...)
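For completeness, the upstream-firewall workaround mentioned above is a plain DNAT (a sketch; the interface name and server address are placeholders):

# on the outer NAT box: forward inbound 443 straight to the NodePort on the k8s node
iptables -t nat -A PREROUTING -i <wan-if> -p tcp --dport 443 \
    -j DNAT --to-destination <server-ip>:30528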

Ethereal
  • What version of Kubernetes do you have? How did you install Kubernetes? What operating system do you have on the Kubernetes nodes? Do you have any additional cluster configuration, e.g. a k8s network policy or calico network policy? – matt_j Feb 04 '21 at 16:22
  • Were you ever able to figure out the root cause, @Ethereal? I seem to be experiencing the exact same issue currently with TLSv1.3 handshakes timing out. – AdrianoKF Jun 14 '21 at 15:19
  • @AdrianoKF IIRC the calico iptables rules ended up clashing with the `REDIRECT` rule because they captured the packets _following_ the initial connection (since they were established), except without the port redirection, which caused the networking stack to silently drop the packets. – Ethereal Jun 16 '21 at 02:50
  • Thanks for the update, @Ethereal! I was able to diagnose the cause of our similar issue yesterday - it was caused by an MTU mismatch along the path, leading to the server hello packet being silently dropped (in TLSv1.3 that packet exceeded 1600 bytes in size and has the don't fragment flag set). Manually reducing the MTU in the Calico configuration fixed the issue for us. – AdrianoKF Jun 16 '21 at 06:04
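For reference on the MTU tweak mentioned in the last comment, lowering Calico's interface MTU is typically done in one of two ways depending on how Calico was installed (a sketch; the resource names and the value 1440 are assumptions, adjust to your environment):

# manifest-based install: lower veth_mtu in the calico-config ConfigMap,
# then restart calico-node so the new MTU takes effect
kubectl -n kube-system edit configmap calico-config          # set veth_mtu, e.g. "1440"
kubectl -n kube-system rollout restart daemonset calico-node

# operator-based install: set the MTU on the Installation resource instead
kubectl patch installation default --type=merge -p '{"spec":{"calicoNetwork":{"mtu":1440}}}'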

0 Answers