
Using EKS with Kubernetes 1.21, managed nodegroups in a private subnet. I'm trying to set up the cluster so that kubectl exec sessions time out after inactivity, regardless of the workload being exec'd into and without any client-side configuration.

I'm aware of https://github.com/containerd/containerd/issues/5563, but we're on 1.21 with the Docker runtime, not containerd yet.

I set streamingConnectionIdleTimeout: 3600s on the kubelet in the launch template:

jq '.streamingConnectionIdleTimeout = "3600s"' /etc/kubernetes/kubelet/kubelet-config.json > /tmp/kubelet-config.json
mv /tmp/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json

/etc/eks/bootstrap.sh {{CLUSTER_NAME}}
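
For completeness, the full user data script is shaped roughly like this (a sketch; it assumes the stock EKS-optimized AL2 AMI, which already has jq since bootstrap.sh itself uses it, and omits the MIME multipart wrapper and our other bootstrap flags):

#!/bin/bash
set -euo pipefail

# Patch the kubelet config shipped with the AMI before bootstrap runs;
# write to a temp file first so the redirect doesn't truncate the file
# jq is still reading.
jq '.streamingConnectionIdleTimeout = "3600s"' /etc/kubernetes/kubelet/kubelet-config.json > /tmp/kubelet-config.json
mv /tmp/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json

# Bootstrap joins the node to the cluster and starts kubelet with the patched config.
/etc/eks/bootstrap.sh {{CLUSTER_NAME}}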

And confirmed the new value is live via kubectl proxy: curl -sSL "http://localhost:8001/api/v1/nodes/(node name)/proxy/configz".
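
In full, the check looks like this (assuming kubectl proxy on its default port 8001; NODE_NAME stands in for the actual node name):

kubectl proxy &
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig.streamingConnectionIdleTimeout'
# returns the 3600s value, serialized as a Go duration: "1h0m0s"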

However, kubectl exec still does not time out.

I confirmed /proc/sys/net/ipv4/tcp_keepalive_time = 7200 on both the client and the node, so we should be hitting the streaming connection idle timeout before Linux starts sending keepalive probes.
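
For reference, that's just (run on both the client machine and the node):

sysctl net.ipv4.tcp_keepalive_time
# net.ipv4.tcp_keepalive_time = 7200
# 3600s (idle timeout) < 7200s (first keepalive probe), so the kubelet
# should close the idle stream before either end starts probing.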

Reading through How kubectl exec Works, it seems possible that the EKS managed control plane is keeping the connection alive. Some people report the opposite problem - their connections time out regardless of streamingConnectionIdleTimeout - and they solve it by raising the idle timeout on the load balancer in front of their Kubernetes API server. However, there are no knobs (that I know of) for that on the EKS managed control plane.
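
For what it's worth, this is how I'm testing the timeout (a rough sketch; busybox is just a stand-in workload and idle-test is an arbitrary pod name):

# Start a throwaway pod and wait for it to come up.
kubectl run idle-test --image=busybox --restart=Never -- sleep 86400
kubectl wait --for=condition=Ready pod/idle-test --timeout=120s

# Exec in, type nothing, and time how long the idle session survives.
# With a working streamingConnectionIdleTimeout of 3600s it should drop
# after ~1h; in practice it stays open well past that.
time kubectl exec -it idle-test -- sh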

I would appreciate any input on this topic.
