There's a lot to the equation.
How low can you really go? One key ingredient is how big your pods are. If the application in one of your pods requires 4G of RAM, then splitting one 8G server into four 2G servers means that application no longer has any node it can run on.
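As a rough back-of-the-envelope check (plain Python, with made-up pod sizes, and ignoring per-node overhead for the moment), you can see which pods simply have no node big enough once you shrink the nodes:

```python
# Hypothetical pod memory requests in GB, not from any real cluster.
pod_requests_gb = {"api": 0.5, "worker": 1.5, "analytics": 4.0}

# Compare one 8G node against 2G nodes; a pod larger than the node
# can never be scheduled, no matter how many nodes you add.
for node_gb in (8, 2):
    too_big = [name for name, req in pod_requests_gb.items() if req > node_gb]
    print(f"{node_gb}G nodes: pods with no node big enough: {too_big or 'none'}")
```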
Add on the overhead. Every node has some overhead just to run Kubernetes. There are also DaemonSets that run on every node, adding more overhead: if you double the number of nodes, you double the replicas of every DaemonSet, which increases the cost of running the cluster. Even without those, there's the overhead of the OS, the Linux distro, the kubelet, and other Kubernetes components. The smaller the node, the larger the percentage of its capacity that goes to just running the node rather than running your applications.
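To illustrate the point (the 0.7G per-node overhead below is an assumed figure, not a measurement; real numbers vary by distro, DaemonSets, and cloud provider), the same fixed overhead is a much bigger slice of a small node:

```python
# Assumed fixed per-node overhead for OS + kubelet + DaemonSets, in GB.
overhead_gb = 0.7

# The smaller the node, the larger the fraction lost to overhead.
for node_gb in (2, 8, 32):
    usable = node_gb - overhead_gb
    print(f"{node_gb}G node: {usable:.1f}G usable "
          f"({overhead_gb / node_gb:.0%} of the node is overhead)")
```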
Factor in the management overhead. Yes, infrastructure as code, auto scaling groups, and the like help. But the more nodes you have, the more you have to manage, monitor, and debug when there's an issue.
Factor in the network. The more nodes you have, the more traffic has to travel between nodes, which is slower than virtual networking within a node over the Linux bridge. Cloud providers also tend to significantly constrain network bandwidth on smaller nodes, resulting in timeouts and lag that can make an application unusable.
In my experience, companies quickly get to around 10 nodes for HA. You want enough spare capacity in the remaining nodes to handle containers failing over from a down node. With 10 nodes, that means leaving a little over 10% extra capacity on each node to handle one node going down. If you only had 3 nodes, each node would need to be less than 2/3 utilized to handle a single node going down. Before those companies get to 20 nodes, they start scaling up to larger nodes. This keeps node updates in the cluster manageable with a small team and minimal automation. At a certain point, the cost of larger nodes outweighs the cost of the extra management overhead of running more nodes, and they go back to scaling out for more capacity.
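The failover math generalizes: to survive one node going down, average utilization has to stay under (n-1)/n. Here's a quick sketch of that arithmetic (assuming the failed node's pods can be spread evenly across the survivors):

```python
# Max average node utilization that still lets the cluster absorb
# the workload of a failed node.
def max_utilization(nodes: int, failures_tolerated: int = 1) -> float:
    return (nodes - failures_tolerated) / nodes

for n in (3, 10, 20):
    print(f"{n} nodes: keep each node under {max_utilization(n):.0%} utilized")
```

With 3 nodes that's 67%, with 10 nodes 90%, and with 20 nodes 95%, which is why the headroom penalty shrinks quickly as the cluster grows.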