Your understanding is correct as far as it goes, but your real limit is going to be file descriptors. Each socket connection consumes a Linux file descriptor, and the default per-process ulimit is 1024; /proc/sys/fs/file-max sets the limit for the entire system. You will need to raise both to handle high nginx connection counts.
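To see where you stand, you can inspect the current limits from a shell. The raising steps below are a sketch (the values are illustrative, and the privileged commands are shown as comments since they need root and vary by distro):

```shell
# Check the per-process soft and hard limits on open files,
# then the system-wide cap.
ulimit -Sn
ulimit -Hn
cat /proc/sys/fs/file-max

# To raise them (illustrative values -- run as root, adjust per distro):
#
#   sysctl -w fs.file-max=500000          # system-wide cap
#
#   # per-user limit, in /etc/security/limits.conf:
#   nginx  soft  nofile  100000
#   nginx  hard  nofile  100000
#
#   # nginx can also raise its own limit in nginx.conf:
#   worker_rlimit_nofile 100000;
```

Note that `worker_rlimit_nofile` only helps the nginx workers; any backend processes holding sockets need their limits raised too.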
The NGINX folks have tested up to 50,000 connections on a six-core server:
https://www.nginx.com/blog/nginx-websockets-performance/
The reality is that if you want tens of thousands of connections in the real world, you need multiple reverse proxies behind a round-robin DNS. This is, for example, how Amazon's Elastic Load Balancer works. If you look at a service hosted on AWS, such as Slack, and type 'nslookup slack.com', you'll get a list of IP addresses. Type it again and you'll get a different list, rotated so a different address is at the head. These are Amazon AWS ELB reverse proxies on a round-robin DNS that forward requests to the actual application servers. The hard part at that point becomes registering and deregistering reverse proxies in the DNS as they come and go, or managing IP address takeover if that's what you intend to do. These are hard problems, and that's why I use Amazon's solution rather than rolling my own.
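You can observe the rotation yourself. A quick sketch (slack.com is just the example above; the exact addresses and their order depend on your resolver, caching, and record TTLs):

```shell
# Resolve the name twice; with round-robin DNS the address order
# typically differs between runs (a caching resolver may pin the
# order until the record's TTL expires).
nslookup slack.com
nslookup slack.com

# dig gives a terser view of just the A records:
dig +short slack.com
```

If both runs return identical output, you are likely behind a caching resolver; querying an upstream server directly (e.g. `dig @8.8.8.8 +short slack.com`) usually shows the rotation.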