13

We are not able to handle more than 3k concurrent requests in nginx (connection timed out). We also changed the ulimit to 20000. Following are my nginx.conf and sysctl.conf files:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
worker_rlimit_nofile 100000;

events {
     worker_connections 5000;
     use epoll;
     # multi_accept on;
}

http {
     sendfile on;
     tcp_nopush on;
     tcp_nodelay on;
     keepalive_timeout 600;
     send_timeout 600;
     proxy_connect_timeout       600;
     proxy_send_timeout          600;
     proxy_read_timeout          600;
     reset_timedout_connection on;
     types_hash_max_size 2048;
     client_header_buffer_size 5k;
     open_file_cache max=10000 inactive=30s;
     open_file_cache_valid    60s;
     open_file_cache_min_uses 2;
     open_file_cache_errors   on;
     include /etc/nginx/mime.types;
     default_type application/octet-stream;
     access_log off; 
     error_log /var/log/nginx/error.log;
     gzip on;
     gzip_disable "msie6";
     include /etc/nginx/conf.d/*.conf;
     include /etc/nginx/sites-enabled/*; 
}
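As a sanity check on the config above, the hard ceiling on simultaneous client connections is roughly worker_processes × worker_connections; a quick sketch:

```shell
# Back-of-envelope connection ceiling from the nginx.conf above
workers=4
connections=5000
echo $((workers * connections))   # prints 20000
```

Note that when nginx is proxying, each client connection can consume two file descriptors (client side plus upstream side), so worker_rlimit_nofile needs to sit comfortably above this number.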

sysctl.conf

# Increase size of file handles and inode cache
fs.file-max = 2097152

# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2

### GENERAL NETWORK SECURITY OPTIONS ###

# Number of times SYN/ACKs are retried for a passive TCP connection
net.ipv4.tcp_synack_retries = 2

# Allowed local port range
net.ipv4.ip_local_port_range = 2000 65535
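For reference, the range above determines how many ephemeral ports are available per (source IP, destination) pair, which caps outbound connections when proxying; a quick check:

```shell
# Ports available with net.ipv4.ip_local_port_range = 2000 65535
low=2000
high=65535
echo $((high - low + 1))   # prints 63536
```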

# Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15

### TUNING NETWORK PERFORMANCE ###

# Default Socket Receive Buffer
net.core.rmem_default = 31457280

# Maximum Socket Receive Buffer
net.core.rmem_max = 12582912

# Default Socket Send Buffer
net.core.wmem_default = 31457280

# Maximum Socket Send Buffer
net.core.wmem_max = 12582912

# Increase number of incoming connections
net.core.somaxconn = 65536

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
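Since tcp_mem is measured in pages, the "max" threshold above translates to bytes as follows (this sketch assumes the usual 4096-byte page size):

```shell
# Convert the tcp_mem "max" threshold (262144 pages) to MiB
pages=262144
page_size=4096
echo $((pages * page_size / 1024 / 1024))   # prints 1024, i.e. 1 GiB
```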

# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384

# Increase the tcp-time-wait buckets pool size to prevent simple DoS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

We are using Ubuntu with 4 CPUs and 8 GB RAM. I hardly see any CPU or RAM usage. I am just requesting the default nginx page URL.

user50442

2 Answers

7

I strongly suggest you use micro-caching.

Examples:
http://www.howtoforge.com/why-you-should-always-use-nginx-with-microcaching
http://reviewsignal.com/blog/2014/06/25/40-million-hits-a-day-on-wordpress-using-a-10-vps/

I recently set up micro-caching on my box. With Apache Benchmark it holds up to 50,000 connections, and CPU only goes to 6%. No timeouts; the page is served in 1.1 ms.

I suggest the manuals above for reference only, because they are not entirely correct. In my case, I spent many hours setting this up, but it was worth the effort :)
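For orientation, a minimal micro-caching sketch looks roughly like this (the zone name, cache path, and backend address are my own placeholders, not taken from the links above):

```nginx
# Short-lived cache shared by all workers
proxy_cache_path /tmp/nginx_microcache levels=1:2 keys_zone=microcache:10m max_size=100m;

server {
    listen 80;

    location / {
        proxy_cache microcache;
        proxy_cache_valid 200 1s;          # cache successful responses for 1 second
        proxy_cache_use_stale updating;    # serve stale content while revalidating
        proxy_pass http://127.0.0.1:8080;  # hypothetical backend
    }
}
```

Even a 1-second cache lifetime collapses thousands of concurrent requests for the same page into a single backend hit, which is why CPU usage stays so low.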

2

You mention you set the user file limit to 20000, but not how.

On Ubuntu you change the hard and soft limits in /etc/security/limits.conf. Assuming the user running nginx is www-data, you would then just add this at the end of the file:

www-data soft nofile 100000
www-data hard nofile 120000
www-data soft nproc 100000
www-data hard nproc 120000

If you want to check what your current limit is, run:

su - www-data
ulimit -Hn
ulimit -Sn
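One caveat: a PAM session (like the su above) picks up the new limits, but an already-running nginx keeps the limits it started with. You can read the live values of a running process from /proc (a standard path on Linux):

```shell
# Inspect the limits of the running nginx master process
pid=$(pgrep -o nginx)              # oldest nginx process, i.e. the master
grep 'Max open files' /proc/"$pid"/limits
```

If the value shown here is still the old one, restart nginx after changing the limits.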
Flup
Alex R
    That won't work. limits.conf is part of the PAM suite, and as such will only apply the limits to processes that go through PAM. "su" does, so you're good there. Services started from init (upstart) do not. Upstart provides a way of setting limits into the upstart configuration file. – Chris Cogdon Aug 26 '15 at 21:38
  • How to do it via [systemd](https://serverfault.com/questions/628610/increasing-nproc-for-processes-launched-by-systemd-on-centos-7) – Alex Jun 30 '17 at 12:52
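Following up on the systemd link above: a drop-in override would look roughly like this (the path /etc/systemd/system/nginx.service.d/limits.conf is a hypothetical example following the usual drop-in convention):

```ini
[Service]
LimitNOFILE=100000
```

After adding it, run `systemctl daemon-reload` and restart nginx so the new limit takes effect.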