I have a Rails 4.2 app currently running on an Ubuntu server with Nginx and Passenger. It gets a lot of traffic, which Passenger doesn't handle very well (processes hang very often).

I decided to replace Passenger with Puma, as I had done with other apps on another server where things improved drastically. With this app, however, problems started as soon as I deployed the new version running on Puma: I was getting a lot of 502 Bad Gateway errors, and looking in the logs I saw a lot of either of these errors:

puma.sock failed (11: Resource temporarily unavailable)

[error] 6658#6658: *5788 upstream timed out (110: Connection timed out) while reading response header from upstream

After googling around, I ended up trying several things, including the following sysctl tweaks:

/etc/sysctl.conf

# Increase number of incoming connections
net.core.somaxconn = 65535

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536

Then I reloaded the settings with sudo sysctl -p
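
The new values can be verified after reloading, e.g.:

# Should print the two values set above
sysctl net.core.somaxconn net.core.netdev_max_backlog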

I've also tweaked the following Nginx configs:

/etc/nginx/nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 400000;

events {
        worker_connections 10000;
        use epoll;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        keepalive_requests 100000;
        server_tokens off;

        server_names_hash_bucket_size 256;

[...]
}
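
For reference, a change like this can be validated and picked up with a graceful reload rather than a full restart:

# Check the configuration for errors, then reload the workers
sudo nginx -t && sudo service nginx reload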

Here's my Puma config:

workers 3
preload_app!

threads 1, 6

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
if rails_env == "production"
  stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
end

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"

on_worker_boot do
  #reconnect to mongo
  Mongoid::Clients.clients.each do |name, client|
    client.close
    client.reconnect
  end

  #reconnect to redis
  $redis.redis.client.reconnect
end

before_fork do
  Mongoid.disconnect_clients
end
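
For what it's worth, the running Puma master can be queried through the state file defined above (assuming pumactl from the puma gem is available in the bundle); the stats output should show per-worker backlog and running-thread counts:

# Query Puma via its state file (path derived from the config above)
bundle exec pumactl -S /var/www/myapp/shared/pids/puma.state stats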

I've also tried specifying the backlog value when binding to the socket, like so:

bind "unix://#{shared_dir}/sockets/puma.sock?backlog=1024"

Here's the nginx config for the app:

upstream pumamyapp {
  server unix:///var/www/myapp/shared/sockets/puma.sock;
}

server {
  listen   80;

  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

  server_name www.myapp.com;
  rewrite ^(.*) https://myapp.com$1 permanent;
}

server {
  listen   80;

  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/myapp/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/myapp/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

  root /var/www/myapp/public;
  server_name myapp.com;

  if ($ssl_protocol = "") {
    rewrite     ^   https://$server_name$request_uri? permanent;
  }

  client_max_body_size 100M;

  location ~* ^/assets/ {
    # Per RFC2616 - 1 year maximum expiry
    # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
    expires 1y;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
  }

  location /cgi-bin {
    return 404;
  }

  location /setup.cgi {
    return 404;
  }

  location / {
    try_files $uri @app;
  }

  location @app {
    proxy_pass http://pumamyapp;

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Host $http_host;

    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 128;

    proxy_redirect off;
  }

}

I had to roll back to the previous version that runs on Passenger because the site was unusable. Any idea what is wrong and how I can make it right?

    Most likely your app is simply too heavy for your setup. You need to look into load balancing and multiple servers for running your app. Or, you can try to profile your application and find where the bottlenecks are. – Tero Kilkanen Aug 03 '22 at 20:22

0 Answers