
I'm trying to understand some numbers I'm seeing, specifically why HAProxy seems to hit variable limits on requests/s.

The test setup is pretty straightforward:

  • 20 backend servers running on Ubuntu VMs serving empty HTTP 200 responses
  • 1 HAProxy node running on an Ubuntu VM

I ran a perf test against each of the backend servers using the wrk tool, and each one serves ~8k req/s.

Putting HAProxy in front of them caps out at ~22k req/s total.

CPU utilization is extremely low on the HAProxy box (5-7%).

I initially suspected an outbound connection limit, so I ran the wrk tool in parallel on the HAProxy box against all 20 backend servers individually. That scaled linearly - each process was able to hit ~8k req/s.
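The parallel test above can be sketched like this (hostnames, wrk thread/connection counts, and duration are assumptions, not the exact values from my run):

```shell
# Dry-run sketch of the parallel test: prints one wrk command per backend.
# To actually run it, drop the `echo`, background each with `&`, then `wait`.
for i in $(seq 0 19); do
  echo wrk -t2 -c100 -d30s "http://myapp_apivm-$i:80/"
done
```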

Next, I swapped the backend servers for a single nginx instance serving HTTP 204s.

  • Hitting the nginx endpoint directly using wrk gave me ~350k req/s
  • Adding HAProxy in front of it reduced this to just ~35k req/s

I then configured HAProxy with multiple server entries pointing at the same endpoint, i.e.

server nginx1 nginxbox:80 
server nginx2 nginxbox:80

That dropped throughput even further, into the ~25k req/s range.

I've run similar experiments with nginx as the proxy in front of the backends (that caps out at ~19k req/s), with HAProxy on CentOS, and with HAProxy versions 1.6 and the new 1.7, but the results are pretty consistent. This makes me think there's some configuration I'm missing in HAProxy, but I haven't been able to figure out what.

Note: I've specifically removed the global maxconn because, for some reason, adding it in seemed to increase the number of error responses.
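For reference, the line I removed was a global maxconn along these lines (the value here is illustrative, not what I actually used):

    global
        daemon
        log 127.0.0.1 local0
        maxconn 50000    # removed - with this in place, error responses increased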

Here's the HAProxy config:

global
    daemon
    log 127.0.0.1 local0
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    option  abortonclose
    retries 3
    timeout connect  5000
    timeout client  10000
    timeout server  10000


frontend www-http
    log     global
    bind *:80
    default_backend myapp_api-nossl


backend myapp_api-nossl
    http-response add-header X-App-Server %b/%s
    server myapp_vm-0 myapp_apivm-0:80 check maxconn 200
    server myapp_vm-1 myapp_apivm-1:80 check maxconn 200
    server myapp_vm-2 myapp_apivm-2:80 check maxconn 200
    server myapp_vm-3 myapp_apivm-3:80 check maxconn 200
    server myapp_vm-4 myapp_apivm-4:80 check maxconn 200
    server myapp_vm-5 myapp_apivm-5:80 check maxconn 200
    server myapp_vm-6 myapp_apivm-6:80 check maxconn 200
    server myapp_vm-7 myapp_apivm-7:80 check maxconn 200
    server myapp_vm-8 myapp_apivm-8:80 check maxconn 200
    server myapp_vm-9 myapp_apivm-9:80 check maxconn 200
    server myapp_vm-10 myapp_apivm-10:80 check maxconn 200
    server myapp_vm-11 myapp_apivm-11:80 check maxconn 200
    server myapp_vm-12 myapp_apivm-12:80 check maxconn 200
    server myapp_vm-13 myapp_apivm-13:80 check maxconn 200
    server myapp_vm-14 myapp_apivm-14:80 check maxconn 200
    server myapp_vm-15 myapp_apivm-15:80 check maxconn 200
    server myapp_vm-16 myapp_apivm-16:80 check maxconn 200
    server myapp_vm-17 myapp_apivm-17:80 check maxconn 200
    server myapp_vm-18 myapp_apivm-18:80 check maxconn 200
    server myapp_vm-19 myapp_apivm-19:80 check maxconn 200
RohanC

1 Answer
You are probably hitting the default (calculated or compiled) maxconn limit.

You didn't say which version you are using, so I'm assuming 1.6 (current stable).

You can set this on either the frontend or the backend. Additionally, you might need to adjust your file-handle sysctl settings.
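A minimal sketch of what I mean, with illustrative values (tune to your workload):

    # haproxy.cfg - illustrative values only
    global
        maxconn 100000        # process-wide concurrent connection limit

    frontend www-http
        maxconn 100000        # per-frontend limit

    # and on the OS side (sysctl / limits), roughly 2 x maxconn file handles:
    #   fs.file-max = 200000
    #   ulimit -n 200050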

Zypher
  • Setting the maxconn limit didn't affect req/s unfortunately. Increasing that limit resulted in a higher number of concurrent connections (which helps with our requests that take longer, but doesn't affect our req/s numbers) – RohanC Dec 02 '16 at 19:36
  • Which settings specifically? I've increased ulimit settings – RohanC Dec 02 '16 at 19:42
  • Yea ulimit is the one. Would you be able to post a log snippet from when you are maxing out? – Zypher Dec 02 '16 at 19:54