
Suppose we have a web application, served by Nginx, listening on port 80.

Now suppose two or more users try to access a URL of that application whose response takes a long time to generate. For example, ten users try to load the /give_me_some_charts page, and the server takes 100 seconds to respond to a single user's request.

Each user accesses the application over port 80. I assume the server (Nginx in our example) blocks that port until the response is sent, so processing ten users should take 10 users * 100 seconds == 1000 seconds.

But Nginx has a feature called balancing, and now I am not sure how the server processes parallel requests.

So, how does the server process parallel requests, and how does it respond? And what is that balancing feature of Nginx?

shybovycha
  • What do you mean with "balancing"? The only balancing I know of in nginx is load balancing, which is not directly related to what you are asking. Maybe you could provide a link explaining in more detail what you mean by "balancing". – Isaac Aug 27 '13 at 09:08
  • Here you go: http://wiki.nginx.org/HttpUpstreamModule `This module provides simple load-balancing` – shybovycha Aug 27 '13 at 13:05
  • So that feature is not directly related to nginx's ability to serve multiple requests in parallel; see @usd-matt's answer. – Isaac Aug 27 '13 at 13:10

1 Answer


I'm not certain about nginx specifically (I haven't had a reason to use it yet), but nearly all servers that provide a service on a port work similarly to the following (sketched in code after the list):

  • A listener process opens the port and waits for connections
  • As soon as a new connection is established, that connection (socket) is handed off to another process or thread
  • The 'main' process goes back to wait for another connection.
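As a concrete illustration of that accept-and-hand-off loop, here is a minimal sketch in Python. This is not nginx's actual code (nginx is written in C and uses an event loop rather than a thread per connection), and the port and response body are placeholders, but the hand-off pattern is the same:

```python
import socket
import threading

def handle_client(conn, addr):
    # Serve one client; a slow response here only ties up this thread,
    # not the listener.
    with conn:
        conn.recv(1024)  # read (and ignore) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))  # 8080 instead of 80 to avoid needing root
listener.listen(128)              # backlog of pending connections

while True:
    conn, addr = listener.accept()             # wait for a new connection
    threading.Thread(target=handle_client,     # hand the socket off...
                     args=(conn, addr),
                     daemon=True).start()
    # ...and loop straight back to accept() for the next client
```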

A server will not block all further connections until it's finished with the current one. If it did, even mildly busy websites would become unusable. The accept loop outlined above is usually kept as tight (small) as possible so it completes as quickly as possible. Some servers that fork a new process per connection (like Apache) will also keep spare 'child' processes around so that they don't have to make a slow fork system call when a client connects.
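The pre-forking idea mentioned above can be sketched too (Unix only; the worker count of 4 is an arbitrary number chosen for illustration). The parent forks spare workers up front, all of which block in accept() on the shared listening socket, so no fork happens on the hot path when a client connects:

```python
import os
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))
listener.listen(128)

for _ in range(4):            # 4 spare workers, an arbitrary choice here
    if os.fork() == 0:        # child inherits the listening socket
        while True:           # each worker blocks in accept(); the kernel
            conn, _addr = listener.accept()   # hands each new connection
            with conn:                        # to one of the waiting workers
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

os.wait()                     # parent just supervises the children
```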

This allows multiple (tens, hundreds, possibly even thousands of) clients to be connected at the same time. If 10 users connect to your website, the web server will have 10 individual threads or processes serving those clients simultaneously. Whether all the requests finish in 100 seconds or take longer depends mostly on what the code is doing (whether the requests are fighting for CPU time, blocking on file system or database calls, etc.).
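To check this concretely, here is a hypothetical client-side test of the timing question from the original post: fire ten requests in parallel and measure the wall-clock time. Against a concurrent server where each response takes ~100 seconds, the total is roughly 100 seconds, not 1000 (the URL is a placeholder):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/give_me_some_charts"  # placeholder address

def fetch(_):
    with urlopen(URL) as resp:
        return resp.status

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:   # 10 concurrent clients
    statuses = list(pool.map(fetch, range(10)))
print(f"10 parallel requests finished in {time.time() - start:.1f}s")
```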

A quick web search suggests that the 'balancing' feature of nginx may be related to its ability to proxy requests to back-end servers (using it as a load balancer), not its ability to handle multiple clients at the same time.
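For reference, that load-balancing use looks roughly like this in an nginx configuration: requests arriving on port 80 are proxied to a pool of upstream back-end servers. A minimal illustrative snippet (the pool name and backend addresses are placeholders; this sits inside the http { } block):

```nginx
# Hypothetical upstream pool; nginx distributes requests across
# these servers (round-robin by default).
upstream backend_pool {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        # Each incoming request is forwarded to one of the
        # servers in backend_pool.
        proxy_pass http://backend_pool;
    }
}
```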

USD Matt
  • This is definitely true for nginx: http://stackoverflow.com/questions/3436808/how-does-nginx-handle-http-requests – Isaac Aug 27 '13 at 09:09