
I'm trying to replicate the traffic that one specific nginx server receives onto two other servers. The goal is not load balancing, but replaying the same input on all nginx servers.

An example: nginx receives an HTTP POST. I want to send this same POST to the other servers.

** UPDATE **

The situation is simple. I just need to resend the request data (POST, GET, or anything else) to another server IP, which is also running an nginx instance. That's it.

USER -> POST DATA -> NGINX INSTANCE ----REDIRECT ---> SERVER 1 AND SERVER 2

Bernard Bay
  • Can you expand on your architecture? What are the other two servers? Is there a shared DB, shared filesystem, etc.? Does the POST write to the DB, to the filesystem, what? Actually, what are you trying to accomplish that can't be done with clustered filesystems and database instances? – cjc Aug 01 '12 at 19:35
  • I rephrased your question to more accurately reflect what you seem to be asking. – gWaldo Aug 01 '12 at 19:43
  • This type of behaviour is sometimes used in A/B testing – gWaldo Aug 01 '12 at 19:43
  • That is not the way to go; you're breaking HTTP: http://www.w3.org/Protocols/rfc2616/rfc2616.html – Daniel Prata Almeida Aug 01 '12 at 19:57
  • I've seen this type of thing asked about before. I think that what you want to look into can be searched for as "http replay". – gWaldo Aug 01 '12 at 20:02

4 Answers


I was able to replicate requests using the post_action directive.

upstream main_upstream {
    least_conn;
    server 192.168.9.10:80;
    keepalive 1024;
}

server {
    listen 80;
    server_name _;
    client_body_buffer_size 1512k;
    client_max_body_size 10m;

    location /1/ {
        fastcgi_pass main_upstream;
        post_action @replayevent;
    }

    # Send the post_action request to a second FastCGI backend for the replay.
    location @replayevent {
        fastcgi_pass 192.168.9.14:80;
    }
}

Now it sends the data to two servers.

If your upstream does not support FastCGI (as happened in my case), replace fastcgi_pass with proxy_pass.
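For illustration, a minimal sketch of that proxy_pass variant, reusing the same (placeholder) addresses as above; untested, so adapt it to your own setup:

```
# Hypothetical proxy_pass variant: the main request goes to the upstream,
# and post_action replays the same request to the second server.
location /1/ {
    proxy_pass http://main_upstream;
    post_action @replayevent;
}

location @replayevent {
    proxy_pass http://192.168.9.14:80;
}
```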

fresskoma
Chucks

I don't believe you can do this with nginx by itself; a quick perusal of the relevant bits of the nginx documentation (the upstream and proxy directives) doesn't suggest you can. As noted in the comments, this also breaks HTTP, as there's no clarity on which of the two backend servers will respond.

One alternative is to use something like Varnish and do a replay to the second backend server using varnishreplay:

https://www.varnish-cache.org/docs/2.1/reference/varnishreplay.html

I haven't used it, so I don't know if you can make it replay the traffic nearly simultaneously with the first backend server.

cjc

What you want to use is something like EM-Proxy[1]. It easily handles splitting http requests across any number of servers. It also correctly handles returning data from only the live server and blocking the others so the user doesn't get multiple responses.

[1] https://github.com/igrigorik/em-proxy/
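This is not em-proxy itself, but the core idea it implements (send the same request to every backend, return only one response to the client) can be sketched in a few lines of Python. `mirror_post` is a hypothetical helper with made-up host parameters; a real tool does this at the proxy layer, non-blocking:

```python
# Sketch of request mirroring: POST the same body to a primary and a
# secondary backend; the caller only ever sees the primary's response.
import urllib.request


def mirror_post(path, body, primary, secondary):
    """Send the same POST body to both backends (hosts as "ip:port").

    The mirror's response is fetched but discarded, so a dead or slow
    mirror never changes what the client receives.
    """
    responses = []
    for host in (primary, secondary):
        req = urllib.request.Request(
            f"http://{host}{path}", data=body, method="POST"
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                responses.append(resp.read())
        except OSError:
            # A failing mirror must not break the client-facing request.
            responses.append(None)
    return responses[0]  # only the primary's answer is returned
```

Note that this sketch sends the two requests sequentially; em-proxy is built on EventMachine precisely so the duplicate request doesn't add latency.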


Use central storage, such as an NFS server, with each nginx web node mounting the NFS share (file-level). Or use a cluster file system like OCFS2, with each web node mounting the LUN/partition (block-level).
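For the NFS option, the mount on each web node might look like the following /etc/fstab entry; the server name and paths here are illustrative placeholders, not values from the question:

```
# Each nginx web node mounts the same export (file-level sharing).
nfs-server:/export/www  /var/www  nfs  defaults,noatime  0 0
```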

HTTP500