0

Is a low-rate DoS attack (unintended) possible through asynchronous communication? Consider this scenario:

I have a client-server application where the client and server communicate over a WAN. The client makes synchronous calls to the service and waits for a response. Now suppose some nasty WAN issue causes latency to spike. The server might still be waiting for packets to arrive while the client times out and starts a new connection. If this happens fast enough, the server can become saturated with connections, causing legitimate connection requests to be dropped, i.e. a DoS.

Now, if the communication is asynchronous, the client will not block waiting for the response, and the repeated attempts won't happen. This seems to protect against DoS?

Am I correct here, or where did I go wrong?

broun
  • 103
  • 2

2 Answers

2

You are correct. The ability to handle multiple outstanding communications will protect against that kind of DoS, or rather: it will increase the "power" needed for the DoS to be harmful (this in the single-client scenario).

On the other hand, keeping more communications open at the same time has an impact on the server (memory and resources, which are basically memory again, handled by a negligible amount of CPU). So you are trading one vulnerability to "slow DoS" for another.

For typical systems this trade-off makes sense, since they have memory to spare and CPU to burn. Unless the protocol or architecture prevents it, support for multiple asynchronous communications is desirable.

The other (more typical) possibility is that we're looking at a large series of parallel, continuous communications, i.e. a large number of persistent HTTP client channels. In that case, you ask: if latency causes N connections out of M to drop and be re-established, M+N connections will be simultaneously open, then M+2N, then M+3N, and so on; if the growth rate exceeds the rate at which the server closes stuck connections, the server's resources will eventually be exhausted.
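The arithmetic can be made concrete with a toy model (the parameter names are mine, not from any real server): per interval, N clients time out and reconnect while the server reaps at most `close_rate` stuck connections.

```python
def connections_over_time(m, n, close_rate, steps):
    """Open connections per interval: each interval n stuck connections are
    re-created by timing-out clients while the server reaps at most
    close_rate of the excess. Growth is unbounded when close_rate < n."""
    open_conns = m
    history = []
    for _ in range(steps):
        open_conns += n                                 # n clients reconnect
        open_conns -= min(close_rate, open_conns - m)   # server reaps stuck ones
        history.append(open_conns)
    return history
```

With `close_rate < n` the count climbs every interval; with `close_rate >= n` it stays pinned at M.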

There are several lines of defense against this. Usually, when a connection drops at the HTTP level, the server becomes aware of it at the TCP level, when the client closes the channel. The exception is a client that drops the connection without a by-your-leave, or that keeps the old socket alive and opens another (such leaks are mainly due to sloppy programming, but it happens); in that case the server will notice only after a suitable TCP timeout, which can be further shortened by employing the keepalive option. Keepalive may not work against sloppy clients, though, since the underlying socket isn't actually dead at all.
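For illustration, this is how a server can enable and tighten TCP keepalive on an accepted socket in Python; the idle/interval/count values are arbitrary, and the `TCP_KEEP*` option names are Linux-specific, hence the guard:

```python
import socket

def enable_keepalive(sock, idle=30, interval=10, count=3):
    """Detect dead peers sooner: after `idle` seconds of silence, probe
    every `interval` seconds, and drop the connection after `count`
    unanswered probes. Without this, a vanished peer can linger for hours."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):          # Linux-only option names
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```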

Then, the server may be able to recognize the client (e.g. using sessions or the IP address; a firewall can also do the latter) and tear down any other extant communications with the same client, either explicitly or implicitly (e.g. because the client has an assigned "slot" that can only be occupied once, so the incoming communication replaces the first; this must be done after successful identification and authentication of the client, since otherwise it would open the door to a different DoS).
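The "slot" idea can be sketched as a tiny registry (my own illustration, not a real server API): a new connection for an already-authenticated client id evicts the stale one instead of accumulating next to it.

```python
class SlotRegistry:
    """One live connection per authenticated client: registering a new
    connection under the same client id closes the old one, so a
    reconnecting (or leaky) client can never hold more than one slot."""
    def __init__(self):
        self._slots = {}

    def register(self, client_id, conn):
        old = self._slots.get(client_id)
        if old is not None:
            old.close()              # implicit teardown of the stale channel
        self._slots[client_id] = conn

    def active(self):
        return len(self._slots)
```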

Client session recognition may be the only practical way to deal with sloppy clients if the IP address isn't an option (e.g. several legitimate clients behind a NAT sharing its address); apart from upgrading the client software, of course. Such a situation would be readily visible with a simple netstat on any such client.

Finally, such a scenario requires the whole architecture to be almost at capacity; you can avoid this by monitoring performance and upgrading capacity before entering the "danger zone". The farther you stay from the red line, the more power (and demonstrable will, and culpability) a DoS will require before driving the system up to it, and the more warning you will have when that happens. Briefly eating up 5% of system capacity may be accidental; consistently eating 50% is that much less likely to be. Of course, such a safety margin will have its costs.

LSerni
  • 22,521
  • 4
  • 51
  • 60
1

HTTP is a synchronous request/response protocol: the client initiates a request and the server responds to it. To emulate asynchronous communication over synchronous HTTP, several techniques have been proposed:

  1. Polling
  2. Long Polling
  3. Streaming
  4. WebSockets
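As a schematic example of the second technique, here is the client side of a long-polling loop; `fetch` is a hypothetical stand-in for a blocking HTTP GET that the server holds open until it has events (or replies empty on its own timeout):

```python
def long_poll(fetch, handle, max_rounds):
    """Long polling, client side: process each batch of events, then
    immediately re-issue the request, so one logical channel is always
    pending on the server. `max_rounds` bounds the loop for illustration;
    a real client would loop until shut down."""
    for _ in range(max_rounds):
        for event in fetch():        # server holds this call until it has news
            handle(event)
```

Note how every round costs the server a held-open request even when there is nothing to deliver; this is the extra server-side load the answer refers to.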

Apart from WebSockets, all these methods build asynchronous communication on top of synchronous HTTP channels and result in increased server-side traffic. I recommend reading "Methods for asynchronous communication over HTTP" and "Known Issues and Best Practices for the Use of Long Polling and Streaming in Bidirectional HTTP". WebSocket is the emerging method of true asynchronous communication between client and server, and it has its own security issues; see "Hacking with WebSockets".

Ali Ahmad
  • 4,784
  • 8
  • 35
  • 61