You are correct. The ability to handle multiple communications concurrently will protect against that kind of DoS, or rather, it will increase the "power" an attacker needs for the DoS to be harmful (at least in the single-client scenario).
On the other hand, keeping more communications open at the same time has a cost on the server side (memory and other resources, which mostly boil down to memory again, managed by a small amount of CPU). So you are trading vulnerability to one kind of "slow DoS" for another.
For typical systems this trade-off makes sense, since they have memory to spare and CPU to burn: unless the protocol or architecture prevents it, supporting multiple asynchronous communications is desirable.
The other (more typical) possibility is that we're looking at a large set of parallel, long-lived communications, i.e. a large number of persistent HTTP client channels. In that case, as you ask: if latency causes N connections out of M to drop and be re-established, the server will see M+N connections simultaneously open, then M+2N, then M+3N, and so on; if the growth rate exceeds the rate at which the server closes stuck connections, the server's resources will eventually be exhausted.
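A toy model makes the arithmetic concrete. All the numbers below (M persistent channels, N drops per interval, a reap rate R) are hypothetical, chosen only to illustrate the race between leaking and reaping:

```python
def open_connections(M, N, R, intervals):
    """Connections held open after `intervals` rounds of drop/reconnect.

    M: persistent channels the clients legitimately need
    N: connections that drop and reconnect each interval (old sockets linger)
    R: stuck connections the server manages to reap each interval
    """
    held = M
    for _ in range(intervals):
        held += N                  # dropped clients reconnect; stale sockets remain
        held -= min(R, held - M)   # server reaps at most R stale sockets
    return held

# Reap rate below the drop rate: the count grows without bound.
print(open_connections(M=1000, N=50, R=10, intervals=10))  # 1400
# Reap rate keeping pace: the count stays at M.
print(open_connections(M=1000, N=50, R=50, intervals=10))  # 1000
```

The only thing that matters is the sign of N - R: any positive difference, however small, eventually exhausts the server if nothing else intervenes.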
There are several lines of defense against this. Usually, when a connection drops at the HTTP level, the server becomes aware of it at the TCP level, because the client closes the channel. The exception is a client that drops the connection without a by-your-leave, or one that keeps the old socket alive and opens another (such leaks are mostly due to sloppy programming, but they happen); in that case the server will only notice after a suitable TCP timeout, which can be shortened by enabling the keepalive option. Keepalive will not help against the second kind of sloppy client, though, since its underlying socket isn't actually dead at all.
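As a sketch of the keepalive option mentioned above, here is how a server-side socket can be configured so that silently-dead peers are detected in minutes rather than hours. The `TCP_KEEP*` constants are Linux-specific (other platforms expose different knobs), and the timing values are illustrative:

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, probes=5):
    """Enable TCP keepalive: probe after `idle` s of silence, every
    `interval` s, and give up (closing the socket) after `probes` failures."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-tuning constants exist on Linux; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero when enabled
s.close()
```

With the defaults above, a dead peer is detected after roughly idle + interval * probes seconds instead of the system-wide default, which on Linux is over two hours.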
Then, the server may be able to recognize the client (e.g. by session or IP address; a firewall can also do the latter) and tear down any other extant communications with the same client, either explicitly or implicitly (e.g. because each client has an assigned "slot" that can only be occupied once, so the incoming communication replaces the previous one). This must be done after successful identification and authentication of the client, since otherwise it would open the door to a different DoS: anyone able to impersonate a client could kick its legitimate connection off the server.
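The "one slot per client" idea can be sketched in a few lines. The names (`Registry`, `register`, `FakeConn`) are illustrative, not from any framework, and the authentication step is assumed to have already happened before `register` is called:

```python
class Registry:
    """Each authenticated client id owns exactly one connection slot."""

    def __init__(self):
        self._slots = {}  # authenticated client id -> live connection

    def register(self, client_id, conn):
        """Claim the slot for client_id, evicting any stale connection."""
        old = self._slots.get(client_id)
        if old is not None:
            old.close()            # tear down the leaked/stale channel
        self._slots[client_id] = conn
        return old

class FakeConn:
    """Stand-in for a real connection object, for demonstration."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

reg = Registry()
first = FakeConn()
reg.register("alice", first)
reg.register("alice", FakeConn())  # reconnect replaces the slot
print(first.closed)  # True: the stale connection was evicted
```

This keeps the per-client connection count bounded at one regardless of how leaky the client is, which is why the authentication requirement matters: without it, the eviction itself becomes the attack.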
Client session recognition may be the only practical way to deal with sloppy clients when the IP address isn't usable (e.g. several legitimate clients behind a NAT, sharing its address); apart from upgrading the client software, of course. Such a situation would be readily visible with a simple netstat on any such client, which would show the stale connections piling up.
Finally, such a scenario requires the whole architecture to be running near capacity already; you can avoid this by monitoring performance and upgrading capacity before entering the "danger zone". The farther you stay from the red line, the more power (and demonstrable intent, hence culpability) a DoS will require to push the system there, and the more warning you will get when it happens. Briefly eating up 5% of system capacity may be accidental; consistently eating 50% is that much less likely to be. Of course, such a safety margin has its costs.