
I need to nmap scan a class B network within a short span of time. The requirements are quite straightforward. I want to:

  1. Scan as fast as possible
  2. Fulfill point 1 while maintaining reliability (missing 1 or 2 results out of 10 is acceptable)
  3. Fulfill point 1 while minimizing disruption to other network users

I will be scanning from inside the network. Here is what I've decided on so far:

`nmap -p 0-65535 172.22.0.0/16` (the port range and IP are just samples)

  • `-Pn`: skip host discovery
  • `--min-hostgroup 256`: scan 256 IP addresses at a time
  • `--ttl 10`: I think this reduces network noise; correct me if I am wrong
  • `--max-retries 1`: I found that this speeds up the scan without sacrificing too much reliability

Here are my questions:

  1. I am thinking of using the option -T4, but I am not sure how much it will affect reliability and other users' network speeds. How should I determine whether or not to use this option?
  2. Are there any other proven ways of improving scan speed without compromising requirements 2 and 3?
akgren_soar
  • Network classes died over 20 years ago, killed by VLSM and CIDR. See RFCs 1518 and 1519. Please let them rest in peace. Network classes have no place in modern networking. Also, you need to start thinking ahead of what you intend to do with IPv6, where, even at 1 million addresses per second, it will take over half a million years to scan all the IPv6 addresses in a standard `/64` subnet. – Ron Maupin Oct 30 '16 at 16:15
  • Let's see, with `18,446,744,073,709,551,616` possible addresses per standard IPv6 `/64` subnet, it will take over `584,542` years at `1,000,000` addresses scanned per second, or over `584` years at `1,000,000,000` addresses scanned per second. You really need to rethink why you are doing this, and how you could achieve what you want without a brute-force scan of every address. – Ron Maupin Oct 30 '16 at 16:54
  • @RonMaupin The question was specifically about IPv4, so IPv6 arguments don't really apply. I agree that `-Pn` to blindly scan every address is unwise, so I addressed that in my answer. If you have ideas for alternative host discovery methods, I would be glad to learn them, too. – bonsaiviking Oct 31 '16 at 03:59

1 Answer


There are basically 4 things that make a scan take a long time:

  1. Sending probes you don't need to send
  2. Latency
  3. Dropped packets
  4. Rate limiting of responses by targets

Speeding up your scan usually comes down to measuring and tuning each of these factors until you reach the speed you want while maintaining accuracy.

Regarding 1, the biggest problem you have here is -Pn, which disables host discovery. Host discovery is how Nmap knows which addresses are worth port scanning ("up") and which will not respond in any way, usually due to no host having that address configured. In a /16 network, you will be scanning 65536 addresses. If you know you only have 5000 assets on the network, then 92% of your scanning will be wasted. Play around with the various -P* options using -sn to avoid the actual port scan until you find a set of probes that works well on your network. Now, if you can do discovery some other way, like using -iL to import a list of active addresses from your internal IDS sensors, then -Pn can be used to avoid skipping an address that you know is up just because it doesn't respond to the default discovery probes.
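
To make that concrete, here is a minimal two-phase sketch. The probe selection (-PE, -PS22,80,443, -PA80) and the file names discovery.gnmap and live-hosts.txt are illustrative assumptions, not recommendations; use whatever your -sn trials show works on your network:

```
# Phase 1: discovery only (-sn skips the port scan, -n skips rDNS).
# -PE is an ICMP echo ping; -PS22,80,443 and -PA80 are TCP SYN/ACK pings.
nmap -sn -n -PE -PS22,80,443 -PA80 -oG discovery.gnmap 172.22.0.0/16

# Pull the responding addresses out of the grepable output.
awk '/Status: Up/{print $2}' discovery.gnmap > live-hosts.txt

# Phase 2: port scan only the hosts known to be up. -Pn is safe here
# because discovery has already been done out of band.
nmap -Pn -n -p 0-65535 -iL live-hosts.txt
```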

Another potential waste of probes that you may be missing is reverse DNS name resolution. This is a great source of info, and Nmap is very fast at it, but if you don't need to know the DNS name (PTR record) for each address, then adding -n will cut that phase out entirely, saving you some precious time.
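
If you do want names eventually, one option is to gather them separately after the fast scan. A list scan sends no probes at all, so it adds no load to the targets; live-hosts.txt is the hypothetical discovery output from the sketch above:

```
# -sL does reverse-DNS lookups (PTR records) without sending any
# packets to the targets, so names can be collected after the -n scan.
nmap -sL -iL live-hosts.txt
```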

For 2, latency is usually something you can't control. But you can be smart about letting Nmap know what latencies you expect. If you're on a LAN, then setting --max-rtt-timeout can help speed up scanning by telling Nmap not to wait too long to hear back on any particular packet. But be careful not to be too optimistic; if Nmap gives up too early, it counts the packet as dropped, and will slow down to avoid further drops. Use the latency info from a trial -sn run to get an idea of the worst case, then double that to be safe. It will still be less than the default if your network is reasonably fast.
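
For example, assuming a trial -sn run showed worst-case round trips around 50 ms on your LAN (a made-up figure; measure your own), doubling it gives a safe cap:

```
# Cap how long Nmap waits on any single probe: 100ms is double an
# assumed worst-case 50 ms LAN round trip, still far below the default.
nmap -n -p 0-65535 --initial-rtt-timeout 50ms --max-rtt-timeout 100ms -iL live-hosts.txt
```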

Speaking of dropped packets (number 3 on our list), this is the main source of inaccuracy when you try to scan too fast. You can overwhelm either your own link or the capabilities of the target itself if it is very resource-constrained (like ICS or IoT devices can be). If your network is fast and capable enough to not have many dropped packets, you can set --max-retries to a lower number than default (which is 10) to speed up a bit, at the risk of having some inaccuracy. Because Nmap detects dropped packets and slows down, you will probably not end up affecting anyone else's traffic for more than a few seconds unless you are using --min-rate to continue firehosing packets out.
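
As a sketch, a modest reduction might look like the following. Note the deliberate absence of --min-rate, so Nmap's congestion control can still back off on drops and keep your third requirement intact:

```
# Two retries instead of the default ten. No --min-rate floor is set,
# so Nmap still slows down automatically if it detects dropped packets.
nmap -n -p 0-65535 --max-retries 2 -iL live-hosts.txt
```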

Number 4, rate limiting by targets, is tricky because it's not under your control (unless you can have your scanning machine whitelisted by whatever is doing the rate limiting). There are a few tricks, though: for a specific kind of TCP RST ratelimiting, the --defeat-rst-ratelimit option will allow you to maintain scanning speed at the cost of having some ports labeled "filtered" which are probably actually "closed." Open ports will not be affected, and that's usually all you are interested in anyway.
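
A hedged example of that trade-off, accepting that some closed ports may be reported as filtered in exchange for sustained speed:

```
# Keep scanning at full speed past TCP RST rate limits. Some closed
# ports may show up as "filtered", but open-port results are unaffected.
nmap -n -p 0-65535 --defeat-rst-ratelimit -iL live-hosts.txt
```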

Timing templates (like -T4 that you mentioned) will set some of these options for you, but you are always free to override them with more-specific options. Check the man page for the version of Nmap you are using to see exactly which options are set by each template. Be aware that -T5 sets the --host-timeout option, so if any target takes more than 15 minutes to finish (very possible with an all-ports scan), it will be dropped and no output will be shown.
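
So a reasonable middle ground for your case might be -T4 plus a few explicit overrides (the values here are illustrative, not tuned for your network), since fine-grained options take precedence over the template's settings:

```
# -T4 as a baseline, with explicit values for the knobs that matter
# most here; specific timing options override the template's defaults.
nmap -T4 -n -p 0-65535 --max-retries 2 --max-rtt-timeout 100ms -iL live-hosts.txt
```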

Setting a lower IP TTL value with --ttl will not reduce network noise unless you have routing loops. It will prevent your probes from reaching targets that are more than 10 hops away, if that is important to you.

Finally, always be sure to use the latest Nmap version available. We are always making improvements that make scanning faster and more reliable.

bonsaiviking
  • Wouldn't `--randomize-hosts` also increase performance of the scan? In my opinion, `--max-retries` is only useful if network-based IPS is at play somewhere throughout the large network. I prefer a simple `nmap -n -Pn --min-rate 400 --randomize-hosts --defeat-rst-ratelimit` without specifying any other of these performance-affecting flags, but maybe that's just my experience – atdre Oct 31 '16 at 22:15
  • @atdre It really depends on your network and what is really causing slowdowns. Randomizing hosts can help distribute traffic between various links at the endpoints, so that if you have one or more low-bandwidth subnets mixed in with your targets, you won't try to scan all addresses on that net at once. – bonsaiviking Nov 01 '16 at 14:56