12

While reading about the Internet Protocol, I found myself reading about ping of death attacks. What caught my curiosity was that these attacks could ever work at all!

I mean, why wasn't IPv4 packet dropping (for packets larger than 56 bytes + header) immediately implemented on computer systems? Was/is there a reason for a host to accept a packet larger than 84 bytes total?

John Kugelman
Nomerandom1
  • Note that [ICMP (Internet Control Message Protocol)](https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol) is used for a lot more than "ping" (echo request, echo response). – user Nov 11 '15 at 10:26
  • One of the core functions of ICMP is in fact figuring out how the network is working. For path MTU discovery, you need to send a big IP packet, but what do you put **in** the packet? You can't go out and send random TCP bytes; some application may pick it up. ICMP allows you to send a big packet which you know won't interfere with applications. Well, except for the whole ping-of-death interference. – MSalters Nov 11 '15 at 13:55

4 Answers

11

There are many reasons why such packet dropping isn't implemented. One example is a technology known as jumbo frames, which allows for up to 9000 bytes of payload per frame. Jumbo frames are primarily used on LANs where large amounts of data are moved around and the repeated per-packet overhead of headers is undesirable.

It's also worth noting that most packets are larger than 56 bytes + header: the standard Ethernet MTU is 1500 bytes. If you are referring solely to ICMP packets, it would be possible to implement such a restriction, but why? There are valid use cases for larger ICMP packets. A network stack that properly handles non-standard packets is generally more desirable than one that arbitrarily limits functionality.

Jesse K
6

Was/is there a reason for a host to receive a packet larger than 84 byte total?

There are several layers involved here, and that's the first part of the problem. The 84 bytes of a default ping message are just the headers plus the default payload; the payload field can be much larger, and you can put whatever you'd like in it. An incoming packet has to be received from the wire protocol, reassembled at the IP layer if it was fragmented, and passed up to the ICMP layer, which checks the protocol type. Only after all of that can you length-check it.
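To make the 84-byte figure concrete, here is a minimal sketch (not from any real stack) that builds a default-sized ICMP echo request: a 20-byte IPv4 header plus an 8-byte ICMP header plus the standard 56-byte payload:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit words, as RFC 792 requires."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

payload = bytes(range(56))                       # default 56-byte ping payload
header = struct.pack("!BBHHH", 8, 0, 0, 1, 1)    # type=8 (echo request), code=0,
                                                 # checksum=0, id=1, seq=1
checksum = icmp_checksum(header + payload)
icmp = struct.pack("!BBHHH", 8, 0, checksum, 1, 1) + payload

ip_header_len = 20  # typical IPv4 header without options
total = ip_header_len + len(icmp)
print(total)  # 84
```

Nothing stops a sender from making the payload far larger than 56 bytes, which is exactly why a fixed 84-byte cutoff would be wrong.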

When writing the code that allocates memory along the way, there are many places to make mistakes. For example, since a valid IPv4 packet can't carry more than 64 KB of data, you might allocate a 64 KB buffer. When you then assemble the fragments, you're in trouble if you didn't check that the fragments sum to no more than the allocated buffer size. One specific failure was trusting the length of the last fragment as always valid, even though a fragment at the maximum offset cannot be full size without pushing the reassembled packet past 64 KB.

Thus you can't immediately drop most such packets without reassembly. You could add a sanity check on the last fragment (if offset * 8 + length > 65535, drop it), but that requires inspecting extra bits in the packet, which is really only practical at the firewall and host levels.
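The arithmetic behind that check can be sketched as follows (the names are illustrative, not from any real implementation). The 13-bit fragment offset field counts 8-byte units, so it can reach 8191 × 8 = 65528, and a near-maximal offset plus an ordinary fragment length easily overruns a 64 KB reassembly buffer:

```python
MAX_IPV4 = 65535  # the IPv4 total-length field is 16 bits

def fragment_end(offset_field: int, frag_len: int) -> int:
    # offset_field is the raw 13-bit value; it counts 8-byte units
    return offset_field * 8 + frag_len

def accept_fragment(offset_field: int, frag_len: int) -> bool:
    """The missing check: reject any fragment that would end past 65535."""
    return fragment_end(offset_field, frag_len) <= MAX_IPV4

# A ping-of-death-style fragment: offset near the maximum plus a
# typical 1480-byte fragment body overruns the 64 KB buffer.
print(fragment_end(8189, 1480))        # 66992, well past 65535
print(accept_fragment(8189, 1480))     # False
```

A vulnerable stack copied each fragment to `offset * 8` in its buffer without this bound, so the final fragment wrote past the allocation.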

Jeff Ferland
3

The internet only works because everyone has agreed to follow the standards of TCP/IP. Ping packets larger than 56 bytes + header are allowed by TCP/IP, and dropping them simply because your implementation can't handle them would be breaking TCP/IP.

The Ping of Death wasn't a flaw in the design of TCP/IP; it was an implementation bug. You don't change the rules of valid IP packets because one vendor didn't write their code properly.

As for valid uses of large ping packets, they can be useful in chasing down other network problems. A simple example is determining the path MTU between you and another host: set the Don't Fragment flag and lower the packet size until the probe gets through instead of being rejected as too large.
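That probing loop can be sketched as a binary search. Here `probe` is a stand-in for an actual DF-flagged ping (e.g. `ping -M do -s <size>` on Linux); the function and bounds are illustrative assumptions, not a real tool's API:

```python
def discover_mtu(probe, lo: int = 576, hi: int = 9000) -> int:
    """Largest packet size for which probe(size) succeeds.

    probe(size) should send a Don't-Fragment ping of the given size
    and return True if it gets through, False if it is too large.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid   # fits: try larger
        else:
            hi = mid - 1  # too big: try smaller
    return lo

# Simulated path with a 1500-byte MTU stands in for real probing:
print(discover_mtu(lambda size: size <= 1500))  # 1500
```

In practice modern stacks automate this as Path MTU Discovery, using ICMP "fragmentation needed" responses rather than manual probing.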

Steve Sether
  • TCP/IP does not require ICMP echo requests to be honored. (It's called a *request* for a reason...) It's often a good thing to do for a wide variety of reasons, but it's not required. – user Nov 11 '15 at 10:25
  • @MichaelKjörling That's not what RFC 792 says: "The data received in the echo message must be returned in the echo reply message." You can choose to ignore the spec without a lot of blowback, and many people do, but you're still breaking the specification. – Steve Sether Nov 11 '15 at 14:55
  • I haven't reviewed the relevant RFCs, but I can't imagine that they place a *requirement* that *all* ICMP messages *unconditionally must* be both delivered to their final destination host and acted upon by that host. You can even block things like ICMP source quench messages in the firewall, but you do so at your own peril... – user Nov 11 '15 at 18:08
  • @MichaelKjörling I'm not sure you understand RFCs. An RFC is just a spec. You can choose to ignore it if you like, but you do so at the risk of breaking interoperability. Nobody is going to take you to TCP/IP jail for doing so. The internet is a collective agreement to follow certain rules. That's really my point. It's in your interests to follow the rules, and for the most part most people return ping packets because it's advantageous to them. – Steve Sether Nov 11 '15 at 19:00
3

There are a couple of good answers here already.

I wanted to add that packet filters, firewalls, and IPSes can handle this kind of thing. These devices are frequently configured to disregard the normal rules of TCP/IP so they can drop correct, but malicious, traffic. Examples include large ICMP packets, fragmented TCP packets, and crafted packet headers: traffic that is correctly formed but problematic for the all-too-often buggy TCP/IP or application protocol stacks behind them.

Rules like this cannot be baked into all devices, as that would make TCP/IP non-functional in general, for the reasons the other answers have described: 1) the need for specialized packet sizes in diagnostics, and 2) the need for large/jumbo packets for efficiency (e.g. SAN throughput with lower overhead).

Alain O'Dea
  • Good points. Security appliances like IPSes and NIDSes are often deployed by organizations that know they have no business need for out-of-the-ordinary packets. To most companies, a large ICMP payload is never valid, so discarding them won't disrupt their normal business. If such a packet is found, it may be the sign of an attacker or malware that has breached the perimeter; it may even signal them to dispatch the security teams. – John Deters Nov 10 '15 at 23:12