18

Looking at the Ethernet entry on Wikipedia, I can't figure out how the length of an Ethernet frame is indicated. The EtherType/Length header field can apparently indicate either a frame type or an explicit length, and I'm guessing that in the case of a frame type, the receiver has to apply some other logic to figure out how long the packet is. For example, if the EtherType field is 0x0800, that indicates an IPv4 payload, so the receiving NIC would have to examine the first 32 bits of the payload to find the Total Length of the IP packet, and from that work out the total length of the Ethernet frame, so it knows when to look for the end-of-frame checksum and interframe gap.

Does this sound correct? I also looked at the IEEE 802.3 spec for Ethernet (part 1, anyway) which seems to corroborate this, but it's pretty opaque.
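
To make the guess concrete, here is a minimal sketch in C of the lookup described above, assuming a raw Ethernet II frame sitting in a buffer (the function name and buffer layout are just for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch of the guess above: given a received Ethernet II frame, peek at the
     * EtherType and, for IPv4 (0x0800), read the Total Length field (bytes 2-3 of
     * the payload, i.e. inside its first 32 bits) to learn the payload size.
     * Returns the IPv4 Total Length in bytes, or 0 if not IPv4 or too short. */
    static size_t ipv4_total_length(const uint8_t *frame, size_t frame_len)
    {
        const size_t eth_hdr_len = 14;   /* dst MAC (6) + src MAC (6) + EtherType (2) */

        if (frame_len < eth_hdr_len + 4)
            return 0;

        uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
        if (ethertype != 0x0800)         /* not IPv4 */
            return 0;

        const uint8_t *ip = frame + eth_hdr_len;
        return (size_t)((ip[2] << 8) | ip[3]);
    }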

dirtside
  • See http://stackoverflow.com/questions/3416990/how-to-determine-the-length-of-an-ethernet-ii-frame – ysdx Aug 27 '11 at 02:07

4 Answers

23

The Physical Coding Sublayer is responsible for delimiting frames and handing them up to the MAC layer.

In Gigabit Ethernet, for example, the 8B/10B encoding scheme uses a 10-bit code group to encode each 8-bit byte. The extra two bits indicate whether a code group carries data or control information. Control code groups include Configuration, Start_of_packet, End_of_packet, IDLE, Carrier_extend, and Error_propagation.

That is how a NIC knows where a frame starts and ends. It also means that the length of the frame is not known until it has been fully decoded, analogous to a NULL-terminated string in C.
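
As a rough software model of that idea (the symbol type and function below are invented for illustration; real PCS logic lives in silicon), the receiver simply accumulates bytes between the start and end delimiters, and only learns the length when the end delimiter arrives:

    #include <stdint.h>
    #include <stddef.h>

    /* The decoded symbol stream carries both data bytes and control symbols
     * (Start_of_packet, End_of_packet, IDLE, ...). */
    enum symbol_kind { SYM_DATA, SYM_START, SYM_END, SYM_IDLE };

    struct symbol {
        enum symbol_kind kind;
        uint8_t          data;   /* valid only when kind == SYM_DATA */
    };

    /* Feed one decoded symbol at a time.  Returns the completed frame length
     * when End_of_packet is seen, 0 otherwise -- the length is only known at
     * the end, exactly like the NULL-terminated string analogy. */
    static size_t feed_symbol(struct symbol s, uint8_t *frame, size_t cap,
                              size_t *pos, int *in_frame)
    {
        switch (s.kind) {
        case SYM_START:
            *in_frame = 1;
            *pos = 0;
            return 0;
        case SYM_DATA:
            if (*in_frame && *pos < cap)
                frame[(*pos)++] = s.data;
            return 0;
        case SYM_END:
            *in_frame = 0;
            return *pos;         /* frame complete */
        default:                 /* SYM_IDLE and anything else */
            return 0;
        }
    }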

Hroi Sigurdsson
  • This is specified in IEEE Std 802.3-2015 (Section Three), 36.2.4.2 and 36.2.4.15 (among other places in this unreadable thing they call a standard ;). – stefanct Dec 01 '17 at 12:01
1

The article you really want in order to answer your question is http://en.wikipedia.org/wiki/Ethernet_II_framing, which says:

As this industry-developed standard went through a formal IEEE standardization process, the EtherType field was changed to a (data) length field in the new 802.3 standard. (Original Ethernet packets define their length with the framing that surrounds it, rather than with an explicit length count.) Since the packet recipient still needs to know how to interpret the packet, the standard required an IEEE 802.2 header to follow the length and specify the packet type.
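
In other words, the receiver has to look at the value of the EtherType/Length field to know which interpretation applies. Here is a minimal sketch of that check, using the convention from IEEE 802.3 (values up to 1500 are a length, values of 1536/0x0600 and above are an EtherType; the names below are made up):

    #include <stdint.h>

    enum framing { FRAMING_8023_LENGTH, FRAMING_ETHERNET_II, FRAMING_UNDEFINED };

    /* Values <= 1500 are a payload length (802.3 framing, followed by an
     * 802.2 LLC header); values >= 1536 (0x0600) are an EtherType
     * (Ethernet II framing, e.g. 0x0800 = IPv4).  Values in between are
     * undefined. */
    static enum framing classify_type_length(uint16_t type_or_length)
    {
        if (type_or_length <= 1500)
            return FRAMING_8023_LENGTH;
        if (type_or_length >= 0x0600)
            return FRAMING_ETHERNET_II;
        return FRAMING_UNDEFINED;
    }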

womble
  • I guess I'm unclear on what "Original Ethernet packets define their length with the framing that surrounds it" means. The preamble/begin-frame bits are pretty clear, but how does the client know the end of the frame has been reached? How does it distinguish between the CRC and the interframe gap? Is the IFG random electrical noise, easily distinguishable from real signal? – dirtside Nov 24 '09 at 22:43
  • The end of frame in "classic Ethernet" is signalled by an illegal encoding. – Vatine Oct 24 '10 at 18:24
  • @womble, The article you linked is good but the bit you quoted is very misleading out of context. Most frames on an Ethernet network today do not use an explicit length field. – Peter Green Sep 16 '16 at 16:17
-3

It took me a while to work this out once before, and again now. There isn't much info on it available, which is surprising as it's such an obvious question. I finally settled on the conclusion that the length fields in the packet headers are used. See the following link:

http://www3.rad.com/networks/infrastructure/lans/etherform.htm#_ieee

-3

Logically, there are only three options:

  1. Using a static frame size, as fixed-cell protocols such as ATM do.
  2. Specifying the frame size in a header (or elsewhere), or bounding the frame with delimiting flags.
  3. Not sending frames at all.

One of these must hold for Ethernet, since there are no other options ;) The 1st and 3rd don't apply to Ethernet, so you're correct! (See the sketch below for what each option looks like in practice.)
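
For what it's worth, here is a toy sketch (buffer names and constants invented for the example) of how a receiver would pull one frame out of a byte stream with a fixed size, with a length prefix, and with delimiting flags; Ethernet combines the last two, via the 802.3 length field and/or physical-layer delimiters:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define FIXED_FRAME_SIZE 53      /* option 1: fixed size, e.g. an ATM cell */
    #define END_FLAG         0x7E    /* option 2: an HDLC-style end flag */

    /* Option 1: the length is known a priori. */
    static size_t read_fixed(const uint8_t *in, uint8_t *out)
    {
        memcpy(out, in, FIXED_FRAME_SIZE);
        return FIXED_FRAME_SIZE;
    }

    /* Option 2, variant a: a 16-bit length header tells us how much to read. */
    static size_t read_length_prefixed(const uint8_t *in, uint8_t *out)
    {
        size_t len = (size_t)((in[0] << 8) | in[1]);
        memcpy(out, in + 2, len);
        return len;
    }

    /* Option 2, variant b: copy until a reserved end marker appears, so the
     * length is only known once the whole frame has been seen. */
    static size_t read_delimited(const uint8_t *in, uint8_t *out, size_t max)
    {
        size_t len = 0;
        while (len < max && in[len] != END_FLAG) {
            out[len] = in[len];
            ++len;
        }
        return len;
    }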

kolypto