Is little-endian still faster on Intel processors, or does it not matter if I parse big-endian on Intel?


It used to matter on past architectures. Is that still the case? I am reading from the network into byte buffers in Java.


Basically, will reading bytes on Intel using big-endian instead of little-endian make a difference in performance? I know Intel is little-endian, but when you read from an OS byte buffer, you can read in big-endian or little-endian.
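To make that concrete, here is a minimal sketch of the kind of choice I mean (the four bytes are made up for illustration):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class EndianChoice {
        public static void main(String[] args) {
            // Hypothetical bytes as they might arrive off the wire in
            // network (big-endian) order: 0x00 0x00 0x01 0x02.
            byte[] wire = {0x00, 0x00, 0x01, 0x02};

            ByteBuffer buf = ByteBuffer.wrap(wire);
            // A Java ByteBuffer defaults to big-endian, whatever the host CPU is.
            System.out.println(buf.order());   // BIG_ENDIAN
            System.out.println(buf.getInt(0)); // 258 (0x00000102)

            // The same bytes reinterpreted as little-endian:
            buf.order(ByteOrder.LITTLE_ENDIAN);
            System.out.println(buf.getInt(0)); // 33619968 (0x02010000)
        }
    }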

chrisapotek

Posted 2012-10-01T17:59:25.773

Reputation: 159

Question was closed 2012-10-01T19:39:16.540

I'm not sure your question makes sense. A given computer architecture is either little-endian or big-endian (though a few are switchable). Byte data read from a device is generally read in sequential order, with "endianness" not being a factor one way or the other. – Daniel R Hicks – 2012-10-01T18:02:31.013

@DanielRHicks You are right. I will try to make it clearer – chrisapotek – 2012-10-01T18:08:27.650

Answers


You don't get to choose. Processors are (with a very small number of exceptions) fixed in their endianness, so when you have data to manipulate, you MUST put it into the proper format for your CPU. It's not a choice, and it's not a performance issue: it's just what must be done.

The only time it comes up much is when serializing data. Many common network protocols were designed with big-endian on-the-wire representations. If you read those bytes straight into memory on an Intel CPU, you have to swap the bytes around to make them little-endian before you can work with them. This isn't a big deal, and again, isn't a choice - the network protocol is what it is, and you don't get a choice about how to put things on the wire if you want other systems to be able to understand your packets.
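As a rough sketch of what that swap amounts to in Java (the class name and value here are illustrative), the JDK exposes the byte swap directly, and HotSpot typically compiles it down to a single BSWAP instruction on x86, which is why it is not a performance concern:

    public class SwapSketch {
        public static void main(String[] args) {
            int wireValue = 0x00000102; // 258, as read big-endian off the wire

            // Integer.reverseBytes swaps the byte order; HotSpot usually
            // intrinsifies this call to one BSWAP instruction on x86.
            int swapped = Integer.reverseBytes(wireValue);
            System.out.println(Integer.toHexString(swapped)); // 2010000
        }
    }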

Michael Kohne

Posted 2012-10-01T17:59:25.773

Reputation: 3,808

Some architectures, like ARM, can deal with both big- and little-endian. Intel Itanium can even use different endianness for code and data. – phuclv – 2014-07-24T05:06:59.740

Before someone asks: the TI TMS34020 Graphics System Processor could in fact be run in either little- or big-endian mode. The only problem was that the compilers I had at the time generated bad code in big-endian mode, so we ended up using it in little-endian mode anyway. – Michael Kohne – 2012-10-01T18:11:39.150

In software you get to choose how you parse. So I guess my question is: should you, for performance reasons, keep the bytes in little-endian when working on Intel? Otherwise more work will have to be done somewhere to convert to little-endian. – chrisapotek – 2012-10-01T18:12:44.323

@chrisapotek - If you are doing stuff in-memory, then you'll use the processor's native representation, because you want to work with the data. If you are implementing standard protocols, you again don't get to choose - the standard defines things. If you are designing your own communications protocols, then you can do what you want (and mirroring the architecture of most of the involved systems is probably a good idea). – Michael Kohne – 2012-10-01T18:24:23.307