An exact ratio is only possible when the height is divisible by the denominator of the aspect ratio you want. 768 isn't divisible by 9, so there won't be any 16:9 integer resolution with that height. So why wasn't 1360:765 chosen?
Because the dimensions of display resolutions tend to be powers of 2 (or multiples of as large a power of 2 as possible), presumably because powers of 2 work better on a binary computer:
- 2D image formats as well as video codecs process images in blocks instead of pixel-by-pixel or line-by-line. The block sizes are always powers of 2 like 8x8, 16x16, or less frequently 4x8, 8x16, 4x16, because they're easier to arrange in memory and more suitable for the CPU's SIMD unit. That's why you'll see blocky artifacts when viewing a low-quality image or video file.
- 3D graphics renderers often use a technique called mipmapping that involves using images with sizes that are powers of 2 of each other, to increase rendering speed and reduce aliasing artifacts. If you're interested, check out How does Mipmapping improve performance?
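To illustrate why mipmapping wants power-of-2 sizes, here's a minimal sketch (the `mip_chain` helper is an illustrative name, not a real API) showing that a power-of-2 texture halves cleanly at every level with no rounding:

```python
# Illustrative sketch: compute the mipmap chain for a texture.
# With power-of-2 sizes, every level is exactly half the previous one.
def mip_chain(width, height):
    levels = [(width, height)]
    while width > 1 or height > 1:
        # Halve each dimension, clamping at 1 pixel
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

print(mip_chain(256, 256))
# A 256x256 texture yields 9 levels: 256, 128, 64, 32, 16, 8, 4, 2, 1
```

With a non-power-of-2 size such as 765, halving immediately produces fractional dimensions that must be rounded, which is exactly the kind of complication hardware designers avoid.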
So regardless of the graphics type, using powers of 2 eases the job of the encoder/decoder and/or GPU/CPU. Images with a side length that isn't a multiple of the block size will have that side rounded up (which you'll see later on the example of 1920x1080), and you'll end up wasting some memory at the edges for storing those dummy pixels. Transforming such odd-sized images also introduces artifacts (which are sometimes unavoidable) due to the dummy values. For example, rotating odd-sized JPEGs will introduce noise into the result.
See https://en.wikipedia.org/wiki/JPEG#Lossless_editing:

> Rotations where the image is not a multiple of 8 or 16, which value depends upon the chroma subsampling, are not lossless. Rotating such an image causes the blocks to be recomputed which results in loss of quality.
Now obviously 1360:765 is precisely 16:9 as you said, but 765 is odd and thus not divisible by even a single power of 2, while 768 is divisible by 256 (2⁸), so 768 is the better choice for the height. Moreover, using 768 as the height has the advantage of being able to display the old 1024x768 natively without scaling.
768 × 16/9 = 1365.333...

so rounding down gives 1365, the width closest to 16:9. However, that's an odd value, so people round it up to 1366x768, which is still quite close to 16:9. But 1366 is only divisible by 2 (1366 = 2 × 683), so some screen manufacturers use 1360x768 instead, since 1360 is divisible by 16, which is much better. 1360/768 = 1.7708333..., which approximates 16/9 to about 2 decimal places, and that's enough. 1360x768 also has the bonus that an 8-bit framebuffer fits nicely inside 1 MiB of RAM (whereas 1366x768 doesn't). 1344x768, another less commonly used resolution, is also divisible by 16.
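The arithmetic behind these choices can be checked directly (a purely illustrative sketch):

```python
# The "ideal" 16:9 width for a 768-pixel-high screen isn't an integer:
ideal_width = 768 * 16 / 9
print(ideal_width)  # 1365.333...

# Compare the candidate widths on the two criteria discussed above:
for width in (1366, 1360, 1344):
    framebuffer_bytes = width * 768          # at 8 bits per pixel
    print(width,
          round(width / 768, 4),             # how close to 16/9 ≈ 1.7778
          width % 16 == 0,                   # divisible by 16?
          framebuffer_bytes <= 1024 * 1024)  # fits in 1 MiB?
```

Only 1360 and 1344 pass both the divisible-by-16 and the fits-in-1-MiB checks; 1366 fails both.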
From Why Does the 1366×768 Screen Resolution Exist?:

> WXGA can also refer to a 1360×768 resolution (and some others that are less common), which was made to reduce costs in integrated circuits. 1366×768 8-bit pixels would take just above 1 MiB to be stored (1024.5 KiB), so that would not fit into an 8-Mbit memory chip and you would have to have a 16-Mbit memory chip just to store a few pixels. That is why something a bit lower than 1366 was chosen. Why 1360? Because you can divide it by 8 (or even 16) which is far simpler to handle when processing graphics (and could lead to optimized algorithms).
Many 12MP cameras have an effective resolution of 4000x3000, and when shooting in 16:9, instead of using 4000x2250, which is exactly 16:9, they use 4000x2248, because 2248 is divisible by 8 (the common block size in many video codecs) whereas 2250 is only divisible by 2.
Some Kodak cameras use 4000x2256 too, since 2256 is divisible by 16, and 4000/2256 still approximates 16/9 to about 2 decimal places. If shooting in 3:2 they'll use 4000x2664, not 4000x2667 or 4000x2666 which are closer to 3:2, for the same reason.
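A small sketch of the heuristic these cameras appear to use (the `crop_height` helper is my own illustrative name, not camera firmware):

```python
# Illustrative sketch: given a sensor width and a target aspect ratio,
# pick the crop height that is the nearest multiple of the codec block size.
def crop_height(width, num, den, block):
    ideal = width * den / num            # exact height for the num:den ratio
    return round(ideal / block) * block  # snap to the nearest block multiple

print(crop_height(4000, 16, 9, 8))   # 2248 -- matches the 12MP cameras above
print(crop_height(4000, 16, 9, 16))  # 2256 -- matches the Kodak models
print(crop_height(4000, 3, 2, 8))    # 2664 -- the 3:2 case
```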
And this is true for other resolutions too. You won't find any image resolutions with odd dimensions; most will be at least divisible by 4, or better, 8. The full HD resolution, 1920x1080, has a height that isn't divisible by 16, so many codecs round it up to 1920x1088, with 8 dummy lines of pixels, then crop it down when displaying or after processing. But sometimes it's not cropped, which is why you can find many 1920x1088 videos on the net; some files are reported as 1080 but are actually 1088 inside.
You may also find the option to crop 1088 to 1080 in various video decoders' settings.
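A minimal sketch of the rounding a codec does internally (assuming a 16-pixel macroblock; `pad_to_block` is an illustrative name):

```python
# Illustrative sketch: round a dimension up to the next multiple of the
# macroblock size, as codecs do before encoding; the excess is cropped
# away on output (or sometimes isn't, hence 1920x1088 files).
def pad_to_block(size, block=16):
    return (size + block - 1) // block * block  # ceiling to a multiple

print(pad_to_block(1080))  # 1088: 8 dummy lines are added
print(pad_to_block(1920))  # 1920 is already a multiple of 16, unchanged
```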
Back to your example: 1920/1200 = 8/5. It's not strange at all; it's the common 16:10 aspect ratio, which is close to the golden ratio. You can find it in 1280x800, 640x400, 2560x1600, 1440x900, 1680x1050... No one would advertise these as 16:9 because they're clearly 16:10.
> I assume that every pixel is a perfect square. Is this assumption wrong?

Yes, that assumption is wrong. In the past, pixels were often rectangular rather than square, and other pixel arrangements like hexagonal grids exist too, although they're not very common. See Why are pixels square?
I've already given some techniques in my answer – phuclv – 2018-07-09T08:09:19.343
@Thomas: never heard that. Can you substantiate ? Can you give an example ? Or maybe you mean a multiple of a power of two, which can somewhat help address computation (shifts instead of division). But not rendering algorithms themselves. – Harry Cover – 2018-07-09T09:05:25.157
Are you sure that the pixels are square? You might actually have the exact ratio but not realise it... – Toby Speight – 2019-04-09T16:52:52.540
<irony>In love and in advertising lying is not only allowed, but expected...</irony>; – lexu – 2013-01-06T10:41:43.120
I would say 1366 x 768 is close enough 16:9. To be 16:9 exactly, it would have to be 1365 1/3 x 768 or 1366 x 768 3/8. – Bavi_H – 2013-01-06T20:39:10.487
or simply 1360 x 765 which would be exactly 16:9 – Martin Thoma – 2013-01-07T09:45:18.900
@moose Which is a horrible choice because the height is not an even number. This breaks a lot of applications, in particular graphics rendering, where many techniques, both software and hardware, implicitly require height in pixels to be a multiple of two. – Thomas – 2013-01-10T04:04:05.990
1@Thomas I didn't know that. This is the kind of answer I've expected to get with this question. Can you tell me an example for such a technique? – Martin Thoma – 2013-01-10T08:50:17.587