Why does Windows 7 work with Unicode and not with UTF-8?
Terminology
Unicode and UTF-8 are not the same kind of thing: Unicode is a character set that defines a repertoire of characters and assigns a number (a code point) to each of those characters. UTF-8 is one of several encodings that can be used to represent a stream of Unicode characters on disk or in transmission. The same stream of Unicode characters could also be encoded as UTF-16, UTF-32 or UTF-7, for example.
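To make that distinction concrete, here is a minimal Python sketch (an illustration only, nothing to do with Notepad itself) showing that a single code point becomes a different byte sequence under each encoding:

```python
# One Unicode character, three byte sequences depending on the chosen encoding.
ch = "é"                                  # code point U+00E9 in the Unicode character set
print(ch.encode("utf-8").hex(" "))        # c3 a9
print(ch.encode("utf-16-le").hex(" "))    # e9 00
print(ch.encode("utf-32-le").hex(" "))    # e9 00 00 00
```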
However, Notepad offers you "encoding" options that include ANSI, Unicode, Unicode big-endian and UTF-8. The Microsoft developers who wrote this have used the wrong terms: when they say "Unicode" they most likely mean "UTF-16 little-endian", and when they say "ANSI" they mean Code Page 1252 (CP-1252).
Microsoft Notepad
I believe Microsoft's Notepad writes UTF-16 with a byte order mark (BOM) and that Notepad looks for the BOM when reading a text file. The BOM tells the app that the file is UTF-16 and indicates whether it is big-endian or little-endian.
If Notepad doesn't find a BOM, it calls the library function IsTextUnicode, which looks at the data and attempts to guess what encoding was used. Sometimes (inevitably) it guesses incorrectly. Sometimes it guesses that an "ANSI" file is "Unicode". Trying to interpret a UTF-16 or UTF-8 file as Code Page 1252 would cause it to display the wrong glyphs and be unable to find glyphs to render some 8-bit values – these would then be shown as squares.
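As a rough illustration of the decision Notepad has to make, here is a simplified Python sketch. This is not Microsoft's actual code (the real heuristic lives inside the Win32 IsTextUnicode function); it only shows the general shape of "check for a BOM, otherwise guess":

```python
def guess_encoding(raw: bytes) -> str:
    # Known BOM byte sequences come first.
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8 (BOM found)"
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le (BOM found)"
    if raw.startswith(b"\xfe\xff"):
        return "utf-16-be (BOM found)"
    # No BOM: fall back to guessing. For example, many zero bytes in the
    # odd positions suggest UTF-16LE text that is mostly Latin script.
    if raw[1::2].count(0) > len(raw) // 4:
        return "utf-16-le (guessed)"
    return "ansi/cp1252 (guessed)"
```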
As harrymc says in his answer, there are better alternatives to Notepad. But Notepad lets you explicitly choose the encoding when opening a file (rather than leaving Notepad to try to guess).
Byte Order Marks
According to the Unicode consortium, Byte Order Marks (BOMs) are optional. However, Windows relies on BOMs to distinguish between some encodings.
So in short, maybe your files lacked a BOM for some reason? Maybe the BOM was lost sometime during the upgrade process?
If you still have the original files that show as squares, you could make a hex dump of them to see if they contain a BOM.
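For example, in Python (the file name here is just a placeholder):

```python
# Peek at the first few bytes of the file. A UTF-8 BOM is EF BB BF,
# a UTF-16LE BOM is FF FE and a UTF-16BE BOM is FE FF.
with open("suspect.txt", "rb") as f:
    print(f.read(4).hex(" "))
```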
Plain text file standards
The problem is that there are effectively none – there is no universal standard for plain text files. Instead we have a number of incompatibilities and unknowns:
- How have line endings been marked? Some platforms use the control characters Carriage Return (CR) followed by Line Feed (LF), some use CR alone and some use LF alone (see the sketch after this list).
- Are the above terminators or separators? This has an effect at the end of a file and has been known to cause problems.
- Treatment of tabs and other control characters. We might assume that a tab aligns to a multiple of 8 standard character widths from the start of the line, but really there is no certainty to this. Many programs allow tab positions to be altered.
- Character set and encoding? There is no universal standard for indicating which of these have been used for the text in the file. The nearest we have is to look for the presence of a BOM, which indicates that the encoding is one of those used for Unicode. From the BOM value the program reading the file can distinguish UTF-8 from UTF-16, and the little-endian from the big-endian variants of UTF-16. There is no universal standard for indicating that a file is encoded in any other popular encoding such as CP-1252 or KOI-8.
And so on. None of the above metadata is written into the text file – so the end-user must inform the program when reading the file. The end-user has to know the metadata values for any specific file or run the risk that their program will use the wrong metadata values.
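To make the first two points concrete, here is a small Python sketch (the strings are invented examples; nothing here is specific to Notepad):

```python
lines = ["first line", "second line"]

# The same two lines under three common end-of-line conventions:
crlf = "\r\n".join(lines)   # CR LF, e.g. DOS/Windows style
cr   = "\r".join(lines)     # CR alone, e.g. classic Mac OS style
lf   = "\n".join(lines)     # LF alone, e.g. Unix style

# Terminator vs. separator: does the last line end with the marker or not?
terminated = "".join(line + "\n" for line in lines)  # trailing "\n"
separated  = "\n".join(lines)                        # no trailing "\n"
```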
Bush hid the facts
Try this on Windows XP.
- Open Notepad.
- Set the font to Arial Unicode MS. (You may need to install it first; if you don't see it in the menu, click on "Show more fonts".)
- Enter the text "Bush hid the facts".
- Choose Save As. From the Encoding menu, select ANSI.
- Close Notepad.
- Reopen the document (e.g., using Start, My Recent Documents).
- You will see 畂桳栠摩琠敨映捡獴 instead of "Bush hid the facts".
This illustrates that the IsTextUnicode function used by Notepad incorrectly guesses that the ANSI (really Code Page 1252) text is Unicode UTF-16LE without a BOM. There is no BOM in a file saved as ANSI.
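You can reproduce the same mis-reading outside Notepad. A minimal Python sketch, where the encode/decode pair stands in for what Notepad's wrong guess does to the file's bytes:

```python
original = "Bush hid the facts"
raw = original.encode("cp1252")      # the 18 bytes Notepad wrote as "ANSI"
misread = raw.decode("utf-16-le")    # what the wrong guess turns them into
print(misread)                       # 畂桳栠摩琠敨映捡獴
```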
Windows 7
With Windows 7, Microsoft adjusted IsTextUnicode so that the above does not happen. In the absence of a BOM, it is now more likely to guess ANSI (CP-1252) than Unicode (UTF-16LE). With Windows 7 I expect you are therefore more likely to have the reverse problem: a file containing Unicode characters with code points greater than 255, but with no BOM, is now more likely to be guessed as ANSI – and therefore displayed incorrectly.
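The mirror image of the earlier example, again as a Python sketch rather than anything Notepad-specific:

```python
raw = "Bush hid the facts".encode("utf-16-le")  # UTF-16LE text saved without a BOM
misread = raw.decode("cp1252")                  # guessed as ANSI instead
print(repr(misread))  # 'B\x00u\x00s\x00h\x00 ...' – a NUL byte after every character,
                      # typically rendered as blanks or squares
```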
Preventing encoding problems
Currently, the best approach seems to be to use UTF-8 everywhere. Ideally you would re-encode all old text files into UTF-8 and only ever save text files as UTF-8. There are tools such as recode and iconv that can help with this.
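For a single file, something along these lines works. This is only a sketch: the file names are placeholders, and it assumes you already know the file really is CP-1252, which is exactly the metadata problem described above.

```python
# Read assuming CP-1252, write back as UTF-8 with a BOM
# ("utf-8-sig"), which Windows tools tend to expect.
with open("old.txt", "r", encoding="cp1252") as src:
    text = src.read()
with open("new.txt", "w", encoding="utf-8-sig") as dst:
    dst.write(text)
```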
Comments
- According to Wikipedia: in Windows Vista and Windows 7 [..] IsTextUnicode has been altered to make it much more likely to guess a byte-based encoding rather than UTF-16LE. – Arjan – 2010-12-13
- Yes, for sure those files have a BOM, since we generate those files with a BOM. It is interesting that Windows 7 does not read a BOM created by the older OS. – Sha Le – 2010-12-14
- The BOM hasn't changed. It might be that your files are missing the BOM, but that previously the default format was some Unicode variant, where it is now ASCII. See my answer. – harrymc – 2010-12-14
- @Sha Le: If the file has a BOM, Windows 7 Notepad should open it correctly, so the problem you describe doesn't fit the known issues with IsTextUnicode. Can you create a small sample file that illustrates the problem you have with a file that includes a BOM? – RedGrittyBrick – 2010-12-14
- There is also "this app can break" for the same effect as "Bush hid the facts". – Regent – 2011-04-12