It is neither "ASCII" nor "ASCII Russian".
Before Unicode became widespread, most computer systems used the ISO 8859 character encodings, of which there were 15 (numbered up to 16; part 12 was never published), each for a different region (Central European, Cyrillic, Greek...). Windows had its own 'code pages', very similar but with extra glyphs in otherwise-unused ranges. All these character encodings are 8-bit and differ only in the upper half (bytes 128–255).
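To make this concrete, here is a small Python sketch showing how one and the same byte value decodes to entirely different characters depending on which 8-bit encoding is assumed:

```python
# The byte 0xE0 lies in the upper half (128-255), where the
# ISO 8859 encodings diverge from one another.
b = bytes([0xE0])

print(b.decode("iso-8859-1"))  # Western European: 'à'
print(b.decode("iso-8859-5"))  # Cyrillic: 'р'
```

The lower half (bytes 0–127) is plain ASCII in all of them, which is why English text usually survives an encoding mix-up unscathed.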
The problem with these encodings is that it's next to impossible for a program to determine which encoding was used to save a file, unless it was specified explicitly (such as in HTML pages; however, plain text files have no such metadata tags). Read the Wikipedia article on Mojibake for a more detailed description.
In your example, the document was saved using Windows-1251 (Cyrillic), but your program reads it as if it were Windows-1252 (Western European), which has very different characters in the same positions. To the computer, it looks perfectly okay – it doesn't understand languages or scripts. (There are programs which do statistical analysis in order to determine the correct encoding, though – some web browsers have such a function.)
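This exact mix-up can be reproduced (and reversed) in a few lines of Python; the sample word is made up, but the encodings are the ones from the example above:

```python
# Text saved as Windows-1251 (cp1251) but read as Windows-1252 (cp1252).
original = "Привет"                    # Russian for "Hello"
stored = original.encode("cp1251")     # the bytes actually written to disk
garbled = stored.decode("cp1252")      # wrong decode -> 'Ïðèâåò'

# As long as no bytes were lost, the damage is reversible:
# re-encode with the wrong encoding, then decode with the right one.
restored = garbled.encode("cp1252").decode("cp1251")
print(garbled, "->", restored)
```

Note the reversal only works while the text is still in its original bytes or a lossless round-trip; once mojibake has been pasted through a lossy conversion, some characters may be gone for good.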
There are several ways you could convert such text to Unicode:
Use an online character-encoding conversion tool.
Use your web browser:
Drag the .txt file into the browser.
From View → Character Encoding (or Firefox → Web Developer → Character Encoding, or Wrench → Tools → Encoding), pick the correct original encoding: "Cyrillic (Windows-1251)" in your case.
Use the Notepad2 text editor:
Open the file.
From File → Encoding → Recode..., choose the right original encoding.
Use GNU iconv, with Windows binaries available either from GnuWin32 or Gettext for Win32:
iconv -f cp1251 -t utf-8 < myfile.txt > myfile.fixed.txt
Once converted, the file will open correctly in Windows Notepad, which reads UTF-8 and UTF-16 encoded text without trouble.
Open the file in MS Word; it can guess the encoding correctly most of the time. – phuclv, 2015-08-14
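If Python is installed, the conversion that iconv performs above can also be sketched in a few lines (the file names follow the iconv example; the sample content is invented for the demonstration):

```python
# Create a sample file in cp1251, as if saved by an old Windows editor.
sample = "Привет, мир"
with open("myfile.txt", "wb") as f:
    f.write(sample.encode("cp1251"))

# The conversion itself: decode from cp1251, re-encode as UTF-8.
with open("myfile.txt", encoding="cp1251") as src:
    text = src.read()
with open("myfile.fixed.txt", "w", encoding="utf-8") as dst:
    dst.write(text)
```

This is the same decode-then-re-encode operation as `iconv -f cp1251 -t utf-8`, just spelled out explicitly.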