It doesn't.
The string terminator is a byte containing all 0 bits.
The unsigned int is two or four bytes (depending on your environment) each containing all 0 bits.
The two items are stored at different addresses. Your compiled code performs operations suitable for strings on the former location, and operations suitable for unsigned binary numbers on the latter. (Unless you have either a bug in your code, or some dangerously clever code!)
But all of these bytes look the same to the CPU. Data in memory (in most currently-common instruction set architectures) doesn't have any type associated with it. That's an abstraction that exists only in the source code and means something only to the compiler.
Edit: As an example, it is perfectly possible, even common, to perform arithmetic on the bytes that make up a string. If you have a string of 8-bit ASCII characters, you can convert the letters in the string between upper and lower case by adding or subtracting 32 (decimal). Or, if you are translating to another character code, you can use their values as indices into an array whose elements provide the equivalent bit coding in the other code.
To the CPU the chars are really extra-short integers (eight bits each instead of 16, 32, or 64). To us humans their values happen to be associated with readable characters, but the CPU has no idea of that. It doesn't know anything about the C convention that a null byte ends a string, either (and as many have noted in other answers and comments, there are programming environments in which that convention isn't used at all).
To be sure, there are some instructions in x86/x64 that tend to be used a lot with strings - the REP prefix, for example - but you can just as well use them on an array of integers, if they achieve the desired result.
You're asking about typical computers, for which the answers are completely right. However, there used to be architectures that used tagged memory to distinguish between data types. – user1686 – 2018-10-01T12:09:03.420

The same way the computer cannot differentiate a 4-byte float from a 4-byte integer (representing a very different number). – Hagen von Eitzen – 2018-10-01T14:47:21.410
While ending a string with 0x00 is common, there are languages which use length-prefixed strings. The first byte or two would contain the number of bytes in the string. In this way, a 0x00 at the end is not needed. I seem to recall Pascal and BASIC doing that. Perhaps COBOL as well. – lit – 2018-10-02T13:57:21.850
@lit Also header formats in many communication protocols: "Hello, I am this kind of message and I am this many bytes long". Often this is because complex data types are stored inside, which makes null termination much more troublesome to parse. – mathreadler – 2018-10-03T18:11:52.677
@lit: Most variants of Pascal and BASIC yes, and PL/I and Ada -- and Java, since substring sharing was dropped in 7u6, effectively uses the array length prefix -- but COBOL only sort-of: you can read data declared "pic X occurs m to n depending on v" (and the count can be anywhere, not just immediately before), but storing it is more complicated. – dave_thompson_085 – 2018-10-03T22:05:38.507