Security by obscurity. The expected answer to the stated question is deliberately wrong given only the information presented in the challenge; producing the right answer requires some additional "secret" knowledge that only software that is "in the know" has. The trouble is that the software is in the hands of your attacker, who can decompile it to discover the secret. This scheme is therefore very weak, because it relies on a static secret of low entropy (the "secret" is to add 1 to the answer to the stated question).
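Here's a toy version of that kind of scheme (the challenge range and the +1 rule are just illustrative). Anyone who decompiles the client can read the "secret" straight out of the response function:

```python
import random

def issue_challenge() -> int:
    """The server's challenge: a question whose literal answer is 'wrong'."""
    return random.randint(0, 999)

def client_response(challenge: int) -> int:
    # The entire "secret" is this one line. An attacker who decompiles
    # the client recovers it immediately; its effective entropy is zero.
    return challenge + 1

challenge = issue_challenge()
# The server accepts any client that knows to add 1 -- including an attacker's.
assert client_response(challenge) == challenge + 1
```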
If two programs must trust each other, each knowing that the other could be an impostor rather than who it claims to be, the usual method is some sort of "independent verification": if a trusted third party says that this program is who it says it is, that's "evidence" you can use to increase your confidence.
Certificates are one form of this verification. A server wishing to prove itself obtains a certificate from an independent third party (a certificate authority), which signs the certificate's information using a private key that is never given to the server. Anyone who requests the server's certificate can verify that signature using the third party's public key, which is distributed independently. The server (or an attacker wishing to impersonate it) therefore can't change the information in the certificate without invalidating the signature, so as long as the information matches the server's actual location and public identifiers, clients can be confident the server is who it says it is.
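A minimal sketch of that signing relationship, using the third-party Python `cryptography` package; the "certificate" here is just a byte string of identity info, and the key size and padding choices are illustrative:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# The certificate authority (the trusted third party) holds the private key.
ca_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# Its public key is distributed independently (e.g., shipped with browsers).
ca_public_key = ca_private_key.public_key()

# The "certificate": the server's identity information, signed by the CA.
cert_info = b"CN=www.example.com, pubkey=..."
signature = ca_private_key.sign(cert_info, padding.PKCS1v15(), hashes.SHA256())

# A client verifies the signature with the CA's public key. Any tampering
# with cert_info invalidates the signature.
try:
    ca_public_key.verify(signature, cert_info, padding.PKCS1v15(), hashes.SHA256())
    print("certificate info is authentic")
except InvalidSignature:
    print("certificate info was altered")
```

Real certificates (X.509) carry far more structure, but the trust argument is exactly this: only the third party could have produced the signature, and any change to the signed information breaks it.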
Without certificates, most systems rely on a "zero-knowledge" evidence model. Zero-knowledge proofs usually still require some sort of third party, which distributes the pieces of evidence that programs use to answer challenges; in the real world, this is usually an authentication server. The difference is that nobody has to know everything about the authentication scheme, and the evidence can be obtained in real time, so it can change every time authentication is performed.
Here's an example: Alice is greeted by Bob, whom Alice does not trust. Bob says he knows Cindy, and therefore, he says, he's trustworthy. To check this claim, Alice calls Cindy, who knows Alice, and asks for half of an asymmetric key pair. She then challenges Bob to encrypt a secret message so that it can be decrypted by Alice's key. Bob calls Cindy, who also knows Bob and gives him the other half of the key pair. Bob encrypts the message, which Cindy never sees, and gives it to Alice, who decrypts it with her key and gets the original message back. Bob couldn't have encrypted the message correctly without the other half of the key pair, and the only place he could have gotten that is Cindy. Cindy, for her part, never knows the secret message, so she can't give Bob the answer to send back unless Bob tells her (and if Cindy asked for it, Bob should suspect that maybe Cindy isn't who she says she is).
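In code, the exchange looks something like this (again with the `cryptography` package; Cindy hands out the two halves directly here, where a real authentication server would deliver them over authenticated channels):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Cindy generates a fresh key pair for this exchange.
key_pair = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alices_half = key_pair              # Cindy gives Alice the decrypting half
bobs_half = key_pair.public_key()   # ...and gives Bob the encrypting half

# Alice picks a random secret message and challenges Bob with it.
secret_message = os.urandom(32)

# Bob can only produce a ciphertext that Alice's half will open if his
# half really came from Cindy.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = bobs_half.encrypt(secret_message, oaep)

# Alice decrypts and checks that the round trip preserved her message.
assert alices_half.decrypt(ciphertext, oaep) == secret_message
print("Bob demonstrably holds the other half of Cindy's key pair")
```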
In the real world, Alice and Bob would be programs used by end users (maybe the same end user), and Cindy would be a central authentication system. The end users of the two programs would have offline secrets (username/password) they'd use to authenticate with the central system; once that's done, the programs can prove to each other that their end users are valid users on the system, without either app knowing the other user's credentials, and without the central service knowing the secret passed between the two programs as part of their handshake.
In order for this kind of scheme to be broken, an attacker David must either convince Cindy that he's actually Bob, or have an accomplice Emily who convinces Alice that she's Cindy and then freely gives David the other half of the key pair she generated for Alice. How Alice knows that Cindy is really Cindy and not Emily, and how Cindy knows that Alice and Bob are who they say they are, require their own schemes with their own secrets. Those schemes can involve third parties as well, but eventually you run out of third parties; at some point you must rely on the transfer of an offline secret, such as a set of user credentials, to verify that someone is who they say they are without having to consult yet another third party.
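That root-of-trust offline secret usually bottoms out as a credential check against a stored, salted hash, roughly like this (standard library only; the choice of PBKDF2 and the iteration count are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a salted hash of the password, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Check a presented credential without consulting any third party."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = enroll("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong password", salt, digest)
```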