You need encryption to deter passive attackers (evil people who spy on the line and try to work out the data contents), and integrity (and its sibling, authentication) to block active attackers who alter the data while in transit. SSL provides both. However, this makes sense only in contexts where attackers can actually spy on or alter data in transit. For two processes on the same machine, communicating through the "localhost" network, attackers who can do that have usually "already won".
Details may differ depending on the OS, but your security will normally depend on the OS, not on encryption. Consider, for instance, a Unix system with multiple users. If your two processes talk to each other, processes from other users cannot peek into the data exchanges (unless the attacker is root, and then he can do anything). However, they may try impersonation: at some point, one of your components must connect to the other, and the rendezvous point for TCP sockets is a port number. An attacker who can run his own code (as an ordinary user) on the machine may maintain a fake server which receives the connection request in lieu of the intended component. To protect against that, the client would have to make sure that it talks to the expected server (and vice versa): this can be done with SSL, but better solutions exist (in this case, using Unix-domain sockets with a path protected by system access rights, and/or getpeereid()).
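As a rough sketch of that Unix-domain-socket approach: put the socket in a directory only your user can enter, then verify the peer's credentials after accepting. The snippet below is illustrative, not a hardened implementation; it uses Linux's SO_PEERCRED socket option (BSDs expose the equivalent information through getpeereid() instead), and the directory and socket names are made up for the example.

```python
import os
import socket
import struct
import tempfile

# A directory created by mkdtemp() has mode 0700, so only our own user
# (and root) can reach the socket path inside it. That alone already
# blocks the "fake server on a well-known port" impersonation attack.
sock_dir = tempfile.mkdtemp()
path = os.path.join(sock_dir, "app.sock")  # illustrative name

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
os.chmod(path, 0o600)  # belt and braces: restrict the socket file itself
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()

# SO_PEERCRED (Linux) yields the (pid, uid, gid) of the peer process,
# as reported by the kernel, so it cannot be spoofed by the peer.
creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                        struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

# Reject peers that are not running under the expected identity.
if uid != os.getuid():
    conn.close()
    raise PermissionError("peer runs under an unexpected uid")
```

The same check can be done symmetrically on the client side, so each end knows it is talking to a process of the expected user, without any cryptography.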
If, in your case, the local machine is assumed to be "clean" (no hostile code runs on it, under any identity), then connections to localhost are safe and there is no need for any extra protection. If potentially hostile but unprivileged code may run on the machine (that's the "shared multi-user machine" model, as was typical of Unix systems in the late 1980s), then you should use OS features to make sure that connections are conveyed to the right process (i.e. to a process running under the expected identity). SSL would be overkill.