92

Without being a programmer or a computer expert, how can I know if a particular program or any piece of software in general doesn't have hidden unwanted functions compromising privacy and security?

Longpoke
  • 188
  • 6
user3533
  • 989
  • 6
  • 7
  • Many of the comments are on point. Additionally, if the software runs on a *nix or bsd machine, you can put a trace on the software and observe the low level functionality (i.e. what system calls it makes.) – Stephan Feb 07 '13 at 02:06
  • 1
    If you were a programming expert, you could use static analysis. Basically decompile the program using IDAPro and see a map of all the system apis being called aka the ones that could do damage. Other than that you can see if the app is opening ports it should or making phone calls home by using a firewall. – j_mcnally Feb 07 '13 at 03:09
  • 1
    If a particular piece of software claims to do exactly nothing, then this problem is easy. – emory Feb 07 '13 at 13:26
  • 1
    @emory, you should check NaDa: http://www.bernardbelanger.com/computing/NaDa/index.php – That Brazilian Guy Feb 08 '13 at 13:49
  • In general, you can not know a piece of software only does what the author claims. However to answer your specific question: how can I know if a particular program has hidden unwanted functions compromising privacy and security? Install this software on a computer that is totally unconnected - no internet, wifi, lan, etc. Use this computer only for running this program. If there are any hidden unwanted functions trying to compromise privacy and security, they will fail. – emory Feb 09 '13 at 16:29
  • 1
    This concern is one of the reasons some people choose Open Source Software. If anyone can read the source code, you have a much better chance of knowing if the program does anything untoward. – Jay Bazuzi Feb 11 '13 at 23:55
  • Btw, anyone remembers those real computer 'viruses' from the pre-internet era? - They would alter executable files found on the victim's computer, injecting their own code into them. This is another case where no one can make assertions about the programs' behavior, even if they were cleanly built from trusted source code. – JimmyB Feb 12 '13 at 07:58
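
The tracing and static-analysis suggestions in the comments above can be approximated even without specialist tools like IDA Pro. As a minimal sketch (the function name and the six-character threshold are arbitrary choices, not anything from the comments), a crude `strings`-style scan pulls printable runs out of a binary so a human can eyeball embedded URLs, hostnames, or suspicious API names:

```python
import re

def printable_strings(path, min_len=6):
    """Crude 'strings'-style scan: pull runs of printable ASCII out of
    a binary so a human can eyeball embedded URLs, hostnames, or
    suspicious API names."""
    with open(path, "rb") as f:
        data = f.read()
    # [ -~] matches every printable ASCII byte (0x20-0x7e)
    return re.findall(rb"[ -~]{%d,}" % min_len, data)
```

This catches only the laziest malware, since strings are trivially obfuscated, but it is a cheap first look before reaching for a decompiler.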

7 Answers

167

You can know whether some software does only what it announces in the same way that you can know whether the food they serve you at restaurants is poisoned or not. In plain words, you cannot, but Society has come up with various schemes to cope with the issue:

  • You can listen to friends and critics to know if the food at a given restaurant has good reputation or not.
  • You can take a sample and send it to a lab which will look for many (but not all) known poisonous substances.
  • You can ask nicely if you may observe the cook while he prepares the dishes.
  • The cook has a vested business interest in his customer being happy with the food quality, and happiness includes, in particular, not being dead.
  • Society punishes poisoners with the utmost severity and it can usually be assumed that the cook knows it.
  • You always have the extreme option of not eating there if you are too worried.

All of these can be directly transposed into the world of software. Extreme methods of ascertaining software quality and adherence to its published behaviour include very expensive and boring things like Common Criteria which boil down to, basically, knowing who made the program and with what tools.

Alternative answer: every piece of software has bugs, so it is 100% guaranteed that it does not do exactly what it is supposed to do. (This assertion includes the software which runs in the dozen or so small computers which are embedded in your car, by the way.)

Tom Leek
  • 168,808
  • 28
  • 337
  • 475
  • 20
    One of the best analogies ever – Lucas Kauffman Feb 06 '13 at 21:40
  • 11
    That's good. And the alternative answer is also brilliant. – David Stratton Feb 06 '13 at 21:42
  • One point: a restaurant is fairly high-profile and established, while the person who makes a piece of software may not be. Some anonymous person may not be punished if the software is bad, nor do you necessarily know their reputation. If you know who actually made a piece of software and they have a strong reputation to uphold, _then_ the analogy works better. – cpast Feb 07 '13 at 00:01
  • 11
    The analogy covers that fine too, @cpast. You're safer eating at a well-known, well-reviewed restaurant than buying sushi from a street vendor who may just disappear after accidentally selling some bad fish. – amalloy Feb 07 '13 at 06:01
  • There are also things like formal specification (you can read the specification if you know the specification language) and test if the SW piece conforms to it. – Sulthan Feb 07 '13 at 09:32
  • 3
    Also, you can ask for or look up the recipe and make the food yourself at home. – Christoffer Hammarström Feb 07 '13 at 10:06
  • 2
    Or you can ask for the ingredients and cook it yourself (download source code & compile). – Konerak Feb 07 '13 at 15:45
  • This is the typical answer to this sort of question. It's not entirely true. Code != food because the language designer chooses the rules of the universe. Contrived example: in a pure functional language, I already know 100% for sure that any function cannot cause side effects outside of space/time usage. "every piece of software has bugs", is false; the identity function `id : a -> a; id x = x` has no bugs, the type alone is already proof. I also know that the multi-precision addition function in x64 assembly I wrote today has no bugs by intuition - it's blatantly obvious that it's correct. – Longpoke Feb 07 '13 at 20:22
  • 8
    Every analogy breaks down when looked at too closely. That's how analogies go: they _illustrate_ concepts so that the human mind can digest them. As for your function, since hardware itself is not bug-free... – Tom Leek Feb 07 '13 at 20:33
  • 1
    The code I wrote is bug-free, regardless of whether the hardware is bug-free. Without making assumptions one can never "know" anything. That being said, don't go on and trust Intel hardware. Also: "Society punishes poisoners with the utmost severity and it can usually be assumed that the cook knows it." Did society punish Debian for the random-generator vulnerability that affected it for 2 years (equivalent to a back door)? No. They didn't even lose their credibility. – Longpoke Feb 08 '13 at 19:22
  • @Longpoke perhaps the code you wrote is bug free and perhaps the hardware is bug free. Is the compiler and/or interpreter bug free? Regardless space/time usage can be a problem itself. Imagine the computer is running (a) a life support system and (b) your identity function. The LFS has some weird concurrency bug that would never surface in a million years except your id functions space/time usage forced it. You id function arguably did do something beyond what you claimed. – emory Feb 09 '13 at 16:20
  • @emory: It's easy to verify a non-optimizing implementation of an FP language. The cause of the concurrency bug isn't `id`, the problem is that some other part of the system has a concurrency bug. If you care about security you verify the entire TCB (which means you can't use an insane stack like gcc/*nix/x86) and make sure there are no such concurrency bugs in it. That said, if this question is really only about verifying *nix software, the answer is quite simple: you can't. Anyone who says otherwise is using an industry definition of "secure". By *nix, I mean BSD,Microsoft,Apple,UNIX, etc. – Longpoke Feb 10 '13 at 03:56
  • "The cook has a vested business interest in his customer not being dead." I swear, if I ever write an introductory textbook on business economics (likely never, but hey), this sentence will be in there. – us2012 Feb 10 '13 at 06:03
  • Now I just need to invent a latent neuro-toxin that will cause the imbiber to re-write their will too me just before they die. (Sorry, I'm on a bit of a backdoor trip, and this analogy was missing one ;)) – naught101 Sep 11 '13 at 06:42
23

You can't, at least not with 100% accuracy. Speaking as a programmer, it's very easy to code in whatever I want, and it's not necessarily just what's advertised.

Not all unexpected activity, however, is malicious. I'm assuming you're worried more about malicious activity. Even that is not 100% possible to detect all the time, but there's hope.

You can use software that monitors things like network traffic, file activity, etc, to find clues that software is behaving in an unexpected way. For example (and I know this is just a basic tool) you can use Fiddler to see if a particular application is accessing the Internet via http(s). (Yes, I know there are better tools out there, though. Fiddler is just the first that comes to mind.) On Windows, you can use Process Monitor to get even more insight. Similar tools exist for other platforms.

There are also several other services that are available for you to use that will perform the analysis for you.
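
To make the monitoring idea concrete, here is a toy sketch (assuming the code under inspection is pure Python running in the same process; the variable names are made up for illustration) that logs every outbound connection attempt by wrapping `socket.connect`. Real tools such as Fiddler or Process Monitor observe at the OS level instead, which untrusted code cannot simply bypass:

```python
import socket

# Toy dynamic analysis: wrap socket.connect so every outbound
# connection attempt made in this process gets recorded first.
observed_connections = []
_original_connect = socket.socket.connect

def _logging_connect(self, address):
    observed_connections.append(address)     # record the target
    return _original_connect(self, address)  # then connect as usual

socket.socket.connect = _logging_connect
```

Anything the monitored code connects to then shows up in `observed_connections` for review.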

David Stratton
  • 2,646
  • 2
  • 20
  • 36
  • 1
    Dynamic analysis doesn't buy you anything. You are stuck with the halting problem either way. A simple logic bomb written by a 12 year old will thwart all types of dynamic analysis as long as the code is dense enough. – Longpoke Feb 20 '13 at 05:25
9

Especially as software becomes larger and more complicated, it becomes impossible* for even experts to answer that. To that extent, privacy and security from an application are best handled by using sandbox or Mandatory Access Control methods. The idea behind these methods is that the software runs in a system that controls what it can do, and you permit it to do only what you expect it to do. Done properly, you can limit possible connections, and be notified if the program ever tries to access files you didn't expect it to. Very advanced methods can be used to monitor memory or decrypt network traffic through a proxy service.

In short, if you can't understand everything it does, the answer is to restrict everything it can do with something it runs inside of (the operating system).

Jeff Ferland
  • 38,090
  • 9
  • 93
  • 171
  • 2
    There is a dangling asterisk after your "impossible" but, by right, you should quote Donald Knuth and MetaPost here. – Tom Leek Feb 06 '13 at 21:42
  • The dangling asterisk implies that the time wouldn't be infinite, just too damn long. – Jeff Ferland Feb 07 '13 at 08:36
  • 1
    typical sandboxes (VM,java,etc)/MAC/ACL/DAC etc have all failed. The only model I know that currently is known to work is the capability model. On the other hand, if you are stuck using *nix, your only choice is really the things you mentioned. – Longpoke Feb 07 '13 at 20:25
  • @Longpoke SELinux, at least, controls every system call and thus includes capability control. – Jeff Ferland Feb 07 '13 at 20:37
  • @JeffFerland I'm not talking about linux capabilities, I'm talking about the capability model. – Longpoke Feb 07 '13 at 20:44
7

In his widely known ACM Turing Award lecture "Reflections on Trusting Trust" (now almost exactly 30 years ago!), Ken Thompson said, "You can't trust code that you did not totally create yourself."

In practice, commercial software is no exception to other commercial products in that products from producers with good names on the market usually have a higher probability of being better. However, there is no absolute guarantee of that. Decades ago I got diskettes from a renowned producer that carried a virus. In that case I personally believe it was not a malicious act by anyone inside the firm, but that some of the firm's computers had been infected by a virus from the outside.

However, it is evidently not possible in general to 100% exclude the possibility of backdoors being introduced into software by insiders of the firm, whether this is known to its CEO or not. Backdoors could, IMHO, be an extremely critical issue, now that cyber-wars are looming in the world. A secret agency of a government could manage in some way (via money, coercion or even malware) to have such backdoors implanted in certain software that normally serves to ensure the security of communications (e.g. software relevant to digital signatures) and that is sold to and used by certain non-friendly or potentially non-friendly foreign nations, and then, either immediately or at some appropriate later point in time ("time bombs" etc.), exploit the backdoors to achieve the goal of disrupting the target nations' critical infrastructure, etc. Stuxnet, Flame and Gauss are a few names that should give some indication of the capabilities of the potential malefactors.

Mok-Kong Shen
  • 1,199
  • 1
  • 10
  • 14
  • 2
    Extending your point... Even if you do compile from source that you wrote yourself... Who is to say that the compiler you use isn't doing something nefarious (assuming you didn't write the compiler from assembly...) – Josh Feb 07 '13 at 17:13
  • @Josh: It all depends on how high your stake is. Not only application software but also compilers, OS and firmware/hardware could be potential sources of danger. You have to decide wisely which safety measures are necessary in your case and which are superfluous (and take responsibility for omissions). In December 2012 there was a US DARPA conference aiming to find and shut backdoor malware holes in commercial IT devices (https://www.fbo.gov/?s=opportunity&mode=form&id=55b80a80971c739699e410584819e767&tab=core&_cview=0). See in particular the section "Background" in the pdf-file it links to. – Mok-Kong Shen Feb 07 '13 at 20:51
  • 1
    @Josh - Then there's the hardware. This is why, to build my computer, I started with sand to make silicon. Of course, I'm not **quite** finished with it yet... ;) – Nathan Long Feb 08 '13 at 19:13
  • @NathanLong: According to information in the normal media, there is at least one country working to develop chips of its own, aiming to be independent of designs like Intel's. I speculate, though I don't know, whether the security issue couldn't eventually also be a minor motivation for that project. – Mok-Kong Shen Feb 09 '13 at 08:28
1

Unfortunately, you can't...

Just as a good programmer can seem like a wizard to his users, a good trojan can fake a perfectly normal environment to keep its victim unsuspecting.

Some viruses/trojans even sanitize the victim's system, in order to:

  • ensure another virus won't break their work
  • ensure an anti-virus won't find them
  • keep the victim's system working well, so the victim stays confident.

So, you can't! If in doubt, consult an expert!

1

It ultimately comes down to trust. Do you trust the reputation of the company releasing the software? If it is open source, is it used by enough developers that they would raise flags if there were issues? There is a certain amount of strength in numbers, since a commonly used product is more likely to have had extensive research done on whether it is trustworthy. Unless you are very paranoid, looking at what the community has to say about a particular piece of software is generally the best bet, but there will still always be bugs and there will still always be mistakes.

AJ Henderson
  • 41,816
  • 5
  • 63
  • 110
  • One should note what is commonly overlooked regarding open-source software: There is no technical means which would guarantee that some binary/compiled program was built from exactly the source code it claims it was built from. – JimmyB Feb 07 '13 at 15:44
  • @HannoBinder - true, though you also have the option to build your own version and while not simple, if well done, it should be doable for a limited technical user. – AJ Henderson Feb 07 '13 at 15:56
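
Building from source still leaves the question of whether a downloaded artifact is the one you expected. A small sketch of the usual first step, comparing a file's SHA-256 digest against the checksum the project publishes (this does not solve the built-from-this-source problem JimmyB raises; it only detects tampering in transit, and is only as trustworthy as the channel the published checksum came from):

```python
import hashlib

def sha256sum(path):
    """Hash a downloaded file so it can be compared against the
    checksum the project publishes alongside its releases."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large downloads don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Signed checksums (e.g. a GPG-signed SHA256SUMS file) are stronger, since a plain checksum hosted next to the download can be replaced along with it.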
0

As pointed out by others, there is no guaranteed way to know. A lot of the time, you have to trust the integrity and reputation of the vendor. Following secure practices, such as only installing software from sources you trust can help, but just like real life, sometimes, we trust the wrong people.

In the end, I think we should adopt a certain level of paranoia. If you install an app on your phone, don't just accept or say yes when your phone OS informs you the app wants access to your private information, your location, etc. Ask yourself why it needs that access. If the access the application is requesting seems justified based on what you expect it to do, then saying yes may be OK. On the other hand, if it is requesting access to information or services well outside what it should need or be interested in, be a little suspicious and consider carefully before just saying yes.

Tim X
  • 3,242
  • 13
  • 13