
When reading about how antivirus software and sandboxes work, my understanding is that an AV scans a file to see whether it matches known virus signatures, while a sandbox can be used to observe the behaviour of a file when it runs. However, I don't understand how sandboxes can be used to detect unknown malware. Could you please explain this? Thanks!

user3404735
  • Malware, a form of computer-based deception performed by a human or humans, can only be detected by a human or a team of humans with analytical knowledge of counterdeception and domain knowledge of malware and anti-malware practices – atdre Jan 12 '16 at 20:07
  • Sandboxing is an implementation technique while AV is a product. AVs often use sandboxing as one of their techniques to detect malware and trojans. – Ugnes May 07 '18 at 07:12

2 Answers


These are two quite different things. A little simplified:

An AV is a piece of software that can (among other things) scan your system to identify, and attempt to isolate and remove, threats like viruses and other malware.
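The core of the signature-scanning idea can be sketched in a few lines. This is a toy illustration only, not how a real AV engine is built (real engines use hashes, wildcard patterns, unpacking, and emulation, with databases of millions of entries); every name and byte pattern below is invented:

```python
# Toy sketch of signature-based scanning: match file contents against a
# database of known byte patterns. All names and patterns are made up.

KNOWN_SIGNATURES = {
    "Example.Trojan.A": b"\xde\xad\xbe\xef\x00\x01",
    "Example.Worm.B": b"EVIL_PAYLOAD_MARKER",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of every known signature found in the data."""
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern in data]
```

A scanner built this way can only flag what is already in its database, which is exactly why signature-based AV on its own cannot detect genuinely unknown malware.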

A sandbox, on the other hand, is basically a context in which a piece of software can be run isolated from the rest of the world. Java applets running in a browser are a classic example, as is Flash (though it is apparently not nearly as well isolated and safe as would be ideal): these are contexts where programs can run without having access to resources on the host machine (your PC), such as your file system, etc.
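A very small illustration of the "isolated context" idea is just running code in a child process with a stripped-down environment and a throwaway working directory. To be clear, this is a sketch of the concept, not a secure sandbox; real sandboxes add kernel-level controls (namespaces, seccomp, VM boundaries) on top of mere process separation:

```python
import subprocess
import sys
import tempfile

def run_isolated(code: str) -> str:
    """Run a Python snippet in a separate process with no inherited
    environment variables and a scratch working directory.

    This only *reduces* what the child can easily reach; it is nowhere
    near the isolation a real sandbox or VM provides."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: Python isolated mode
            cwd=scratch,        # child starts in the throwaway directory
            env={},             # no inherited environment variables
            capture_output=True,
            text=True,
            timeout=5,
        )
    return result.stdout
```

The same pattern scaled up, with the operating system or hypervisor enforcing the boundary instead of a few process flags, is what browser sandboxes and virus labs rely on.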

Another example can be a set of virtual machines. You could build your own "virus lab" by setting up several VM's running different OS's, then linking them together in an internal network, which will only be visible to those VM's. Now you could experiment by running malware on these machines, and seeing how it affects them, and how they affect one another, without affecting anything outside of their virtual network.

Another way to think of a sandbox is as (ideally) a kind of digital software aquarium.

Kjartan
    To more directly address the question: When running something in a sandbox, its attempts to interact with the system can be closely monitored for specific malicious behaviors. If software in this sandbox exhibits these malicious behaviors, it will be treated as malware. – ztk Jan 12 '16 at 14:37
  • @ztk conceptually a "sandbox" that does this would be a reactive type of security system like an IPS or an heuristics-based AV tool. AV software does require some mediation of system-app behaviour if it aims to proactively detect malicious behaviours, so I think it's best to view the question in terms not of technical capabilities but theoretical attitudes towards how to tackle malicious behaviour. See my answer :-) – Steve Dodier-Lazaro Jan 12 '16 at 18:01

AntiVirus (AV) software operates based on the idea that you can decide what is bad, detect which programs do bad things, and kill/uninstall them. Reactive security systems like AV software require very good knowledge about the threats you're facing, or about the difference between malicious and normal behaviour. That makes them costly and imprecise.
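The "decide what is bad" approach can be made concrete with a toy heuristic scorer: observed behaviours are weighed against a hand-written table, and anything over a threshold is flagged. The behaviours, weights, and threshold below are entirely invented; real heuristic engines use far richer features, which is precisely why they are costly to maintain and imprecise:

```python
# Toy heuristic classifier: invented behaviours, weights, and threshold.
SUSPICION_WEIGHTS = {
    "writes_to_system_dir": 4,
    "disables_av_service": 5,
    "opens_network_socket": 1,
    "reads_user_documents": 1,
    "encrypts_many_files": 5,
}

def classify(behaviours: set[str], threshold: int = 6) -> str:
    """Label a set of observed behaviours as malicious or benign."""
    score = sum(SUSPICION_WEIGHTS.get(b, 0) for b in behaviours)
    return "malicious" if score >= threshold else "benign"
```

Note that a legitimate backup tool also reads user documents and encrypts many files; the table cannot tell it apart from ransomware, which is the contextuality problem this answer returns to below.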

Sandboxes operate on the idea that you cannot decide what is good or bad, but the user can decide what they choose to trust. They provide you with the ability to confine or isolate specific programs so that if they do bad things, they don't do any harm to anything outside the sandbox. The downside of confinement systems is that there must be a logical and simple way, for the system or for its users, to decide which programs get confined where, and which are allowed to talk to each other.
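The "which programs get confined where" problem can be illustrated with a toy policy table (every label and rule here is invented for the sake of the example): programs are assigned sandbox labels, and communication is only allowed within a sandbox or along explicitly whitelisted pairs.

```python
# Toy confinement policy: invented labels and rules.
SANDBOX_OF = {
    "browser": "web",
    "pdf_viewer": "web",        # opens downloads, so grouped with the browser
    "tax_software": "finance",  # isolated: nothing else should read tax data
    "backup_tool": "system",
}

# Cross-sandbox communication must be explicitly whitelisted.
ALLOWED_CROSS_TALK = {("web", "system")}  # e.g. backups may read downloads

def may_communicate(prog_a: str, prog_b: str) -> bool:
    """Allow talk within one sandbox, or along a whitelisted pair."""
    a, b = SANDBOX_OF[prog_a], SANDBOX_OF[prog_b]
    if a == b:
        return True
    return (a, b) in ALLOWED_CROSS_TALK or (b, a) in ALLOWED_CROSS_TALK
```

Even this toy shows the difficulty: somebody has to write and maintain the table, and a rule that is too loose loses the isolation while one that is too strict breaks legitimate workflows.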

These are completely different concepts, and the two families of tools can be used in conjunction. This is the case on OSes like Windows, where store applications all run within a sandbox, and AV software can be used to detect modifications to the system or the installation of a known piece of malware (which might or might not be sandboxed).

From a theoretical perspective, the use of confinement transforms the problem of classifying / making sense of application behaviours into a problem of deciding what security properties hold given a set of arbitrary boundaries between untrusted actors. As far as I'm concerned there's ample experimental evidence that the former approach is doomed to fail, based on the poor performance of AV software and the role of contextuality in deciding whether any given program behaviour is malicious or not.

Steve Dodier-Lazaro
  • Thanks Steve. May I ask you one more question? What are the criteria for deciding whether a program does bad things or not? In other words, could you please give me an example? If I scan a malicious program with AVs, the results will show whether the file contains a backdoor. I am also using the Cuckoo sandbox to observe the behavior of that malicious file; which part of the sandbox's output should I look at to see whether the file contains a backdoor? – user3404735 Jan 13 '16 at 01:46
  • Reliable backdoor detection is impossible. AV software scans for *known* backdoors, by trying to establish the similarity between code in your app and a sample of code for that backdoor, modulo some mutations. The key is that whether an atomic action is bad or not depends on its context of application. Is this outgoing connection normal activity, or is it your app connecting to a botnet? If you build behavioural profiles of apps, how do you deal with app plugins or seldom-used features? Is the app meant to automatically open this document? And what about that other document? Did you agree to that? – Steve Dodier-Lazaro Jan 13 '16 at 09:36
  • Thanks Steve. I agree with your points. Whether an atomic action is bad or not depends on the context of the application. Could you please let me know how an AV creates a signature for a backdoor? Is the signature used by an AV to detect a backdoor supposed to be secret? – user3404735 Jan 13 '16 at 12:56