9

I wonder how hackers find vulnerabilities.

If they use fuzzing, security engineers do it too, and security engineers working at a firm probably have more resources than a group of hackers.

Reverse-engineering takes a lot of time, and I think it's not reliable enough to depend on. Is that really the case, or am I missing something?

What I have in mind are those contests where hackers have one day to find a vulnerability in a browser and exploit it. How do they do that? And the time allocated for an open-source browser (Chrome) is the same as for a closed-source browser (Internet Explorer).

David Stubley • 2,886
jaja • 91

3 Answers

12

The basic path for exploiting an overflow-related vulnerability is to find a crash (often by fuzzing), evaluate the crash and whether it presents an attack path, and then build something to exploit it.
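That fuzz-then-triage loop can be sketched in a few lines of Python. Everything here is invented for illustration: `parse` is a toy stand-in for the real target, and the raised exception stands in for a crash.

```python
import random

def parse(data: bytes) -> None:
    """Toy target: 'crashes' (raises) when its length field is malformed."""
    if len(data) >= 2:
        length = data[0]          # first byte claims the payload length
        payload = data[1:]
        if length > len(payload):
            # A real target would corrupt memory here; we just raise.
            raise IndexError("length field exceeds payload")

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes of a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

random.seed(0)                    # reproducible run
seed = b"\x04ABCD"                # valid input: length 4, payload "ABCD"
crashes = []
for _ in range(1000):
    case = mutate(seed)
    try:
        parse(case)
    except IndexError:
        crashes.append(case)      # save crashing inputs for triage

print(f"{len(crashes)} crashing inputs out of 1000")
```

Each saved input is then the starting point for the second and third steps: evaluating whether the crash is controllable, and building an exploit around it.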

Sometimes where one looks can involve knowledge of the architecture, such as when Charlie Miller noted that iOS 4.3 had a section of memory that would run unsigned code. In the case of all his Apple attacks, I believe the source was closed. SQL injection weaknesses are similar: you're starting with a crib, knowing where a good place to look is.

While it is simplest to identify and correct a bug via source after finding a crash, the work to set up that environment can be without value if one's goal is simply to write a working exploit. Often, the relevant space of a program will involve looking at less than 1 KB of machine code.

From my perspective, open source allows one to view bugs being added as they happen and say, "Hey, that bit of source looks insane." Beyond that, the work of exploiting closed-source and open-source software is often the same. As for reverse engineering, that's rare unless you're trying to alter or recreate an application rather than launch an exploit against it.

I wouldn't underestimate the ability of hacker groups to dedicate a lot of computing resources to a task. Warez groups have been known throughout the history of the Internet to muster substantial storage and bandwidth resources. It might be naive to think hacker groups don't muster the same scale of CPU power.

Finally, most people who do this for a long enough time keep a discovered exploit or two up their sleeves rather than releasing it.

Jeff Ferland • 38,090
9

To look at your assumptions:

many attack groups have resources vastly bigger than those of companies

  • in a typical company security is a cost centre, so they never have enough staff or money
  • in the black hat world, finding security flaws is a revenue stream

fuzzing is done by security engineers and black hats

  • fuzzing is by its very nature fallible: so many different fuzzing algorithms exist that no single one will find all issues (this is true of all security testing)
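To make that fallibility concrete, here's a toy sketch (the `parse` function and the magic value are invented): a blind mutation fuzzer essentially never reaches a bug guarded by a 4-byte magic check, which is exactly the kind of path a different algorithm, such as coverage-guided fuzzing, might find.

```python
import random

def parse(data: bytes) -> None:
    # The bug is only reachable behind a 4-byte magic check; random
    # inputs have roughly a 1-in-2**32 chance of guessing it.
    if data[:4] == b"\x7fELF":
        raise MemoryError("deep bug reached")

random.seed(1)                    # reproducible run
hits = 0
for _ in range(100_000):
    case = bytes(random.randrange(256) for _ in range(8))
    try:
        parse(case)
    except MemoryError:
        hits += 1

print(f"bug reached {hits} times in 100000 random inputs")
```

A hundred thousand purely random inputs never get past the guard, so the bug behind it goes unreported, no matter how long this particular fuzzer runs.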

reverse engineering is time consuming

  • absolutely, but time is something the black hats have. Some hire teams of smart kids to do nothing but reverse engineer code. Some place individuals into development teams to smuggle code out. If it makes money, the general rule of thumb is that someone will try it.

the one day exploit competitions

  • Charlie Miller himself states that he develops exploits in the run-up to these competitions so he has some tools ready.
Rory Alsop • 61,367
1

The primary source of vulnerabilities is accidental discovery rather than intentional searching.

When the browser crashes, security experts use a debugger to work out how it happened and see whether they can reproduce the crash. Vulnerabilities are thus noticed as they happen, rather than hunted for.

When programming, say when building a website, experts who use some obscure feature ask themselves how that feature works and what happens when it is used wrongly.

Only a relatively small number of vulnerabilities are found by the likes of Charlie Miller. In those cases, it's because they have spent years reverse engineering a product, and can repeatedly find vulnerabilities based on existing knowledge.

Robert David Graham • 3,883