3

It may be a silly question, but hackers do use publicly available tools to find vulnerabilities. So why don't companies, before releasing their products (Windows, Adobe, etc.), use the same tools to find those vulnerabilities and fix them?

I don't understand. Am I missing something?

bonsaiviking
  • 11,316
  • 1
  • 27
  • 50
Nick
  • 71
  • 1
  • 2
  • I think that "use Metasploit to test your software" doesn't really work. Having used Metasploit, it's not a simple "point and pwn" piece of kit. To be able to use it effectively you need to really understand its core... Think of it less as a solution for penetration & security testing and more as an "enabler". Just my two cents... – Aaron Dobbing Apr 07 '15 at 10:07
  • 3
    @Aaron Dobbing the premise is a bit worse than that even. The majority of issues chronicled in Metasploit and Nessus and similar scanners are developed *after* the software is in the wild. The question sounds as if there are new bugs that will be seen just by running a scanner. This is pretty much never the case, ever. – Jeff Meden Apr 07 '15 at 13:07
  • 1
    If the question were edited to remove reference to Metasploit, you may get more thoroughly thought-out answers. If you want to mention categories of tools, relevant ones would be fuzzers and code analyzers. – bonsaiviking Apr 07 '15 at 14:48
  • @JeffMeden In all honesty - I am more thinking web application in that sense, but you are definitely right. – Aaron Dobbing Apr 07 '15 at 15:42
  • 1
    (1) costs money, (2) often seen as useless since it adds nothing new in terms of functionality, (3) not all developers are familiar with static/dynamic analysis or fuzzing, (4) costs money, (5) not everyone is a security expert or even cares about security, (6) the need to ship products fast makes security a prime target for shortcuts, (7) some companies don't even do unit testing, so you can't expect them to invest much time/money in security. After vulnerabilities are found and patched, there are always some people running outdated software (easy targets). – Daniel Apr 07 '15 at 16:26
  • "Hacker tools" ≠ "Hacker" – Digital fire Apr 07 '15 at 18:11

5 Answers

8

They are doing it, or at least they're expected to do so, as part of their Software Development Life Cycle (SDLC).

In terms of vulnerability management, a best practice would be to first perform static and dynamic analysis of the source code, and then scan the product with a vulnerability scanner. Note that there are also other steps to enforce a secure SDLC, such as threat modeling, secure coding guidelines, etc., that should be performed as well.
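To make that scanning step routine rather than "useless", it is often wired into the build pipeline as a gate. Below is a minimal sketch of such a gate; the report format here is invented for illustration (real scanners typically emit SARIF or their own JSON), and `SAMPLE_REPORT` is a made-up example, not output from any actual tool.

```python
import json

# Hypothetical findings report; real scanner output formats differ.
SAMPLE_REPORT = json.dumps({
    "findings": [
        {"rule": "sql-injection", "severity": "high"},
        {"rule": "weak-hash", "severity": "medium"},
    ]
})

def gate(report_json: str, fail_on: str = "high") -> bool:
    """Return True if the build may proceed, i.e. no finding
    reaches the fail_on severity threshold."""
    order = {"low": 0, "medium": 1, "high": 2}
    findings = json.loads(report_json)["findings"]
    return all(order[f["severity"]] < order[fail_on] for f in findings)

# The sample report contains a high-severity finding, so the gate blocks it.
print(gate(SAMPLE_REPORT))
```

The point of the sketch is the policy decision, not the parsing: once scanning is a mandatory gate, the "doesn't generate revenue" objection turns into a release requirement.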

In real life, this is seen 90% of the time as a "useless" activity, because it requires resources but generates no revenue and adds no new functionality to the application.

ack__
  • 2,728
  • 14
  • 25
  • 1
    I have witnessed many orgs (not going to name them) use Metasploit, Nessus, or some variant/combination in order to do internal testing. Most use it as a first line of defense to catch obviously bad configurations, and then go deeper based on the results. – Jeff Meden Apr 07 '15 at 13:06
  • 1
    Also, some companies forget about something called social engineering. This can be a big problem if information about the application leaks, and that is what can lead to a surprise attack. – NathanWay Apr 07 '15 at 15:48
2

In addition to ack__'s answer, there is a significant learning curve to using these tools. As a penetration tester, I find that weeding out false positives and non-exploitable vulnerabilities often takes as much (if not more) time as actually leveraging an attack. Add the time it takes to become proficient with these tools, and you will find yourself looking to fill another full-time IT position. Hydra, for example, should be a simple tool to use, but in my experience it is a royal PITA. Automated tools are a great place to start, but nowhere near as valuable as an experienced engineer.

HillBillyHacker
  • 314
  • 1
  • 9
  • Isn't this why we have QA departments? – schroeder Apr 07 '15 at 17:39
  • @schroeder - Sure, some companies MAY have a QA team. But even if they do, who is to say they are looking for (or even qualified to look for) security faults? This is the exact reason the OP's question is valid. If there is a QA department, why are production systems and applications released with vulnerabilities? If having a QA team fixed the issue, why don't all companies implement them? Your comment neither adds nor clarifies anything. – HillBillyHacker Apr 07 '15 at 18:02
  • 1
    The OP sets the scope as "Windows, Adobe, etc." and these large dev teams have dedicated QA departments. You answer that testing is hard, and having a qualified person amounts to a dedicated testing position. I'm saying that there already exists a department to handle that: QA. Every QA team needs to be properly trained and tasked with locating issues in the software, whether security-related or not. – schroeder Apr 07 '15 at 18:07
  • The question is why companies do not run freely available security tools on their products prior to release (which I addressed). Your comments stating that it is the QA team's responsibility lend nothing to the thread. – HillBillyHacker Apr 07 '15 at 18:21
1

They (should) do, especially companies that have to conform to security standards such as PCI DSS in the payment card industry, or ISO 27001. They have to have a secure SDLC incorporating secure coding training, code review by a security expert (white-box testing), and periodic penetration testing (black-box testing) performed by external companies as well. The entire network, not only the applications, goes under assessment.

DavidC
  • 51
  • 3
1

Software bugs are discovered primarily through source code auditing, reverse engineering, and fuzzing. A fuzzer is the kind of tool that could be termed the "hacker tool" you mention. Google spends thousands of hours fuzzing Chrome and other critical software before release. Adobe follows these practices as well, after the horrible abuse of vulnerabilities in its Reader product. The reasons why most small or mid-sized companies can't afford to use these tools effectively are:

  1. Lack of dedicated security teams
  2. Lack of understanding of security bugs
  3. No management support for extra time and effort
  4. No visible incentive
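To make "fuzzing" concrete: at its core a mutation fuzzer just flips bytes in a valid input and watches for crashes. The sketch below is a toy illustration of that idea, nowhere near a production fuzzer like the ones Google runs against Chrome; `toy_parser` and its crash condition are invented for the example.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Overwrite a few randomly chosen bytes with random values."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 2000):
    """Feed mutated inputs to `target`; collect inputs that raise."""
    rng = random.Random(0)  # fixed seed: reproducible runs
    crashes = []
    for _ in range(iterations):
        case = mutate(seed_input, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes

# Invented target: a "parser" that cannot handle NUL bytes.
def toy_parser(data: bytes):
    if b"\x00" in data:
        raise ValueError("unexpected NUL byte")

found = fuzz(toy_parser, b"\x01hello")
print(f"{len(found)} crashing inputs found")
```

Real fuzzers add coverage feedback, corpus management, and crash triage on top of this loop, which is exactly where the dedicated security team and management support from the list above come in.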

We in security usually think that security bugs are a big deal and should be treated with the utmost importance. Software teams don't think this way. As Haroon Meer mentioned in his Troopers15 keynote last week, the bosses who receive bonuses and promotions based on software product releases have, by the time bugs are found months or years later, either left the company or moved into untouchable positions.

Also, when was the last time a company went bankrupt after security bugs were found in its software? Adobe, Oracle, IE, Flash, and CMS software have an almost unlimited supply of remote code execution bugs found every month, but none of these companies' businesses are affected. That is the reason companies don't spend time and effort mitigating vulnerabilities: it is simply not business-critical.

void_in
  • 5,541
  • 1
  • 20
  • 28
1

Personally, I do use adversary tools and techniques to test my target systems, networks, and apps. For the web layer, as an example, tools like Acunetix WVS have historically been warezed by online criminals.

Metasploit is a bad example for this scenario proposed by the questioner (and as alluded to in some of the comments). A better example would be to use adversary simulation by way of Cobalt Strike's Malleable C2 to repurpose active, real adversary IoCs such as network traffic and on-disk (or even in-memory) malware file hashes.

Social engineering techniques, or really any TTP, can be matched closely to realistic events. That is more the purpose of a red-team analysis used during a cyber exercise than of a penetration test. Penetration tests seek to understand the perspective of a trusted insider, such as the developer him- or herself, rather than that of a merely unintentional insider compromised via spear phishing, watering-hole attacks, or the like.

atdre
  • 18,885
  • 6
  • 58
  • 107