37

I've noticed that a lot of companies do not have social engineering in scope for their bug bounty/responsible disclosure guidelines, even though it is often used in real-world attacks. I understand that for popular bug bounty programs the number of social engineering attempts would probably become problematic, but for small companies I would see it as beneficial to have someone try to get into the systems via social engineering once in a while, as this would keep everyone sharp.

Are there reasons I am missing here for excluding this from allowed methods?

Glorfindel
Z3r0byte
  • How do you intend to fix any social engineering bug that would be reported in a bug bounty? It's simply too complex for a simple system. – Mast Sep 25 '20 at 08:12
  • @Mast: The bug report just documents how it's possible/how it happened. The next step ought to be what process or procedure allowed it to happen. Fixing it is only the last step, and how you fix it will depend on the analysis in the previous step. – MSalters Sep 25 '20 at 09:47
  • The least I want to hear from this is: "Looks like you have a bug in your brain, eh?" – Andrew T. Sep 27 '20 at 11:17
  • When you get 'em by social engineering, is there really a bug for which someone should pay out? It's a bounty on bugs. S.E. may be included in penetration testing. – Mobutu Sese Seko Kuku Ngbendu Sep 28 '20 at 07:01

4 Answers

69

Because Human Factor vulnerabilities are complex, undefined, non-linear, and often not repeatable in a predictable way. Being able to successfully soceng one person is not enough for an organisation to use as a basis for action. In short, if you could do a soceng test, the results would not be useful.

SQLi on a web form, on the other hand, is simple, defined, linear, and the results are repeatable.
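
To make the contrast concrete, here is a minimal sketch in Python (standard-library sqlite3, with a made-up `users` table invented for the example) of why an SQLi report is so tractable: the injected input alters the query the same way on every run, and the fix is one well-understood code change.

```python
import sqlite3

# Toy database standing in for the vulnerable application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "' OR '1'='1"

# Vulnerable: attacker input is concatenated into the SQL string, so the
# injected quote deterministically rewrites the query -- same input, same
# result, every run. That is what makes the bug easy to report and verify.
vulnerable_query = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable_query).fetchall())  # leaks every row

# Fixed: a parameterized query treats the input purely as data, never as SQL.
safe_query = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe_query, (attacker_input,)).fetchall())  # []
```

There is no equivalent one-line, verifiable fix to attach to a "this person fell for my pretext" report.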

Bug bounties are for technical issues, not "all possible issues that could possibly go wrong".

SocEng also creates massive liabilities because it targets individuals, and that brings in a host of other issues. How far do you go? How much personal information do you gather and weaponise? How do you get permission from the person and still keep the test legitimate? The permission question alone places this activity beyond a simple bug bounty program.

Testing Human Factor vulnerabilities has to be done carefully, by professionals who know what they are doing, and with a very tightly defined scope. It's very complex and there are a lot of side factors to take into account, which is why soceng tests tend to stick to phishing simulations. It's not something for some random bug bounty tester to go mucking around with.

schroeder
  • There are also perverse incentives: a bounty hunter could be in cahoots with the employee who fails the social engineering test, with the pair splitting the bounty. – minnmass Sep 25 '20 at 15:41
  • @minnmass that's possible with technical bug bounties, as well – schroeder Sep 25 '20 at 15:43
  • true, but there's typically much stronger evidence that a bug was planted intentionally (e.g., commit logs) than that someone intentionally "accidentally" clicked a phishing link. It's also frequently harder, because of pull request review and other such processes. – minnmass Sep 25 '20 at 15:51
  • @minnmass it doesn't have to be an intentional bug. The more common scenario is a *discovered* bug and the dev doesn't fix it but sells the bug to a bug bounty hunter ... It's still collusion. So your comment is not at all bound to soceng findings. – schroeder Sep 25 '20 at 19:48
  • I have to agree with @schroeder regarding the unpredictability of human nature and how the same steps can give different results from the same person at different times, or from different people. But this raises a question: how would bug bounties work when AI enters the world? How would one report a social engineering bug where an AI bot became a victim of social engineering and spilled all its proprietary code / secrets? Since AI in the future could incorporate the human factor, how would someone be able to report a social engineering bug that COULD or MAY NOT cause the AI to go rogue? – Amol Soneji Sep 27 '20 at 01:50
37

We want to find unknown, fixable vulnerabilities

The goal of penetration testing is to obtain actionable results - to have someone find vulnerabilities that the organization did not know it had, but could fix if only it knew they existed.

Social engineering does not fit this purpose: it reveals vulnerabilities that are neither new nor fixable. We already know that employees are vulnerable to spearphishing and other classes of social engineering attacks, so a successful social engineering attack just confirms what we know; it does not provide new knowledge. And we generally consider that the likelihood of people falling for such attacks can be reduced, but not eliminated; so the fact that some social engineering vector works on your company does not necessarily mean that anything should or could be changed to prevent such attacks in the future.

You can (and possibly should) run awareness campaigns, but you should expect that a good social engineering attack will sometimes succeed anyway, even if you have done everything that reasonably can be done. A user falling for an attack does not mean that the awareness campaigns were flawed or too limited, or that the particular person has 'failed a test' and needs to be penalized (if this seems contentious to you, that is a longer discussion for a separate question).

Detect and mitigate consequences of social engineering

For a security-mature organization, the key part of the response to social engineering attacks is to assume that some employees will fall for the "social engineering" part of the attack, and to work on measures that detect such attacks and limit their consequences.

It may be an "assume breach" strategy (which is useful when trying to mitigate insider attacks), but not only that - it can and should also involve technical measures that assume the user-facing part of the attack may succeed: measures that prevent the attack from reaching the user in the first place (various controls that limit spoofed emails or websites), that prevent the attack from succeeding even with the user's 'cooperation' (for example, 2FA systems that won't send the required credentials to a spoofed login page), or that mitigate the consequences of the attack (proper access controls so that compromising a random employee does not mean compromising everything, endpoint monitoring, etc.).
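
As a toy illustration of that middle category - credentials that are useless on a spoofed page - here is a Python sketch loosely modeled on the origin binding done by FIDO2/WebAuthn-style 2FA. Everything here (the names, the HMAC construction, the example domains) is invented for the sketch; real WebAuthn uses per-site public-key credentials rather than a shared secret:

```python
import hashlib
import hmac
import secrets

# Invented for this sketch: a shared secret standing in for the
# authenticator's credential (real WebAuthn uses a per-site key pair).
DEVICE_KEY = secrets.token_bytes(32)

def sign_challenge(challenge: bytes, origin_seen_by_browser: str) -> bytes:
    # Authenticator side: the origin it actually sees is mixed into the
    # response, so a response produced on a phishing page is bound to the
    # phishing domain, not to the real site.
    msg = challenge + origin_seen_by_browser.encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    # Server side: only accept responses bound to the server's own origin.
    expected = hmac.new(DEVICE_KEY, challenge + b"https://real.example",
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser reports the genuine origin -> accepted.
assert server_verify(challenge, sign_challenge(challenge, "https://real.example"))

# Phished login: the user 'cooperates' fully, but the relayed response was
# bound to the spoofed origin -> rejected by the real server.
assert not server_verify(challenge, sign_challenge(challenge, "https://real.example.evil.test"))
```

The design point: even a user who cooperates fully with the phisher cannot produce a response that verifies against the real origin.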

You can test attacks and response without involving the users

You can run simulated "social engineering" attacks to test your response without any actual social engineering that harms the real users in your organization (because, to emphasize this, even simulated social engineering attacks cause harm to the users they target and victimize). You can test your ability to detect and respond to spearphishing by targeting an "informed accomplice" in the organization who will intentionally click whatever needs to be clicked (we already know that this part succeeds often enough) if the existing systems and controls allow the payload to reach them, and then see how your response works. There is no need to mass-target unwitting employees to gain the same benefit for the systems security audit.

And you can test your ability to mitigate the consequences of social engineering attacks by starting the penetration test from a foothold in your organization: give the penetration testers remote access to a workstation and the credentials of an unprivileged user account (assuming that this would be the result of a successful social engineering attack). You don't need to disrupt the workday of a real user to run this test.

Peteris
  • "a successful social engineering attack just confirms what we know, it does not provide new knowledge." I think it does provide new knowledge: a determination of which employee was vulnerable. – nick012000 Sep 25 '20 at 03:45
  • @nick012000: Everyone is vulnerable to social engineering. Some are just easier. – Esa Jokinen Sep 25 '20 at 04:35
  • The only fix for social engineering is to not allow humans near your systems and to forcibly refuse them access. Which creates new problems. – Mast Sep 25 '20 at 08:15
  • @nick012000 as I said, this is a deeper topic for a separate question, but we don't consider "a determination of which employee was vulnerable" as valuable knowledge but as counterproductive knowledge; when doing a security test with social engineering elements we would *not* include the particular employee who fell for a scam in the deliverables, quite the opposite, we should go out of our way to ensure that they remain anonymous. If given an individual "scapegoat", many managers and institutions will naturally take actions that won't fix anything but will actually *harm* future security. – Peteris Sep 25 '20 at 13:11
  • @Peteris "we should go out of our way to ensure that they remain anonymous" That seems counterproductive. You'd want to get rid of the "weak links", or at least refer them to educational courses to correct their mistakes. – nick012000 Sep 25 '20 at 13:13
  • @nick012000 I strongly disagree, but once more, that is a separate topic for an in-depth answer. (a) *everyone* is vulnerable to social engineering attacks, and someone falling for one is not evidence that they are a "weak link" but rather that we chose to target them, mostly for reasons outside of their control (e.g. the position they hold, or random choice). (b) as anyone may fall for social engineering attacks, the key part is what the employee does afterwards. Victims often notice that they might have been phished moments after the event. We want them to immediately report the fact! – Peteris Sep 25 '20 at 13:23
  • /cont/ However, this requires a culture of trust where employees know that they will not be punished for falling for such a scam, otherwise there will be a natural tendency to try to hide the possible compromise with the hope that it might not be a real attack. But a single mid-manager with the attitude in your comment (which is popular) can easily destroy that trust for years across the whole organization by punishing a single employee, harming the organization's future security. So we need to prevent that possibility, ensuring that their managers are unable to get this info and do what you proposed. – Peteris Sep 25 '20 at 13:23
  • @nick012000 I'm not saying that educational courses aren't useful - however, we do *not* need to target them at "weak links"; if some audit indicates that your organization needs extra awareness, then it should be organization-wide, not targeted at some hypothetical minority. The whole concept of "weak links" is flawed; as much as there are differences, you could suppose that perhaps you have 5% "paranoid" users and everyone else is a "weak link" ... but in practice, *everyone* is vulnerable, including CISOs, senior sysadmins, infosec auditors, myself and yourself. – Peteris Sep 25 '20 at 13:28
  • @Peteris I agree with your point. For me, the classical example is holding the door open for the person after you. In general, if one does not do this then others assume that one is an "asshole." The reason I hold the door open for others is because (1) my organization has not told me not to and (2) I do not want the "asshole" label - not because I am a "weak link". A general education course would void both reasons and I would stop doing this. But being labelled a "weak link" would not be productive. – emory Sep 25 '20 at 13:37
  • @emory There are many other aspects that matter, for example, the emotional, social and 'employee morale' aspects. Social engineering attempts often have quite unpleasant long-term consequences; I've seen them cause real harm to relations and trust within an organization. The people who fell for an attack will *naturally* be more careful in the future even without additional prodding. Such attacks (especially if followed up by impersonation) can cause serious personal issues and drive good employees to quit, killing careers for no good reason - even a simulated attack victimizes the targets. – Peteris Sep 25 '20 at 13:49
  • @Peteris I was just observing that a significant risk factor for being a victim of a social engineering attack is being a nice person. If organizations decide to just punish the "weak links" then you push out the nice people and retain the jerks. Is that really good for the organization? – emory Sep 25 '20 at 13:57
  • @emory "Is that really good for the organization?" If some of the stuff I've read about Amazon's corporate culture is halfway accurate, there are definitely companies that would say "Yes". – nick012000 Sep 25 '20 at 13:59
  • @nick012000 I was inclined to agree with Peteris in that "we should go out of our way to ensure that they remain anonymous" but I see your point - it definitely depends on what the organization values. – emory Sep 25 '20 at 14:10
  • Human Factors in cybersecurity are far, far more complex than a training issue. Focusing solely on the end user who was closest to the error is folly. Please take a look at [HFACS](https://www.hfacs.com/hfacs-framework.html) as a way to re-think human error. – schroeder Sep 25 '20 at 17:45
10

Social engineering most commonly takes the form of manipulating, bullying, or lying to a company's employees, and offering a bounty for it would be to invite this torment on your staff. Valid customer-service calls would be outnumbered by bounty hunters berating call centers while trying to talk their way into others' accounts; VIPs' office staff would be inundated with calls pretending to be from their bosses. Actual customer service and productivity would plummet under the sheer volume.

Basically, it's fine to invite people to beat up your hardware and software. You can't invite people to beat up your people.

CCTO
  • I feel like this hits the nail on the head. From what I've heard, about 95% of typical bug bounty submissions are low-quality "noise", mostly from people who don't understand which issues are in scope. I can't imagine what some of these people might try if they were basically told to harass the staff in any way a bad guy might... – ManfP Sep 27 '20 at 00:18
6

Claiming that social engineering is a bug within software is akin to saying that a knife-preferring serial killer is a bug within the knife.

Social engineering is a human bug, so you need to patch your humans via training, reprimands, etc.

This question really makes me think of Facebook and how they implemented a giant warning in the developer console:

[Screenshot: Facebook's red "Stop!" warning printed in the browser's developer console, telling users not to paste code there.]

Do note that the warning is only as effective as the user's comprehension, so you might have to individually patch each user. Hopefully this paints a clearer picture of why bounties are not offered on human manipulation.

MonkeyZeus