88

I currently work on the IT security team at my workplace in a senior role. Recently, I assisted management in designing our phishing / social engineering training campaigns, in which IT security sends out phishing "test" emails to see how adept company employees are at spotting such emails.

We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see. The content has been varied, ranging from emails asking users to take sensitive actions (e.g., updating a password) to fake social media posts to targeted advertising.

We have been getting pushback from end users saying that they have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails. There have been requests for our team to scale back the difficulty of these tests.

Edit to address comments saying that spear phishing simulations are too extreme or that the simulations are badly designed:

In analyzing the past results of phishing simulations, we found that the users who clicked tended to show certain patterns. Also, one particular successful phish that resulted in financial loss (an unnecessary online purchase) impersonated a member of senior management.

To respond to comments on the depth of targeting / GDPR: our methods of customization are based on public company data (i.e., job function) rather than on private user data known only to that person. The "content that users are likely to see" is based on typical scenarios, not on the content users at our workplace specifically see.

Questions

  1. When is phishing education going too far?

  2. Is pushback from the end users evidence that their awareness is still lacking and that they need further training, specifically given their inability to distinguish legitimate from malicious emails?

Anthony
  • 1,736
  • 1
  • 12
  • 22
  • 30
    I would re-word the title from "education" to "testing" or "simulations" – schroeder Apr 14 '19 at 19:05
  • 10
    This question seems to me like it lacks key details. *Why* are your users claiming that the phishing emails you send them are indistinguishable from legitimate ones? Is it because they truly are (at least with the tools at a normal user's disposal), or is it because they're screwing up? Receiving an email from a person you've not previously had contact with is not inherently suspicious, so it matters how you are measuring failure. Based on them actually handing over sensitive information? Or just based on them clicking a link in an email that they could not reasonably know was fake in advance? – Mark Amery Apr 15 '19 at 13:00
  • 1
    Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/92473/discussion-on-question-by-anthony-when-is-phishing-education-going-too-far). – Rory Alsop Apr 15 '19 at 18:22

12 Answers

102

I think there is an underlying problem that you will need to address. Why do the users care that they are failing?

Phishing simulations should, first and foremost, be an education tool, not a testing tool.

If there are negative consequences to failing, then yes, your users are going to complain if the tests are more difficult than you have prepared them for. You would complain, too.

So, your response should be:

  • educate them more (or differently) so that they can pass the tests (or rather, the comprehension tests, which is what they should be)
  • remove negative consequences to failing

This might not require any content changes to your education material, but might only require a re-framing of the phishing simulations for users, management, and your security team.

Another tactic to try is to graduate the phishing simulations so that they get harder as the users are successful in responding to phishing. I have done this with my custom programmes. It's more complex on the back end, but the payoffs are huge if you can do it.

Your focus needs to be the evolving maturity of your organisation's ability to resist phishing attacks, not getting everyone to be perfect on tests. Once you take this perspective, the culture around these tests and the complaints will change.

Do it right, and your users will ask for the phishing simulations to be made harder, not easier. If you aim for that end result, you will have a much more resilient organisation.

schroeder
  • 123,438
  • 55
  • 284
  • 319
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/92525/discussion-on-answer-by-schroeder-when-is-phishing-education-going-too-far). – Rory Alsop Apr 16 '19 at 21:02
59

We have been getting pushback from end users saying that they have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails.

This is an indication that tests which only trained security professionals could root out as fakes are being used to evaluate people who aren't trained security professionals. You may have the skills to pick an email apart and interpret the headers, but Dan in Accounting probably doesn't, and his management is not likely to agree that a master class in RFC 822 is a good use of his time.

Crafting targeted emails to increase the hit rate has to be done based on intelligence collected about your users and your purported sender. This is not information to which an ordinary phisher will be privy, and, as Michael Hampton pointed out in his comment, it rises to the level of spearphishing. That's a different ball game played on a different field.

If there are adversaries (real or potential) capable of good-enough spearphishing to damage your business, all of the phishing countermeasures and training in the world won't help. Your job is to deploy tools that give Dan in Accounting a way to distinguish the real emails from the fakes. That might mean security on the sending end, like a cryptographic signature that users' mail clients can check, posting a prominent warning when something is unsigned or the signature doesn't match. You can't depend on humans to get this stuff right 100% of the time, especially as your organization gets larger and people don't know each other so well.
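As a rough illustration of what such an automated check might look like, here is a minimal sketch using the third-party dkimpy library; it covers only DKIM signatures, and a real deployment would also evaluate SPF/DMARC and surface the verdict as a banner in the mail client:

```python
# Minimal sketch: verify a message's DKIM signature with dkimpy
# (pip install dkimpy). Only DKIM is checked; real tooling would
# combine this with SPF/DMARC alignment before warning the user.
import dkim

def looks_authentic(raw_message: bytes) -> bool:
    """Return True only if the message carries a valid DKIM signature."""
    try:
        return dkim.verify(raw_message)
    except dkim.DKIMException:
        return False

# "message.eml" is a placeholder for a raw RFC 822 message on disk.
with open("message.eml", "rb") as f:
    if not looks_authentic(f.read()):
        print("WARNING: sender could not be verified")
```

The point is that the verdict comes from software, not from Dan squinting at headers.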

Blrfl
  • 1,628
  • 1
  • 11
  • 8
  • This seems to suggest that the fix is to have an automated process that can check for these kinds of signs and warn the user. Gmail does this, putting up a red banner warning if the mail looks suspicious, e.g. fake headers. – user25221 Apr 16 '19 at 11:16
  • RFC822 has been superseded long ago by RFC5322. – Patrick Mevzek Apr 16 '19 at 14:48
  • @PatrickMevzek RFC 2822 existed between the two, but some of us old geezers are going to cling to the old numbers 'til you pry them from our cold, dead hands. – Blrfl Apr 16 '19 at 15:21
  • Which is against the IETF way of doing things. If an RFC supersedes another one, there is no reason to cling to the former version, except for historical reasons or to show off knowledge. Newer versions include bugfixes and disambiguations. But this is mostly unrelated to the question. – Patrick Mevzek Apr 16 '19 at 15:23
  • @PatrickMevzek Clingage is in name only; I certainly wouldn't implement something based on an obsoleted RFC. – Blrfl Apr 16 '19 at 15:49
36

There's one possible point to make that I haven't seen in other answers, but have seen in the real world.

Users say they "have no way of distinguishing a legitimate email that they would receive day to day from truly malicious phishing emails". What this may tell you is that legitimate emails about password renewals, service changes and such do not obey the rules that users are expected to follow.

I have certainly seen organisations whose training materials tell users not to click links in emails, and definitely not to put their passwords into the sites those links point to, or to install software from them. And the service teams at those organisations then send out mass emails about service updates that require action (such as password updates, software installs, etc), with helpful links to click.

One thing that might help would be to clarify that users should report these legitimate emails. It might not help the users directly, but it may help to remind the service team that their emails have rules to follow, which should make things clearer for users in the long run.

James_pic
  • 2,520
  • 2
  • 17
  • 22
  • 15
    This. I've worked in an organisation that sends similar phishing test emails, but then regularly sends "legitimate" emails that are indistinguishable from spam/phishing, often containing links to external sites (sometimes requiring logins) which my company has previously had no connection with. The problem may very well be that the legitimate mails are too spammy, rather than your tests going too far. – Mohirl Apr 15 '19 at 12:27
  • 6
    This is absolutely a problem. If the organization is sending out legitimate emails that the users are expected to click links in, and you are not explicitly identifying those emails as legitimate *and* teaching the users how to identify them, your legitimate emails are actively working against and undoing the training you are trying to provide. – Colin Young Apr 15 '19 at 13:41
  • 8
    I used to make a point of reporting emails from IT security to IT security as apparent phishing attempts. They never liked it. – Michael Kay Apr 15 '19 at 16:37
  • @MichaelKay: It really irks me that so many organizations send out real messages that are indistinguishable from phishing attempts. If Acme's VISA card moves from BankCorp to MegaBank, it should not inform customers by leaving phone messages asking them to visit `AcmeViSAupdate.com` [a domain the customers have never used before], but should instead tell them how to get the information using the phone number or web site printed on their card. Yet I had a real one that did precisely that (names changed to protect the guilty). – supercat Apr 15 '19 at 21:25
  • I've been known to receive purchase orders from a previously unknown sender saying simply "please find our purchase order attached". And of course the spam filter might well zap them before I have to make a decision. – Michael Kay Apr 15 '19 at 22:36
  • In other words: arguably the success of a spearphishing attack is not primarily a failure of the recipient / target / victim to recognize a particular e-mail message as such, but rather a strong indication that your other procedures and processes are currently either not followed, unknown, or insufficient to prevent it from succeeding. The requested course of action in the spearphishing attack is "normal" and does not deviate from "business as usual". – HBruijn Apr 16 '19 at 08:30
15
  1. When is phishing education going too far?

When the cost exceeds the benefit. Benefit is generally measured in lower click-through rates and increased rates of reporting of genuine phishing emails. Cost can be measured in:

  • the effort to implement the test
  • false positives: legitimate (non-phishing) emails reported as phishing
  • lower engagement rates on legitimate emails
  • ill will towards the Security group.

The last is the hardest to measure, and often ignored, but if your job is to trick your own people, you shouldn't be surprised if they start viewing you with suspicion.
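The benefit side, at least, is cheap to instrument. A minimal sketch of per-campaign tracking (the field names are hypothetical, not from any particular platform):

```python
# Minimal sketch: the two benefit metrics named above, click-through
# rate and reporting rate, computed per simulation campaign.
from dataclasses import dataclass

@dataclass
class CampaignResult:
    emails_sent: int
    links_clicked: int   # users who clicked the simulated phish
    reports_filed: int   # users who reported it to Security

def benefit_metrics(r: CampaignResult) -> dict:
    return {
        "click_through_rate": r.links_clicked / r.emails_sent,
        "reporting_rate": r.reports_filed / r.emails_sent,
    }

# e.g. 500 mails, 30 clicks, 120 reports -> 6% CTR, 24% reporting rate
print(benefit_metrics(CampaignResult(500, 30, 120)))
```

Watch the trend across campaigns rather than any single number.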

  2. Is pushback from the end users evidence that their awareness is still lacking and that they need further training, specifically given their inability to distinguish legitimate from malicious emails?

Um, maybe?

If their click-through rates remain high, then awareness is still lacking and they need further training.

If click-through rates in general have dropped, but the test emails consistently fool them, then their concerns about the testing may be legitimate.

It sounds like your content is pretty closely tailored to your users and even their job roles. This may be what is generating the negative reaction. Ideally, a phishing test should not rely upon knowledge or understanding of internal email practices, just as an attacker should not have access to those. (And note, your internal messaging should not look like your external messaging, for the same reason).

You may want to consider outsourcing your phishing tests. The organizations dedicated to offering this service have a better feel for what "in the wild" looks like, and their tools for measuring and reporting on engagement rates are usually better than what you could build on your own.

Personally, I'm not fond of phish testing, because I believe it erodes trust between users and Security. But the fact of the matter is it's one of the best ways to improve your users' defences.

schroeder
  • 123,438
  • 55
  • 284
  • 319
gowenfawr
  • 71,975
  • 17
  • 161
  • 198
  • 1
    Forgive me if I am wrong, but if a few people click, wouldn't that be a failure? – yeah_well Apr 14 '19 at 17:44
  • 8
    @VipulNair eradication is not a realistic goal for phish training. I believe I've seen 10-20% click-through described as ideal improvement. I have seen organizations celebrate pushing down below 50%. – gowenfawr Apr 14 '19 at 17:55
  • 4
    @gowenfawr most recent research shows that getting below 10% is not realistic. Even CISOs click phishing emails (one CISO I know gets 600 emails a day and sometimes he clicks on a well-crafted phish). – schroeder Apr 14 '19 at 19:11
  • Where are you guys getting these stats on targets for click through? I'm not in our IS group but I'm on their steering team, we're routinely around 5 - 6% click through for a fairly non-technical workforce of around 500 employees, and what I would consider very realistic test emails. I'm surprised that your comments seem to imply we're way ahead of average (or my interpretation of how difficult our simulated emails are is totally wrong). – dwizum Apr 15 '19 at 13:23
  • 2
    @dwizum Lance Spitzner, who's a SME in this area, [claims <5% is "good"](https://www.sans.org/security-awareness-training/blog/why-phishing-click-rate-0-bad). However, my comments about 10-20% and starting >50% stem from personal experience with a handful of organizations. My gut says that Lance has a self-selecting population ("people who care enough about this to hire him") and that 10-20% is a realistic churn point for good organizations. You may very well be doing better than average :) – gowenfawr Apr 15 '19 at 13:39
  • @dwizum these are industry stats from a broad spectrum. At 5%, you are *very* good and you would likely benefit others by being a case study for how you are getting those numbers. You might also benefit from someone digging into your campaigns to see if those numbers are trustworthy. I've personally seen lower numbers, but there were a set of specific factors that facilitated that level of success. – schroeder Apr 15 '19 at 17:57
  • I think the future holds an e-mail client that simply doesn't have clickable links... – user3067860 Apr 15 '19 at 18:39
  • @dwizum - Let's talk in community chat at the DMZ. What your company is doing to achieve such low click rates sounds promising... I'd love to learn what practices you are using – Anthony Apr 16 '19 at 03:50
  • SE chat sites don't work on the network I'm usually on. I don't know if we're doing anything novel, really. We just keep the idea of phishing "in your face." It starts with a few hours of training during new employee orientation. Then LOTS of educational material, on a regular basis, via email, in person training sessions, online training, intranet site, and so on. Regular testing with specific feedback to individuals and management of areas that get poor results (the feedback is educational in nature, not punitive). We even have signs on the mirrors in the restrooms and the break rooms. – dwizum Apr 16 '19 at 12:42
  • We do outsource the creation of the phishing emails, but we try to link the training with the test emails - they're designed to fail the criteria we educate on (things like: we train staff to hover over a link and look at the URL before clicking, even if they trust the source. So we may test that by sending an email from what looks like a legitimate source, but with a suspicious URL). And we ensure that internal support staff are trained not to do the things we teach people not to do, i.e. ask for a password via email, etc. – dwizum Apr 16 '19 at 12:52
8

There's one way in which this may have gone too far:

We have adopted a highly targeted strategy based not only on the user's job role but also on the content such employees are likely to see.

You need to ask yourself whether employees at your company will actually be subject to this level of spearphishing. If the answer is no, then you've gone too far. Of course, this all depends on what the group does. If it's the DNC, then the answer is yes.

Cliff AB
  • 241
  • 1
  • 4
5

You've seemingly committed a very common mistake among us security professionals: You have gone too much into the mindset of the attacker and you are trying too hard to defeat your fellow employees, instead of making them your allies.

Your phishing campaign should be based on your threat model and risk analysis. Are your employees likely to be a target of carefully crafted spearphishing attacks, or is the higher risk the more common untargeted, mass-phishing campaign of moderate attacker skill?

In the latter case, don't do things to your employees that are exceptionally unlikely according to your risk analysis. You simply can't explain to management why you're doing it, and it will seem that you are trying to get a high out of appearing smarter and "beating" regular employees (which, of course, you can in your field of expertise, just as they could beat you hands down in budgeting, handling customer complaints, or supply management).

If you do have targeted, high-skill spearphishing campaigns in your threat model, then you need to escalate gradually and plan a campaign in multiple steps, because your goal is to teach, not to defeat and embarrass. So you do what every teacher does: you start with the simple base exercise and then follow with the more difficult ones.

Example

For example, in a three-step process, you would start with a mail that is fairly easy to spot as a fake, but also contains elements that are more difficult to see. When a user correctly identifies it as a phishing mail, you congratulate them and then point out all the clues, including the better hidden ones. This is the learning part - they get positive reinforcement for the clues they spotted, and are taught additional clues that they missed.

In the second round, you send a phishing mail that is roughly targeted (say, to a department or function) and has fewer obvious clues and more of the difficult-to-spot ones. At least half of the clues should be ones that were taught in the previous mail. Again, when a user correctly spots the phishing attempt, you congratulate them and point out all the clues, including the new ones you introduced. This reinforces the lesson, teaches new clues, and raises awareness that some clues can be more difficult to spot than the user thought before.

In the third round, you send your personally targeted mails, with no obvious clues, but at least half of the hidden clues must be from the set the user was taught before. Again, if a user correctly identifies the mail, you congratulate them and highlight all the clues, so they can learn even more.

In all cases, if a user misidentifies the phishing mail, you also point out all the clues, and then repeat that step until they get it. Don't progress to more difficult lessons while the learner is still struggling with the current one.
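If you automate this, the bookkeeping is small. Here is a minimal sketch of the progression rule just described; the class and names are hypothetical, not any particular platform's API:

```python
# Minimal sketch of graduated phishing campaigns: a user advances one
# difficulty level after correctly reporting a simulated phish, and
# repeats the current level after falling for one. Names are
# hypothetical; a real platform would keep this state server-side.

MAX_LEVEL = 3  # 1 = easy, 2 = department-targeted, 3 = personally targeted

class ProgressTracker:
    def __init__(self):
        self.levels = {}  # user id -> current difficulty level

    def next_level(self, user: str) -> int:
        return self.levels.setdefault(user, 1)

    def record_result(self, user: str, reported: bool) -> None:
        level = self.levels.setdefault(user, 1)
        if reported and level < MAX_LEVEL:
            self.levels[user] = level + 1  # passed: teach the clues, escalate
        # failed: stay at the same level and repeat the lesson

tracker = ProgressTracker()
tracker.record_result("dan", reported=True)
assert tracker.next_level("dan") == 2  # success escalates; failure repeats
```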

This is much more work on your part, but it will provide much stronger reinforcement and higher involvement on the employees' side, and in the end you are doing it for them.

Tom
  • 10,124
  • 18
  • 51
1

The question of "going too far" requires context; what part is going too far?

The thing that phishing tests are trying to do is make people suspicious of their email, because when they aren't, they are at risk of literally inviting unauthorized users onto the network.

So there shouldn't be so many test emails that users are sifting through known bad emails just to get to the ones they need to do their job. But there should be enough of them that it is commonly known that someone in the organization is playing the attacker and trying to get them to click the wrong link, because people outside the organization are already trying to get them to do exactly that.

The question then becomes: when someone does fall for the ploy, aren't you glad that you caught them instead of a malicious actor? As other people have mentioned here (and @BoredToolBox should not have been downvoted, in my opinion), this is about education.

If you put that into the wording of the question, then surely it's not meant as "how much education is going too far?", right?

What is probably going too far in most organizations is the reaction to people who click through, especially if there is a punitive aspect to it. You should be glad when you are the one that caught the action, because it is a chance for you to help the user understand what could have happened and why you are performing this exercise. People should not be punished or shamed.

Imagine that this was an exercise on how to prevent an illness from spreading from worker to worker: a deadly virus that lies dormant until it has found an appropriate host and may then kill everyone, and that, unknown to the workers, is spread by the people randomly coming in the front door handing them packages.

We have enough common sense to know not to just accept packages from people that walk into the building, but what people don't see is that this is exactly what is happening with their emails. So this is about a change in culture and perspective, and I don't really see what part of the knowledge of this is going too far when you are talking about education.

schroeder
  • 123,438
  • 55
  • 284
  • 319
  • 5
    The purpose of phishing simulations is not to make people suspicious, but to practice the procedures and behaviours taught in a safe simulation of an attack. – schroeder Apr 14 '19 at 19:16
  • 1
    Right...but if they leave that simulation without being suspicious of emails then what was the point? They should be suspicious of anything that looks different, and the point of training is to make them so, right? – Roostercrab Apr 14 '19 at 20:00
  • 3
    No. That's my entire point. The goal is *not* suspicion. I'm afraid that explaining further would simply repeat my first comment. – schroeder Apr 14 '19 at 20:43
  • I guess the question then is what *do* you want them to think when they look through their email inbox if not suspicion...I know that I am suspicious of emails and having users share my suspicion is the prime objective. – Roostercrab Apr 15 '19 at 02:13
0

Faced something similar and currently part of a team that runs something similar. Here are my two cents:

Education is a very tricky concept, as the way people learn differs from individual to individual. But what I have seen is that if you condense the information you want to convey into 2-4 points, in as few words as possible, that always helps. We do something like this when it comes to educating people:

Whenever you get an email from someone outside the org ask these questions:

  • Do you personally know this email id?
  • Does the email id and the domain name look fishy to you?
  • Do you really want to click that link or want to give this guy your personal info?

And lastly we always mention that:

  • if you are not sure please forward this email to {email id that verifies this}@{yourorg}.com

    2. Definitely. All they need to do (I guess) is to ignore that email, or maybe forward it to your internal security team for review.

I guess what needs to be done here is more education, because the employees need to know how a successful phish can hurt not only the company but the employee as well.

schroeder
  • 123,438
  • 55
  • 284
  • 319
0

I don't know whether this applies to your case or not, but one potential problem may be that your expectations about user awareness are higher than the security norms actually put into practice. For example:

  • You may educate users to always check the https certificates, but at the same time some internal web sites may use self-signed or expired certificates, or even require submitting usernames and passwords through plain unencrypted http.
  • Or you may educate users that all official internal tools reside on your company domain, but in reality you use popular third-party services like Gmail or Slack connected with OAuth.

While the first example is an actual issue with the infrastructure, the second one is a safe practice paired with out-of-date recommendations. I have seen both happen in the wild, and in these cases the principles that you are trying to teach cannot be applied in day-to-day practice, which may ultimately lead to confusion and failure to comply.
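If you suspect the first kind of mismatch, you can audit your own sites against the exact rule you teach. A rough sketch using only Python's standard library (the hostnames are placeholders):

```python
# Rough sketch: flag internal sites whose certificates would fail the
# very checks users are taught to perform. A verification failure here
# usually means a self-signed, expired, or mismatched certificate.
import socket
import ssl

def cert_ok(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()  # verifies chain and hostname
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Placeholder hostnames; substitute your internal inventory.
for host in ["intranet.example.com", "hr.example.com"]:
    if not cert_ok(host):
        print(f"{host}: users cannot 'always check the certificate' here")
```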

Zoltan
  • 274
  • 2
  • 8
0

I'm not sure of the size of your organization, but the most practical advice I can offer is that you can go too far when you overthink it. Make some spoofed emails, send them to users, and see what the users do.

We use a tool (KnowBe4): we run a few trials against the users and use the results to educate them and raise their awareness. We capture who passed and who failed, and use the overall process both to educate and to demonstrate that we educate.

Don't overthink the audience with custom targeting; don't do complicated data analysis... If you are, you are probably wasting time you could spend on the next challenge.

If you see spear phishing aimed at your execs or certain other folks, engage them personally and often, and maybe do something operational to make sure that if they are fooled, you catch it. As an operational change, for example: if someone is trying to get your CFO to release wire payments, then the CFO had better have an additional maker/checker process, or get secondary non-email (voice?) confirmation that a wire should go out.
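A toy sketch of that maker/checker rule; the Wire type and names are hypothetical illustrations, not any real payment system:

```python
# Toy sketch of a maker/checker control: a wire initiated by one person
# cannot be released without approval from a second, different person.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Wire:
    amount: float
    maker: str                     # who initiated the payment
    checker: Optional[str] = None  # who approved it, if anyone

def may_release(w: Wire) -> bool:
    # Require an approver, and never the same person who initiated it.
    return w.checker is not None and w.checker != w.maker

assert not may_release(Wire(50_000, maker="cfo"))                    # unapproved
assert not may_release(Wire(50_000, maker="cfo", checker="cfo"))     # self-approval
assert may_release(Wire(50_000, maker="cfo", checker="controller"))  # two people
```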

subs
  • 101
0

It sounds to me like there may be two issues here:

  1. Users are frustrated that they are regularly being lambasted because they fail a test they consider impossible.

  2. Users are annoyed that IT is wasting their time with endless tests of dubious value.

RE #1, there are three possibilities:

A: You ARE making impossible demands on your users. At least, impossible in the sense that you are demanding they demonstrate a level of sophistication far beyond what can reasonably be expected of people who are not experts on security. To spin an analogy, it might be reasonable to demand that all employees be prepared to perform basic first aid: put on a bandage, give someone an aspirin, etc. But surely you would not expect all employees to be able to perform emergency heart surgery. If you start giving them practice drills on emergency heart surgery and blast the employees who are unable to adequately describe how they would implant a stent or who can't correctly list all 182 steps in a heart transplant, clearly that would be unreasonable. Making unrealistic demands and then berating employees for failing to meet them accomplishes nothing except building resentment and killing morale.

B: Your expectations are completely reasonable, and the employees are insufficiently trained. If that's the case, the obvious answer is to provide training. If you have never provided any training, and you are now berating employees for not knowing something that they have never been taught, again, you are being unreasonable. Bear in mind that what is "obvious" to a computer security professional is not necessarily obvious to someone with no such background. I'm sure there are many things about accounting that are obvious to professional accountants but not to me, or things about auto maintenance that are obvious to professional mechanics, etc.

C: Your expectations are completely reasonable, and the employees are too lazy or irresponsible to make the effort. If that's the case, it's a management issue. Someone has to give the employees the proper incentive to work harder, which could range from an encouraging pep talk to firing those who don't measure up.

RE #2: When I was in the Air Force, of course security was a major concern. We had people who wanted to destroy our aircraft and kill us. But even in that extreme situation, the security people were well aware that more strict security is not always best. The standard was that security should be as effective as possible to deal with realistic risks while interfering as little as possible with people doing their jobs.

In this case, of course it's a bad thing if some hostile hacker gets hold of passwords and steals or vandalizes your data. That could cost you big money, maybe even drive you out of business. But unless the threat is huge, you can't expect the employees to spend 90% of their time warding off threats and only 10% doing work that brings in income for the company. That's a recipe for going broke, too. You have to strike a reasonable balance between protecting against threats and letting people get their jobs done.

Jay
  • 859
  • 5
  • 5
-2

I suspect that your simulation is using knowledge about your intended targets that no genuine phisher would ever know. That is why they complain about your fakes being too hard to distinguish from the real thing. In a word, you are cheating.

BoarGules
  • 97
  • 1
  • 3
    Not necessarily; there can be malicious actors within an organisation. – meowcat Apr 16 '19 at 01:39
  • 1
    Please review Shannon's Maxim: _The enemy knows the system._ – forest Apr 16 '19 at 02:31
  • What things might a "genuine phisher" *not* know? – schroeder Apr 16 '19 at 07:39
  • Adding to @meowcat: you'd also be surprised how much information you can find online about someone (it varies per person). – Alex Probert Apr 16 '19 at 09:23
  • If a malicious actor inside the system can send me an email that MS Exchange assures me comes from my employer, but has spoofed the sender, so it appears to come from my manager, but doesn't, then no amount of training is going to let me reliably distinguish good from bad. I can devote effort to examining emails from outside the organization to see if they are trustworthy. If I have to expend the same effort on every single internal email then the battle is already lost. – BoarGules Apr 16 '19 at 10:23
  • Being able to inspect the email to determine if it is from a legitimate sender is only a small part of the whole of phishing training. And, as I hinted at, "genuine phishers" can know quite a lot that most people would think could only be known by an insider. – schroeder Apr 16 '19 at 16:00