79

We were recently contracted to run phishing tests for a company. I'll stay vague about who they are, but they are obligated, by law, to assess the security of their environment with phishing campaigns.

We ran our first campaign not too long ago and the results were pretty bad: over 70% of their users trusted the "malicious" emails we sent and did whatever the emails asked of them.

After it was over, we of course gave an out-brief detailing our findings. Long story short, they did not want *any* identifiers (email, username, whatever) of who fell for the phish. They wanted "X out of 300 failed to identify the email." Their reason was that they did not want to offend anyone. (I wanted to say that their customers' feelings will be hurt when their employees fall for a real attack and leak info.) I politely accused them of checking a box rather than actually being interested in educating their users. They weren't very happy. I should add that they didn't even want two reports, one showing the names and another not showing them. I offered that because it would at least let them see how individual people react to different campaigns over time. It would absolutely help to know if user "Sam" clicks on every single link in every email over the course of ten campaigns. Surely you would want to educate Sam differently than other users?

My question is, does this not defeat the purpose of phishing campaigns and improving the security of information in your network? Is this even normal?

pm1391
  • 125
    "I politely accused them of checking a box and not actually being interested in educating their users." It isn't really possible to do that politely. – jpmc26 Jan 30 '18 at 09:26
  • 1
Is this a single-run phishing campaign, or ongoing? At a previous company, we had ongoing runs - monthly, quarterly... There were general reports (8 of 300) - and specific reports for management (those who ran the tests could tell who the repeat offenders were). There's a difference between "Jill clicked the email once" and "Jill clicked the email 4 times in a row despite repeated classes and warnings"... More than once led to increased visibility - and consequences. – WernerCD Jan 30 '18 at 13:02
  • @pm1391 Ongoing... Are there different reports? "8 of 300"... vs detailed breakdowns by, say, repeat offenders? First runs are different than 5th runs - after being warned repeatedly. Is management unwilling to tackle repeat offenders (if you've gotten that far)? THAT would be a bigger issue... – WernerCD Jan 30 '18 at 13:06
  • 33
    Would they be open to breakdown by department/location/seniority? In my experience, this is extremely useful in helping to narrow in on areas that need help. And this way, you don't have to deal with individuals. Also from experience, I've seen this request before. And it's when the decision-makers don't want to be the ones identified ... – schroeder Jan 30 '18 at 13:18
  • 10
Hate to continually ask questions: If they deny detailed reporting and the results don't improve... who is culpable in the case of a breach? If you don't give good info - are you at fault? If they refuse the information, are they at fault? In other words - in a year when a breach happens... who is going to get sued? – WernerCD Jan 30 '18 at 13:23
  • Comments are not for extended discussion or for answering the question; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/72590/discussion-on-question-by-pm1391-company-does-not-want-any-names-on-phishing-rep). – Rory Alsop Feb 02 '18 at 14:00
  • As a non-expert I find this quite smart, because it avoids pretending that the tested individuals are the problem, and not the system / culture. – xLeitix Feb 05 '18 at 10:21

8 Answers

153

This initial campaign established a baseline first. So, yes, it's normal. "How do we as a company stand? To what level do we need to train? Do we have, as a whole, secure users or do we have, as a whole, unsecure users?" This report establishes this and the extent to which management needs to engage in phishing training. Were only 5% of users to fall for the phish attempt, then the training focus would be very different. As it stands, now management knows that they have, essentially, a corporate-wide problem and that a phish campaign basically stands a 70% chance of succeeding.

Now, when the company does future phishing training, they can compare results and determine whether the training was successful. "We initially fell for it 70% of the time. This time, we fell for it 68% of the time. It was, therefore, not successful." Or "We initially fell for it 70% of the time and now fall for it 50% of the time. We're doing better, but need further training."

baldPrussian
Good point. I would tend to agree with you, but they never said anything about establishing a baseline when I expressed my concerns. Furthermore, they didn't even want me to store the results, so I was puzzled. – pm1391 Jan 30 '18 at 01:09
  • 1
    You're the one doing the phishing test. You should know that improvement takes training and time – BlueWizard Jan 30 '18 at 07:16
  • 122
One might argue that a company where 70% of the users fall for a phishing campaign stands basically a 100% chance of being compromised, as usually it's enough to have a single person fall for it. – Christoph Jan 30 '18 at 10:17
Indeed, 1 person falling for the phishing is one person too many, and training needs to be improved. As for whether to assign names: if it is a widespread problem, then probably not needed - get everyone trained anyway. If it is a small subset, then identification is probably good, especially if it is a core identifiable group like people whose job title begins with C (US) or D (UK), a particular department, etc. – ewanm89 Jan 30 '18 at 18:31
  • 12
    +1 and one nitpick: If 70% of users are expected to respond positively, i. e. each user has a likelihood of p=0.7 to respond positively, to a phishing campaign, then the campaign's chance of success is the likelihood P of at least one positive response and `P = 1-(1-p)^n` where n is the number of users addressed during the campaign. This is pretty much in line with the comment by @Christoph. – David Foerster Jan 30 '18 at 22:55
  • 30
    … for a medium-sized company with n=50 and p=0.7 the likelihood of campaign *failure* 1−P is roughly 10^−26. – David Foerster Jan 30 '18 at 23:02
  • 14
    @pm1391 I don't know how your campaign was run, but in any of the dozen I've been through, the users who click the bad link or whatever are immediately called out, because the link leads to a page saying "You failed the test" or similar. There's no need (at least initially) to report exactly *who* failed, because the campaign itself was a sort-of training. – thanby Jan 31 '18 at 12:53
  • 1
    On the flip side, people who reported the suspicious activity to the security team got a personal pat on the back, positively reinforcing their behavior. It's possible a report was delivered higher up the food chain with actual user names and responses, but I don't see (especially in a large company) how that would be a useful expenditure of effort to compose/consume. – thanby Jan 31 '18 at 12:55
@thanby Right, we did something similar (notifying the users that they had failed). But management did not want any user information, and they don't intend to track anything over time. You can get so much information from a phishing test; to shrink it down to "x/300 clicked" is foolish, in my opinion. – pm1391 Jan 31 '18 at 14:03
  • 1
    Fair enough, there's not much arguing with that. But in the end all you can do is provide your best advice, and if they don't want it you shouldn't force the issue. Just document that they didn't follow your recommendations to CYA – thanby Jan 31 '18 at 14:58
  • 7
    @pm1391 Really they're paying you to do something and as long as it's not illegal, immoral, or unethical, what they do with your work is really their issue. It's hard for us to accept "we got paid and they can do with our work whatever they want", but that's kind of the nature of a consulting gig. Yes, they could do more. But in the end we provide the service required and walk away knowing that we did the best we could. – baldPrussian Jan 31 '18 at 15:06
  • @DavidFoerster: saying "blah-blah 10^-26 chance of phishing campaign failure blah-blah" will make absolutely no impression in any boardroom in America. The proper way to put this is, "Unless you do something immediate and drastic, everyone in this room will lose their bonuses, stock grants, options, and severance packages after the feds close your doors". THAT will focus attention like nothing else - and even *that* will result in nothing more than a few managers being fired and the C-suite being reshuffled. Just rearrange those deck chairs and we're good, Cap'n..! – Bob Jarvis - Слава Україні Feb 04 '18 at 03:40
  • 1
    @BobJarvis: Sure. My argument wasn't meant for a board of directors though. It was a directed at the author of the question and a criticism of their maths. – David Foerster Feb 04 '18 at 10:25
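A minimal sketch of the arithmetic in these comments (the symbols p and n are as David Foerster defines them; the function name is made up for illustration):

```python
def campaign_success_probability(p: float, n: int) -> float:
    """Chance that at least one of n users falls for the phish,
    assuming each user falls independently with probability p."""
    return 1 - (1 - p) ** n

# The medium-sized company from the comment above: n=50, p=0.7.
# The failure chance (1 - p)**n is on the order of 10**-26,
# so the campaign's success probability is effectively 1.
print(campaign_success_probability(0.7, 50))
```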
128

No. By giving names you are assigning blame, and security needs to move away from blaming individuals and instead treat the problem as a whole. It's the same as finding a security vulnerability in a web site: you shouldn't blame the developer; you should instead look to improve the entire process.

We run phishing campaigns and do not identify users. We use them to identify weaknesses on our part and where we need to train our staff better. There is no point focusing this training on just a single person.

After a campaign we email all staff, provide the statistics of failures / success, and then provide tips for spotting phishing and how to treat email in general.

McMatty
Isn't "taking security as a whole" made up of your individuals? Your environment is only as secure as your individuals in this sense. I get that it's not about blaming users, but there is a possibility that some users are more susceptible to attacks and should therefore be treated and educated differently. I don't think the web developer analogy holds, because I would blame a web developer for a poorly set up website; it's his job. – pm1391 Jan 30 '18 at 03:44
  • 2
@pm1391 One potential problem I can think of if the users are identified: it'll be difficult to forget those users, and they could be judged with bias in the future whenever there's an incident involving misplaced trust (e.g. a data leak). – Andrew T. Jan 30 '18 at 04:30
  • 41
@pm1391 If you take into account the fact that users tend to leave and join, you understand that there is no reason to focus on a specific user, only on training processes and security procedures. – talex Jan 30 '18 at 06:05
  • 37
At 70% they clearly have a systemic problem. They can have a massive improvement with company-wide training and no need to blame anyone. At those levels, if anyone is to blame it's those at the top anyway (assuming the company isn't made up of security pros who should know about these things); the other 30% might have picked up a little bit of caution on their own time, or maybe they were simply too busy and ignored the phishing attempts. – Chris H Jan 30 '18 at 09:21
  • 8
Also, a non-nominative report does not bind them to personal-data regulations, while still extracting meaningful results. – M'vy Jan 30 '18 at 11:05
  • 2
@pm1391 You should always try to give everyone the same treatment. If you start to single people out, they might feel discriminated against in the future. – J_rite Jan 30 '18 at 12:36
  • 1
    I agree, you all have changed my mind. I just did not like the answer they gave me. But, that's their business I guess, not mine – pm1391 Jan 30 '18 at 12:57
  • 10
    This is the correct answer. I don't know where the idea that _shaming people is good security_ came from. If anything, it makes people more defensive and less likely to follow advice. I really wish security "professionals" would realise there is a huge world outside of their tiny little domain. – Shantnu Jan 30 '18 at 13:50
  • 1
    @Jorrit that's true in this case where ~70% fell for it. Probably the entire company needs training. But if 2% of the entire company were susceptible to this, it would make no sense to train the entire company based off a few mistakes. An individual level talk could be much more effective here. – Cullub Jan 30 '18 at 16:43
  • 2
    @ChrisH there are pressures to respond to phishing. if your supervisor routinely emails you from his/her personal account and expects a response then it would be rational to respond to a phishing attack. it may be the case that the 70% are just doing the "right thing" given the way the organization actually operates ... if that were true, it would not be their fault. – emory Jan 30 '18 at 21:18
  • 2
    @emory, that's what I was getting at with *At those levels if anyone is to blame it's those at the top anyway* – Chris H Jan 30 '18 at 21:35
  • If I was running that company's security and had the names I'd want to talk with a small sample of victims to try and identify the specific problem, before educating the whole company. Otherwise my "education" might miss the mark entirely! You find out what the problem is, then you fix it. You don't just throw fixes at the wall, with no knowledge of the problem, until they stick. – user253751 Jan 30 '18 at 21:43
49

I think the correct angle to look at this, is to ask the following question:

With the amount of people that failed the test, what (security) goals would be accomplished if the company had these names?

I would say: none.


What is the security goal of a company-wide phishing test anyway?

Typically in every company that relies on IT and has a certain amount of employees, these employees are subject to information security trainings. These trainings mostly cover basic topics like e-mail communication, desktop security and so on. When running a phishing test, management wants to know:

  1. if these trainings were successful (as in: worth their money)
  2. if any data or IT system that belongs to the company can be compromised due to a lack of good training

If you as their contractor tell them "70% of your employees failed the test", that answers the two questions above. If the management asks for names in a company with 300+ employees, they do not gain any more relevant information and are not doing their job correctly.

The next step is now, to define a new security goal. It should read something like this:

"In the next X months every employee has to participate in a security training. By $month of $year we want $contractor to conduct another phishing test and the percentage of people that fail this test should be below X%."

Would these trainings be more cost-efficient if only the employees who failed the phishing test had to participate? Probably.
But then you would be presenting the attendees to the other 30% of the company (the ones who don't have to go) as "too stupid to identify a phishing attempt." What this does to morale outweighs all the cost of just sending all your employees to a training. Also, another reminder about information security doesn't really hurt the 30% either.
There's another reason why this is a good idea: typically, when you run a phishing test, you don't know why people did not fall for it. Maybe some of them didn't read the e-mail because they were on vacation or sick, or they skipped it because their inbox was full of more important mail. Nobody can tell you whether they'll pass the test next time. Employees are always your number one risk factor; train them if you can.

Another point that the other answers have missed so far: depending on how you communicate the results, most people will know themselves that they failed the test.
You have to inform your employees one way or another, and I assume most companies do it like this: send a company-wide e-mail with a screenshot of the phishing mail.

"Dear employees, sorry to tell you, but this was a phishing test. There is no free yacht waiting for you. The numbers of people who didn't pass the test were bad, that's why we'll have some security trainings in the near future. A contractor did this for us and we did not collect any personal data, so we do not know who clicked on a link and who didn't. There will be no repercussions. Phishing mails can have really really bad consequences such as... yadda, yadda, yadda..".

People will check their inbox and if it's not too long ago, remember what they did. This will boost acceptance towards a security training and an adjustment in behavior. Invoking fear and pressuring people does no good.

Tom K.
  • 5
    "Send a company-wide e-mail with a screenshot of the phishing mail." Good point – pm1391 Jan 30 '18 at 14:35
  • 7
+1 especially for noting that you do not know WHY they failed or passed, and how much morale is impacted. The same goes, in a way, even for the failures! Yes, even one person failing the test is really bad for the company, but even that person may have been influenced by circumstances. And if the company has circumstances that may make ONE person fail, they may make ANY person fail. And again, putting the blame on that one person STILL leaves a potential structural issue in the open, and the one employee feels terrible. – Layna Jan 31 '18 at 08:57
  • 1
    If there are multiple phishing tests, I would think it might be useful to know what fraction of users were falling for any of them, and I'm not sure how that could be ascertained without somehow recording which users had fallen for which tests. – supercat Jan 31 '18 at 17:06
  • "Post owner or mod" who overrode edit approval, please explain why the phrase "participate in a security training", and "some security trainings" is better English than "participate in security training" and "some security training". There is no plural of "training", and there is no such thing as "a training". It is not a noun. – Phil Feb 02 '18 at 09:58
  • [Actually it is a countable noun](https://english.stackexchange.com/questions/354625/what-is-the-correct-plural-of-training). More importantly, the question (and the answer) were referring to several training*s*. Your edit changed the meaning of the answer, that's why I overrode. – Tom K. Feb 02 '18 at 10:29
  • 2
    The one thing I _would_ like to see as management is a break down by department or functional area... not individual names, but which individual offices are weaker. This can be valuable because it puts pressure on the managers in the weaker areas to follow up on their own. – Joel Coehoorn Feb 02 '18 at 22:33
  • @TomK, from the SE answer you linked: Training is both countable and uncountable. *Usually, referring to a process, it is uncountable, and has no plural.* Your usage was wrong. I am a native English speaker, and like the native English speaker in that question, am equally "amused" at your misuse of the word. For the countable noun, you want "Training course" (which would then become "Training courses"). – Phil Feb 03 '18 at 22:00
  • @Phil Did you read the accepted answer? If not, please do. If you then want to continue to argue about this, please do so in chat and not in the comments to this answer. Thanks. – Tom K. Feb 03 '18 at 22:13
22

There seems to be some miscommunication about the purpose of these tests.

Using identifiers means finding responsible people, to educate and/or punish them. It might be an explicit parameter of the test that no people can be identified. In many European countries, the local workers' council or union representatives would have to agree to such a test and might put this forward as a condition.

Using statistics means identifying performance indicators. You can measure these against each other to identify, for example, if you are improving or if some awareness campaign you ran was effective. You don't need identified people for this and it might even blur the results.

Finally, the customer pays the bill. You work for them, so while you should point out any professional concerns or thoughts you have, unless it goes against your personal or professional ethics (e.g. the ISACA ethics standards if you are a member), you deliver what the customer asks for.

Tom
  • 2
Understood. I just found it troubling that they didn't want to keep this reference for future phishing tests. And it does go against my "personal" ethics in a way. I have pride in keeping people accountable. But from the other answers it seems it's more important to address this systemically. – pm1391 Jan 30 '18 at 13:02
  • 5
    "I have pride in keeping people accountable." - But how would you do that in this case? Do you expect the company to chastise their customers for falling for the phishing? Like should they send out a second email that says "If you fell for the last email you should feel bad."? – industry7 Jan 30 '18 at 18:38
  • 1
    @pm1391, If you're interested in challenging your personal beliefs, read any book by W. Edward Deming. https://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=w+edward+deming He was big on not blaming the shortcomings of a company on the actual workforce but on its actual system designed by its management. – Stephan Branczyk Jan 31 '18 at 05:18
@industry7 Accountable in the sense that after 10 campaigns, if user x does not improve, we need to sit down with user x - because user x is not improving and does not care. – pm1391 Jan 31 '18 at 05:31
  • 3
    @pm1391 at the level of top level management, individual performance is irrelevant. They have bigger fish to fry. They might be interested in the percentage of employees who don't get it after 10 campaigns, but not in individuals. – Tom Jan 31 '18 at 10:09
  • @Tom Without leaking who this contract is, the people within the organization have the potential to leak **a lot** more than their amazon password – pm1391 Jan 31 '18 at 20:23
  • 2
    That might well be true and you should include it in your impact analysis. But still the level of management you deal with might not want to deal with individuals. – Tom Jan 31 '18 at 20:27
6

To answer your question:

does this not defeat the purpose of phishing campaigns and improving the security of information in your network? Is this even normal?

No. If the customer wants a watered-down version of the research results, it's ultimately their call. But you have every right to offer your best advice and give them the power of choice.

Is it normal to face these types of customer reactions? Yes, it happens all the time, and it can be the consequence of a lot of things. The customer might have their own insecurities (e.g. what if members of management were caught - how would they handle that?), might not want to slight their employees, or might not know how to handle the follow-up training once the report is public.

If they're the ones paying for it, as their adviser you ultimately have to honor their decision.

As an extra side note, about something I noticed:

I politely accused them of checking a box and not actually being interested in educating their users.

There's no polite way to say what you just said. But there are other ways to say it without coming across all wrong. Welcome to the world of politics. I see most of the other answers covered the security aspect, so I wish to cover the subtler political aspect in my answer.

You visibly have a lot of good will and know-how, and your customer appears stubborn from your standpoint for not going to the full length of that exercise.

I've found the best way to communicate something like that is to use slightly different phrasings that promote your cause without annoying the customer, making them feel you're not letting them call the shots, or criticizing them and coming across the wrong way.

Here are a few ways you could have said it that would probably have helped promote your vision. It's all politics, and it can be studied separately.

  • Most politically correct way (maximum message dilution): We would be missing out on a great opportunity if we left out that part of the exercise. Are you sure you want to do that, Mr. Customer?
  • A bit less politically correct, with the message less diluted but still not coming across badly: If we don't go to the full extent and allow the training to happen where it is due, we will be wasting a lot of energy. Are you sure you want to do that, Mr. Customer?

Notice that in both cases I finished with Are you sure you want to do that, Mr. Customer? That's the power of choice: put the choice back in their hands at the end of any attempt to modify their behavior or thinking.

If you don't succeed and they still don't want to, let it be. You didn't have the authority anyway; all you can do is advise them as best you can. In a different scenario, you might have affected the customer's thinking and gotten your way. But that is not always the case.

Wadih M.
Thank you, you address the political side of the battle, as you mention. I am young in this field and may not have learned the language of the trade yet. I can't speak to whether this should be an answer, but I appreciate the perspective. – pm1391 Jan 30 '18 at 14:38
3

I just finished such a campaign (as a customer) and wanted absolute anonymity of the users.

There were a few reasons; the most important among them were

  • privacy, a complicated matter in some countries
  • the fact that I would be sending a global note about the campaign and the results

Your customer may have had other reasons. You gave them the opportunity to have full results, possibly with some advice. They wanted the anonymous version, their choice.

Note: sorry for the typos (or actually - wrong autocompletions by my phone) which made my initial answer rather strange looking.

WoJ
True, and I respect that it is their decision. But it pains me that there is so much data that, in my view, could be used to further protect their environment. They could still keep the data while not assigning blame to individuals. – pm1391 Jan 31 '18 at 22:18
Privacy is a big one, especially in European Union countries. – SpaceTrucker Feb 02 '18 at 14:56
  • 2
    @pm1391 Names of naives are worthless to protect their environment. As you have correctly identified, it's about creating environment of resistance to phishing - not about giving Sam anti-phishing-aide. Also, the 30% who passed could have just missed it, education needs to be uniform. As someone who deals with security you should know best that **thou shall not store any data you don't need**. That's a breach waiting to happen. They were clever by refusing to see your list, this way if the list ever surfaces, it will be only your fault. I advise you destroy it. – Agent_L Feb 02 '18 at 15:15
1

I totally grasp your desire to capture that data. So anonymize it. The general idea: take an MD5 hash of each lowercased email address, keep as many bits of resolution as you need, and either leave the result as a base64 fragment like b7R+ or convert it to "AOL passwords" like shave-pen-osram.

Forbidden

   John Smith           FAIL
   Frank Frink          FAIL
   Juliana Crain        PASS

What you might be allowed

   Rufus-Castle-Uniform-Enemy    FAIL
   Zion-Lathe-Shoot-Loyal        FAIL
   Flee-Worldly-Variable-Key     PASS

What is safer still

   Bucket Stop-Bad         4/6 failed (66%)
   Bucket Wax-Scissors     5/6 failed (83%)
   Bucket Memory-Egg       4/5 failed (80%)

There are two ways to go as far as anonymity, imagining n bits can contain the number of employees (e.g. employees=700 n=10).

  • You can go a few extra bits, like n+6 bits, in which case the anonymization would be reversible and the employee could be exposed: Rufus-Castle-Uniform-Enemy usually hashes out only to jsmith@foo.com... Gotcha! There might be a second email, but the more bits, the less likely.
  • You can go a few too few bits, like n-3 bits, in which case, the reverse run will reveal Stop-Bad hashes out to jsmith, emccarthy, jcrian, tkido, ctagawa, and ffrink, making retribution impracticable. This winds up creating a group of "buckets" as it were.
  • You can salt the MD5, but that fails if the adversary
    • learns the salt via a brute-force crypto attack
    • simply commands you to turn it over
    • notes the pattern of activities which has been logged, and deduces the user

Your disagreement with their reasons is a classic workplace.se problem, but they may not be telling you all their reasons*. Regardless, you must comply in your report to them.

The methods I've provided here with anonymization allow you to present, in your report, the detail data you want to present, while technically complying with their directive. You can either do it in the n+many form, which allows them to backtrack to individual users if they really want to -- or the bucketed form, which does not.

Bucketing is fairly useless at 70% unless you present "in bucket 127, 4/6 users fell for the phish." Bucketing works best when the hit rate is 1/3 the number of buckets or less, so that 2 hits in the same bucket are rare: "Across 512 buckets, 90 buckets had hits; most likely that's 90-95 people," which is the number you want.

* as a litigator I can think of a really big one. If it were me, I would delete the personalized data "as a matter of routine". Saving everything forever is all fun and games until the subpoena comes.
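The truncation step described above can be sketched as follows - a minimal illustration assuming MD5 as in this answer. The word list and function name here are made up; a real report would use a much larger diceware-style list:

```python
import hashlib

# Hypothetical word list; use a large diceware-style list in practice.
WORDS = ["stop", "bad", "wax", "scissors", "memory", "egg", "shave", "pen"]

def bucket_label(email: str, n_bits: int = 6) -> str:
    """Hash the lowercased email address with MD5, keep only the top
    n_bits of resolution, and render the truncated value as a word label."""
    digest = hashlib.md5(email.strip().lower().encode()).digest()
    value = int.from_bytes(digest, "big") >> (128 - n_bits)  # discard the rest
    first, second = value % len(WORDS), (value // len(WORDS)) % len(WORDS)
    return f"{WORDS[first].capitalize()}-{WORDS[second].capitalize()}"

# Deterministic, so the same person lands in the same bucket every campaign:
print(bucket_label("jsmith@foo.com"))
```

With n_bits set below what's needed to distinguish every employee, several people share each label, which is exactly what makes individual retribution impracticable while still letting you track buckets across campaigns.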

  • @schroeder I'll try to clarify that. I'm saying put `Rufus-Castle-Uniform-Enemy` in the reports where the name would otherwise be. – Harper - Reinstate Monica Jan 30 '18 at 21:51
  • Ok, it makes a *little* more sense now as a means of presenting individuals in the data using a reversible anonymisation so the client can choose to pierce that veil. But gosh, was that unclear at the start. – schroeder Jan 30 '18 at 22:23
  • @schroeder is there anything more I can do to fix it? Examples? – Harper - Reinstate Monica Jan 30 '18 at 22:30
  • 1
    Wouldn't it be easier to just use some randomly generated id instead of hashes? – Sourav Jan 31 '18 at 08:34
  • 2
    The problem with this scheme is that it still doesn't necessarily prevent deanonymisation. If you see Rufus-Castle in the dataset for Accounting department, employees aged 30-40, and contractor list, there may only be one employee that matches that profile. If you really want a technical mechanism for anonymising the data, while still gleaning useful statistical information, I'd suggest looking into [differential privacy](https://en.m.wikipedia.org/wiki/Differential_privacy) instead. – Lie Ryan Feb 01 '18 at 00:40
  • @LieRyan Totally agree, I proposed it because preventing deanonymization was not a thing OP even wanted. The bucket technique does help with that. Sourav that would be far easier, but would not allow you to connect subsequent tests to previous results. You would know you went from 70% fail to 44% fail, but would not know 56% of previous failers succeeded and (alarmingly) 61% of previous succeeders failed. – Harper - Reinstate Monica Feb 01 '18 at 00:57
The place I retired from did an annual "employee satisfaction survey." The survey never asked demographic info, but it did get the employee ID number. Anyone who had seven or fewer people reporting to them did not get a report—too easy to guess who said what. Leaders with more subordinates than that got a report of their _average_ score for each question. – WGroleau Feb 01 '18 at 16:36
1

I agree that the client’s choice seemingly prevents any opportunity for improvement. However, there is another way to improve.

Let all the employees know “We do not know who fell for it and who didn’t, so we can’t fire or reward anyone. However, we do this test every month, and we will publish the percentages. If the 70% is down to 20% by the end of the year, all employees will get a bonus. The size of the bonus will depend on how much below twenty. To help us reach that goal, there will be a weekly e-mail teaching a technique for identifying phishing. Next year, the bonus will be every quarter, but the goal will also be smaller each quarter.”

WGroleau