I currently work on the IT security team at my workplace in a senior role. Recently, I assisted management in designing phishing / social engineering training campaigns, in which IT security sends out "test" phishing emails to see how good company employees are at spotting them.
We have adopted a highly targeted strategy based not only on a user's job role but also on the content such employees are likely to see. The content has been varied, ranging from emails requesting sensitive actions (e.g., updating a password) to fake social media posts to targeted advertising.
We have been getting pushback from end users who say they have no way of distinguishing the legitimate emails they receive day to day from truly malicious phishing emails. They have asked our team to scale back the difficulty of these tests.
Edit to address comments saying that spear-phishing simulations are too extreme or that the simulations are badly designed:
In analyzing past simulation results, the users who clicked tended to show certain patterns. Also, one particularly successful phish that resulted in financial loss (an unnecessary online purchase) impersonated a member of senior management.
To respond to comments on depth of targeting / GDPR: customization is based on public company data (i.e., job function), not on private user data known only to that person. The "content that users are likely to see" is based on typical scenarios, not on what users at our workplace specifically see.
Questions
When is phishing education going too far?
Does the pushback from end users demonstrate that their awareness is still lacking and that they need further training, specifically because they cannot distinguish legitimate from malicious emails?