20

If people could be fooled in life so easily ( http://www.bbc.co.uk/realhustle/ ) then in computing....

What are the best tips to persuade a regular user to pay a little more attention to security: e.g. use HTTPS where available, keep software up to date, don't log in at a net café, don't click on links that you don't trust, use the WOT/NoScript plugins, etc.

Security here even extends to creating backups regularly.

schroeder
  • 123,438
  • 55
  • 284
  • 319
LanceBaynes
  • 6,149
  • 11
  • 60
  • 91
  • Kind of a duplicate of [this question](http://security.stackexchange.com/q/1777/33), no? – AviD May 07 '11 at 22:26
  • **WARNING:** WOT-ratings may be (partly) ok, but do **NOT install any WOT-software**, it can/must be considered as malware!!! - https://thehackernews.com/2016/11/web-of-trust-addon.html – DJCrashdummy Nov 11 '16 at 07:39

10 Answers

20

Frankly, I believe the only way to achieve that is to offer no other choice - or at least make doing the right thing much easier than the alternatives. All of the points you raise put a burden on the user to get some aspect (however small) of information security correct; but users are not, for the most part, information security experts. They are bricklayers, or physics researchers, or project managers: we should let them get on with laying bricks, or researching physics, or managing projects. Those are the things they are expert at; we should be doing the security for them.

To pick on one example, "don't click on links that you don't trust". Just how should an administrative assistant, for example, decide whether a link is trustworthy? Look at the WHOIS record for the domain (having ensured that their DNS is being resolved through a 'trusted' channel)? Examine the details of the SSL certificate? Phone the webmaster and verify the key fingerprint? Only, why do they trust the phone number, the phone network, the person on the other end of the phone... It's a ridiculous position to give people hyperlinks that can point anywhere and then tell them to make a judgement before using them.

So I might be busy ranting, but am I going to suggest an alternative? Yes: stop using backwards compatibility or familiarity with the existing workflow to justify continued use of broken systems. To address the trusted link example, we have two problems:

  1. we can't identify all websites reliably because they don't all have identification data, which is currently the SSL certificate.
  2. we don't necessarily trust the people who are telling us to trust SSL certificate holders

Point 1 can be easily, if expensively, addressed.

Point 2 is harder: from a technological perspective, you can imagine issuing multiple certs from multiple authorities, but then you have to have UI for examining and evaluating multiple certs; yuck. Or you could imagine a 'web of trust' model where people countersign keys from sites they trust, and people they trust to trust, and so on; but now we're back in the same position we were before, where we have to know who to trust (this is basically what Moxie Marlinspike came to implement in Convergence). Or you could adopt the approach of current web filtering tools, and while you don't trust a vendor to tell you who is good, you trust them enough to declare that some sites are definitely bad.
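To ground point 2 a little: the trust decision is already baked into every TLS client's defaults. A minimal Python sketch (stdlib only; the hostname and port arguments are illustrative) shows both the "who identifies sites" machinery and how to pull a site's certificate out for inspection:

```python
import socket
import ssl

def default_context():
    """The stock client context: verify the chain against the platform CA
    store and check the hostname -- i.e. 'trust whoever the OS vendor trusts'."""
    ctx = ssl.create_default_context()
    # These defaults are exactly the point-2 trust decision, made for you.
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

def peer_cert(hostname, port=443, timeout=5):
    """Connect and return the server's validated certificate as a dict."""
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with default_context().wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()
```

Note that the user never sees any of this unless something fails - which is the crux of the argument: the software, not the person, is already making the trust call.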

This rough description fits with what Microsoft's SDL team has called NEAT security UI: the interface should be Necessary, Explained, Actionable and Tested. Compare that with the current UI for trusted websites: you get the interface (the padlock icon) when the trust is OK, i.e. when it's not Necessary. Clicking on that usually lets you see the details of the SSL certificate, but did anyone (and should anyone) explain what all of the fields in an SSL certificate mean to the user? Not Explained. Also, most browsers only stop you proceeding to sites with obviously broken certs, so in many cases the UI is not Actionable either.

The point is that if the UI were NEAT, then it would become more valuable: users would see the UI at the point that they (not we) need to make a decision; they would be told the supporting information relevant to their decision; their decision would have a meaningful outcome; and we would know how they react in both benign and hostile conditions.

Pang
  • 185
  • 6
  • @graham-lee Totally agree Graham, I'm not in the camp that places a great deal of faith in security awareness. You want people to specialize in what they are good at and not spend a large amount of their time solving security problems. Security works best when it is transparent and by default not something that takes cognitive effort. [Blogspam] learned this the hard way implementing email encryption: http://www.rakkhis.com/2010/08/implementing-email-encryption-lessons.html . – Rakkhi May 10 '11 at 12:07
  • That said if security awareness and education is linked to personal motivation especially where it taps evolutionary heuristics e.g. install HTTPS everywhere and use WOT so that your KIDS are safer on the Internet that can work. In an organizational context the buck stops with renumeration. Make a security objective part of everyone's bonus really works. – Rakkhi May 10 '11 at 12:09
  • Making the right, secure choice easy in the deployment of technology is a great thing for us as implementors to focus on, but seems to miss the point of this question. Most of the world's systems are not going to become secure-by-default any time soon, if ever. In the meantime, how do I help a "regular user" deal with the challenges out there in the wild west of the Internet? – nealmcb May 15 '11 at 16:07
  • @nealmcb while I agree that's an important question, it's not the question being asked here. At least as I understand it, this question is about end users' engagement with security as a problem they need to solve: both you and I are - perhaps in different ways - interested in solving that problem for them. –  May 16 '11 at 16:18
6

I pretty much agree with Graham, security these days is too important to be left to the layman or even to the programmers themselves. It is best to have a dedicated team ensuring conformity to standards including those which could impact the user-side of operations.

On those lines:

  • Rather than telling the user to use HTTPS when available, the program should itself pick HTTPS if possible, and disabling it should be awkward enough to discourage a layman. (I remember a famous organization offering HTTPS as an 'opt-in' for users rather than enforcing it. :P)
  • Most software does offer auto-update. Having said that, some people do not like 'nosy' software which does things without their knowledge, especially connecting to the internet. In such a scenario it becomes paramount that each release is relatively secure in itself.
  • Session management is another important aspect whose usage, in today's times of data mining and targeted advertising, is often in conflict with security goals. A site may want to sustain a session, track user activity and collect data on partner websites, as many social networking sites do. But a longer, sustained session implies a greater security risk. A good approach to this issue is to ask again for just the password when the user visits the site after some specified idle time, after activity on certain external sites, or when accessing it through new tabs/windows.
  • Similarly, content filtering needs to be taken care of by, if not the producers of the content, then the host bringing the content to the user. E.g. anti-spam may be implemented at your mail provider, but it also needs to be implemented in mail software like Thunderbird or Outlook, link detection in browsers, etc.
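The first bullet can be sketched in a few lines: the program, not the user, decides to prefer HTTPS. (A pure illustration; a real client would also need a fallback for hosts that do not serve HTTPS at all.)

```python
def prefer_https(url):
    """Silently upgrade plain-HTTP URLs so the user never has to choose."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url  # already https:// (or some other scheme): leave it alone
```

Disabling such behaviour would then live behind an advanced-settings flag, not a per-visit decision.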

Most users do not get imaginative while using software; they generally follow a simple pattern. This may be used to the advantage of security or to its disadvantage!
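The session-management bullet above is equally easy to sketch; the 15-minute idle timeout below is an arbitrary assumption for illustration, not a recommendation:

```python
import time

SESSION_IDLE_TTL = 15 * 60  # seconds; hypothetical value for illustration

def needs_reauth(last_seen, now=None):
    """Ask for just the password again once the session has sat idle too long."""
    now = time.time() if now is None else now
    return now - last_seen > SESSION_IDLE_TTL
```

The point of asking for only the password (not the full login) is to keep the security check cheap enough that users don't resent it.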

check123
  • 534
  • 5
  • 14
  • I'm not convinced that a longer/sustained session is a significantly greater security risk. Actually, it might be a lesser security risk, because if users don't have to type in their passwords as often, maybe they won't be vulnerable to phishing. I think one might reasonably consider a permanent session (using a persistent cookie). – D.W. May 11 '11 at 00:47
  • I don't understand the comment on content filtering or what it has to do with the question. It seems a bit tangential. – D.W. May 11 '11 at 00:48
  • +1 to the first two bullets: software should automatically pick HTTPS and should auto-update. – D.W. May 11 '11 at 00:48
  • Longer sessions become risks in public environments. Suppose you are logged in to your mail account with a persistent session: if your session was not terminated, then anyone other than you who accesses the machine can read your mail. This becomes a little more concerning in times of mobile networking, where people frequently 'check out' each other's phones and thus may gain access to any persistent sessions. – check123 May 11 '11 at 04:49
  • Content filtering is again to highlight that an average user is unlikely to be equipped with sufficient knowledge or experience to identify a 'risky' content offered to him. You give people a nice well-dressed tempting offer, they are quite likely to click it, with little thought to the security implications. In that case it becomes important for content-hosts (like browsers in case of the internet) to be sufficiently equipped to filter out risky content before presenting to the user. – check123 May 11 '11 at 04:53
4

My experience is that average Joes do care about security – hasn't every member of this site been asked by someone outside the computer industry which antivirus software is best? I mean a question posed beyond a feigned attempt to find some common ground with you – that is, a direct question that clearly indicates they would like to hear what you think they can do to better their situation.

Maybe persuasion is not what’s needed.

Maybe easier ways for average Joes to pick up Internet street knowledge are what's needed.

Many already have security anxiety rooted in not knowing if they’re infected, owned, or doing stupid things. That means the power of incentives is already in play. They are just not aware of the plenitude of dangers you alluded to; using open Wi-Fi at Starbucks is crazy*, but that’s street knowledge.

Knowing why it's bad to walk down that street or to click that link takes work – you either have to experience it or be taught about it. I wonder if there are any shortcuts to gaining such wisdom?

I think most people would be happy to spend a little extra time doing something right (i.e. securely) if they know what that means.

Alexis Conran (co-writer/actor for The Real Hustle) did a couple of keynotes at the RSA Security conferences – if I remember right, his advice centered on making yourself more knowledgeable about bad things to avoid getting owned. I guess that's another way of saying: to be secure, don't be stupid.

(*http://www.immunitysec.com/products-silica.shtml is a Starbucks Wi-Fi auto-owning tool)

Tate Hansen
  • 13,714
  • 3
  • 40
  • 83
  • 2
    Thanks for actually addressing the issue of helping the user, rather than giving an answer to a different question.... – nealmcb May 15 '11 at 16:26
3

Graham's answer is absolutely correct when it comes to how to educate users about what the risks are, and how to either help them understand the decisions they make or take those decisions away from them where possible.

A related problem is persuading them to care... The biggest obstacle here is that it really doesn't matter to the vast majority of people, and no matter what we say or do as an industry, there will always be a group who never notice the security impact one way or the other and will ignore us.

I think we can concentrate on certain groups to try and help improve things locally, and hope that there will be an element of grass-roots flow of cultural norms from these groups.

Important considerations:

  • Currently Joe Public sees security as something that stops him from easily doing what he wants to do on the internet or on his home computer.

  • The public as a whole doesn't want to have to learn something esoteric, especially when it may be a different thing they have to learn next year.

  • The public has "learned" that the theft of large numbers of credit card details doesn't really have any effect on them. Look at the figures - the odds are good that individuals will not be impacted negatively.

So how can security be a benefit for the average home user? The simple answer is possibly "it can't", so while I hate the idea of using fear as a driver, we certainly should be able to use the publicity around major exploits to help educate.

The only exception I could think of would be:

  • Banks should offer better rates to individuals who use security software correctly. Simple and obvious tools include things like Trusteer's Rapport - which is currently used by a number of global banks, but there isn't a positive incentive for users to install it. Make it financially attractive and you will get better uptake.

Useful examples include raids such as the FBI's raid on the home of a (possibly innocent) DDoS'er – depending on how the court case and publicity go, it may become the case that if you don't secure your machine against infection, you might be liable. Possibly a long shot, but then Gene Simmons is remarkably litigious, so who knows?

Admittedly a student won't care, but companies will - if they are being raided by the FBI.

So, some thoughts:

  • the Feds should go after companies. If the company is shown to be hosting a machine on a botnet which has attacked an org, the company should have punitive measures imposed - this will educate at corporate level.

  • ISPs should police their ToS – and charge individuals who breach them – which may educate at the individual level.

Rory Alsop
  • 61,367
  • 12
  • 115
  • 320
  • 2
    My bank has offered Rapport for ages, and I've never installed it. It would mean changing my browser, and installing a keylogger to notionally address threats that they haven't convinced me aren't already mitigated by e.g. HTTPS and anti-virus. –  May 10 '11 at 13:35
  • @Graham - exactly! It does do a fair bit of risk mitigation (not all by any means) but unless it is worthwhile to the end user it won't happen. I have it installed with Firefox on one of my boxes and it works as you'd expect. Bit of a resource hog sometimes. Had I not been interested in seeing what it was doing I wouldn't have bothered... – Rory Alsop May 10 '11 at 13:48
  • 2
    I disagree that the problem is to convince end users to care. I suspect they care to about the right degree. Rather, I think the problem is that our systems do not work well for our end users. In other words, approaching this by saying "we need to change users' behavior" / "we need more user education" is demonstrably a dead-end approach; it has consistently failed over the past decades. I think a better approach is to say "we need to build software and systems that works better for end users, and doesn't screw them over". – D.W. May 11 '11 at 00:50
  • @DW - that is a very good point, and most of my effort is in that area: getting companies to invest in training coders etc and making it their issue, not the end users, but I do think there is still value in educating the end user, especially for the stuff which should be simple. – Rory Alsop May 11 '11 at 07:38
  • 2
    @all - I think that you need to scare the management layer, TYPICALLY give them the SONY example, give them the TOYOTA example... and then you ensure yourself many new MANDATORY regulations that obliges the end-users to listen and take the time to prioritize security and get educated about security – Phoenician-Eagle May 13 '11 at 15:30
3

May I also suggest one more approach (complementary of course to the other great ideas):

Make security "exciting"!

Contrary to other corporate directions, IT security has a great inherent advantage: people get introduced to its basic concepts (in a right or wrong way, but that's another discussion...) in their everyday life, and especially in exhilarating moments of their pastimes (fiction, movies etc.).

"The Real Hustle" mentioned in the question does a great job in this direction, still there much to be done...

I would like to see security professionals doing something more than "Corporate Security Awareness Programs". Boring? Just another memo among the thousands of office memos?

Here is the challenge: introduce interesting elements, talk about spectacular failures, be part educator, part evangelist, part fiction writer (read "Zero Day"?) Make it interesting to people and they will come. And then show them simple ways they can use to improve things in their every day practice. Make them feel they can make a difference, make them proud they use encryption like in the spy movies.

George
  • 2,813
  • 2
  • 23
  • 39
2

This is a hard problem since it has a potent enemy: Human nature. It's human nature to trust and everyday experience shows that trust works. You can visit thousands of web sites without a problem. Then one day the servers of your favorite games console get hacked and your digital identity is in danger. There are about a billion people online but even an attack like the one on Sony got only 70 million of them. And Sony-scale attacks are rare.

Moreover, most companies don't take security really seriously. If you analyze the security breaches of recent years, even companies specializing in computer security got hit eventually.

So we have a large gap here: Security should be important but it isn't really that important in everyday life. You can have an insecure web site online for years without getting hit or you can click on a link and get infected with a zero-day exploit that no one could have prevented.

So I think that persuading the average person about security is the wrong approach. Just like in real life, security should not be an issue. No sane person dons a flak jacket before leaving the house in the northern hemisphere (well, you get my drift). Security has to "just work" or it's not security.

It doesn't work because computers aren't smart and powerful enough. My IDE is powerful enough to generate 1.5 million lines of Java code in less than 4 seconds but it's too dumb to tell me that <input value="<%= dbField%>"> or "select * from table where name = '" + userInput + "'" is dangerous.
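A concrete version of that SQL example, using Python's sqlite3 purely for illustration: the concatenated query is injectable, while the parameterized one is not.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # DANGEROUS: user input is spliced into the SQL text itself.
    return conn.execute(
        "select * from users where name = '" + name + "'").fetchall()

def find_user_safe(conn, name):
    # The driver keeps code and data separate: "' OR '1'='1" is just a name here.
    return conn.execute(
        "select * from users where name = ?", (name,)).fetchall()
```

With a table of two users, passing the classic payload `' OR '1'='1` makes the unsafe query return every row, while the safe one returns none - exactly the distinction an IDE could, but doesn't, flag.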

Today's programming languages offer killer features like CSS styling of the UI, but so far no one has cared to create a framework which makes it simpler to write safe code than unsafe code. If you start to write Eclipse RCP applications, about 100 MB of Java byte code is added to your application, and no one in the world can say how many security holes that code might have.

If a firewall gets a packet, most of the time there is no way to tell where it really came from or whether it contains what it should. It's apparently more important to route a packet around the globe in less than 3 ns than to make sure an email really comes from its claimed sender.

IMO, unless these problems are addressed, it's pointless to educate John Doe about security. We should start to educate the specialists, first. After that, John Doe won't have to care anymore.

[EDIT] I just read that some Android apps transmit the AuthToken for Google's ClientLogin protocol unencrypted. That means anyone who captures the token can access all the Google APIs where you have an account: GMail, Calendar, Contacts, private pictures on Picasa, your blog on blogger.com.

So if Google doesn't get something so basic right, what's the point in persuading John Doe that security matters? :-(

Aaron Digulla
  • 365
  • 1
  • 8
1

We have to have a common language around risk - something that is mostly intuitive and can be explained on a napkin. I don't think we'll ever turn people into subject matter experts, but at least we'll get them thinking critically about a problem once they're made aware of it.

There's a good article on risk literacy from a few years back.

Ben
  • 605
  • 4
  • 11
  • 2
    I don't think a "common language" is going to fix the underlying problem: humans have trouble thinking clearly about risk. – D.W. Aug 12 '11 at 19:26
1

I'd suggest applying gamification to the task. As that Wikipedia article notes, the black hat community already uses games to improve their attacks, partly because it makes the task more fun, and promotes useful practice.

One approach is to educate users using games that help them become aware of the threats and vulnerabilities out there. It is common in many traditional games to make players aware of often arcane security principles of warfare. What games are out there about navigating the risky world of wifi security, phishing attempts, the dangers of using outdated and vulnerable software, the risks of having a larger attack surface than necessary, etc? Why aren't there more?

Another approach is for the general implementor community - incorporate game design principles into software user interfaces of security-related software: "Gamification" of Information Security: Applying Social Game Design Concepts to Information Security | Skype Education

nealmcb
  • 20,544
  • 6
  • 69
  • 116
0

Unfortunately, today people have to lock their doors, and they do. Every public place today is equipped with alarms, video surveillance and other security systems. Such consciousness came to people over a long time, through the cost of lost and damaged property. The same holds here, in the IT industry.

There are ordinary people who do not really care about security – they are simply not interested in it – but they do care about their data and funds. Everyone locks their doors; those are the basics. But when a security system needs to be installed in a building, we usually call the specialists. I am talking about security transparency and ease of use: the average user should not be intimidated by the process of establishing and maintaining a safe environment. Ideally, once such a safe environment is provided, the user should just follow simple rules so as not to break it.

When it comes to questions like what those rules are, what software to install and, generally, how to keep this utopian safe environment working, we often end up in dispute and controversy without reaching a definitive answer. It depends on the needs and aims of the computer user, and on the role the computer itself plays. That does not mean home users definitely care less than a bank operator, but there are cost-effectiveness trade-offs – the return on the invested time and money. Anyway, the general tip is to uphold the so-called CIA triad: confidentiality, integrity and availability.

0

Explain clearly what can happen if people use stupid passwords such as 123456 or leave their Wi-Fi APs open, and give examples and references.

E.g. years in jail and landing on a sex offenders list because their computer was used remotely to download child porn – that should scare the crap out of normal people.

If not, then there's extortion, blackmail, identity theft, bank account theft, ridicule of friends on social networking sites, jealous exes or other enemies spreading lies and rumours for personal gain, revenge, destroying relationships and putting lives in danger etc etc etc. The list goes on.
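To make the "123456" point concrete, the check a site could run at signup is trivial; the five-entry word list here is a hypothetical stand-in for the real breach-derived lists, which contain millions of entries:

```python
# Hypothetical shortlist; real checkers use large lists of leaked passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "abc123"}

def is_weak(password):
    """Reject the passwords every attacker tries first, and anything short."""
    return len(password) < 8 or password.lower() in COMMON_PASSWORDS
```

Even this crude filter would block the passwords that top every leaked-credentials chart.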

  • But how often does this *actually* happen? (i.e., someone goes to jail and gets put on a sex offenders list because they chose a password.) I'd guess it is very rare. Therefore, I'd guess that users' behavior is pretty rational. I don't think it is constructive to try to scare users with inflated movie-theater plots and scare stories; pretty soon, users will catch on, and then they'll stop listening to us even when we have something justified to say. I'm sure we've all heard the story of the guy who cried wolf too often.... – D.W. Aug 12 '11 at 19:27
  • I would agree that that particular example is rare, but millions of people have already suffered. People should not need to lose a leg in a car crash to be a better driver! Surely it is better to prevent people wrecking their lives than waiting for them 'having to catch on'? New users can't easily comprehend that they are connected to a billion other people with a significant fraction of sleepless evil criminals whose sole occupation is to profit from their ignorance. There are new and gullible users joining every day that need educating. Internet security should be taught in primary schools. – Andy Lee Robinson Aug 13 '11 at 11:39