98

Detailed in the latest NSA dump is a method allegedly used by Russian intelligence to circumvent 2FA. (In this instance Google 2FA with the second factor being a code.)

It’s a fairly obvious scheme and one that I’m sure must be used regularly. It appears to work like this:

  1. A URL is sent to the target via spear phishing; the URL points to an attacker-controlled phishing website that resembles Google's Gmail.
  2. The target sends credentials to the phony Gmail site.
  3. (Assumption) The attacker enters the credentials into the legitimate Gmail and checks whether a second factor is required.
  4. The target receives the legitimate second factor.
  5. The phony Gmail site prompts the target for the second factor; the target sends it.
  6. The attacker enters the second factor into the legitimate site and successfully authenticates.
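The weakness the relay in steps 5–6 exploits is that the server only verifies *what* code was typed, never *which site* collected it. A minimal sketch of code-based verification (toy secret and server logic for illustration, not Google's actual implementation):

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    """RFC 6238 TOTP: derive a short-lived code from a shared secret."""
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

SECRET = b"toy-shared-seed"  # provisioned to the user's authenticator

def server_accepts(code: str, now=None) -> bool:
    # The server checks only *what* was typed, not *which site* collected it.
    return hmac.compare_digest(code, totp(SECRET, now))

# Step 5: the victim types the current code into the phishing page.
phished_code = totp(SECRET, t=1_000_000)
# Step 6: the attacker forwards it unchanged, inside the 30 s window.
assert server_accepts(phished_code, now=1_000_000)
```

Nothing in that verification binds the code to the page the user typed it into, which is exactly the gap the relay exploits.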

The only ways I can see to defend against this attack are spotting the phony site as a scam, or blocking the phishing site via firewalls, threat intelligence feeds, etc.

Is there any other practical way to defend against such a scheme?

[Image: the redacted Top Secret document from the linked dump]

Dan W
TheJulyPlot
  • As a note, this will only give the attacker access *this time*, since the eavesdropped code expires after a few seconds. – Xiong Chiamiov Jun 07 '17 at 15:58
  • @XiongChiamiov Well, presumably the first step after getting access is to disable 2FA – cat Jun 07 '17 at 19:55
  • If one gives away their passwords then there is nothing the website itself can do. Geolocking maybe, but that would be trouble for people who travel – BlueWizard Jun 07 '17 at 23:06
  • @cat - not always an option. At work, we use itglue.com for documentation, licensed through another service provider, and the employees are required to use 2FA. Those people who are using 2FA do not have access to any option that will turn off that behavior. – TOOGAM Jun 08 '17 at 03:22
  • @TOOGAM Oh, sure, I'm thinking more of Google and Yahoo and non-internal systems (and I'm willing to bet there are internal systems that do allow it to be disabled) – cat Jun 08 '17 at 03:23
  • @cat - just to clarify (and I'm sorry for having left off this detail; it would have been classier if I had just done this in the earlier comment), the website I'm using does have me use the Google Authenticator program to implement 2FA. So we are using Google's software (though not Google's website). – TOOGAM Jun 08 '17 at 03:37
  • Stop reading fake news about 'Russian hacks'. – Overmind Jun 08 '17 at 05:37
  • @Overmind I'm more interested in the details of the scheme outlined than in the alleged perpetrators of any such scheme. – TheJulyPlot Jun 08 '17 at 05:40
  • I think of this from another angle, given the content of your initial post. I'd say a vulnerability in this case is the phone. If the stakes are high and you have invested in the necessary hardware, anything communicating with a phone can be intercepted relatively easily. For normal users, their awareness matters. Basic training can prevent URL phishing. – Overmind Jun 08 '17 at 05:48
  • I'm surprised this hasn't come up yet: check the damn green lock! Entering your password on a non-HTTPS site should be the first thing you never ever do! – BgrWorker Jun 08 '17 at 07:00
  • @BgrWorker There is every possibility the attacker could have a valid cert for the phony domain. Really, the answer I'm looking for is in relation to the 2FA part, rather than phishing detection or prevention techniques. – TheJulyPlot Jun 08 '17 at 07:05
  • This is where smartcards come in useful, as they prevent the user from being able to leak their second secret (the client-side private key stored on the smartcard). Only a small minority of countries and companies seem to use them, though. – Mark K Cowan Jun 08 '17 at 17:15
  • I'm interested in where the _Top Secret_ image came from! (Even though some text has been redacted.) It's generally considered courteous to attribute your images, especially (well, maybe not) if they're marked Top Secret. – FreeMan Jun 09 '17 at 12:41
  • @FreeMan Click the link. – TheJulyPlot Jun 09 '17 at 12:42
  • Ah, thanks, @TheJulyPlot. I missed that minor (yet obvious) detail... – FreeMan Jun 09 '17 at 12:47
  • FYI, just because something has been obtained by a publication or media outlet does not automatically remove its classification rating! So please be careful what you post. – Ogre Psalm33 Jun 10 '17 at 21:09
  • If 2FA does not work, it is time to use 3FA (such as password + token + fingerprint). – user4982 Jun 11 '17 at 16:38
  • I wonder what happened to page 2 of 2. – Pharap Jun 11 '17 at 23:45

7 Answers

65

Not all two-factor authentication schemes are the same. Some forms of 2FA, such as sending you a text message, are not secure against this attack. Other forms of 2FA, such as FIDO U2F, are secure against this attack -- they have been deliberately designed with this kind of attack in mind.

FIDO U2F provides two defenses against the man-in-the-middle attack:

  1. Registration - The user registers their U2F device with a particular website ("origin"), such as google.com. Then the U2F device will only respond to authentication requests from a registered origin; if the user is tricked into visiting goog1e.com (a phishing site), then the U2F won't respond to the request, since it can see that it is coming from a site that it hasn't been previously registered with.

  2. Channel ID and origin binding - U2F uses the TLS Channel ID extension to prevent man-in-the-middle attacks and to enable the U2F device to verify that it is talking to the same web site that the user is visiting in their browser. Also, the U2F device knows what origin it thinks it is talking to, and its signed authentication response includes a signature over that origin. This is checked by the server. So, if the user is on goog1e.com and that page requests a U2F authentication, the device's response is only good for communication with goog1e.com -- if the attacker tries to relay this response to google.com, Google can notice that something has gone wrong, as the wrong domain name is present in the signed data.

Both of these features involve integration between the U2F two-factor authentication device and the user's browser. This integration allows the device to know what domain name (origin) the browser is visiting, and that allows the device to detect or prevent phishing and man-in-the-middle attacks.
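The origin binding can be illustrated with a toy model. Here an HMAC stands in for the per-site ECDSA key pair that real U2F uses, and all names are illustrative, not the actual protocol messages:

```python
import hmac, hashlib, json

DEVICE_KEY = b"per-site-key-registered-with-google.com"  # held by the token

def sign_assertion(origin_seen_by_browser: str, challenge: bytes) -> dict:
    # The browser tells the token which origin is asking, and the token's
    # signature covers that origin, so the response itself names the site
    # the user was really on.
    payload = json.dumps({"origin": origin_seen_by_browser,
                          "challenge": challenge.hex()}).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return {"payload": payload, "sig": sig}

def server_verifies(assertion: dict,
                    expected_origin: str = "https://google.com") -> bool:
    expected_sig = hmac.new(DEVICE_KEY, assertion["payload"],
                            hashlib.sha256).digest()
    origin = json.loads(assertion["payload"])["origin"]
    return (hmac.compare_digest(assertion["sig"], expected_sig)
            and origin == expected_origin)

# A response minted while the user is on the phishing page is rejected
# when the attacker relays it to the real site:
assert not server_verifies(sign_assertion("https://goog1e.com", b"\x01\x02"))
assert server_verifies(sign_assertion("https://google.com", b"\x01\x02"))
```

Because the signature covers the origin the browser actually reported, a response minted on goog1e.com is useless when relayed to google.com.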


D.W.
  • Many U2F tokens don't have persistent memory and don't care whether they've been previously registered for that site, so they'll reply every time. Although of course they use a different keypair for every site. – user1686 Jun 08 '17 at 06:25
  • That looks like a major hassle if you're trying to log in on a smartphone or tablet, or even a machine that doesn't trust USB (e.g. a client's). I.e. for many situations the effect on usability is too great – Chris H Jun 08 '17 at 12:11
  • @ChrisH [YubiKey NEO](https://www.yubico.com/products/yubikey-hardware/yubikey-neo/) is dual USB and NFC, allowing you to use U2F on both computers and mobile. It's pretty seamless on Android. (Unfortunately, Apple's walled garden does not support U2F over NFC...) – josh3736 Jun 08 '17 at 16:55
  • @D.W. I believe that Yay295's point is that your answer claims that U2F is designed to defend against that attack but doesn't clearly explain *how*. You say "the U2F device will only respond to authentication requests from a registered origin", but what prevents the malicious MITM site from asking the official site to make the request? – jamesdlin Jun 12 '17 at 03:07
  • @jamesdlin I think the key point is that real.com needs to be open *in the user's browser* in order for the correct key to be generated, so Yay295's step 3 doesn't work. The attacker cannot open a copy of the site on their own phone but trigger the crypto device on the target's phone; and a key generated for fake.com by the target's phone will not be valid on real.com. This works because there is direct communication between the crypto token and the browser. – IMSoP Jun 12 '17 at 11:50
  • @jamesdlin It does explain how though, in point 2. "the U2F device knows what origin it thinks it is talking to, and its signed authentication response includes a signature over the origin it thinks it is talking to. This is checked by the server." If you authenticate to a phishing site with the domain g00gle.com, the U2F device will create a signature that's only good for g00gle.com. The real google.com will not accept that signature. – Ajedi32 Jun 12 '17 at 15:42
  • @Ajedi32 That part still does not explain what prevents the MITM from asking the real site to request a key from the U2F device. How does the U2F device know that the phishing site is involved (i.e., what the user is currently observing) at all? IMSoP's explanation that there's direct communication with the browser seems to be the crucial part. – jamesdlin Jun 12 '17 at 19:02
  • @jamesdlin Yeah, I guess this answer is missing some introductory information on what U2F is. The idea of an attacker "asking the real site to request a key from the U2F device" is completely nonsensical when it comes to U2F, not just because it wouldn't work, but because it's not even possible for the real site to communicate with the user's U2F device out-of-band in the first place. U2F authentication happens entirely in-band. – Ajedi32 Jun 12 '17 at 19:41
47

Out-of-band 2FA is the correct approach. This means that you have a second factor that can't be phished, like a client cert or FIDO U2F. Code- or SMS-based 2FA models are the weakest 2FA options because they're in-band and, as you've described, can be phished just as credentials can.

They're convenient because they can be used by nearly anyone, and they're certainly better than nothing, but the security they provider should never be confused with the security provided by out-of-band 2FA.

Xander
  • I'm not sure "Out of band 2FA" is really the term you're looking for here. TOTP and SMS-based 2FA schemes _are_ out-of-band, by definition. They're just not resistant to phishing because they don't integrate with the user's browser like U2F and client certs do. – Ajedi32 Jun 07 '17 at 20:48
  • The attack effectively involves the legitimate user handing over *all* credentials to the attacker. How does "out of band 2FA" help? Are you talking about something that automatically validates who the user is giving those credentials to? Could you elaborate, please? – jpmc26 Jun 07 '17 at 22:53
  • I think the correct terminology wouldn't be out-of-band but something that can't be MITM'd. A client cert qualifies for sure; I don't know whether FIDO is safe against this, though. – André Borie Jun 08 '17 at 11:15
  • The attack only works because the second factor is sent via the same channel (the phony Gmail page). If the second factor is sent truly out-of-band, say, from a phone directly to a preset website (the real site), this mitigates the attack. I've often thought that this is how 2FA apps should really work, rather than getting users to type numbers into the same logon page. – adelphus Jun 08 '17 at 11:25
  • Much as @adelphus points out, when using mechanisms like SMS and Google's TOTP authenticator, while they do indeed provide the code out-of-band, it is submitted in-band, exactly like a password. This is what leads to vulnerability. – Xander Jun 08 '17 at 11:53
  • @jpmc26 A fully out-of-band MFA mechanism helps by eliminating the ability of an attacker to capture the additional factor and reuse it. For another example, consider a phone-based authentication system where, after I enter my correct username and password, the system calls me on a phone number I have registered. I answer, and if I enter the correct PIN on the phone call, I am authenticated. I do not enter the second factor directly into the system, so it cannot be captured by someone impersonating the system. – Xander Jun 08 '17 at 12:54
  • @Xander I don't see how it matters which channel the second factor is submitted through. If you don't notice that you are on a phony site then you will happily answer the phone call and provide your PIN. This is the system my bank uses (except it is an "Enter PIN" dialog on the phone rather than an actual phone call). If I misspell the address for the bank, then the phony site can access the real bank on my behalf and trigger 2FA, I will get the dialog as I expect and enter my PIN (out of band), and then the attacker will be given access. – Supr Jun 08 '17 at 13:34
  • Google has a version of 2FA where their servers send a notification to the Google app on your phone. You open the app and press a button to authenticate. The second factor cannot be MITM'ed, but that would not stop the attack described in this question at all. – Tor Klingberg Jun 08 '17 at 15:01
  • @adelphus When it comes to phishing though it doesn't matter whether the attacker can MITM the second-factor auth or not. If the attacker initiates a login request to your bank and you authorize that login session (regardless of whether or not that authorization happens out-of-band) the attacker _will_ get access to your account. U2F and client certs are not vulnerable to this attack, but that's _not_ because they're out-of-band. In fact, with both of those schemes the authentication process actually happens in-band. – Ajedi32 Jun 08 '17 at 18:38
  • Would Blizzard's 2FA be an example of an out-of-band authenticator? It's a phone app, but the site pushes a notification to the phone, which pops up a dialog that prompts whether you want to approve the access, and you have to click "yes" on the prompt, which then authenticates your web session. – Doktor J Jun 08 '17 at 18:41
  • @DoktorJ Yes, that's out-of-band. It's still vulnerable to phishing though; see the other upvoted comments on this answer. – Ajedi32 Jun 08 '17 at 18:50
  • @Ajedi32 My bad. I should've realised that the second factor only works if it can be matched with the user's browser session. The factor doesn't really need to validate you (that's what your password is for) - it needs to validate the *thing you're typing your credentials into*. Thanks for the explanation. – adelphus Jun 10 '17 at 10:27
14

This is one of the situations where an (in-browser) password manager will help you.

Because a password manager stores passwords keyed to their real URL, it won't autofill on the attacker's page, or even offer suggestions. On top of not leaking the two-step token, it also protects the password itself from being leaked.

This protection works even better if the user does not know their own password and can only fill it in through the password manager.
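The exact-origin lookup behind this behaviour can be sketched as follows (the vault contents are hypothetical):

```python
from urllib.parse import urlsplit

# Credentials are keyed by exact origin (scheme + hostname).
vault = {("https", "accounts.google.com"): ("alice", "hunter2")}

def autofill(page_url: str):
    parts = urlsplit(page_url)
    # A lookalike origin simply finds no entry, so nothing is filled in.
    return vault.get((parts.scheme, parts.hostname))

assert autofill("https://accounts.google.com/signin") == ("alice", "hunter2")
assert autofill("https://accounts.goog1e.com/signin") is None  # phishing page
```

The user may not notice the lookalike domain, but the string comparison does, every time.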

Ferrybig
9

The bottom line is that if an attacker can fool you into providing all the credentials, then it's game over; the number of factors involved doesn't matter. There are things which can help limit the exposure, such as very short timeouts for tokens, which make it difficult for an attacker to obtain and reuse a token within the time limit. However, timeouts provide limited protection, as getting the balance right can be difficult - especially with 'fake' 2FA, which has become so prevalent, where you have to allow for delays in things like SMS delivery to prevent usability problems. (I have seen this with internationally based services, where SMS delivery can be slow enough that the token times out before you can receive it and enter it in the browser.)

Many of the systems called 2FA are not really 2FA at all - they are actually 2SA (two-step authentication). In real 2FA, the factors are something you know (a password) and something you have (a token, often hardware-based). Schemes which involve a code sent via SMS are NOT 2FA, they are 2SA - you don't actually have the token, it is sent to you. Because it is something sent to you, there are new threat vectors, such as having the mobile number redirected. This is one reason NIST has deprecated SMS-based tokens as a reliable authentication process.

With respect to the OP's specific question, the only reliable protection is being able to detect the phishing page. Google released a Chrome extension to try to assist with this: it will warn you if it detects that you are supplying your Google credentials to a page which is not a Google page.

The big problem is that we have spent years teaching people to look for the "green padlock" in the address bar as some assurance that the page is legitimate. Unfortunately, efforts like Let's Encrypt have now made it easy to get domain-validated certificates, so many of these phishing pages will now have the green padlock. This is not to say the problem is due to Let's Encrypt - it is a very good initiative. The problem is partly due to weaknesses in the PKI infrastructure, but mainly due to user awareness and understanding. In general, people don't understand PKI or how to verify that a certificate is legitimate for the site and that the site is the one they think it is. To make it worse, even if you do understand, the steps and time it takes to perform that verification are often inconvenient or simply too hard, so people don't do it. The situation is made worse by cleaver bad actors who find ways to make things look legitimate - for example, a recent exploit uses weaknesses in how browsers display URLs and Unicode characters to generate a URL which renders in the address bar in a way that, at a glance, looks correct, but whose actual characters specify a phishing site. The user looks at the address bar, sees a green padlock, glances at the URL, which looks right (your brain will even fill things in to make the match look better!), and accepts the page as legitimate. You don't notice some additional whitespace between characters or slightly odd-looking character shapes.
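As a concrete illustration of the Unicode trick: an IDN lookalike domain converts to punycode, which a simple (admittedly incomplete) check can surface. The hostnames here are illustrative, and real browsers apply far more nuanced display policies:

```python
# A hostname like "gооgle.com" (with Cyrillic о) converts to a punycode
# "xn--" form, which exposes the lookalike substitution to inspection.
def looks_like_homograph(hostname: str) -> bool:
    """Flag hostnames containing non-ASCII characters, i.e. whose IDNA
    (punycode) form differs from what the user sees on screen."""
    ascii_form = hostname.encode("idna").decode("ascii")
    return "xn--" in ascii_form

assert not looks_like_homograph("google.com")
assert looks_like_homograph("g\u043e\u043egle.com")  # Cyrillic 'о' twice
```

A legitimate internationalised domain will also trip this check, so in practice it flags candidates for closer scrutiny rather than proving malice.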

So how do we protect against this? Unfortunately, there is no single "do this and you will be safe". Some practices which help:

  1. Use a password manager - some will only provide the credentials if the URL is correct.
  2. Never use URLs in email messages - always type the address yourself or use a bookmark you created.
  3. Assume that at some point you will be fooled, and adopt practices which will limit the damage when it occurs, i.e. a different password for every site.
  4. Use hardware-based 2FA when possible.
  5. For "high value" sites, actually click on the certificate details button and look at what it says and who the certificate is registered to.
  6. Make sure your system has all updates and you're using the most recent browser version.
  7. Be suspicious by nature, and remember that the big threat is social engineering, so be very wary of anything which pressures you to take action based on fear, guilt, rewards or punishment. These are very effective motivators, and threat actors rely on them.

Phishing campaigns have become much more sophisticated in their implementation, but at their core they still rely on emotional manipulation - a promise of something wonderful or a threat of something terrible.

Finally, if you’re tempted to comment because of my mention of password managers, please don't. Yes, there are risks with password managers, and yes, some are worse than others. However, in general, a good password manager used correctly will usually provide more protection for the average user than their current password management process. Yes, if the password manager gets compromised, then all of your passwords are compromised. However, many people find password management too hard and are using the same, often weak, password on every site anyway; once one site is compromised, all their sites are compromised. Obviously, if you understand technology, passwords, hashing, etc., you can probably come up with a more secure solution, but you’re not the audience for password managers. Think about how your parents or grandparents are dealing with password management and how well they spot phishing sites or understand certificates, and then think about how easily they could handle your custom GPG-based password management over cfile or synching.

EDIT: On re-reading my response, I'm not sure I emphasised enough that real 2FA is increasingly available, and many of the providers who currently support the less secure 2SA with SMS codes also support far more secure 2FA, in many cases using U2F (as mentioned in other replies). Hardware 'keys' from Yubico or Duo (and others) are cheap and easy to set up and use. My only recommendation is that if you decide to go the hardware token/key route, make sure you get two keys, register them both, and put one away in a secure location. I have one which I carry with me and one in a safe at home. Recovering from a lost or damaged key is not as easy as recovering from a forgotten password, so you want to avoid getting into that situation as far as possible.

Tim X
  • [Cleaver](https://en.wikipedia.org/wiki/Cleaver) bad actors don't phish passwords, they work on C-grade terror/comedy movies (omg [Butcher](http://diablo.wikia.com/wiki/The_Butcher) just took his own hand off). XD – Mindwin Jun 09 '17 at 12:54
  • Cleaver LOL. I'm going to leave that typo. Think we should define a new term "cleaver actor" aka "clever bad actor" – Tim X Jun 11 '17 at 00:55
  • In SMS-based 2FA, "what you have" is a SIM card. – N.I. Jun 11 '17 at 18:40
  • No, not really. The SIM card is irrelevant, as it is not part of the authz. Even worse, it is trivial to use a bit of social engineering to have the SMS messages redirected to another number (a different SIM card), which is exactly how some of the more publicised hacks of SMS-based 2FA have worked. For real 2FA, the second factor must be something you have that is directly used in the authz process. The SIM card is incidental to that process. – Tim X Jun 12 '17 at 08:02
  • @TimX Couldn't you just as easily argue that TOTP isn't "what you have" either, because an attacker could conceivably clone the seed off your phone? Just because SMS-based 2FA is weak against certain side-channel/social engineering attacks doesn't mean it isn't 2FA at all. – Ajedi32 Jun 12 '17 at 15:51
  • @Ajedi32 The difference is that the SMS code is not based on anything you have. The code is not derived from data only you have, so it is not 2FA; it is only a second authentication step - the code is completely determined by the service you are accessing. In 2FA the second factor is either something you have or derived from something you have. Simple SMS codes are not based on something you have and are therefore not 2FA by definition. Many of the weaknesses associated with SMS codes exist because the code is not based on something you have, and this is why NIST has deprecated them. – Tim X Jun 13 '17 at 02:10
2

As pointed out in the comments, this isn't a good way to do things.

Reverse the test entirely.

In this case you're trusting that the user's mobile phone is 'safe', so use it to authenticate them. When the user attempts to log into the website, you raise a request on the phone for them to approve the login (via push notification, ideally directly to the application - not SMS or email, as these can more easily be breached): 'You appear to be logging in from IP x.y.z / geolocation foobar - do you wish to continue?'

You can also have them present a certificate which exists on the phone but not on the computer. That way the attacker can't obtain it simply by redirecting the user to the wrong site.

  • Won't work. The user will see "You appear to be logging in from IP x.y.z / geolocation foobar" and believe that _they_ initiated that request, since they're currently trying to log in to what they _think_ is the legitimate site when in reality it's a phishing site. Once they approve the login request, the attacker will then get access to their account. – Ajedi32 Jun 08 '17 at 18:42
  • Ah yes, good point. Hmm. Back to the drawing board! :) Leaving this here as a 'bad' answer – djsmiley2kStaysInside Jun 09 '17 at 08:53
  • I dunno, if you provide the geolocation information, and it says "You appear to be logging in from IP x.y.z / Onestia, Romania" and you're sitting at your desk in Anytown USA, that's going to raise a red flag. It'll be odd for people using corporate proxies (i.e. where I work, my IP shows up in VA while I live/work in MA), but that should be easier to figure out, since most people know where their company is headquartered and will go "oh right, I work for Acme, who's headquartered in VA, that's why it thinks I'm logging in from VA" (or a nearby technophile may be happy to point that out to them). – Doktor J Jun 09 '17 at 20:37
  • @Ajedi32, I think the geolocation bit is what makes this workable, much as Doktor J points out above. – Wildcard Jun 10 '17 at 00:11
  • @DoktorJ Eh, I kinda doubt relying on users to check the geolocation would be any more effective than relying on them to check the domain of the site they're on. Not to mention that attackers could easily spoof the location displayed using a proxy or Tor. – Ajedi32 Jun 10 '17 at 04:08
  • Sorry, I should have pointed out that the geolocation would be part of the check in the first place. – djsmiley2kStaysInside Jun 22 '17 at 09:26
1

This attack is known as phishing. All the security in the world will not do any good if you can fool an end user into surrendering the credentials willingly.

The mitigations against phishing include:

  1. Email servers can scrub emails for links to known phishing sites.

  2. Email clients often disable links by default and provide a warning when enabling.

  3. Users should avoid clicking links found in emails. It is often safer to type the address.

  4. Users should never access a sensitive site (e.g. a banking site) via a link from anywhere. Use a bookmark or type it.

  5. Contrary to some common belief, users should use password managers for sensitive sites. A password manager will not let you provide a password to the wrong site.
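Mitigation 1 above can be sketched as a simple blocklist scrubber. The blocklisted domains here are made up for illustration; real mail gateways draw on threat-intelligence feeds:

```python
import re

# Hypothetical blocklist entries, e.g. from a threat-intelligence feed.
BLOCKLIST = {"goog1e.com", "accounts-google.verify-login.example"}

LINK = re.compile(r"https?://([^/\s]+)\S*", re.IGNORECASE)

def scrub(body: str) -> str:
    """Replace links to blocklisted hosts; leave other links untouched."""
    def repl(match):
        host = match.group(1).lower()
        return "[link removed]" if host in BLOCKLIST else match.group(0)
    return LINK.sub(repl, body)

assert scrub("Reset here: https://goog1e.com/reset") == "Reset here: [link removed]"
assert scrub("Docs: https://example.com/a") == "Docs: https://example.com/a"
```

This only catches *known* phishing sites, which is why it is one layer among the five listed above rather than a defence on its own.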

John Wu
0

As an addition to the other answers, this kind of attack can be hindered if the site authenticates itself to the user, so that the user is in the habit of getting a stronger signal that they are entering credentials on an authentic page, even if they don't use a password manager or aren't paying super-close attention to the URL. This is typically done with a user-selected security image (from a large set of options) plus a user-set text string, presented after entry of the username but before the password (which are split across two pages). It's not completely foolproof - you have to prevent attackers from harvesting the image/string by simple bulk requests - but it is designed to work against phishing, and it makes successful phishing a bit harder to pull off. If attackers do attempt to fetch security images/strings, those requests may also give the genuine service provider a tip-off that something is amiss, along with some forensic information about where the requests are coming from.
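A toy version of such a scheme, including the bulk-request throttle, might look like this (all names, data, and thresholds are hypothetical):

```python
from collections import Counter

# Hypothetical per-user security marks, chosen at enrolment.
security_marks = {"alice": ("blue-otter.png", "my winter phrase")}
lookups_by_ip = Counter()

def mark_for(username: str, client_ip: str):
    """Show the user's image/phrase after the username, before the password.
    A crude per-IP throttle hinders bulk harvesting, and hitting it is a
    tip-off worth logging for forensics."""
    lookups_by_ip[client_ip] += 1
    if lookups_by_ip[client_ip] > 5:
        return None
    return security_marks.get(username)

assert mark_for("alice", "203.0.113.9") == ("blue-otter.png", "my winter phrase")
```

A phishing site that can't show the right image/phrase gives the user a chance to notice before typing the password.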

Whether it works in practice or not is a different question, and the evidence from a 2007 paper suggests not, at least for most users.

WBT