7

It seems that 2FA purists go for security keys (like YubiKeys or smart cards), while others take a more relaxed stance that even includes 'possession' of non-physical elements like an email address, a phone number, or a push-notification challenge sent to an app installed on a registered device (which only holds a 'soft', clone-able private key).

I mostly investigated the last one (push messages), but it seems to me that anything that can be copied (without physical access or physical effort) could potentially be written off as something that is not fully reliably a 'physical' possession of yours (and so doesn't count as a factor).

But that confuses me, since what seems to me like a reliable two-factor protocol description, U2F, doesn't fully exclude '... completely software implementations ...'.

Also, starting from the assumption that a phone was hacked (or cloned), we could say the attacker no longer faces a real 2nd factor, since they could potentially access both passwords stored (another assumption) in the mobile browser (I'm securing a web application ;) and the software token (or even the hardware token, assuming full access to the device).

I guess I'm trying to ask: What would constitute an actual 2FA implementation using a smartphone, and what wouldn't? Or am I just mixing up theory with actual implementation too much?


For the software-token approach, the only mitigation I could think of is securing the token with a password that can't be saved (to mitigate the smartphone-hack case a bit). But wouldn't this make it 'something you know' again and thus negate the 2nd factor (the 1st factor also being 'something you know' in this case)?

Mike Ounsworth
Tommy Bravo
    One place I worked stopped talking about two-factor authentication because of endless debates about what was a legitimate second factor. They moved to talking about multi-factor authentication. While an SMS probably isn't true two-factor, it is definitely better than just a password. – paj28 Sep 27 '17 at 17:18

5 Answers

10

[This is my view, I'm not claiming that it represents the view of the industry]

I totally agree that some piece of secret data stored on your main device blurs the line between "something you know" and "something you have". Which side of the line it falls on, I think, depends on the specifics of what that data is, how the authentication protocol works, and what perspective you're looking at it from.

Technically, yubikeys, smartcards, and even OTP fobs are also, at bottom, a piece of secret data stored on a device, albeit in a way that's difficult even for an attacker with physical access to extract. I would argue that the thing you are proving possession of is not the device but the secret data. With hardware tokens these are the same thing, but people go and apply the same thinking to phones and other kinds of secrets, and I'm not sure that's the correct way to think about it.

Definitions of "Know" vs "Have"

What kind of secret it is, and how your device stores and accesses it gives a sliding scale of security (plain text file --> yubikey). Somewhere in there lies the boundary between "know" and "have". Where you draw that line, I think, depends heavily on whose perspective you take. Some examples:

  • End-user perspective: you probably draw the line at "came from my memory" vs "is stored on a device".
  • System Administrator perspective: you probably draw the line at whether you can ask for the device back and be confident that the user no longer has the secret.
  • Authentication Server's perspective: in most cases, the server has no way to tell whether the secret came from a secure device or was derived from a password that the user typed in. So the relevant distinction from its perspective is whether it is a shared secret that both client and server know, or a key pair where you prove possession of the private key without revealing the actual secret. The practical litmus test is that "knows" tend to be vulnerable to record-and-replay attacks, while "haves" tend not to be.
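
The record-and-replay litmus test is easy to sketch. Below is a toy illustration (Python standard library only; all names and values are made up, and this is not a real protocol): a recorded password login replays cleanly, while a recorded challenge response is useless against the next challenge.

```python
import hmac
import hashlib
import secrets

# "Know": the server receives the secret itself, so a recorded
# login can simply be replayed verbatim.
def login_with_password(server_db, username, password):
    return hmac.compare_digest(server_db[username], password)

server_db = {"alice": "hunter2"}
assert login_with_password(server_db, "alice", "hunter2")  # replayable forever

# "Have": the server sends a fresh random nonce, and the client proves
# possession of a key by returning HMAC(key, nonce). A recorded response
# only matches the nonce it was computed for.
def client_respond(key, nonce):
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(key, nonce, response):
    return hmac.compare_digest(client_respond(key, nonce), response)

key = secrets.token_bytes(32)       # never sent over the wire

nonce1 = secrets.token_bytes(16)    # attacker records this exchange...
recorded = client_respond(key, nonce1)
assert server_verify(key, nonce1, recorded)

nonce2 = secrets.token_bytes(16)    # ...but the next challenge differs,
assert not server_verify(key, nonce2, recorded)  # so the replay fails
```

(Strictly speaking the replay could succeed if the two random nonces collided, but at 16 bytes each that chance is negligible.)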

There are some cases that everybody agrees on: a password that the user types into a textbox is clearly a "know", and a smartcard with an RSA keypair is clearly a "have". But no matter how you define "know" vs "have", I think there will always be edge-cases that one of the above perspectives considers a "know" while another considers a "have".

Thought experiment

Say the sysadmin generates you a new random password, stores it encrypted on a yubikey such that the key will release the encrypted password upon request, and only your VPN client has the key to decrypt and actually use the plaintext password (no idea if this is realistic, but hey, it's a thought experiment). Is that a know or a have? From the end-user's perspective it's certainly a have ... they can't get into their account without the fob. From the admin's perspective it's (mostly) a have, because unless the user went out of their way to hack the fob and the VPN client, the user can't learn the password, so you can ask for the fob back and give it to another employee. But from the server's perspective it's certainly a know, because all it sees is a plaintext password; it has no way to tell whether it came off a secure device or was typed in.

Server perspective

As an application developer, the theoretical distinction that matters to me is:

With "something you know" (i.e. passwords) you are sending the secret itself over the network to the server. With "something you have" (usually cryptographic keys or seeds) you never send the secret itself, but a one-time value or challenge response that proves you have possession of the secret.

Consider a man-in-the-middle sniffing your web traffic. They can steal your username / password. With an OTP fob / yubikey / etc., the secret data is a cryptographic key or an RNG seed. The attacker can sniff all the messages they want; they will never recover the "something you have" secret.
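
To make that concrete, here is a minimal sketch of how OTP generators derive the one-time value from a seed (RFC 4226 HOTP, and RFC 6238 TOTP on top of it; Python standard library only, using the well-known RFC 4226 test seed as the example secret). Only the short-lived 6-digit code ever crosses the wire; the seed cannot be recovered from it.

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238: HOTP with the counter derived from the current time."""
    return hotp(secret, int(time.time()) // step)

seed = b"12345678901234567890"  # shared once at enrolment, never sent again
print(totp(seed))               # a sniffer only ever sees codes like this
```

The seed-to-code direction is a one-way HMAC, which is what makes the sniffed traffic useless for recovering the "have" secret.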

I'm arguing that if retrieving the second factor requires the attacker to have access to your device (physical access or rootkit) or to another account of yours, then it meets my definition of "something you have".

Resistance to cloning

Resistance to cloning once the attacker already has access to the device is clearly a bonus, but (to me) not necessary to meet the definition. After all, to clone the device, the attacker already needs to "have" it. The difference at that point between being able to use the device to impersonate you and being able to clone the device to impersonate you is theoretically meaningless because, either way, they already have the ability to impersonate you.

Mike Ounsworth
  • I don't agree. The important distinction is whether the secret value can be copied or not. If the token (or smartphone) has tamper-resistant storage, then you can say it's "something you possess", because it's impossible for anyone to copy the token data in a way that lets them retain access after they no longer possess the token. A password can be keylogged, guessed, or stolen in other ways, thus it's not "something you possess": even if the password is converted into a one-time value, someone with knowledge of the key or password can get access without possessing the hardware. – sebastian nielsen Sep 22 '17 at 19:47
  • @sebastiannielsen I never claimed that a password is "something you possess". I claimed that a cryptographic key is something you possess. I'll make an edit though to mention extraction. Feel free to write an answer about the difference between "possession of data" and "possession of a device". – Mike Ounsworth Sep 22 '17 at 19:50
  • I believe your current definition is flawed. If I were to write a program that takes a password from the user as input, performs key stretching on the password, then uses that data to deterministically generate a private key that is then used to authenticate to the remote server, by your definition that authentication method would be "something you have", whereas I would consider it "something you know" since the data used to authenticate is stored entirely within the user's head, and nowhere else. – Ajedi32 Sep 26 '17 at 14:49
  • @Ajedi32 Are you arguing that my definition is _flawed_, or _different from yours_? Those are not the same thing. I've already been in one lengthy and heated debate on this thread because I think whether things are a "know" vs a "have" depends on the perspective you're looking at it from. I've edited my answer to make my perspective more clear. Does that satisfy you? – Mike Ounsworth Sep 26 '17 at 15:30
  • 1
    @MikeOunsworth To clarify, I'm talking about the "definition" paragraph you put in the quote block, the part immediately after "the theoretical distinction between "something you know" vs "something you have" is[...]". I'm claiming that definition is flawed because it's trivial to design an authentication system that relies entirely on a password the user memorizes and manually enters, but also meets your definition of "what you have" and does not meet your definition of "what you know". (For example, HTTP Digest Authentication would, under your definition, be "something you have".) – Ajedi32 Sep 26 '17 at 15:47
  • @Ajedi32 I have no idea what HTTP Digest Authentication is, but if it's something stored in browser cookies or something, then "have" seems right. As per my edits, I can see some edge-cases of things a user types in from memory that the server would still consider a "have". It all depends on context and perspective. I can make further edits if you're still not happy with that. – Mike Ounsworth Sep 26 '17 at 15:57
  • 1
  • @MikeOunsworth HTTP Digest Auth sends a hash of username, password, and nonce. Technically you're sending a one-time value computed from the password, so it could be considered a "have" from the server's perspective, as it's proving knowledge of a shared secret. – AndrolGenhald Sep 26 '17 at 16:10
  • Thanks, this revision is much more clear. +1 from me. Personally, I prefer to look at things from the end-user's perspective, since that's the part that matters most in terms of actual security (e.g. it's entirely possible for a client device to implement a reasonably secure 2-factor system on top of a server backend that only uses 1-factor password-based authentication, just as it's possible for a client to sabotage one of the two factors in 2-factor auth, making the whole system effectively 1-factor) but it is helpful to be able to see things from another perspective as well. – Ajedi32 Sep 26 '17 at 16:37
  • 1
  • @Ajedi32 Thanks. You gave me the lightbulb that I'm tunnel-visioned in server perspective. _"it's entirely possible for a client device to implement a reasonably secure 2-factor system on top of a server backend that only uses 1-factor password"_ but one password db breach and an attacker can get into all accounts by accessing the backend API directly. As far as I'm concerned, client-enforced 2FA adds inconvenience for little to no extra security against a breach. All of the above is important for security, but the definition is just a way to think about it in your mind, however you want. – Mike Ounsworth Sep 26 '17 at 17:05
  • I'm not convinced about the last paragraph, since the attacker may only "have access" for a single time and a short duration. In that case, cloning resistance is definitely important. – jiggunjer Jan 15 '18 at 04:10
  • @jiggunjer I see your point, and it's valid. To me though, we're debating levels of Game Over. – Mike Ounsworth Jan 15 '18 at 14:46
3

I am pretty sure there is no correct answer to this and, similar to Mike's post, this is only my view…

The main weakness with using phones is the propensity for a reliance on SMS messages to demonstrate access to the device.

While not trivial, it is possible to clone a SIM without physical access to the device or SIM, which places a reliance on procedures outside the authenticating system’s control and, crucially, the user’s control. For me this rules the option out as a 2FA system, due to the essentially unknown reliability of the procedural controls being depended on.

There is a case for arguing that this is not a concern in a low-impact environment, or for a threat scenario in which potential attackers are unlikely to attempt to clone a SIM. It may be entirely justified to accept the risk (of reliance on an external dependency with unknown reliability), but with no way of even subjectively asserting a level of confidence that a user is in possession of a device at a given point in time, I do not think it can be described as a 2FA solution.

There is also the question of why bother implementing an additional factor if the risk scenario does not need it to be reliable. Put another way: if the risk assessment suggests an additional factor is necessary, at least use something that does not introduce a degree of false security / control misrepresentation. This is especially relevant if the access control system could later be extended to more sensitive data; while good practice would dictate that the control(s) be reviewed, if someone does not bother and signs off on the basis that 2FA is implemented, there could be a nasty surprise ahead.

On the other hand, systems that install an application with a cycling counter on a smartphone are far closer to being a reliable indication of access to the device. While a remote exploit to gain access to the device is not impossible, it is more complex and (wild, unfounded assertion) less likely.

For some scenarios I think the convenience, and the likelihood that a phone is ‘protected’ by its owner, make the compromise an app-based 2FA solution requires worth it; I usually refer to it formally as pseudo-2FA or words to that effect.

R15
  • 1
    I'll give you a +1 if you clarify your threat models. Basically, security is a sliding scale of cost vs risk, so "For me this rules this option out as being even acceptable" is meaningless without stating who you are trying to protect against. Cloning a SIM without access to the device or intercepting SMSes requires cooperation from the mobile carrier (I believe). If you are trying to keep state secrets from government agencies, then yes, not good enough (neither are OTP apps or push notifications), but if you're just trying to protect your WoW account, then it's fine. – Mike Ounsworth Sep 22 '17 at 17:25
  • @MikeOunsworth I have updated, though I expect we'll not entirely agree on this one, I agree in a practical sense, but I think the semantics are important. – R15 Sep 27 '17 at 16:06
2

Most smartphones today have "tamper-resistant secure storage" that can be used to store OTP secrets. It is constructed so that extraction or copying of the secret value is impossible; the secret can only be "used" for calculating OTPs.

This can then be viewed as "something you possess", as the hardware can't be copied, not even with your cooperation or negligence. Basically, the token and its secret data are "welded together", so you must use the token to be able to use the secret data.

However, "soft tokens" that rely only on secrets stored on an unsecured medium I would consider more of a "something you know".

I would apply the same to physical tokens. If the token can keep a secret, then I would consider it "something you possess". If the secret can easily be copied in some way, then it's "something you know".

I would use the following "test" to decide whether a token/smartphone can be considered "something you possess" or "something you know":

1. Imagine the token device or item in question.

2. Imagine it is left lying on a public bench for, let's say, a whole day.

3. After that day, you return and find your device.

4. If the device can still be trusted without changing any secrets, then it's "something you possess"; otherwise it's "something you know". (If it's a phone, also imagine that it is reformatted and the same secret reinstalled.)

Another easy consideration: imagine a rogue employee. You take back the token from him (without changing or replacing any secrets). Can the rogue employee still log in to the service in question? Consider the fact that the rogue employee could tamper with the token to extract its secrets.

sebastian nielsen
  • This is an interesting definition to debate. You're taking the view that there can only be one copy of "something you have" in the world, and that you should be able to take possession of a token away from somebody. I am taking the weaker definition that once you have access to a token, it's bound to you. How securely you store the secret within it / how tamper resistant it is, etc, are all important security considerations, but perpendicular to the definition of whether it's a thing you know vs a thing you have. Interesting to debate different angles :) – Mike Ounsworth Sep 22 '17 at 20:21
  • No. There can be multiple copies, if the administrator wants: for example, a group account with multiple time-tokens (or challenge-response tokens) storing the same secret. The important line to draw is that someone with long-term possession of the token cannot retain access after possession of the token has ceased. In the example I gave, an employee is given a token to the company's account on a third-party service. Once he terminates his employment, the token is taken back, and then he shouldn't be able to access the service anymore (and the token can be given to another employee) – sebastian nielsen Sep 22 '17 at 20:26
  • E.g., the administrator can copy or duplicate the token, if the administrator retained the secret before enrolling the token, but the person who possesses it cannot. Compare with, for example, a rental car: even if you had the car keys, you can't retain access to the car after the car and keys have been returned. – sebastian nielsen Sep 22 '17 at 20:28
  • 2
    Certainly you need the ability to revoke somebody's access. No arguments there. That the token can be safely given to another employee afterwards is where we disagree. Some products may offer this, but I think it's a bonus feature, not a necessary part of the definition. For example, I would say that I **have** an SSH key (not that I **know** an SSH key), but clearly you can't give an SSH key to person2 once person1 doesn't need it anymore. – Mike Ounsworth Sep 22 '17 at 20:37
  • I think it's part of the definition, since the SSH key can still be stolen from the other side of the globe. That makes it something you, or your computer, knows. A physical token is harder, but if it's not stored in tamper-resistant memory, it could still be copied while someone is out on a toilet break. If it is tamper-resistant, you know that as long as you have the token, nobody else can have access with that token. – sebastian nielsen Sep 22 '17 at 20:42
  • 1
    I don't understand you. Quote 1: "No. There can be multiple copies". Quote 2: "you can know as long as you have the token, nobody else can have access with that token". Those don't match. Also, when did we agree that the definition of "have" means "can't be accessed from the other side of the world"? I never agreed to that as the definition of "have". I clearly **have** access to my gmail, even from across the world (and emailed one-time links are almost universally accepted as a 2FA method). – Mike Ounsworth Sep 22 '17 at 20:48
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/66035/discussion-between-sebastian-nielsen-and-mike-ounsworth). – sebastian nielsen Sep 22 '17 at 20:48
1

Usually a possession factor is represented by a cryptographic key.

Smartcard

The private key, which was generated on a smartcard and cannot (easily) be extracted from it, is probably the most secure cryptographic key and thus the best 2nd factor (possession).

U2F

A U2F device like the YubiKey also has a private key, derived from a master key. (Discussion of the security of this master key is out of scope here.)

Yubikey

A YubiKey used as an OTP token uses a symmetric secret key to generate a one-time password, based e.g. on RFC 4226.

Keyfob

Most hardware keyfob tokens use either RFC 4226 or RFC 6238 with a symmetric secret key. However, this key was generated and written to the hardware by the vendor, which heavily weakens the idea of the possession factor, since you must trust the vendor to destroy any copy of this secret key.

Smartphone

Now we come to the smartphone. Google came up with the Google Authenticator. I would call this a 2nd factor, since it also works based on a cryptographic (secret) key according to RFC 4226 and RFC 6238. But the rollout process of the Google Authenticator is poor, and the secret key actually resides in a powerful, always-online, often badly patched "computer". Yes, the smartphone, when used with an app, is a 2nd factor, but a rather weak one.
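
The rollout weakness is easy to see in code. During enrolment the raw TOTP seed is Base32-encoded into an otpauth:// URI and displayed as a QR code; anything that can read the screen (or the provisioning email) at that moment gets a perfect copy of the "possession" factor. A sketch, with a made-up issuer and account name:

```python
import base64
import secrets

seed = secrets.token_bytes(20)  # the shared secret the app will store
b32 = base64.b32encode(seed).decode()

# This URI is what the enrolment QR code encodes; the secret is in it
# in plaintext (only Base32-encoded, which is trivially reversible).
uri = f"otpauth://totp/Example:alice?secret={b32}&issuer=Example"
print(uri)
```

Anyone who captures that URI can generate the same codes forever, which is why the "possession" of an app-based token is only as strong as the enrolment step.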

2-way authentication

If you use your smartphone to receive push messages or SMS, I would not call this a 2nd factor, but rather two-way authentication. The interesting difference is the following:

You can "secure" your google authenticator by completely taking the phone offline. An attacker needs to attack the real phone - the real second factor.

If you are receiving SMS, the attacker does not need to care about your phone at all. Your phone is no second factor. It might be much easier for the attacker to attack the network (either technically or by social engineering) and reroute the SMS. So where is your 2nd factor/phone then? It is entirely out of the equation.

But in the end you need to think about how much security you need and how much remaining risk you are willing to accept.

cornelinux
1

In my view, the only difference between "something you have" and "something you know" is where the information needed to authenticate is stored. If it's stored in your head, it's "something you know". If it's stored on a smartcard or even just a piece of paper, it's "something you have".

Now obviously a piece of paper with a barcode on it would be a terribly insecure implementation of "something you have", just as asking a user to give their name and birthday would be a terrible implementation of "something you know", but those considerations are totally separate from the conceptual definition of what each of these authentication factors are.

In practice, though, you're correct: an authentication factor intended to implement "something you have" should be as resistant as possible to an attacker cloning the physical token used to authenticate, just as an authentication factor intended to implement "something you know" should use information that is difficult for an attacker to guess or discover through surveillance or subterfuge. If an authentication factor is easily bypassed or compromised, then it's effectively not an authentication factor at all.

Ajedi32