29

I am currently trying to get an understanding of multi-factor authentication. The biggest issue so far: When does "something you have" NOT get reduced to "something you know"? I want to have a "possession" factor that does not get reduced to a "knowledge" factor.

I don't think this is a question that can be answered easily, but it would be very helpful if at least the following questions were answered:

When I write down or store a password, is this then considered something I have?

When I have a public/private RSA-keypair with 4096 bit and I remember the private key without storing it anywhere, is it something I know?

When I write down or store the private part of a public/private RSA-keypair with 4096 bit, is this then considered something I have?

As far as I understand it "something I have" should be something I have physical access to that nobody else has. I don't see how it is possible to prove that I have something when using a web application because everything gets reduced down to the bits sent in a request and everyone could send the same bits. How does sending a specific sequence of bits prove that I have physical access to a certain device?

Gamer2015
  • 707
  • 4
  • 12
  • Thought: If a code is sent to my mobile phone then: I know the number / I have the phone / I know the code sent. – Russell McMahon Jul 02 '21 at 11:59
  • @RussellMcMahon but your phone receives the text because it has your SIM card... which contains a key... which you could theoretically know. What's the difference between writing down a key, and putting it in a SIM card? – user253751 Jul 02 '21 at 15:25
  • @user253751 Good point. If we assume that a SIM card is unique (which is usually true but can be untrue) then the having and knowing seem to correspond. Perhaps. The SIM duplication possibility also weakens this method of 2FA. – Russell McMahon Jul 03 '21 at 00:14

9 Answers

53

When I have a public/private RSA-keypair with 4096 bit and I remember the private key without storing it anywhere, is it something I know?

Yes.

When I write down or store the private part of a public/private RSA-keypair with 4096 bit, is this then considered something I have?

No. The authentication factor is not the sheet of paper where the key was written down, but the key itself. The key is not intrinsically connected to the paper; it can live without it.

This is different from a smartcard or hardware token which contains the key. These devices are designed so that the key cannot be extracted and the device cannot be simply copied, i.e. the key basically has a single physical manifestation.

How does sending a specific sequence of bits prove that I have physical access to a certain device?

Take your case of an RSA key pair: in the case of a smartcard, the private key is located on the card and only there. One cannot extract the key, but one can ask the smartcard to sign something using this private key, since the smartcard is a tiny computer. Thus the server can send some challenge, the smartcard signs the challenge, and the server can verify the signature using the public key associated with the user. If the signature matches, the client must have had access to the smartcard, i.e. it has proved possession of the smartcard. Other hardware-based tokens work the same way: the secret never leaves the hardware.
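
As a rough sketch of that flow (assuming Python with the pyca/cryptography package; a real smartcard would perform the signing internally and the host would only ever see the signature, never the key):

```python
# Sketch of the challenge-response flow described above. In a real
# deployment the private key is generated on the smartcard and the sign()
# step happens inside the card; it appears in host code here only for
# illustration.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Provisioning: the key pair is created for the card; the server stores
# only the public key associated with the user.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()

# 1. The server sends a random challenge.
challenge = os.urandom(32)

# 2. The "smartcard" signs the challenge with the non-extractable private key.
signature = private_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# 3. The server verifies the signature with the stored public key;
#    verify() raises InvalidSignature if it does not match.
public_key.verify(
    signature,
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature valid: the client proved possession of the private key")
```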

Steffen Ullrich
  • 184,332
  • 29
  • 363
  • 424
  • 4
    Thank you for the clear answer, this is kinda what I expected. I guess "something I have" therefore has to be something that is resistant to being cloned, are there other requirements for possession factors? – Gamer2015 Jun 30 '21 at 11:30
  • 3
    @Gamer2015 It depends on how resistant it is to cloning, yes. But to my knowledge, there really is no such thing that is 100% cloning resistant. Someone was able to manufacture it once, so they will be able to manufacture it again. – MechMK1 Jun 30 '21 at 11:47
  • 4
    @Gamer2015: You have to make sure that you and only you have access to it. This means hard to duplicate and hard for others to access. If you put some security token on a networked device and let everybody use it, then it still cannot be duplicated but it can be used by others. – Steffen Ullrich Jun 30 '21 at 11:47
  • 22
    @MechMK1: *"Someone was able to manufacture it once, so they will be able to manufacture it again. "* - The key on a smartcard is not created by the manufacturer. It is created at the customer side when provisioning the smartcard before use, outside of reach of the manufacturer. So even the manufacturer cannot just clone it. – Steffen Ullrich Jun 30 '21 at 11:50
  • 5
    Just to add the obvious: a physical, mechanical key is something you have. Most of them can still be cloned from a photograph. – A. Hersean Jun 30 '21 at 13:09
  • @A.Hersean: In fact, a person with sufficiently good memory (e.g., Mark DeFriest) can just *look* at a physical key and create a duplicate. – dan04 Jun 30 '21 at 20:27
  • 1
    @A.Hersean: The point is actually not that it is something "you" have, but something "only the authorized" have. Same for something "you" know vs. something "only the authorized" knows. If a photo of a key is available and new keys can be built based on it, the property "only you" moves to "you, but also others", and this way it is no longer a secure authentication factor. This is similar to a password written on a whiteboard and accidentally shown during a TV reportage (happened). – Steffen Ullrich Jun 30 '21 at 20:40
  • 1
    It probably is true that a certificate authority / badge / credit card / passport / car / IoT toothbrush / whatever manufacturer can often (but not always) create a clone of a given device if they really wanted to; including fixing the OS RNG so the devs can predict what keys it will generate at first-time boot. You do need to trust the manufacturer; but often there are huge regulatory bodies and frequent on-site audits to help establish that trust :) – Mike Ounsworth Jun 30 '21 at 21:33
  • @MikeOunsworth: Sure, a manufacturer can deliberately weaken a device in order to be able to clone it. But this is similar to deliberating placing backdoors in software etc - it is not a vulnerability of the concept itself but of a specific implementation or vendor. – Steffen Ullrich Jul 01 '21 at 04:12
  • 4
    @dan04 it might not even need a stupendous feat of memory - most metal keys are essentially the physical manifestation of a code lock - the notches in the key might have 4 or 5 indent levels and there might be 5 to 7 of them, which means once you know how to visually identify the depth of indent, you need only remember a ~6-digit number (that uses digits 1-5) e.g. 133254 - another key can be cut by grinding a blank to those indent depths – Caius Jard Jul 01 '21 at 08:21
  • Duplicating a mechanical key (by using a photograph or a mold) is similar to duplicating a physical NFC badge, but without being destructive. The point is to extract the code of the key: for a physical key it is its pinning, for a badge it is its embedded private key (you might need specialized probes in an electron microscope to get it). It's easier with a physical key, but it is still the same kind of process. – A. Hersean Jul 01 '21 at 09:20
  • 2
    @A.Hersean: *"It's easier with a physical key ..."* - and how easy it can be done distinguishes if it is "something __only you__ have" vs. "something you have, but __others__ might easily too". The first is preferred for authentication. – Steffen Ullrich Jul 01 '21 at 10:31
  • 3
    @MechMK1 The problem isn't creating the smart card, the problem is replicating the key on the smart card. Imagine the smart card as being a safe, with a snowflake inside. Replicating the safe is easy, replicating the snowflake is not. Trying to take a look at the snowflake will require you destroy the safe, without destroying the snowflake. Not impossible, just stupendously hard... – Aron Jul 02 '21 at 03:08
  • @A.Hersean I disagree with your assertion. Physical keys are not designed to self destruct should someone try to read the key-code. In order for a Physical key to work in a similar way to a smart card (NFC or otherwise), would require a zero knowledge proof scheme in the physical. – Aron Jul 02 '21 at 03:13
9

A clear example of "something you have" that cannot be reduced to "something you know" is "have access to an email address / SMS number to retrieve the code we just sent you". There's nothing there to turn into a "know".

TOTP apps make the same assumption, but a little less straightforwardly: when you scan a QR code to link your TOTP app to your account on some website, the server and your phone exchange a seed. Technically I suppose you could extract the seed and memorize it, but the assumption is made that the seed is stored securely within, and unique to, your device.
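
For illustration, here is a minimal sketch of what both sides compute from that seed, following RFC 6238 with the usual defaults (30-second step, 6 digits, SHA-1); the seed value shown is made up, and a real app would of course keep it in secure storage:

```python
# Both the phone and the server derive the same short-lived code from the
# shared seed and the current time; the seed itself never crosses the wire
# again after enrollment.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_seed: str, timestep: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_seed, casefold=True)
    counter = int(time.time()) // timestep          # same value on both sides
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# Made-up seed, as it might appear inside the QR code's otpauth:// URI.
print(totp("JBSWY3DPEHPK3PXP"))
```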

Same for USB tokens like Yubikey: the RSA private key is generated inside the device and never leaves. Successfully doing a private key operation is proof that you physically have the device.

Same for the crypto chips inside credit cards / passports / building ID badges etc; those RSA keys were put in the chip at manufacture time, good luck extracting and memorizing them.

Mike Ounsworth
  • 57,707
  • 21
  • 150
  • 207
  • 9
    email can very often be accessed with only a 'known' password and maybe your spouse's or pet's name; cell service (including SMS) can often be stolen by knowing a few things about the customer _or_ having (and using) money, plus apparently SMS can now be hijacked without even knowledge. – dave_thompson_085 Jul 01 '21 at 03:09
  • Another example is the "smart security key" devices issued by some UK banks. You "have" the device, but when you use it, it generates and displays a code which you then enter into the bank's online website. There is no way to "know" what the next code is going to be, before you need to use it, and it is only valid to authorize one transaction. And if you enter it incorrectly, you don't get multiple attempts with the same code - you have to generate a new code for each attempt. (The physical devices are now being replaced by phone apps, but the principle is the same). – alephzero Jul 01 '21 at 17:31
  • 2
    SMS? I know enough to fool an AT&T tech into issuing me a new SIM card. https://www.bankinfosecurity.com/att-sued-over-24m-cryptocurrency-sim-hijack-attacks-a-11365 – Aron Jul 02 '21 at 03:17
  • Email is a password you know. SMS is some kind of key the SIM card knows. – user253751 Jul 02 '21 at 09:17
  • 1
    Fair points that email might only be something you know, but it might also be set up with strong MFA. I would argue that keys internal to a SIM card are a "have" unless you know of a way to extract them. – Mike Ounsworth Jul 02 '21 at 13:14
6

In all cases, the verification deals with information, and nothing but information. "Something you own" is a helpful concept, but as you noticed, when you actually get down to the nitty gritty, every verification is information.

Every verification consists of the user providing information that is easily known by the correct user and hard to know for anyone else. Full stop.

The concept of "something you own" comes from identifying which kinds of information are particularly useful (easily known by correct user and hard to know for anyone else).

  • Something you know - this is information which can be kept in the brain of the user. Thus, it theoretically cannot be stolen without the user knowing it (or taking their brain), but it can be divulged and can definitely be copied once the information is provided.
  • Something you have - this is information that is very difficult to know unless you are in possession of a physical object. A smart card is an excellent example. Theoretically, all you need to break a smart-card authentication is to know the private key information in the silicon chip. We make it very difficult to get to this. So difficult, in fact, that the user typically doesn't know enough to be able to divulge it, and an attacker typically has to maintain possession.
  • Something you are - This is information that is virtually impossible to know unless the authenticated individual is physically present. I'd argue this is a special subset of "something you have." You have your body. The special aspect of this is that it (theoretically) cannot be taken away from the person. Possession of "something you are" implies that the valid user is indeed present.

The idea of the trifecta of "something you know, something you own, and something you are" is that it is remarkably difficult to successfully steal the credentials for all three of these simultaneously. The attack vectors which are good at beating one kind of credential are not so good at at least one of the others.

When you treat these not as crisp, clear categories, but as fuzzy guidelines, your corner cases are properly fuzzy. In the case of a password that is written down, the information is still the password, but it's not in the brain of the user, so they can't forget it and can't divulge it to anyone. This makes it act more like "something you have," except it's a very poor choice because it is easy for an attacker to use the credentials when not in possession -- they can copy the password to a new piece of paper relatively easily.

As an extreme case, consider the Chinese seals. These were physical seals that had to be used to authenticate documents. They are clearly a "something you have" type of measure. In the end, the authentication is just information. Someone looks at the printed stamp placed on the paper, and identifies particular idiosyncrasies (wear marks, fractures, etc.). Technically this could be defeated by simply putting all of the idiosyncrasies in the right places. It is just information, after all. However, in those days, it was extremely difficult to generate this right information without the physical object. An artist couldn't simply carve a duplicate which matched all of the quirks. This made it not only "something you have," but a particularly effective example of that. That being said, in the end, all that was ever conveyed was information.

Toby Speight
  • 1,214
  • 9
  • 17
Cort Ammon
  • 9,206
  • 3
  • 25
  • 26
  • The other special aspect of "something you are" is that it can't (normally) be added to or changed in any other way. (And it can be problematic when, e.g., injury changes a relevant biometric). – Toby Speight Jul 01 '21 at 05:57
  • 1
    As a minor note, here I assumed online verification, which is typically the topic of this kind of discussion. For an in person verification, the queries do not always come in the form of information. As an example, we may be required to provide a physical passport with a picture to a customs officer. In this case, the passport may be studied truly as "something you have," and the physical objects are much more difficult to forge than, say, a picture of one (information). – Cort Ammon Jul 01 '21 at 22:25
  • The "everything is just information" is the crucial part of *the* answer. +1 for that – iBug Jul 02 '21 at 06:58
  • 1
    @TobySpeight "Something you are" has often been demonstrated to be fake-able (where it helps that the corresponding sensors are not perfect). Fingerprints from used glasses or even just a hires photo of someone making a thumbs up gesture, forearm vein patterns with a cam installed in a hand drier on a public toilet, ... (not to mention every other spy movie where thumb prints and iris scans are converted from something you are to something I have by using an axe) – Hagen von Eitzen Jul 03 '21 at 09:32
1

As far as I understand it "something I have" should be something I have physical access to that nobody else has

It doesn't stop there. The authenticating system has its part to play.
If the authenticating system doesn't accept any challenge or any secret other than through the direct use of the something you have, then that something you have will not be reduced to something you know.

Take the example of a secured area where some locked doors accept only an RFID key tag. In this case, the system will accept only the something you have.
In another case, where doors are equipped with a keypad in addition to the RFID reader, the something you have could be reduced to something you know even if different secrets are used to authenticate the same user.

Peter Mortensen
  • 877
  • 5
  • 10
elsadek
  • 1,782
  • 2
  • 17
  • 53
  • 1
    How does that view become affected if you know the value on the RFID key tag, and can use that knowledge to make one with the same value? – TKoL Jul 01 '21 at 16:13
  • ^ this is just something the RFID tag knows. – user253751 Jul 02 '21 at 09:18
  • From the authentication system's perspective, what you have could be replaced with (or become) what you know if the setup allows it. – elsadek Jul 02 '21 at 10:21
0

Everything eventually boils down to something you know.

Even physical keys could be considered something you know, as they could be reproduced if you know the exact dimensions and materials. Likewise for something you are, as what you “are” has to be “read” somehow, which again is simply a matter of sending the right information to whatever reader is being used.

But that assumes a threat model at least at the nation-state level, possibly above.

In the normal course, something you have is something that can give you knowledge on demand in a secure manner. This is typically a hardware key or phone that can give you a PIN code or transmit such a code to a device.

When I write down or store a password, is this then considered something I have?

No, not only is it pure knowledge, it is easily sharable knowledge.

When I have a public/private RSA-keypair with 4096 bit and I remember the private key without storing it anywhere, is it something I know?

Yes. It’s something you know; how long it is is irrelevant. You can share it with others.

When I write down or store the private part of a public/private RSA-keypair with 4096 bit, is this then considered something I have?

No, it’s something you know.

As far as I understand it "something I have" should be something I have physical access to that nobody else has. I don't see how it is possible to prove that I have something when using a web application because everything gets reduced down to the bits sent in a request and everyone could send the same bits. How does sending a specific sequence of bits prove that I have physical access to a certain device?

That depends on what you mean by prove and what the threat model is. If your bad actor is a nation state that can observe you and your environment or duplicate your device’s functionality, then it might not help at all. If someone is holding your entire family (scale that up to the world if you like) hostage, you may give those bits away to someone very far from you physically. But again, that is not your typical threat.

Your typical situation is that the device has been programmed in such a way that it and another device share a secret and are able to use that shared knowledge to create bits in a certain way at particular times or in response to particular input. The portable device is designed in such a way as to make it difficult to impossible to extract the shared secret from it. From this it follows that if you can produce the right bits at the right time, you have access to the device. Again, this is not 100% true: if the device has a screen which shows the bits (letters or numbers), anyone who can see the screen can know the numbers whether they are physically present or not. But it’s close enough to true to be useful. Also, the shared secret could have been leaked from the other end, but again, it is close enough to true to be useful. Security isn’t about 100%; security is about being good enough to reasonably rely on.
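
As a minimal sketch of that shared-secret idea (hypothetical names, Python standard library only; in a real token the secret sits in tamper-resistant hardware rather than in a variable):

```python
# Hypothetical illustration of a shared-secret token: the device answers a
# fresh challenge with an HMAC over it, so the secret itself is never sent.
import hashlib
import hmac
import secrets

device_secret = secrets.token_bytes(32)   # provisioned into the device once
server_copy = device_secret               # the server keeps its own copy

# The server issues a one-time challenge ("particular input").
challenge = secrets.token_bytes(16)

# The device computes the response using its embedded secret.
response = hmac.new(device_secret, challenge, hashlib.sha256).digest()

# The server recomputes the expected value and compares in constant time.
expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
print("device authenticated:", hmac.compare_digest(response, expected))
```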

jmoreno
  • 496
  • 2
  • 9
0

How does sending a specific sequence of bits prove that I have physical access to a certain device?

The same way as always - by sending some bits, you prove that you have access to a secret key which is known to identify you (or your access right to whatever you are trying to do). This is often called a "challenge" in cryptology. In the simplest case, the bits are simply the fingerprint or hash sum of some data (i.e., the details of the transaction you are trying to authorize, or a time stamp for one of those dongles used for logging into a system), signed (i.e. encrypted) with your private key.

The other party then decrypts those bits with the public key fitting your private key, compares it to the expected result, and if this is successful, you have proven that the physical item has been involved.

More on public-key cryptography.

So with respect to Two Factor Authentication, this directly leads to your answer: something you have (something physical) needs to store your secret key in a way which makes it reasonably hard to extract the key. If you implement your "security token" or whatever item it is in such a way that...

  • the protocol it speaks over the wire or air simply has no provision of ever transferring the key and
  • it is physically unassailable, usually because by opening it up you are automatically destroying it, and
  • the secret key is not stored somewhere else as a backdoor

... then there you have your "something you have" and "something you do not know".

Your security token (see Wikipedia for some commercial examples) must then be intelligent enough to do whatever crypto operation is required for your use-case itself, without transferring the key to the outside world.

Of course there are tons of other attack surfaces here. The attacker can steal the token and torture any further info (for example an additional PIN) out of you. They can try all kinds of man-in-the-middle attacks based on bugs in the implementation. They can attack the route of the transaction details (i.e., you are still in possession of your security token, but now you are not sending 50,- to your mom, but 50.000,- to the attacker instead). Etc.; but you very much have the secret key on your token and do not know what it is.

AnoE
  • 2,370
  • 1
  • 8
  • 12
0

Consider authentication factors from the point of view of the authenticating device and the expectations of its designers.

The expectation with "something you (uniquely) know" is that you've memorized a unique piece of information that can be used to identify you. The authenticating device can't tell if the actual source is your memory or a scrap of paper. So, as a memory aid, the scrap of paper is essentially "something you know."

The expectation with "something you have" is that you possess a unique object that can be used to identify you. A password written on a scrap of paper is too easy to duplicate and therefore cannot be "something you (uniquely) have."

Memorizing a private 4096-bit key could be considered "something you know," but no designer of authenticating devices expects you to memorize it. So, practically, a private key is only considered "something you have."

Anyone could send a sequence of bits to a web app and pretend they have something they don't (or for that matter pretend they know something they don't or are something they're not) but remember the point is to authenticate a previously stated identity. I say who I am, then I prove it with an authentication factor. So sending random bits to a web app may match someone's identity, but (statistically speaking) not the identity you're trying to impersonate.

0

This is an issue of conceptual bitrot. A great example of something you have is a photo ID with a hologram: it has your name, photo, and an anti-counterfeiting measure. This, like a door key, is undoubtedly something you have. The whole something-you-know / have / are breakdown only works well in a physical context. For example, if you saw me in person and I told you I was Britney Spears, you would not believe me because I have a beard and cannot sing. The challenges of online authentication make easy verification of physical objects impractical. This is what you are seeing when you observe that physical object authentication often also uses secrets. It's turtles all the way down. But here is the main difference when authenticating a device in order to authenticate a person: the person does not know the secret on the device. And remotely verifying physical attributes is even harder.

hildred
  • 449
  • 1
  • 4
  • 9
-2

You can have something without knowing what is on it when it's encrypted.

Let's take a pretty straight-forward case. Let's say that you're a police officer who's confiscated someone's encrypted hard drive. You have the hard drive - it's in your physical possession. However, unless you can break the encryption on the hard drive, you won't know what's on it. This is, ultimately, basically the entire point of encryption, and always has been, ever since the very first codes were created by people like Julius Caesar for communicating messages to his military units.

nick012000
  • 581
  • 1
  • 3
  • 7