
I was reading a document discussing biometrics and came across this statement:

Typical finger-print verification systems employed by FBI achieve 90% probability of verification at 1% false accept rate but only 77% probability of verification at 0.01% false accept rate.

With this in mind, and seeing the relationship between the percentages, can someone explain to me how the probability of verification is inversely related to the false accept rate?

I understand the concept of FAR (false accept rate), and realize that to calculate it you use the following equation:

FAR = (impostor scores exceeding threshold) / (all impostor scores)
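
To make sure I have that calculation right, here is roughly how I picture it in code (a minimal sketch; the impostor scores and the threshold are made-up numbers, just for illustration):

```python
# Sketch of the FAR calculation described above.
# The impostor scores and the threshold are made-up illustrative values.
impostor_scores = [0.12, 0.35, 0.48, 0.71, 0.29, 0.55, 0.83, 0.40]
threshold = 0.70  # a comparison score above this counts as an accepted match

false_accepts = sum(1 for s in impostor_scores if s > threshold)
far = false_accepts / len(impostor_scores)

print(f"FAR = {false_accepts}/{len(impostor_scores)} = {far:.2%}")  # 2/8 = 25.00%
```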

However, I just don't understand the concept of the "probability of verification" and especially the inverse relationship it has with FAR. If someone can kindly explain the two in conjunction with each other, that would be great.

Anders
ihoaxed

1 Answer


Ekhmm... they're not inversely related. They're directly related: when one increases, the other increases as well:

77% | 0.01%
90% | 1%

77% < 90% and 0.01% < 1%

This is really a machine learning question. The false accept rate is the false positive rate: in other words, it is the chance that a random person will be verified by their fingerprint as someone else.

Assume that Bob is an authorised person whose fingerprints we have loaded into our biometric system. Now, we can tune the parameters of the biometric sensor that checks whether Bob is Bob by using his fingerprint. If we tune the sensor to be very, very picky (the acceptance threshold is low), Bob has a 77% chance of verifying that he is himself on the first try. If Eve now tries to impersonate Bob through this very picky system, she has only a 0.01% chance that her fingerprints will be recognized as Bob's on her first try.

When we lower the "pickiness" of the sensor (use a bigger acceptance threshold), Bob will be able to verify that he is himself more easily: he will have a 90% chance that the sensor concludes he is Bob based on his fingerprints (on the first try). On the other hand, in this less picky system, Eve has a bigger chance to impersonate Bob: on her first attempt, when her fingerprints are read, there is a 1% chance that the sensor will conclude that Eve's fingerprints are actually Bob's (and authorize Eve as if she were Bob).
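
To make this concrete, here is a small simulation (a sketch with assumed, made-up distributions; the numbers will not match the article's exact figures). Genuine attempts (Bob) produce small differences from the stored fingerprint, impostor attempts (Eve) produce large ones, and raising the acceptance threshold raises both acceptance probabilities together:

```python
# Illustrative simulation with assumed (made-up) difference distributions.
# A "difference" is how far the captured fingerprint is from the stored one:
# genuine attempts (Bob) give small differences, impostor attempts (Eve) large ones.
# An attempt is accepted when the difference is at most the acceptance threshold.
import random

random.seed(0)
genuine_diffs  = [abs(random.gauss(0.10, 0.08)) for _ in range(100_000)]
impostor_diffs = [abs(random.gauss(0.60, 0.10)) for _ in range(100_000)]

def rates(threshold):
    """Probability of verification (Bob accepted) and FAR (Eve accepted)."""
    p_verify = sum(d <= threshold for d in genuine_diffs) / len(genuine_diffs)
    far      = sum(d <= threshold for d in impostor_diffs) / len(impostor_diffs)
    return p_verify, far

for threshold in (0.15, 0.30):  # a very picky sensor first, then a more lenient one
    p_verify, far = rates(threshold)
    print(f"threshold={threshold}: P(verification)={p_verify:.1%}, FAR={far:.3%}")
```

Both rates go up when the threshold goes up; there is no setting that moves them in opposite directions.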


Any machine learning system that has an acceptance threshold parameter will behave in the above fashion. In ML you will always have the following outcomes (a counting sketch follows the list):

  • true negatives - An impostor's fingerprints are concluded not to be Bob's
  • true positives - Bob's fingerprints are concluded to be Bob's
  • false negatives - Bob's fingerprints are concluded not to be Bob's
  • false positives - An impostor's fingerprints are concluded to be Bob's
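
Here is the counting sketch promised above (the attempts are invented, purely to show how each one falls into exactly one of the four categories):

```python
# Hypothetical verification attempts: (is really Bob, accepted by the sensor)
attempts = [
    (True,  True),   # Bob accepted              -> true positive
    (True,  False),  # Bob rejected              -> false negative
    (False, True),   # impostor accepted as Bob  -> false positive
    (False, False),  # impostor rejected         -> true negative
    (True,  True),
    (False, False),
]

counts = {"TP": 0, "FN": 0, "FP": 0, "TN": 0}
for is_really_bob, accepted in attempts:
    if is_really_bob:
        counts["TP" if accepted else "FN"] += 1
    else:
        counts["FP" if accepted else "TN"] += 1

print(counts)  # {'TP': 2, 'FN': 1, 'FP': 1, 'TN': 2}
```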

The perfect ML system would give only true positives and true negatives, but it does not work like that. There is noise, and noise requires some acceptance threshold. A sensor without any acceptance threshold would only produce true negatives and false negatives, but that would be completely useless because no one would ever be authorised to do anything.

An acceptance threshold is the amount of difference allowed between the fingerprints stored in the database and the ones read by the sensor (and processed by the ML system). Making this threshold bigger increases the number of true positives and of false positives (remember that without a threshold we had only true negatives and false negatives).

In the context of the article, the "probability of verification" is the rate of true positives, and the "false accept rate" is the rate of false positives.
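
Expressed as rates over those four counts (the counts below are invented, chosen so that the rates come out to the figures quoted in the question):

```python
# "Probability of verification" and "false accept rate" in terms of the four counts.
# The counts are hypothetical, picked so the rates match the figures in the question.
def verification_and_far(tp, fn, fp, tn):
    p_verify = tp / (tp + fn)  # genuine attempts that are accepted
    far      = fp / (fp + tn)  # impostor attempts that are (wrongly) accepted
    return p_verify, far

# A bigger acceptance threshold turns some false negatives into true positives and
# some true negatives into false positives, so both rates grow together:
print(verification_and_far(tp=77, fn=23, fp=1,   tn=9999))  # picky:   (0.77, 0.0001) -> 77%, 0.01%
print(verification_and_far(tp=90, fn=10, fp=100, tn=9900))  # lenient: (0.9, 0.01)    -> 90%, 1%
```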

grochmal