Over the last five years, India has systematically rolled out a biometric identification scheme that now has over 800 million enrolled users. To keep matching accurate for such a large population, the Biometrics Standards Committee proposed collecting all 10 fingerprints where physically possible, presumably so that identification can use a stricter threshold (e.g. 80% confidence on 3 or more fingers).
A PoC conducted by the Unique Identification Authority of India found that a suitable authentication technique would be to verify "two separate fingers for up to 3 attempts" (details here). This provides an accuracy of 99%, which the committee finds reasonable since authentication is 1:1 rather than 1:N (by analogy, a password is only checked against a single username during login). While this process also has plenty of privacy and security problems, those are fairly well studied.
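For intuition, here is a rough sketch of how that policy's numbers combine (the per-finger rates below are hypothetical placeholders, not figures from the PoC): an attempt succeeds only if both fingers match, and the user gets up to three attempts.

```python
# Back-of-the-envelope model of the "two fingers, up to 3 attempts" 1:1 policy.
# FRR_FINGER / FAR_FINGER are assumed illustrative rates, not UIDAI figures.
FRR_FINGER = 0.05    # assumed false reject rate per finger, per attempt
FAR_FINGER = 1e-4    # assumed false accept rate per finger, per attempt

# A single attempt succeeds only if both fingers match (independence assumed).
p_genuine_attempt = (1 - FRR_FINGER) ** 2   # genuine user passes one attempt
p_impostor_attempt = FAR_FINGER ** 2        # impostor passes one attempt

# The user is accepted if any of up to 3 attempts succeeds.
p_genuine_accept = 1 - (1 - p_genuine_attempt) ** 3
p_impostor_accept = 1 - (1 - p_impostor_attempt) ** 3

print(f"genuine accept rate:  {p_genuine_accept:.4%}")   # ~99.91% with these inputs
print(f"impostor accept rate: {p_impostor_accept:.2e}")  # ~3e-08
```

The retries cut the false reject rate but also triple the impostor's chances. Either way, none of these probabilities depend on how many people are enrolled, which is exactly why 1:1 verification scales.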
However, enrolling a new user is supposed to involve deduplication, which is 1:N (a user should not be enrolled if their fingerprints match an existing record). Assuming that only "the best two fingers are being matched", is it reasonable to conclude that this deduplication cannot actually be carried out at enrollment, since beyond a certain number of users collisions become practically inevitable? The last report I've seen indicated 34,015 duplicates when 290 million people were enrolled (~0.01%).
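As a sanity check on those figures, one can back out an implied per-comparison false match rate and extrapolate. This is a crude first-order sketch: it assumes every reported duplicate is a false match and that each enrollee was compared against the full gallery (in reality the gallery grows during rollout, so the true rate would be somewhat higher).

```python
import math

# Reported figures from the question: 34,015 duplicates at 290M enrolled.
DUPLICATES = 34_015
ENROLLED = 290_000_000

# Per-enrollee duplicate rate (~0.01%, as quoted above).
per_enrollee_rate = DUPLICATES / ENROLLED

# Crude implied false match rate for a single pairwise comparison,
# assuming each enrollee was checked against the whole gallery.
implied_fmr = per_enrollee_rate / ENROLLED
print(f"implied per-comparison FMR ~ {implied_fmr:.1e}")  # ~4e-13

# Probability that one new enrollee falsely matches at least one record,
# computed with log1p/expm1 to stay numerically stable at these scales.
for n in (290_000_000, 800_000_000):
    p_false_hit = -math.expm1(n * math.log1p(-implied_fmr))
    print(f"N = {n:>11,}: P(new enrollee falsely matches) = {p_false_hit:.4%}")
```

Under these assumptions the per-enrollee false-hit probability climbs roughly linearly with gallery size, from ~0.012% at 290M to ~0.032% at 800M.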
Is such a system truly feasible and scalable? The birthday paradox seems to indicate that an ever-growing number of users should have "doppelgangers" (even assuming a 0.01% collision rate over 800M users). Are there techniques that could reliably and automatically distinguish "true duplicates" from "false duplicates" in such a system? Do biometric systems get progressively worse as users are added?
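The birthday-paradox concern can be made concrete: the number of candidate pairs grows as N(N-1)/2, so with any fixed per-pair false match rate the expected number of doppelganger pairs grows quadratically (reusing the hypothetical FMR backed out above):

```python
# Birthday-paradox view: every pair of enrollees is a chance to collide,
# so expected false-match pairs scale with N^2, not N.
FMR_PAIR = 4e-13  # hypothetical per-comparison rate from the sketch above

for n in (290_000_000, 800_000_000):
    pairs = n * (n - 1) // 2
    expected_pairs = pairs * FMR_PAIR
    print(f"N = {n:>11,}: ~{expected_pairs:,.0f} expected false-match pairs")
```

Growing enrollment from 290M to 800M (about 2.8x) multiplies the expected collisions by roughly 7.6x, which is the quadratic blow-up behind the question.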
EDIT/TL;DR: Do biometric authentication systems have negative network effects, whereby they get progressively worse (less accurate/precise) as the number of users increases? If not, why not?