Who guarantees that facial recognition actually works? And what happens on a false positive?
Imagine the provider of a facial recognition system guaranteed that it is 99.99% accurate. That sounds decent enough, right?
Applied to the entire US population of roughly 320 million, that remaining 0.01% error rate would still mean about 32,000 people wrongly accused. And real-life face recognition software is nowhere near that accurate, and may have many more problems.
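The arithmetic behind that number can be sketched in a few lines. The population figure and the assumption that every resident is scanned exactly once are simplifications for illustration:

```python
# Back-of-the-envelope estimate of false accusations at a claimed
# accuracy, assuming (hypothetically) every US resident is scanned once.
US_POPULATION = 320_000_000  # rough figure
CLAIMED_ACCURACY = 0.9999    # the advertised 99.99%

# Everyone not correctly handled by the system is a potential wrong match.
wrongly_flagged = US_POPULATION * (1 - CLAIMED_ACCURACY)
print(f"{wrongly_flagged:,.0f} people wrongly flagged")  # 32,000 people wrongly flagged
```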
For example, if a system is trained almost exclusively on images of white people, it may produce strange results for other ethnic groups. Stereotypes like "all people from this ethnicity look the same" can then very quickly become a very real problem.
Aside from this, there are obvious privacy problems. Why are the police allowed to identify me? What if the information about my identity is cross-referenced with other data? And what if that data is incorrect too?
The problem is the same as above: we know the technology is flawed, the police may know the technology is flawed, but the officer in the field doesn't consider that this one specific case may be a false positive, because the thousands of matches before it may not have been.
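The officer's intuition fails here because of the base rate: when the people actually being searched for are rare in the scanned population, even a highly accurate system produces mostly false matches. A small sketch with entirely hypothetical numbers (watchlist size, sensitivity, and false positive rate are all assumptions for illustration):

```python
# Positive predictive value: how likely is a flagged person actually wanted?
# All numbers below are hypothetical, chosen only to illustrate the base
# rate effect.
population = 320_000_000
wanted = 1_000                 # people actually on the watchlist
sensitivity = 0.9999           # chance a wanted person is correctly flagged
false_positive_rate = 0.0001   # chance an innocent person is flagged anyway

true_hits = wanted * sensitivity
false_hits = (population - wanted) * false_positive_rate

# Fraction of all flagged people who are genuinely on the watchlist.
ppv = true_hits / (true_hits + false_hits)
print(f"Probability a flagged person is actually wanted: {ppv:.1%}")
```

With these numbers, roughly 97% of all matches are false positives, which is exactly why "the system flagged them" is weak evidence on its own.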