We have a discussion in our company about password security for our application.
Besides the fact that we use two-factor authentication (certificates + login/password), we currently enforce passwords that are at least 12 characters long.
Currently there is no requirement to use non-alphanumeric characters, and from my perspective there is little to no security gain in forcing users to mix alphanumeric and non-alphanumeric characters.
From my understanding, given the same alphabet of possible characters, the entropy of 123456789012
and !"§$%&/()=!"
should be the same, and both passwords should offer the "same degree" of security (the characters chosen from the alphabet differ, but the alphabet itself is the same).
The only difference would be the information available to the attacker: if they know that the first password uses only a subset of the alphabet (digits), the effective entropy is reduced.
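To put rough numbers on this, here is a minimal sketch. The 95-character printable-ASCII alphabet, the 10-character digits-only alphabet, and the use of length × log2(alphabet size) as the entropy measure are my assumptions for illustration, not a statement about our actual attacker model:

    import math

    def password_entropy_bits(length: int, alphabet_size: int) -> float:
        """Theoretical entropy of a uniformly random password:
        length * log2(alphabet_size)."""
        return length * math.log2(alphabet_size)

    # Both example passwords are 12 characters long.
    length = 12

    # Assumption: the full alphabet is all 95 printable ASCII characters.
    full_alphabet = 95
    # If the attacker learns the password is digits only, the alphabet shrinks to 10.
    digits_only = 10

    print(f"Full alphabet: {password_entropy_bits(length, full_alphabet):.1f} bits")
    print(f"Digits only:   {password_entropy_bits(length, digits_only):.1f} bits")

Run as-is, this prints roughly 78.8 bits versus 39.9 bits, which is exactly the "subset reduces entropy" point above.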
On the other hand, what makes the first password "insecure" is that it is highly predictable: the chances of a fast match are high if a dictionary is used. With slow hardware and a dictionary attack, choosing one over the other would really matter.
But with today's brute-forcing power (GPU/cloud), the difference in security is, from my point of view, negligible, so it seems valid to say: one is as secure as the other.
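For the brute-force side of the argument, a back-of-the-envelope conversion from entropy to search time could look like the sketch below. The guess rate of 10^10 guesses per second is a made-up figure purely for illustration; real rates depend heavily on the hash function and the hardware:

    def seconds_to_exhaust(entropy_bits: float, guesses_per_second: float) -> float:
        """Worst-case time to search the whole keyspace of 2**entropy_bits guesses."""
        return 2 ** entropy_bits / guesses_per_second

    # Hypothetical attacker speed; actual figures depend on hash and hardware.
    rate = 1e10

    for bits in (39.9, 78.8):
        secs = seconds_to_exhaust(bits, rate)
        print(f"{bits:5.1f} bits -> {secs:.3e} s (~{secs / 86400 / 365:.3e} years)")

This simply turns the entropy figures from above into a worst-case search time for whatever guess rate you want to plug in.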
tl;dr
Is the difference in security between 123456789012
and !"§$%&/()=!"
as passwords negligible nowadays?
(Strong, salted password hashing is presumed.)