The answer depends on why you're even thinking of setting a 12-character limit.
If it's because the system is known to be trivial to attack with 11-character passwords, and proven to be computationally infeasible to attack with 12-character passwords even with the foreseeable compute resources of the planet, then you make it an absolute requirement. Anyone who can't or won't implement that limit simply doesn't meet your security policy, and that's that. The consequences for them might be really bad (they can't sell their product), or manageable, or absolutely insignificant (they use a different system whose security rests on something other than a 12-character limit and therefore has a different policy), but those consequences are justified.
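To make the "computationally infeasible" side concrete, here is a back-of-the-envelope sketch. The numbers are assumptions for illustration only (a 95-character printable alphabet and a hypothetical attacker rate of 10^12 guesses per second; neither comes from the policy), and the absolute figures shift wildly with the attack model, but the structural point holds: each extra character multiplies the brute-force keyspace by the alphabet size.

```python
# Back-of-the-envelope keyspace growth per extra password character.
# Assumed, illustrative figures: 95 printable ASCII characters and an
# attacker rate of 1e12 guesses/second -- not figures from the policy.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def years_to_exhaust(length, alphabet=95, guesses_per_second=1e12):
    """Worst-case years to try every password of the given length."""
    return alphabet ** length / guesses_per_second / SECONDS_PER_YEAR


for length in (11, 12):
    print(f"{length} chars: ~{years_to_exhaust(length):,.0f} years to exhaust")
    # Going from 11 to 12 characters multiplies the work by 95x,
    # whatever guess rate you plug in.
```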
If you set a 12-character limit because you have no idea what limit you should set, and 6 is kind of OK but not enough forever so you doubled it, then indeed you should take account of what is practical to implement. Presumably you do want people to use your policy, in preference to the alternatives: an employee finding a job that doesn't drive them insane with impossible demands; the CEO flat-out ignoring it and buying something that meets no part of it; a vendor finding another customer or doubling their prices to cover the additional effort. So if the policy contains arbitrary and onerous requirements, you're unnecessarily risking all of those.
In practice you're somewhere in between those two extremes. But once you know why you're making the requirement, you can decide:
- it's a hard requirement. Anything that doesn't satisfy it doesn't accord with the policy.
- it's a recommendation that you believe to be achievable, and you explain the consequences of not satisfying it.
- it's half-baked. You need to do more work to establish what the requirement should be before setting policy.
Consider also the goal of the policy. If the object of the exercise is to ensure that you don't use vendors with poor or average security setups, then a requirement that can only be met by those with good security is doing its job when it excludes vendors who don't have one, whether because of their small size or otherwise. The policy is there to guide people; it's also there to exclude people who can't meet it.
> Who is going to get blamed when things happen? The security team, right?
Yes, and there are two ways your policy can fail and you'll get blamed. It can include someone who, with hindsight, you realise it should have excluded, and you get blamed for a security breach. It can exclude someone who, with hindsight, you realise it should have included, and you get blamed for obstructing the business of the company. A critical system cuts both ways -- you don't want it to be insecure, and you also don't want it to never get built because it's too hard to meet the policy. Your job is to find something that's both adequate and achievable (or, if you can't do that, at least to explain why not so that someone else can get involved in changing the constraints you're working with).
A massive security breach is bad, but if it were inherently worse than insolvency, companies would "play safe" by (for example) never connecting anything to the internet in the first place, and all go bust. For obvious reasons this is not what they choose to do. So the provisions of the policy need to be justified, and the justifications need to be available for review later, so that even if something goes wrong they will still look like reasonably good decisions given what was known at the time you made them.