Obscurity is that which cannot be quantified.
Proper security comes with cost estimates. We say that a 128-bit encryption key is secure because we can estimate how much it would cost (in dedicated processors and electrical power, and ultimately in dollars) to find the key by exhaustive search (trying all possible 128-bit keys). When the cost is much higher than what an attacker would be willing to spend, and, in particular, when it is much higher than what any attacker with earth-based technology could possibly spend, then we have achieved security.
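To make that concrete, here is a back-of-the-envelope sketch of such a cost estimate. Every figure below (keys per second per chip, chip price, power draw, electricity price, attack duration) is an illustrative assumption, not measured data; the point is only that the arithmetic can be done at all.

```python
# Rough worst-case cost estimate for exhaustive search of a 128-bit key.
# All hardware and price figures are illustrative assumptions.

KEYSPACE = 2 ** 128            # keys to try in the worst case

KEYS_PER_SEC_PER_CHIP = 1e12   # assumed: a dedicated ASIC testing 10^12 keys/s
CHIP_COST_USD = 100.0          # assumed hardware cost per chip
CHIP_POWER_W = 100.0           # assumed power draw per chip, in watts
USD_PER_KWH = 0.05             # assumed electricity price
ATTACK_YEARS = 100.0           # how long the attacker is willing to wait

seconds = ATTACK_YEARS * 365.25 * 24 * 3600

# How many chips must run in parallel to cover the keyspace in time?
chips_needed = KEYSPACE / (KEYS_PER_SEC_PER_CHIP * seconds)
hardware_usd = chips_needed * CHIP_COST_USD

# Energy consumed by all chips over the full attack duration.
energy_kwh = chips_needed * CHIP_POWER_W * (seconds / 3600) / 1000
power_usd = energy_kwh * USD_PER_KWH

print(f"chips needed:  {chips_needed:.2e}")
print(f"hardware cost: ${hardware_usd:.2e}")
print(f"power cost:    ${power_usd:.2e}")
```

Even with these very generous assumptions, the hardware bill alone comes out many orders of magnitude above the gross world product, which is exactly the kind of statement a security claim can be built on.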
When such an estimate is not possible, then it is security by obscurity. For instance, assume that you have some sort of encryption system in a piece of software which you keep secret. How secret is that software? It is written on some hard disks. It was developed somewhere, and its source code exists, stored somewhere. How difficult is it for an attacker to recover the algorithm? Stored files leak in many places: old discarded computers, stolen laptops, indiscretions from subcontractors (the source code lives in files, but also in the brains of some programmers)... and if the attacker can get hold of the binary, he can disassemble it, a process which is not instantaneous but is limited only by the wit of the attacker.
The point of all this is that while keeping some code secret certainly makes the attacker's task harder, it is very difficult to say how much harder, when you want to express it in dollars.
So here is my answer to your question: to prevent implementers from using security by obscurity, require that they produce detailed, justified cost estimates for attacks.