A. K. Lenstra et al., in an extensive examination of the RSA moduli employed in practice (http://infoscience.epfl.ch/record/174943/files/eprint.pdf, p. 10), wrote that they "could not explain the relative frequencies and appearance of the occurrence of duplicate RSA moduli and depth one trees" found in their study.
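(For readers unfamiliar with the technical background: if two moduli happen to share a prime factor, a single gcd computation exposes it, and this is the basis of the kind of batch-gcd scan such studies run over large collections of moduli. A tiny illustration in Python, with toy primes of my own choosing:)

```python
# How a shared factor reveals itself: two moduli built with a common
# prime are both factored by one gcd computation.  The primes here are
# tiny toy values, purely for illustration.
from math import gcd

p, q1, q2 = 1000003, 1000033, 1000037   # toy primes
n1, n2 = p * q1, p * q2                 # two moduli sharing the prime p
g = gcd(n1, n2)
assert g == p                           # both moduli are now factored
print(n1 // g, n2 // g)                 # the remaining cofactors q1, q2
```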
How could the high security risks arising from this direction be effectively countered in the current practice of RSA applications on the Internet?
Having in the past done two implementations of AES, I was surprised to learn just recently in another community that NIST has, in addition to the known-answer tests in Appendix C of FIPS-197, issued a document specifying a far more comprehensive test suite, with which an accredited laboratory can independently certify the correctness of a vendor's implementation in a variety of application scenarios, i.e. not merely the straightforward transformation of a block of 128 input bits to the corresponding output bits under a given 128-, 192-, or 256-bit key. See csrc.nist.gov/groups/STM/cavp/documents/aes/AESAVS.pdf. Now, even a commercial AES program with some added convenience features is apparently much smaller in size and simpler in programming logic and organization than the diverse open-source or closed-source packages currently used in practice to support the employment of RSA for securing end-user communications on the Internet. If it is deemed desirable or necessary to verify the correctness of an AES implementation this carefully, how much MORE effort should sensibly be spent on ensuring the correctness of RSA software, and that EXTREMELY urgently in view of the high security risks mentioned above?
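(As an aside, for readers who have not seen such tests: the known-answer idea itself is very simple. Below is a minimal sketch in Python of the kind of check that Appendix C of FIPS-197 enables; I use the pycryptodome library here merely as a stand-in for whatever implementation is actually under test.)

```python
# Minimal known-answer test in the spirit of FIPS-197 Appendix C.1.
# pycryptodome serves as the implementation under test; any AES
# library exposing the raw single-block transform could be substituted.
from Crypto.Cipher import AES

key       = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected  = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

cipher = AES.new(key, AES.MODE_ECB)  # raw block transform, no padding
assert cipher.encrypt(plaintext) == expected, "known-answer test failed"
assert cipher.decrypt(expected) == plaintext, "inverse transform failed"
print("FIPS-197 Appendix C.1 known-answer test passed")
```

AESAVS goes far beyond such single known-answer checks, with multi-block message tests and Monte Carlo tests, which is precisely the point of the comparison.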
Efficiency is naturally one of the desiderata of any software. However, insecure IT-security software is evidently useless, no matter how fast it runs. There are of course applications where extremely high speed is essential, e.g. automated stock trading, where some financial firms have located their computing centres as near as possible to the stock exchanges in order to gain the advantage of placing orders some microseconds earlier than their competitors; recently a tower is reportedly planned in London to provide fast direct microwave links to corresponding locations in Europe for the same purpose. Nevertheless, I strongly believe the absolute majority of common end users would find it acceptable if, e.g., a web page took one second longer to appear on their monitors than it currently does, provided the applications were rendered much more secure thereby, e.g. via the use of open-source software certified by publicly trusted institutions. (Note on the other hand that we could let, e.g., the above-mentioned financial firms develop their own special communication hardware/software and be accordingly responsible for the security of their own products.)
Obviously it isn't difficult, from the viewpoint of software engineering, to specify (analogously to the case of AES and AESAVS) an RSA key generation program unit that has a standard interface to its environment. If this were ISO standardized, implementations could be certified independently by national standardization bodies and/or professional IT associations of diverse countries of the world. While nothing in the real world is 100.00% perfect (excepting certain mathematical theories that are logically impeccable, though still contingent on their axioms), the trustworthiness of such an implementation obviously becomes higher as the number of certifications it has obtained increases. In this situation, at least a common end user encrypting/decrypting his personal communications could easily choose, for his RSA key generation, between such a certified implementation (which may not be very efficient) and an uncertified one that is more efficient and offers lots of convenience features.
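To illustrate what I mean by a standard interface, here is a rough sketch in Python. The function name and the injectable rand_bytes parameter are my own inventions, not part of any existing standard; the essential point is that all randomness flows through one replaceable source, so that a certifying laboratory could substitute a deterministic byte source and compare the resulting key pair against reference answers, much as AESAVS does for AES.

```python
# A minimal, illustrative sketch (not a vetted implementation!) of a
# standardizable RSA key generation unit.  All names are assumptions.
import math
import secrets
from dataclasses import dataclass
from typing import Callable

@dataclass
class RSAKeyPair:
    n: int  # modulus
    e: int  # public exponent
    d: int  # private exponent
    p: int  # prime factor
    q: int  # prime factor

def _is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = 2 + secrets.randbelow(n - 3)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_rsa_keypair(
    bits: int,
    e: int = 65537,
    rand_bytes: Callable[[int], bytes] = secrets.token_bytes,
) -> RSAKeyPair:
    """Standardizable entry point: every random bit flows through the
    injectable rand_bytes source, making the unit independently
    testable and certifiable."""
    def random_prime(k: int) -> int:
        while True:
            c = int.from_bytes(rand_bytes(k // 8), "big")
            c |= (1 << (k - 1)) | 1           # full bit length, odd
            if math.gcd(e, c - 1) == 1 and _is_probable_prime(c):
                return c
    p = random_prime(bits // 2)
    q = random_prime(bits // 2)
    while q == p:                             # avoid the degenerate case
        q = random_prime(bits // 2)
    d = pow(e, -1, (p - 1) * (q - 1))
    return RSAKeyPair(n=p * q, e=e, d=d, p=p, q=q)
```

A test suite analogous to AESAVS could then feed fixed rand_bytes sequences and check the complete resulting key pairs against reference answers.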
There are certainly a vast number of applications in which CAs practically must be involved. This leads to the IMHO truly difficult issue of trust in the CAs (their cooperation with one another, the faithfulness and correctness of their employees' work, attempts of third parties to exercise various malicious influences on their work, etc.). However, if the goal sketched in the preceding paragraph could be satisfactorily achieved, that would already be an extremely essential achievement in countering the security risks currently facing common end users as a result of the phenomenon of shared prime factors among RSA moduli employed in practice. Improvements in the issues related to the CAs could be striven for simultaneously, but preferably with a comparatively lower priority in my personal opinion.
[added on Jan 10, edited on Jan 12:]
A possible cause of the phenomenon reported by Lenstra et al. could be that a certain non-open-source RSA key generation software employed in practice contains a backdoor. With PRNGs it is namely very simple to specify a set of N prime numbers (without having to explicitly store them in the software) to be pseudo-randomly selected for use in an RSA key generation session. The malicious party can choose N to his advantage to be fairly large without causing difficulties for his purposes, since the set of N primes can all be pre-generated by him, forming a list at his disposal with which to probe a given RSA modulus for a prime factor whenever that modulus happens to have been generated by the RSA software containing his backdoor.
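To make this concrete, here is a toy demonstration in Python. All parameters (SECRET_SEED, the pool size N, the prime size) are deliberately tiny and purely illustrative; in reality the malicious party could make N much larger.

```python
# Toy demonstration of the backdoor idea described above.  Parameters
# are deliberately tiny and insecure.  Assumes sympy for prime search.
import random
from sympy import nextprime

SECRET_SEED = 0x5EED        # known only to the author of the backdoor
N = 50                      # size of the secret prime pool (tiny here)
PRIME_BITS = 256

def pool_prime(index: int) -> int:
    """Derive the index-th pool prime from the seed on the fly; the
    shipped software never stores the list explicitly."""
    rng = random.Random(SECRET_SEED + index)
    return nextprime(rng.getrandbits(PRIME_BITS) | (1 << (PRIME_BITS - 1)))

def backdoored_modulus() -> int:
    """'Key generation' that secretly draws both primes from the pool."""
    i, j = random.sample(range(N), 2)
    return pool_prime(i) * pool_prime(j)

def attacker_factor(n: int):
    """The backdoor's author probes a captured modulus against his
    pre-generated list; a hit factors the modulus completely."""
    for i in range(N):
        p = pool_prime(i)
        if n % p == 0:
            return p, n // p
    return None             # modulus did not come from this software

n = backdoored_modulus()
print(attacker_factor(n))   # recovers both prime factors of n
```

Note incidentally that moduli generated by such software will occasionally share a prime of the pool with one another, which would surface in a study of the kind done by Lenstra et al. precisely as shared prime factors among collected moduli.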
It may be remarked that backdoors of the genre described in the preceding paragraph are actually of a comparatively simple nature. More sophisticated backdoors in non-open-source RSA key generation software, practically impossible to detect, are conceivable. One idea for designing such backdoors, sketched by maartin decades ago, was recently explained by me in detail in the Epilogue of a software of mine (PROVABLEPRIME, http://s13.zetaboards.com/Crypto/topic/7234475/1/). The security risks stemming from this direction are IMHO much more hideous and lie clearly beyond the reach of studies of the kind done by Lenstra et al.