My approach to reducing the risk of being hacked, on both products and installations, has often been to create false footprints.
In my own experience, the servers I've spent the most time (and frustration) hacking have been those that claimed to be something they are not.
For example, running fake services on certain ports that imitate a Windows Server 2008 machine, while the server is in fact a completely different type.
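To make it concrete, here is a minimal sketch of the kind of false footprint I mean: a small listener that answers with a fabricated banner suggesting a Windows SMTP service. The banner text, hostname, and port are purely illustrative, not taken from any real deployment:

```python
# Minimal sketch of a "false footprint" listener: it answers on a port
# with a banner mimicking a Windows Server 2008 box, while the real
# host is something else entirely. Banner, hostname, and port are
# illustrative placeholders.
import socketserver

FAKE_BANNER = b"220 mail.example.com Microsoft ESMTP MAIL Service ready\r\n"

class FakeBannerHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Send the fabricated service banner, then log whatever the
        # client sends so probes against the decoy can be reviewed.
        self.request.sendall(FAKE_BANNER)
        data = self.request.recv(1024)
        print(f"probe from {self.client_address[0]}: {data!r}")

if __name__ == "__main__":
    # Port 2525 avoids needing root for the sketch; a real decoy
    # would sit on the standard port (25 for SMTP).
    with socketserver.ThreadingTCPServer(("0.0.0.0", 2525), FakeBannerHandler) as srv:
        srv.serve_forever()
```

A scanner doing simple banner-grab fingerprinting would report this as a Microsoft SMTP service, which is exactly the kind of misdirection I'm asking about.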
This assumes, of course, that one takes all the normal approaches to system security first: the traditional measures such as code reviews and system hardening, followed by offensive security testing with penetration testers.
What are the downsides?
I would specifically appreciate links to articles and sources on the topic.
Am I fooling myself into thinking this has any effect? Personally, I know that I latch onto the first sign of a specific system and would waste time on it, most likely increasing (A) the chance of my giving up and trying another approach (or another server/service/target), and (B) the chance of discovery and back-tracking from the attacked target.