I'm facing rampant scraping and abuse on a website that costs me a good chunk of money to maintain. I have been looking into a few solutions, and apparently most of them fingerprint the client in some form.
However, the premise of fingerprinting seems problematic to me. Since fingerprinting involves a series of tests run in the browser whose results are then submitted to a server, it would be trivial to capture the parameters that make up a good fingerprint from a real browser once, then replay them to the server every time fingerprinting is attempted, making the whole scheme nearly useless.
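To make the concern concrete, here is a minimal sketch of the replay I have in mind (all field names, values, and the hashing scheme are hypothetical, not taken from any specific fingerprinting library): capture the parameters a fingerprinting script collects once, then re-submit the identical payload on every subsequent check.

```python
import hashlib
import json

# Hypothetical fingerprint parameters captured once from a real browser.
# Field names and values are illustrative only.
captured = {
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": [1920, 1080],
    "timezone": "America/New_York",
    "canvasHash": "a3f1c2d4e5b6",
    "webglVendor": "Google Inc. (NVIDIA)",
}

def fingerprint_payload(params: dict) -> str:
    """Serialize the parameters deterministically, as a client-side
    script might before POSTing them to the fingerprinting endpoint."""
    return json.dumps(params, sort_keys=True)

def server_side_id(payload: str) -> str:
    """What a naive server might derive: a stable hash of the payload."""
    return hashlib.sha256(payload.encode()).hexdigest()

# A bot replaying the captured parameters produces the same ID every
# time, so to this naive scheme it looks identical to the original
# browser that the parameters were lifted from.
first = server_side_id(fingerprint_payload(captured))
replayed = server_side_id(fingerprint_payload(captured))
assert first == replayed
```

If detection really reduces to "does the submitted parameter set look like a real browser's", then a replayed capture passes by construction, which is the crux of my question.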
Is there a reason why browser fingerprinting would work to detect a bot when it can be easily subverted?