I wonder whether it's better to standardize or to randomize data for anonymity. Browser fingerprinting is a good example. With standardization, every browser returns the same user-agent, the same list of installed fonts, the same window size, and so on. Every browser looks identical to every other, so they all blend into one anonymous crowd (except for the source IP, of course).

The other approach is randomization: the browser randomizes the data on every request, so for example the user-agent keeps changing (or at least changes fairly often), either picked from a large set of common user-agents or perhaps even generated at random. The same goes for all the other parameters, such as window size.
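To make the contrast concrete, here's a rough sketch of the two strategies (just illustrative TypeScript; the attribute values and function names are made up, not taken from any real tool):

```typescript
// Illustrative sketch only: the profiles and value pools below are invented.

interface Fingerprint {
  userAgent: string;
  windowSize: string;
  fonts: string[];
}

// Standardization: every browser reports one fixed profile,
// so individual users blend into a single, identical crowd.
function standardizedFingerprint(): Fingerprint {
  return {
    userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0",
    windowSize: "1920x1080",
    fonts: ["Arial", "Times New Roman", "Courier New"],
  };
}

// Randomization: each request draws its parameters from a pool of
// plausible values, so the reported profile keeps changing.
function randomizedFingerprint(): Fingerprint {
  const pick = <T>(options: T[]): T =>
    options[Math.floor(Math.random() * options.length)];
  return {
    userAgent: pick([
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0",
      "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
      "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    ]),
    windowSize: pick(["1920x1080", "1366x768", "1440x900"]),
    fonts: pick([
      ["Arial", "Helvetica"],
      ["Arial", "Times New Roman", "Courier New"],
      ["Verdana", "Georgia", "Tahoma"],
    ]),
  };
}
```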
I think there's basically no difference between the two approaches, except that randomization might confuse tracking systems a bit more and do some minor damage by polluting their data. Any small differences between supposedly standardized values might also be easier to spot, while randomized data might be harder to analyze, at least at first, until the trackers figure out a way to remove the noise.
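Just to illustrate what I mean by removing the noise (a hypothetical counter-move on the tracker's side, not something I know they actually do): if the randomization isn't applied on every single request, a tracker that can already link several requests (say, by IP or a session cookie) could recover a stable profile simply by taking the most frequent value of each attribute:

```typescript
// Hypothetical denoising sketch: given several observations the tracker
// can already tie together, keep the most frequent value per attribute
// to strip out per-request randomization.
function mostFrequent(values: string[]): string {
  const counts = new Map<string, number>();
  for (const v of values) {
    counts.set(v, (counts.get(v) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// Example: the randomized user-agents collapse back to the commonest one.
console.log(
  mostFrequent([
    "Firefox/115.0",
    "Chrome/120.0",
    "Firefox/115.0",
    "Safari/17.0",
    "Firefox/115.0",
  ]),
); // "Firefox/115.0"
```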