This answer mentions Bayesian poisoning in passing, and I've read the Wikipedia page, but I don't feel I've fully grasped it.
The first case, where a spammer sends a spam with a payload (a link, malicious file, etc.) and includes lots of non-spammy "safe" words, seems obvious enough. The aim is to raise the rating of that individual email so that spam filters might class it as "not spam".
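To check my understanding of this first case, here's a toy sketch (all word probabilities are invented for illustration) of how padding with "safe" words drags down the combined naive Bayes score:

```python
from math import prod

# Hypothetical per-word spam probabilities from a trained Bayesian filter
# (values made up for illustration).
word_spamminess = {
    "viagra": 0.99, "winner": 0.95,           # spammy payload words
    "meeting": 0.10, "thanks": 0.15,
    "report": 0.12, "weather": 0.20,          # padded "safe" words
}

def spam_score(words):
    """Standard naive Bayes combination of per-word spam probabilities."""
    ps = [word_spamminess[w] for w in words]
    spam = prod(ps)                  # P(words | spam), up to a constant
    ham = prod(1 - p for p in ps)    # P(words | ham), up to a constant
    return spam / (spam + ham)

print(spam_score(["viagra", "winner"]))            # payload alone: ~0.999
print(spam_score(["viagra", "winner", "meeting",
                  "thanks", "report", "weather"]))  # padded: ~0.56
```

With a typical classification threshold of 0.9, the padded message slips through even though the payload words are unchanged.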
The second case is more subtle and (to me) confusing:
Spammers also hope to cause the spam filter to have a higher false positive rate by turning previously innocent words into spammy words in the Bayesian database (statistical type I errors) because a user who trains their spam filter on a poisoned message will be indicating to the filter that the words added by the spammer are a good indication of spam.
How does this help the spammer? Sure, false positives (if I've understood correctly that this means legitimate emails wrongly classed as spam) are annoying, but they would have to be very common before users disabled spam filters entirely. It doesn't seem like this would change the rating of real spammy words, or does it just affect their relative rating?
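If I've understood the mechanism, here's how I picture it affecting a toy word-count database (numbers invented) when a user trains the filter on poisoned messages:

```python
# Toy database: how often each word appeared in spam vs ham training messages.
counts = {
    "pharmacy": {"spam": 40, "ham": 1},
    "picnic":   {"spam": 1,  "ham": 30},   # innocent word the spammer injects
}

def spamminess(word):
    """Fraction of training occurrences of this word that were in spam."""
    c = counts[word]
    return c["spam"] / (c["spam"] + c["ham"])

before = spamminess("picnic")              # ~0.03: strongly ham-indicating

# The user marks 10 poisoned spams containing "picnic" as spam,
# so the filter counts "picnic" as having appeared in 10 more spams:
for _ in range(10):
    counts["picnic"]["spam"] += 1

after = spamminess("picnic")               # ~0.27: much less ham-indicating
```

So "picnic" never becomes outright spammy here, but it loses its power as evidence of ham, and a legitimate picnic invitation now scores closer to the threshold: a false positive becomes more likely.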
Finally, does this, or any other, approach help an individual spammer with a particular few spam words they'd like to sneak through the filters, or would it potentially help all spammers?
Could someone provide or link to an example-based explanation?