19

I recently read an article about researchers who were able to penetrate the Facebook network and make lots of friends using about 100 "social" bots. What would prevent somebody from doing the same on Stack Exchange sites in order to increase their reputation? They might do this to finance bounties and have their questions answered quickly for free.

Update: here is how I imagine such an attack:

  • Initial seeding: create 100 profiles on a Stack Exchange site;
  • Have a Social Bot manage each profile;
  • Slowly (e.g., over several weeks), the social bots post questions, post answers (to questions asked by other social bots), and upvote questions and answers.

Result: you end up with 100 profiles having a pretty good reputation.

user1202136
  • 595
  • 4
  • 8
  • 2
    Depending on how blatant he is, the SE vote fraud script might catch him. – CodesInChaos Apr 11 '12 at 13:00
  • 2
    Define "pretty good" reputation. There are people with thousands of reputation points, dozens upon dozens of badges, you might get a couple hundred reputation points ( big deal ) out of this even if you could get by all the protection that exists. I find it funny this is being asked by a random user that doesn't even have an actual account with a username. – Ramhound Apr 12 '12 at 19:28

2 Answers

30

Stack Exchange has multiple layers of security preventing this. CAPTCHAs and email addresses are required. The email check is easy to beat with a script, but the CAPTCHAs aren't; you'd need a CAPTCHA-breaking service to even get this off the ground.

None of your bots can vote at the start, so you can't accumulate rep just by posting questions; human eyes have to look over your posts and manually give you your first reputation. At best, your bots could earn rep only by accepting each other's answers. After each bot has an accepted answer, however, it would be able to upvote.
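The privilege gating described above can be sketched as a simple threshold check. The 15-reputation upvote threshold matches Stack Exchange's real value; the other numbers and all the names here are illustrative, not SE's actual implementation:

```python
# Sketch of reputation-gated privileges, loosely modeled on Stack Exchange.
# Only the 15-rep upvote threshold is the real SE value; the rest is illustrative.

PRIVILEGE_THRESHOLDS = {
    "upvote": 15,
    "comment": 50,
    "downvote": 125,
}

def can_use(privilege: str, reputation: int) -> bool:
    """Return True if a user with this reputation has earned the privilege."""
    return reputation >= PRIVILEGE_THRESHOLDS[privilege]

# A freshly created bot (1 rep) cannot vote at all:
assert not can_use("upvote", 1)
# Only after earning rep (e.g., one accepted answer: +15) can it upvote:
assert can_use("upvote", 1 + 15)
```

This is why the scheme can't bootstrap itself: the bots need real humans to hand them their first reputation before any bot-to-bot voting is even possible.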

Once they can upvote, you start running into fraud detection: cross-voting, where a significant percentage of votes is exchanged between two users, is automatically flagged for moderator attention, and serial upvotes (more than a couple within a few minutes) are automatically reversed. You'd have to be VERY careful in how you implement this.
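A simplified version of the cross-voting heuristic described above might look like the sketch below. The 60% threshold is a made-up illustration, not SE's real parameter, and the function is hypothetical:

```python
from collections import Counter

def flag_cross_voting(votes, threshold=0.6):
    """Flag (voter, target) pairs where a large fraction of a voter's
    upvotes go to a single user.  `votes` is a list of (voter, target)
    tuples; `threshold` is an illustrative cutoff, not SE's real value."""
    per_voter = Counter(voter for voter, _ in votes)      # total votes cast by each voter
    per_pair = Counter(votes)                             # votes per (voter, target) pair
    flagged = []
    for (voter, target), n in per_pair.items():
        if n / per_voter[voter] >= threshold:
            flagged.append((voter, target))
    return flagged

# Bot "a" gives 4 of its 5 upvotes to bot "b" -> the pair is flagged for review.
votes = [("a", "b")] * 4 + [("a", "c")] + [("x", "y")]
assert ("a", "b") in flag_cross_voting(votes)
assert ("a", "c") not in flag_cross_voting(votes)
```

A real system would also weigh timing (the serial-upvote reversal mentioned above), IP addresses, and account age, but even this toy version shows why a small ring of bots voting mainly for each other stands out statistically.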

Even if you get past the automated fraud detection, which would require significant effort and tools programmed specifically for the Stack Exchange network, you run into the problem of human moderation. All posts on SE get human eyeballs on them. If your bots are asking duplicate questions, they're going to get closed. If they're asking gibberish or spam questions, they're going to get deleted, and deleted posts don't earn rep (certain exceptional circumstances aside).

Basically, your bots would have to post actual, new, good questions in order to not be found out and shut down. SE has some decent automatic detection scripts here, but an army of bots simply wouldn't go unnoticed, because you'd need a remarkable artificial intelligence to actually write enough new questions and answers to pull this off.

Hacking Facebook can be a lot easier because no one has to see your bot friends; Facebook profiles can go without human scrutiny, while Stack Exchange posts cannot.

I'm not a Stack Exchange employee, however, so there may well be additional security measures I'm unaware of; as a user and moderator, these are the significant points I know of.

Ben Brocka
  • 812
  • 7
  • 16
  • I'm quite sure some modest reputation building could be established if desired. (E.g., create 10 bots from different IP addresses/user agents, legitimately get them 15 rep, and then use them to occasionally upvote some legitimate posts after a time delay, as well as to look at other users and simulate normal behavior.) This could maybe boost someone's reputation by a factor of ~2. However, since rep is just a meaningless number in the end, this strategy is more trouble than it's worth. (If someone hires you based on your rep without looking at the quality of your answers, that's their issue.) – dr jimbob Apr 11 '12 at 21:50
  • 4
    @drjimbob bots are a lot more effort than manual sock puppets for this purpose though, and they don't really have a way to automatically gain enough rep to vote anyway. It's a significantly different problem from the relatively easy Facebook bot strategy. – Ben Brocka Apr 11 '12 at 21:59
  • 5
    Great answer! One additional note on gibberish/spam posts: if enough of your posts on high-traffic SE sites are downvoted/deleted, your account is automatically banned from creating more posts. This makes botting a lot more difficult, and really cuts down on the amount of human moderation required on sites like SO. – Bill the Lizard May 18 '12 at 14:50
  • 11
    http://xkcd.com/810/ – Josh Lee May 18 '12 at 14:53
9

If these bots are capable of crafting original, on-topic, quality questions and answers such that moderators and users don't catch them, I welcome them to the community.

chao-mu
  • 2,801
  • 18
  • 22