Social bot
A social bot (also: socialbot or socbot) is an agent that communicates more or less autonomously on social media, often with the task of influencing the course of a discussion and/or the opinions of its readers.[1] It is related to chatbots but relies mostly on rather simple interactions or on no reactivity at all. The messages it distributes (e.g. tweets) are mostly either very simple or prefabricated by humans, and it often operates in groups and in various configurations of partial human control (hybrid).[2] It typically serves to advocate certain ideas, support campaigns, or aggregate other sources, either by acting as a "follower" and/or by gathering followers itself. In this very limited respect, social bots can be said to have passed the Turing test.[3][4] If the expectation is that behind every social media profile there is a human, then social bots always operate through fake accounts; in this they do not differ from other automated uses of social media APIs.
Social bots appear to have played a significant role in the 2016 United States presidential election,[5][6] and their history appears to go back at least to the 2010 United States midterm elections.[7] It is estimated that 9–15% of active Twitter accounts may be social bots,[8] and that 15% of the Twitter accounts active in the discussion of the US presidential election were bots. At least 400,000 bots were responsible for about 3.8 million tweets, roughly 19% of the total volume.[5]
Twitterbots are already well-known examples, but corresponding autonomous agents have also been observed on Facebook and elsewhere. Nowadays, social bots are equipped with, or can generate, convincing internet personas that are quite capable of influencing real people,[9][3][10] although they are not always reliable.[11]
Besides being able to (re)produce or reuse messages autonomously, social bots share many traits with spambots, in particular the tendency to infiltrate large user groups.[12]
Using social bots is against the terms of service of many platforms, notably Twitter[13] and Instagram.[14] However, a certain degree of automation is intentionally enabled by the platforms' own social media APIs.
Legal regulation of social bots is currently being discussed in many countries. However, because of the difficulty of recognizing social bots and of separating them from "eligible" automation via social media APIs, it is currently unclear how such regulation could be designed or enforced. In any case, social bots are expected to play a role in the future shaping of public opinion by acting autonomously as incessant, never-tiring influencers.[15][16]
Uses
Lutz Finger identifies five immediate uses for social bots:[17]
- foster fame: having an arbitrary number of (unrevealed) bots as (fake) followers can help simulate real success
- spamming: having advertising bots in online chats is similar to email spam, but a lot more direct
- mischief: e.g. signing up an opponent with many fake identities and spamming the resulting account, or helping others discover it, in order to discredit the opponent
- bias public opinion: influence trends by countless messages of similar content with different phrasings
- limit free speech: important messages can be pushed out of sight by a deluge of automated bot messages
All of these effects resemble, and can support, the methods of traditional psychological warfare and information warfare.
Detection
The first generation of bots could sometimes be distinguished from real users by their often superhuman capacity to post messages around the clock (and at massive rates). Later generations have succeeded in imitating "human" activity and behavioral patterns more closely. Reliably identifying social bots therefore requires a variety of criteria to be applied together using pattern-detection techniques (a minimal scoring sketch combining several of these signals follows the list), some of which are:[10]
- profile pictures showing cartoon figures, or real users' photos appropriated without consent (identity fraud)
- reposting rate
- temporal patterns[18]
- sentiment expression
- followers-to-friends ratio[19][20]
- length of user names
- variability in (re)posted messages
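As an illustration only, the sketch below combines several of the listed signals into a single heuristic score. The feature names, thresholds, and weights are arbitrary assumptions made for this example; they are not taken from Botometer or any published detector.

```python
# Illustrative sketch only: the features mirror the criteria listed above,
# but the thresholds and weights are arbitrary assumptions, not values used
# by Botometer or any published system.

from dataclasses import dataclass


@dataclass
class AccountStats:
    """Aggregate account statistics assumed to be collected beforehand."""
    posts_per_day: float            # average posting rate
    repost_fraction: float          # share of posts that are reposts/retweets
    followers: int
    friends: int                    # accounts the user follows
    username_length: int
    distinct_message_ratio: float   # distinct messages / total messages


def bot_score(a: AccountStats) -> float:
    """Combine several weak signals into a single score in [0, 1].

    Each rule contributes its weight when the (hypothetical) threshold is
    exceeded; a higher score means more bot-like behaviour.
    """
    signals = [
        (a.posts_per_day > 100, 0.25),           # superhuman posting rate
        (a.repost_fraction > 0.9, 0.20),         # almost exclusively reposts
        (a.friends > 0 and a.followers / a.friends < 0.1, 0.20),  # skewed ratio
        (a.username_length > 15, 0.15),          # long, auto-generated-looking name
        (a.distinct_message_ratio < 0.2, 0.20),  # little variability in content
    ]
    return sum(weight for fired, weight in signals if fired)


if __name__ == "__main__":
    suspect = AccountStats(posts_per_day=240, repost_fraction=0.97,
                           followers=12, friends=800,
                           username_length=18, distinct_message_ratio=0.05)
    print(f"bot score: {bot_score(suspect):.2f}")  # close to 1.0 for this profile
```

Real systems such as Botometer do not use hand-set weights of this kind but learn them from labelled data over far more features.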
Botometer[21] (formerly BotOrNot) is a public web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[22][8] An active method that worked well for detecting early spam bots was to set up honeypot accounts that posted obviously nonsensical content, which was then mechanically reposted (retweeted) by bots.[23] However, recent studies[2] show that bots evolve quickly and that detection methods have to be updated constantly, because otherwise they may become useless after a few years.
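The honeypot method can be sketched as follows. The `fetch_reposters` helper, the post IDs, and the account names are hypothetical placeholders for whatever platform API or data dump would actually be queried; they are not a real endpoint.

```python
# Minimal sketch of the honeypot approach described above. `fetch_reposters`
# stands in for a real platform lookup; the IDs and accounts are sample data.

from typing import Dict, Iterable, Set

# IDs of deliberately nonsensical posts published by the honeypot account.
HONEYPOT_POST_IDS = ["1001", "1002", "1003"]


def fetch_reposters(post_id: str) -> Iterable[str]:
    """Stand-in for a platform lookup: return accounts that reposted a post."""
    sample = {
        "1001": ["acct_a", "acct_b", "acct_c"],
        "1002": ["acct_a", "acct_c"],
        "1003": ["acct_c", "acct_d"],
    }
    return sample.get(post_id, [])


def flag_honeypot_reposters(post_ids: Iterable[str], min_hits: int = 2) -> Set[str]:
    """Flag accounts that mechanically reposted several honeypot posts.

    Requiring more than one hit reduces false positives from curious humans
    who repost a single odd message.
    """
    hits: Dict[str, int] = {}
    for post_id in post_ids:
        for account in fetch_reposters(post_id):
            hits[account] = hits.get(account, 0) + 1
    return {account for account, count in hits.items() if count >= min_hits}


if __name__ == "__main__":
    print(flag_honeypot_reposters(HONEYPOT_POST_IDS))  # {'acct_a', 'acct_c'}
```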
See also
- Astroturfing
- Crowd manipulation
- Fake news website
- Internet bot
- Marketing and artificial intelligence
- Messaging spam
- On the Internet, nobody knows you're a dog
- Post-truth politics
- Search engine manipulation effect
- Social spam
- Sockpuppet (Internet)
- Technoself studies
- Twitter bomb
- Votebots
- Whispering campaign
References
- Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (July 2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96–104. doi:10.1145/2818717. Retrieved 27 February 2020.
- Grimme, Christian; Preuss, Mike; Adam, Lena; Trautmann, Heike (2017). "Social Bots: Human-Like by Means of Human Control?". Big Data. 5 (4): 279–293. arXiv:1706.07624. doi:10.1089/big.2017.0044. PMID 29235915.
- "What is socialbot? - Definition from WhatIs.com". whatis.techtarget.com. Retrieved 2016-12-16.
- "Social Media Bots Offer Phony Friends and Real Profit". The New York Times. 20 November 2014. https://www.nytimes.com/2014/11/20/fashion/social-media-bots-offer-phony-friends-and-real-profit.html
- Bessi, Alessandro; Ferrara, Emilio (2016). "Social Bots Distort the 2016 US Presidential Election Online Discussion". First Monday. 21 (11).
- Shao, Chengcheng; Giovanni Luca Ciampaglia; Onur Varol; Kaicheng Yang; Alessandro Flammini; Filippo Menczer (2018). "The spread of low-credibility content by social bots". Nature Communications. 9 (1): 4787. arXiv:1707.07592. Bibcode:2018NatCo...9.4787S. doi:10.1038/s41467-018-06930-7. PMC 6246561. PMID 30459415.
- Ratkiewicz, Jacob; Michael Conover; Mark Meiss; Bruno Gonçalves; Alessandro Flammini; Filippo Menczer (2011). "Detecting and Tracking Political Abuse in Social Media". Proc. 5th International AAAI Conf. on Web and Social Media (ICWSM).
- Varol, Onur; Emilio Ferrara; Clayton A. Davis; Filippo Menczer; Alessandro Flammini (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization". Proc. International AAAI Conf. on Web and Social Media (ICWSM).
- Alessandro Bessi and Emilio Ferrara (2016-11-07). "Social bots distort the 2016 U.S. Presidential election online discussion". First Monday.
- Ferrara, Emilio; Varol, Onur; Davis, Clayton; Menczer, Filippo; Flammini, Alessandro (2016). "The Rise of Social Bots". Communications of the ACM. 59 (7): 96–104. arXiv:1407.5225. doi:10.1145/2818717.
- "China kills AI chatbots after they start praising US, criticising communists". Yahoo! News. August 5, 2017.
- Ferrara, Emilio (2018). "Measuring social spam and the effect of bots on information diffusion in social media". Complex Spreading Phenomena in Social Systems. Computational Social Sciences. pp. 229–255. arXiv:1708.08134. doi:10.1007/978-3-319-77332-2_13. ISBN 978-3-319-77331-5.
- "Automation rules". Retrieved 2018-11-15.
- "Terms of Use • Instagram". www.instagram.com. Retrieved 2018-11-15.
- "How robots could shape Germany's political future". The Local. 21 November 2016.
"Social Bots" were the sinister cyber friend in the US elections who didn't actually exist. Could they also shape how Germans vote next year?
- "The rise of no".
- Lutz Finger (Feb 17, 2015). "Do Evil - The Business Of Social Media Bots". forbes.com.
- Mazza, Michele; Stefano Cresci; Marco Avvenuti; Walter Quattrociocchi; Maurizio Tesconi (2019). "RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter". Proceedings of the 10th ACM Conference on Web Science (WebSci '19). arXiv:1902.04506. doi:10.1145/3292522.3326015.
- "How to Find and Remove Fake Followers from Twitter and Instagram : Social Media Examiner".
- "TwitterAudit".
- "Botometer".
- Davis, Clayton A.; Onur Varol; Emilio Ferrara; Alessandro Flammini; Filippo Menczer (2016). "BotOrNot: A System to Evaluate Social Bots". Proc. WWW Developers Day Workshop. arXiv:1602.00975. doi:10.1145/2872518.2889302.
- "How to Spot a Social Bot on Twitter". technologyreview.com. 2014-07-28.
Social bots are sending a significant amount of information through the Twittersphere. Now there’s a tool to help identify them
External links
- The Computational Propaganda Research Project University of Oxford