Do You Know There Are Billions of Social Media Bots Designed to Mimic Humans? Here’s How to Spot Them
“Bots” — automated social media accounts which pose as real people — have a huge presence on platforms such as Facebook, Instagram and Twitter. They number in the billions.
Most fake social media accounts are “bots,” created by automated programs to post certain kinds of information as part of an effort to manipulate social conversations. Sophisticated actors can create millions of accounts using the same program.
These bots can seriously distort debate, especially when they work together. They can be used to make a phrase or hashtag trend, to amplify or attack a message or article, and even to harass other users.
Social media giant Facebook recently said it had disabled more than three billion fake accounts over a six-month period.
One study by researchers at the University of Southern California analyzed US election-related tweets sent in September and October 2016 and found that 1 in 5 were sent by a bot. The Pew Research Center concluded in a 2018 study that accounts suspected of being bots are responsible for as many as two-thirds of all tweets that link to popular websites.
In the social media context, these autonomous programs can run accounts to spread content without human involvement. Many are harmless, tweeting out random poems or pet photos. But others are up to no good and are designed to resemble actual users.
Bots “don’t just manipulate the conversation, they build groups and bridge groups,” said Carnegie Mellon University computer scientist Kathleen Carley, who has researched social media bots.
“They can make people in one group believe they think the same thing as people in another group, and in doing so they build echo chambers.”
In the run-up to the 2023 general election, it is important for social media users to learn how to recognize social media bots, as misinformation and disinformation spread through political ads and social media posts. These can include fake news stories and doctored videos, with far-reaching implications for the public interest.
Many bots are relatively easy to spot by eyeball, without access to specialized software or commercial analytical tools. This article sets out a number of clues that can be useful in exposing fake accounts.
- Activity: The most obvious indicator that an account is automated is its activity. This can readily be calculated from its profile page by dividing the number of posts by the number of days the account has been active. The benchmark for suspicious activity varies: the Oxford Internet Institute’s Computational Propaganda team views an average of more than 50 posts a day as suspicious, a widely recognized and applied benchmark which may nonetheless be on the low side (see the first sketch after this list).
- Anonymity: A second key indicator is the degree of anonymity an account shows. In general, the less personal information it gives, the more likely it is to be a bot.
- Amplification: The third key indicator is amplification. One main role of bots is to boost the signal from other users by resharing, retweeting, liking or quoting them. The timeline of a typical bot will, therefore, consist of a procession of reshares or retweets and word-for-word quotes of news headlines, with few or no original posts.
- Low posts / high results: Bots achieve their effect through massive amplification of content. On Twitter, for example, their operators create a large number of accounts which each retweet the same post once: a botnet. Such botnets can quickly be identified when they are used to amplify a single post, especially if the account which made the post is not normally active.
- Common content: The probability that accounts belong to a single network can be confirmed by looking at their posts. If they all post the same content, or type of content, at the same time, they are probably programmed to do so (see the second sketch after this list).
- The Secret Society of Silhouettes: The most primitive bots are especially easy to identify because their creators have not bothered to upload an avatar or profile image. Some users keep the default silhouette avatar for entirely innocuous reasons, so a silhouette account on its own is not an indicator of botness. However, if the list of accounts which retweet, reshare, or like a post is dominated by such blank-profile accounts, that is a red flag.
- Stolen or shared photo: Other bot makers are more meticulous and try to mask their anonymity by taking photos from other sources. A good test of an account’s authenticity is therefore to reverse-search its avatar picture. In Google Chrome, right-click on the image and select “Search Google for Image”. In other browsers, right-click the image, select “Copy Image Address”, enter the address in a Google search and click “Search by image”. In either case, the search will turn up pages with matching images, indicating whether the account is likely to have stolen its avatar.
- Commercial content: Advertising is a classic indicator of botnets. Some botnets appear to exist primarily for that purpose, only occasionally venturing into politics. When they do, their focus on advertising often betrays them.
- Automation software: Another clue to potential automation is the use of URL shorteners. These are primarily used to track traffic on a particular link, but the frequency with which they appear in a single account’s timeline can be an indicator of automation (see the third sketch after this list).
- Retweets/Reshares and likes: A final indicator that a botnet is at work comes from comparing the retweets and likes of a particular post. Some bots are programmed to both retweet and like the same tweet; in such cases, the number of retweets and likes will be almost identical, and the list of accounts which performed the retweets and likes may also match (see the final sketch after this list).
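For readers who want to try the activity check described above, the arithmetic is simple enough to script. The following is a minimal sketch in Python; the account figures are hypothetical, and the 50-posts-a-day threshold is the Oxford Internet Institute benchmark cited in the list.

```python
from datetime import date

def posts_per_day(total_posts: int, joined: date, today: date) -> float:
    """Average daily activity: total posts divided by days since the account was created."""
    days_active = max((today - joined).days, 1)  # guard against brand-new accounts
    return total_posts / days_active

# Hypothetical account: created in mid-2021, already showing 90,000 posts on its profile.
rate = posts_per_day(total_posts=90_000, joined=date(2021, 6, 1), today=date(2023, 1, 15))
print(f"{rate:.0f} posts per day")  # roughly 152 posts a day
if rate > 50:  # benchmark used by the Oxford Internet Institute's Computational Propaganda team
    print("Activity is above the 50-posts-a-day benchmark: worth a closer look")
```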
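The common-content check can also be automated on a small scale. The sketch below flags groups of accounts that post identical text within the same minute; the posts, account names and the threshold of three accounts are illustrative assumptions rather than fixed rules.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical input: (account, text, timestamp) for each post being examined.
posts = [
    ("acct_001", "Vote wisely! #Election2023", datetime(2023, 1, 10, 9, 0, 4)),
    ("acct_002", "Vote wisely! #Election2023", datetime(2023, 1, 10, 9, 0, 6)),
    ("acct_003", "Vote wisely! #Election2023", datetime(2023, 1, 10, 9, 0, 9)),
    ("acct_004", "Lovely weather this morning", datetime(2023, 1, 10, 9, 5, 0)),
]

# Bucket posts by their exact text and the minute in which they were sent.
buckets = defaultdict(set)
for account, text, sent_at in posts:
    buckets[(text, sent_at.replace(second=0, microsecond=0))].add(account)

# Several distinct accounts posting identical text in the same minute suggests coordination.
for (text, minute), accounts in buckets.items():
    if len(accounts) >= 3:
        print(f"{len(accounts)} accounts posted {text!r} at {minute:%H:%M} - possible botnet")
```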
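The URL-shortener clue lends itself to a similar check: count how much of a timeline is devoted to shortened links. The timeline and the list of shortener domains below are illustrative assumptions, not an exhaustive list.

```python
import re
from urllib.parse import urlparse

# A few well-known link shorteners (an illustrative, non-exhaustive set).
SHORTENERS = {"bit.ly", "tinyurl.com", "ow.ly", "goo.gl", "buff.ly", "is.gd"}

# Hypothetical timeline: the text of an account's recent posts.
timeline = [
    "Breaking!!! https://bit.ly/abc123",
    "You won't believe this https://bit.ly/def456",
    "Read more here https://ow.ly/xyz789",
    "Lovely weather this morning",
]

url_pattern = re.compile(r"https?://\S+")
shortened = sum(
    1
    for post in timeline
    for url in url_pattern.findall(post)
    if urlparse(url).netloc.lower() in SHORTENERS
)

print(f"{shortened} of {len(timeline)} posts use a link shortener ({shortened / len(timeline):.0%})")
```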
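Finally, the retweet-and-like comparison can be expressed as a quick set calculation. The two sets of account names below are hypothetical, and the thresholds are illustrative; the point is the pattern described above: near-identical counts and near-identical membership.

```python
# Hypothetical sets of accounts that retweeted and liked the same post.
retweeters = {"acct_101", "acct_102", "acct_103", "acct_104", "acct_105"}
likers     = {"acct_101", "acct_102", "acct_103", "acct_104", "acct_105"}

# Near-identical counts plus heavily overlapping membership is the pattern described above.
count_ratio = min(len(retweeters), len(likers)) / max(len(retweeters), len(likers))
overlap = len(retweeters & likers) / len(retweeters | likers)

print(f"Retweets: {len(retweeters)}, Likes: {len(likers)}")
print(f"Count ratio: {count_ratio:.2f}, account overlap: {overlap:.2f}")
if count_ratio > 0.9 and overlap > 0.8:  # thresholds are illustrative assumptions
    print("Retweets and likes mirror each other - consistent with programmed bot behaviour")
```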
Bots are an inevitable part of social media, especially Twitter. Some exist for legitimate reasons; others do not. Either way, the most common indicators for detecting them are activity, anonymity, and amplification, the “Three A’s” of bot identification, though other criteria also exist.
The researcher produced this media literacy article under the Dubawa 2021 Kwame Karikari Fellowship, in partnership with PRNigeria, to facilitate the ethos of “truth” in journalism and enhance media literacy in the country.
Report By: PRNigeria.com