Brief:
- Instagram, the Facebook-owned image-sharing app with 1 billion users worldwide, may have as many as 95 million bots posing as real accounts, making the platform the next frontier in the fight against misinformation, fake news and political propaganda, according to The Information. The publication's estimates are based on a study conducted by researcher Ghost Data.
- Ghost Data estimated the percentage of Instagram users that are bot accounts had risen to 9.5% this year from 7.9% in 2015, the last time the firm conducted similar research. That year, Instagram had 300 million users after purging millions of fake accounts in December 2014, per The Wall Street Journal.
- To conduct its research, Ghost Data paid for 20,000 bots and analyzed their traits to identify similar characteristics in about a million Instagram accounts. The firm found that the bots tended to follow many popular Instagram accounts but posted little content, mostly photos of models taken from other websites.
Insight:
The Information's report about Instagram bots shows that fake accounts remain rampant on social media platforms. The issue is particularly concerning for marketers that pay for endorsements based on the number of followers that celebrities and social influencers have accumulated, since inflated follower counts likely drive up the cost per post or campaign. Many users have sought to create the appearance of social influence to boost their careers or political activism by paying for followers. That has led to a budding industry of social media auditors who help advertisers identify fake accounts and avoid overpaying for social media ads.
Instagram may be particularly vulnerable to bot activity because of its emphasis on imagery, which is more difficult to scan than text containing keywords that help identify hate speech or other misinformation. Images and video tend to go viral much more quickly than text, adding to the difficulty of curtailing offensive material. Last year, Instagram announced a crackdown on fake accounts that resulted in the closing of Instagress, a third-party service that claimed it could help users get real Instagram followers and become more popular, per Business Insider.
Facebook, Twitter and Google's YouTube have faced greater scrutiny in the past couple of years for their role in spreading misinformation that affects public opinion and has been blamed for inciting deadly violence in countries like Myanmar, India and Sri Lanka. Amid a growing chorus of criticism that reached a fever pitch during this year's Cambridge Analytica data-sharing scandal, Instagram's parent company Facebook said it's taking steps to protect user privacy and stop the spread of objectionable content. It has 10,000 people working on safety and security, and plans to hire another 10,000 in the next year to identify potentially harmful activity on its platforms. Twitter this month also began purging its platform of fake accounts to help restore trust, and Facebook on Wednesday said it would remove misinformation that leads to violence.