Dive Brief:
- Instagram announced it will begin blurring content flagged by users and considered "sensitive" by its internal review team, according to Variety.
- Users will still be able to view the content by tapping on the blurred image, however.
- The move from Instagram arrives at a time when Google and its YouTube video platform have come under fire from marketers for placing ads next to content featuring things like terrorism and hate speech, inadvertently helping fund that material.
Dive Insight:
Instagram is quickly building out its marketing options and is attracting more brands, recently surpassing 1 million advertisers. The latest news shows that Instagram parent Facebook is very much looking to get out in front of a brand safety issue that is quickly gaining traction both in the U.S. and abroad.
Google is the world's largest digital advertising platform, ahead of Facebook, but big-name marketers ranging from AT&T to Wal-Mart and PepsiCo have shown they are unafraid to entirely freeze their ad spending with the company until outstanding problems are addressed. Facebook's metrics errors last year, while dramatic, did not spur a backlash of this degree.
Eric Schmidt, chairman of Google's parent company Alphabet, recently said the tech giant is working on a technical solution but can't currently — or perhaps ever — fully guarantee ads won't appear next to offensive content. Google has resisted committing to actively flagging offensive material, likely to avoid accusations of censorship, and the Instagram policy changes similarly delegate content monitoring to the end user, with an internal team only reviewing material marked as inappropriate.
In Google's case, this stance has frustrated many advertisers seeking a more proactive approach. For its part, Instagram also released a new help site covering its safety and filtering tools.