Dive Brief:
- Facebook CEO Mark Zuckerberg published a nearly 6,000-word manifesto detailing the state of the platform and how the company intends to improve the world, as reported by Recode and other outlets.
- Facebook has been active in artificial intelligence R&D, most publicly in the facial and image recognition space, but one area of Zuckerberg’s open letter highlighted the use of AI to track content posted to the platform. “One of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community," he wrote. "There are billions of posts, comments and messages across our services each day, and since it's impossible to review all of them, we review content once it is reported to us.”
- Per the exec, AI as a content-flagging tool is still in early development, but it is expected to be able to analyze photos and videos to surface potential cases of harassment and bullying on the platform so they can be reviewed by a person. AI already generates around one-third of the reports sent to the review team.
Dive Insight:
Facebook is a global giant with 1.23 billion daily active users as of Q4 2016 and billions of pieces of content shared each day. It has also become a primary source of news for many of its users, a figure cited as high as 44% last year.
As Facebook's influence grows, so does concern about fake, offensive and even harmful content, and the manifesto is the latest sign that Zuckerberg feels Facebook bears some responsibility for the torrent of content its users produce and that content's potential impact. The company has also taken a series of steps this year to cut down on fake news on the site and make it easier for users to identify quality content.
While the manifesto suggests Zuckerberg feels some social responsibility for the content on Facebook, he is also likely moving to assure users and advertisers that the platform is a safe place and prevent any potential user or advertising drop-off.
In the written statement, Zuckerberg said Facebook is looking into how AI can be used to differentiate between legitimate news about terrorism and terrorist propaganda in order to remove content posted to recruit for terrorist organizations. He also highlighted the importance of “security and liberty” by pointing to privacy features such as the encryption built into the messaging apps Messenger and WhatsApp, noting that after WhatsApp implemented end-to-end encryption, spam and malicious content fell by more than 75%.
Mashable uncovered one part of the manifesto that was edited out of the final version and that also had privacy implications: the use of AI to flag suspicious content would have extended to private communications, going well beyond the monitoring of content uploaded to the platform cited in the final version. Facebook reportedly confirmed the change, as reported by Gizmodo.