- Google will more carefully review the premium YouTube videos it packages for advertisers, in an effort to address ongoing concerns that inappropriate and offensive content is appearing next to brand messages, Bloomberg Technology reported, citing people familiar with the matter.
- Google plans to use human moderators and artificial intelligence software to review and flag videos that are part of Google Preferred, a group of YouTube channels that Google sells to advertisers at higher prices. Last month, Google said it would hire 10,000 employees to monitor videos, and some of those hires will reportedly be tapped for the latest initiative around premium video.
- YouTube's latest troubles stem from reports that some channels targeting children had posted gruesome content. The platform is also dealing with the fallout after Logan Paul, one of its most popular content creators, posted a video featuring the body of an apparent suicide victim he found in Japan. The video was widely condemned as tasteless, and Google said it has removed Paul's videos from Google Preferred, per Bloomberg.
Following a year that featured two major advertiser boycotts over brand safety, Google's YouTube was likely hoping to kick off 2018 with a fresh start — an ambition immediately dashed by the Logan Paul scandal, which captured national news headlines and drew heaps of criticism toward the platform and its star creator. The Google Preferred program has been pitched to brands as a way to ensure their ads and messaging appear only next to premium video content, but it's clear this often isn't the case, and Paul isn't alone in getting slammed for tasteless videos. The situation strongly echoes what happened in 2017 with Felix Kjellberg — a.k.a. PewDiePie — who was also booted from Google Preferred over posts construed as anti-Semitic.
The Bloomberg report underscores how YouTube hasn't yet regained the trust of its advertisers, and how it's moving now to enact broader changes that go beyond the band-aid fixes of kicking off specific offending creators. This comes despite YouTube implementing a series of measures to ensure that ads don't appear next to unsavory content. The hiring of 10,000 human staffers, combined with the platform's AI sorting, might put it in a better position going forward.
In the meantime, most advertisers can't ignore the vast reach YouTube offers, and some are investing in their own tools to address the issue. JPMorgan Chase, for example, recently developed an in-house algorithm that plugs into YouTube's API and uses 17 filters to segment channels that are safe for ad placement.
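The mechanics of such a tool can be imagined as a pipeline of filter predicates applied to channel metadata. The sketch below is a hypothetical illustration only — JPMorgan's actual 17 filters have not been published, so the filter names, thresholds, and metadata fields here are assumptions; in production the channel records would come from the YouTube Data API (e.g. `channels().list`) rather than hard-coded dictionaries.

```python
from typing import Callable

# A channel record is a plain dict of metadata; in a real system it would be
# populated from the YouTube Data API. "Channel" and "Filter" are just
# illustrative type aliases.
Channel = dict
Filter = Callable[[Channel], bool]

def min_subscribers(threshold: int) -> Filter:
    """Hypothetical filter: screen out very small channels."""
    return lambda ch: ch.get("subscriber_count", 0) >= threshold

def language_is(lang: str) -> Filter:
    """Hypothetical filter: restrict placements to a target language."""
    return lambda ch: ch.get("language") == lang

def no_flagged_keywords(banned: set) -> Filter:
    """Hypothetical filter: reject channels whose descriptions contain banned terms."""
    return lambda ch: not banned & set(ch.get("description", "").lower().split())

def safe_channels(channels: list, filters: list) -> list:
    """A channel is considered ad-safe only if it passes every filter."""
    return [ch for ch in channels if all(f(ch) for f in filters)]

# Example run on stand-in data:
channels = [
    {"id": "UC1", "subscriber_count": 500_000, "language": "en",
     "description": "daily cooking tutorials"},
    {"id": "UC2", "subscriber_count": 1_200, "language": "en",
     "description": "shock pranks gone wrong"},
]
filters = [min_subscribers(10_000), language_is("en"),
           no_flagged_keywords({"shock", "gore"})]
print([ch["id"] for ch in safe_channels(channels, filters)])  # → ['UC1']
```

The design point is simple: each filter is an independent yes/no check, so a brand can add, remove, or tune criteria without touching the rest of the pipeline — which is what makes a "17 filters" approach maintainable.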
Multiple surveys have revealed that most marketers have no idea where their ads actually appear online, largely because of programmatic ad buying, prompting an effort to take back control. At stake is the trust these brands have built with consumers and their overall reputation. More than 30% of viewers believe that ads placed next to video content on YouTube indicate an endorsement of the content, according to an Adweek-commissioned Survata survey.