The media minefield: How can marketers start making better choices?
Fake news, extreme content and other brand poisons are more apparent than ever, but catchall solutions are few and far between.
In a digital ecosystem overrun with blatantly fake news and otherwise questionable media, marketers need to start making better choices, like reintroducing a human element into buying decisions and walking away from content that, while not necessarily false, still doesn’t reflect well on a brand.
Blacklisting is a tricky term. It's often associated with the suppression of dissent and even of press freedom. But sensitivity toward brand values has sharpened as the extent of the bad news issue has become apparent, raising central questions of whether technology, people or some intersection of the two can help marketers better monitor media.
Following a period of leaving media monitoring largely to technology, the introduction of a more pronounced human element to the vetting process is picking up traction. Facebook, for its part, recently launched a third-party fact-checking initiative to flag "disputed" posts and prevent them from being promoted or made into ads. As programmatic tools fail to properly account for things like fake news, brands and their media buyers might also need to consider more personalized oversight of their digital strategies.
"While we're still early in understanding the implications, depth and breadth of the fake news problem, I think it's perfectly fine to over-invest in humans to understand and address [this] today," said Jonathan Adams, chief digital officer, agency, at Maxus. "For the time being, a human-powered solution's probably going to be welcome until we have a handle on it.
"I think this is a bigger problem that's going to continue for quite a while," he said. "If we can get to a place where we can ensure that news is real rather than fake, we still have the larger question of using these platforms in a data-driven way to specifically change people's minds, and that is the nature of marketing."
Post-election, "fake news" caught marketers off-guard as the two biggest digital platforms, Google and Facebook, were shown to be infested with it. In November, BuzzFeed News reported that fake news might have actually outperformed legitimate news during the election period.
As a sort of fallout, some big-name brands took a proactive role in distancing themselves from sites that could otherwise be labeled as extreme in their views, notably Kellogg's, which cut advertising ties with the alt-right publisher Breitbart. Breitbart, in turn, cried "censorship."
For marketers, polarizing or fake content can harm credibility merely by association, making a degree of "censorship," as Breitbart puts it, necessary as a business practice. But brands and advertisers are failing to be diligent in this regard, as recent Forrester research shows just 29% of marketers took an active role in managing their media blacklists and whitelists in 2016 — a lack of oversight stoked by the rise of programmatic and other automated ad technologies.
"Brands have little to no control over where their ads appear," said Jamie Tedford, founder and CEO of Brand Networks. "We're talking about like 6,000 websites [sic] in Google's network alone. And so when brands like Kellogg's find out they're running on Breitbart, again, they make a move but they're not really addressing the core challenge."
At the same time, relying on blacklisting alone does little to solve the underlying issues.
The roots of the problem
Revelations of misinformation and other unsavory content have galvanized consumer advocacy groups demanding more diligent blacklisting on the part of brands. Sleeping Giants, a guerrilla social media presence dedicated to stopping "racist websites by stopping their ad dollars," uses Twitter and Facebook to call out individual brands whose spots still appear to be supporting Breitbart, specifically.
"These kinds of movements have come and gone and the pressure that groups like Sleeping Giants have put on brands is, in my opinion, a little bit misplaced," said Tedford.
"[For] every one of those processes, there's probably 10 other offensive websites that are still benefiting from that traffic and from that funding," he added. "To me, you have to look at the systematic controls that the industry needs to have in place in order to avoid this kind of thing in the future."
For brands, news that's actually fake might not even be the biggest problem.
Breitbart, for example, doesn’t necessarily print "fake" news. It does, however, publish opinion pieces that claim things like all fat people are disgusting, which obviously aren’t appropriate for many if not most brands.
While objectively false media — such as the fabricated claim that Yoko Ono had an affair with Hillary Clinton — will be easier to weed out, addressing brand-safe spaces might require a far larger pivot for marketers.
"The phenomenon of fake news printed specifically for misdirection and to spread falsehoods is significantly different than the phenomenon I've been observing around the fallout of advertising appearing in non-brand safe spaces," said Tedford. "I'm hopeful that it is prompting, not just a knee-jerk reaction — 'let's ban Breitbart from our media buy' — but instead prompts marketers to re-evaluate their entire strategy."
Tightening the brand safety net
The wide adoption of programmatic digital display and other automated technologies has allowed advertisers to buy and place media with inhuman efficiency. But the inhuman aspect of that equation can be a crutch as well as a solution, leading to a lack of transparency and oversight.
The re-introduction of a human element to media vetting may lead to the better judgment calls that can stop ad dollars from supporting fake news and other brand poison.
"We're going to see a potential rise in brands being more actively involved [...] just like they've always been involved in print and broadcast and radio," said Maxus' Adams.
On the platform side of things, Facebook’s introduction of a third-party system based around signatories of Poynter's fact-checking guidelines is a step in the right direction, experts say.
Companies like DoubleVerify and Integral Ad Science, which introduce an extra line of human defense when considering where ads get placed, will also likely become a bigger part of the conversation as brands take more care to measure just how much their values affect actual strategy.
That's not to say that newfangled and automated technologies won't play a part at all. New ad tech solutions might also emerge to tighten brand safety nets.
"There's sort of an arms race to complete a pre-bid filtering tool to check the quality of the articles," said Adams. "That sort of a tool [would be] one more thing we could use on the behalf of our clients to protect them from nefarious content."
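To illustrate the pre-bid filtering Adams describes, here is a minimal, hypothetical sketch: before a programmatic bid is placed, the placement's URL and headline are checked against a brand's blocked-domain list and a few crude content heuristics. All names, lists and rules below are invented for illustration; real pre-bid tools score pages against far richer quality signals.

```python
# Hypothetical pre-bid brand safety filter. Domain lists and
# sensationalism heuristics here are illustrative placeholders,
# not any vendor's actual methodology.
from urllib.parse import urlparse

# Example brand safety configuration (hypothetical).
DOMAIN_BLACKLIST = {"badnews.example", "clickbait.example"}
SENSATIONAL_TERMS = {"shocking", "you won't believe", "miracle"}

def should_bid(page_url: str, headline: str) -> bool:
    """Return True only if the placement passes basic brand safety checks."""
    domain = urlparse(page_url).netloc.lower()
    # Block blacklisted domains and any of their subdomains.
    if any(domain == d or domain.endswith("." + d) for d in DOMAIN_BLACKLIST):
        return False
    # Skip pages whose headlines match crude sensationalism patterns.
    lowered = headline.lower()
    if any(term in lowered for term in SENSATIONAL_TERMS):
        return False
    return True
```

In a real stack, a check like this would run inside the bidder's decision loop, ahead of the auction response, which is why Adams frames it as a "pre-bid" tool rather than post-placement cleanup.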
Lighting a fire
While the responsibility for shutting down overtly bad media might rest, at least in part, with the big digital platforms like Facebook and Google, brands should avoid being overly reliant on them to solve the problem.
Facebook, in particular, has become a catalyst for this conversation as it’s made more significant steps toward assuming media responsibilities. The tech giant recently created and filled a new head of news partnerships position, for example, and then launched a Journalism Project with the intention of educating both users and journalists in order to "curb" fake news.
"Facebook is gradually coming to terms with the idea that it is [a] publisher [...] It may not want to have its own reporters and editors, but it does need to [...] make some alliances and collaboration to avoid the worst of fake news," said Rick Edmonds, media business analyst at the Poynter Institute. "I do think Facebook itself would probably logically be a lead player or the lead player but it's not the beginning or the end of the story."
Discerning fake from real
Google’s approach to the "fake news" phenomenon further underscores its complexity. After the election, the tech company indicated it would make more proactive efforts to prevent ads from being placed on pages that "misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose" of the site.
However, Mashable reported in January that Google’s AdSense platform no longer listed "fake news" in its advertising policy. Google assured Mashable in a statement it was still addressing fake news in general and was merely dropping the terminology from its guidelines, pointing to just how nebulous the phrase "fake news" has become.
"Where do you draw the line between skewed news being targeted at people versus fake news being targeted at people?" asked Adams. "That's going to continue to be a balance that agencies have to weigh with their clients."
Regardless of specific strategy adjustments, marketing, as a business, is ultimately working through one of the most politically divisive, fraught periods in recent memory. While "fake news" and its ilk aren't the beginning or end of the story, they serve as a clear signal that brands need to be more conscious of their positions and messaging in 2017.
"Proactively helping clients formulate their own opinions about this space is going to end up being crucial," said Adams. "Just as with other protections like avoiding hate speech or profanity — I think every brand is going to need to have their own point of view as to what they're comfortable with."
Follow Peter Adams on Twitter