The following is a guest piece written by Kevin Alvero, chief compliance officer of Integral Ad Science. Opinions are the author’s own.
Artificial intelligence (AI) is moving faster than the ad industry can keep up. At this year’s Cannes Lions festival, the energy was all about racing ahead and trying to be among the first to roll out AI. Conversations ranged from Meta’s unveiling of a wave of AI-powered tools to help advertisers automate creative production, to unease about job displacement as agencies and holding companies announced layoffs while pouring more investment into AI. But in the middle of all the hype, no one seemed to be steering the AI ethics conversation.
Marketers, publishers, agencies and platforms alike are relying on AI for almost everything. It powers media buying, blocks fraud, writes ad copy and even generates creative. And its potential is only expanding as new applications for AI emerge almost every day — from personalized recommendations to customer service agents to advanced content generation. Yet there are no clear, universal standards for how AI should be used, tested or disclosed. Everyone is making it up as they go.
Across industries, state governments are already experimenting with different AI rules, all circling around themes like transparency and accountability. If advertising waits for that patchwork of regulation to set the bar, the industry risks being forced into reactive compliance rather than leading with its own standards.
The wild west
Right now, the advertising industry is operating in the wild west of AI. The technology is racing forward, but the rules have not caught up.
On the plus side, AI is delivering incredible benefits. It helps marketers work smarter, optimize campaigns and personalize creative in ways that were unimaginable just a few years ago. But alongside the progress are real problems. Bias can creep into systems without anyone noticing. Misinformation can spread at scale.
Low-quality “AI slop” content is already flooding the internet and eating away at trust. By 2026, as much as 90% of web content could be AI-generated, according to Europol. Some AI-driven sites are already pushing out over 1,200 articles daily, pumping out volume to maximize ad revenue.
The lack of agreed-upon guardrails makes it too easy for sloppy practices and bad actors to take hold. Without accountability, the foundation of digital media — trust — could collapse. If that happens, everyone loses.
The pressure to adopt responsible practices isn’t unique to advertising. Other sectors, from financial services to healthcare, are already being scrutinized for how they use AI. That same level of expectation will come to media and advertising, and the companies that prepare now will be better positioned than those that wait.
What responsible AI looks like
So what does good, ethical AI actually look like? Responsible AI isn’t just a policy: it has to be embedded into company practices and lived every day. It starts with human oversight. People need to stay in the loop to catch issues before they spiral into bigger problems.
Bias mitigation is another core step. Left unchecked, AI can reinforce or even worsen biases. Companies need clear processes to identify and reduce this risk.
Data and IT controls are just as important. AI systems are hungry for data, but that data often includes sensitive information. Protecting it is non-negotiable.
And finally, transparency builds trust. Marketers, publishers and consumers deserve to know how AI works and what is behind the systems they are using. When companies are open, people are more willing to trust the results.
These principles are not just theory. They are practical steps that can keep AI both powerful and safe.
Third-party certification matters
Internal policies are a good start, but they are not enough. As AI becomes core to how media is measured and optimized, companies need external validation.
That is where third-party certification comes in. Independent organizations like the Alliance for Audited Media (AAM), International Organization for Standardization (ISO), and TrustArc can step in to test, validate and certify AI systems. This shows that your tools meet high ethical and technical standards, not just the ones you set for yourself.
Third-party certification sends a clear signal. It tells clients, partners and the public that you are serious about responsible AI. It proves you are not only talking about it, you are holding yourself accountable. In a market where trust is everything, that kind of validation can be a powerful competitive edge.
Taking responsibility now
AI is not going away. It will only continue to shape the advertising industry in deeper ways. That is why leadership matters now.
While some regulations around AI are already emerging, the technology is advancing far faster than policy can keep pace. The smarter move is to act now, set the bar high and lead with responsible practices. Taking the initiative shows customers and partners that you are prepared for the long haul. It earns credibility while competitors are still scrambling.
The future of advertising will be defined by how well the industry handles AI. We all share responsibility to use this technology wisely, fairly and ethically.