The following is a guest piece written by Jeffrey Kennedy, executive director of client analytics at WPP Media. Opinions are the author’s own.
“Ghost ships” are technically flawless models that never influence real decisions. They look impressive in presentations, but aren’t built with the flexibility that rapid business changes require. This failure is showing up constantly in my performance, brand and applied artificial intelligence measurement meetings. AI use is only accelerating this problem.
For example, my team spent six months building the perfect forecasting model. The media manager loved it. We all celebrated. Then three months later, market conditions shifted and priorities changed. The model couldn’t adapt. Nine months later the model was dead, creating another ghost ship.
That’s not even the uncomfortable part: I defended the model in multiple meetings before I realized it was too rigid. That didn’t just bruise my ego; it demotivated the team.
How we built the ghost ship
Our team designed a predictive model for a brand, layered on top of their current marketing mix model (MMM), to project long- and short-term ROI tradeoffs. It was meant for investment scenarios, risk assessment and understanding how spend fluctuations affect goals. After six months, we thought we had delivered the perfect output: channel-level predictions that showed exactly where to move budget to hit the target. The marketing and media manager loved it. Everyone agreed this was a success. However, the success was short-lived. Category dynamics shifted. Consumer behavior changed. Competitive intensity accelerated. The growth target suddenly became unattainable.
The CMO needed to pivot and reforecast everything against a new goal: protecting market share while optimizing efficiency. This is when the model broke. It was built around a single target, but the shift in priorities required two. The model couldn’t adapt, and rebuilding meant a delay we couldn’t afford. Decisions couldn’t wait four months. The brand needed to reallocate investment immediately.
The CMO asked: “Can we reforecast by next week?” The team responded: “No. We need to start over; at least four months to rebuild,” to which the CMO responded, “Then we are making decisions without the model.” The room went quiet, the analyst nodded, and my stomach dropped. This was the moment the ghost ship, launched with fanfare, became irrelevant.
This is “measurement theater” at its most expensive. We built a concrete wall when we needed a partition that could be adjusted. The team and I learned then that the best measurement systems are the ones that leadership is willing to kill. If your model can’t be questioned or abandoned when priorities shift, then it is technical debt with a nice PowerPoint. I apply this principle now, and since implementing it, the back-and-forth over the team’s questions has dropped 40%.
The leadership fix
- Schedule a goal stress test at the 33% and 66% milestone markers, or the two- and four-month checkpoints on a six-month build, for example. Set the agenda around one concept: “If leadership changes, what breaks this model?” If the answer is anything other than “we can pivot,” stop the build and redesign the model.
- Ask this question before approval: “Walk me through what breaks if we shift goals.” If the answer is technical, like data inputs, you’re fine. If it disrupts the entire logic and business requirements, stop the project. This single question has saved my team hours of rebuilding.
- Enforce the 72-hour rule: “If the model can’t reforecast within 72 hours, we make the decision without it.” This forces the team to build for agility from day one.
Where applied AI helps
- Use the response of an AI stress test prompt against your goals to challenge your assumptions before you build. If you can’t address the objections, your goal will be rejected. Example prompt to use: “I am a CFO reviewing a marketing plan. The team wants to build a forecast optimized to (insert goal). What are three reasons I would reject this goal as unrealistic, too risky or misaligned with business priorities?” Be specific about financial constraints, market conditions and feasibility.
- Use the response of an AI salvage prompt on existing work during check-ins. This one kept our models relevant when our strategy pivoted mid-year. We reframed the outputs in two days, instead of wasting weeks rebuilding. Example prompt to use: “We built marketing forecasts optimized for revenue growth. Priorities shifted to margin protection and efficiency. Which of our original recommendations of (insert recommendations) still hold under the new goal? What assumptions need to change? Revise our recommendations for the new priority of margin protection and efficiency.”
- Use the response of an AI presentation prompt to simulate stakeholder objections before you present. Approval rates went up 30% using this approach. Example prompt to use: “I am presenting marketing forecasts to a CFO. Here are my top three recommendations: (list recommendations). Role-play a skeptical CFO. What are the first three questions you’d ask to challenge these recommendations?”
Proof in action
Our team was four months into building a channel forecast for a consumer brand when the CMO signaled a strategy shift was coming. We ran a stress test immediately, and the answer came back: “If we pivot, the entire model breaks.” We paused and redesigned the model to treat the goal as a variable. This took three extra weeks up front. However, when the strategy officially shifted, we were able to flex into the new goals, making the model scalable and useful. Reforecasting against shifting goals now takes 36 hours instead of weeks for a rebuild. We have already used this through three strategy shifts without a rebuild. The model has become an official tool, not a ghost ship.
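To make “treat the goal as a variable” concrete, here is a minimal, hypothetical sketch in Python. The channel names, response numbers and goal functions are invented for illustration; this is not our production model. The idea it shows is simply that the business objective is passed into the forecast as a function, so a pivot from revenue growth to margin efficiency is a one-line swap rather than a months-long rebuild.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Channel:
    name: str
    spend: float               # current spend, in dollars (illustrative numbers)
    revenue_per_dollar: float  # assumed revenue returned per dollar of spend
    margin_rate: float         # assumed contribution margin on that revenue


def forecast(channels: Dict[str, Channel], goal: Callable[[Channel], float]) -> Dict[str, float]:
    """Score every channel under whatever goal the business hands us.

    The goal is an argument, not a constant baked into the model, so a pivot
    in priorities means passing a different function, not rebuilding.
    """
    return {c.name: round(goal(c), 2) for c in channels.values()}


def revenue_growth(c: Channel) -> float:
    # Original objective: total revenue the channel is expected to generate.
    return c.spend * c.revenue_per_dollar


def margin_efficiency(c: Channel) -> float:
    # Pivot objective: contribution margin returned per dollar of spend.
    return c.revenue_per_dollar * c.margin_rate


if __name__ == "__main__":
    plan = {
        "search": Channel("search", 400_000, 3.2, 0.35),
        "social": Channel("social", 250_000, 2.1, 0.45),
        "retail_media": Channel("retail_media", 150_000, 4.0, 0.20),
    }

    print("Scored for revenue growth:", forecast(plan, revenue_growth))
    print("Rescored for margin focus:", forecast(plan, margin_efficiency))
```

The point of the sketch is the signature, not the math: because the goal arrives as a parameter, a 36-hour or 72-hour reforecast is a matter of swapping which goal function you pass in and rerunning, instead of starting the build over.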
Your pressure test to try
Find a project you are currently working on and call a meeting to ask, “If our goal changes next quarter, how long will it take to revise the model?” If the answer is weeks or months, you are building a ghost ship. Build systems worth killing. That is how you will know if they are worth keeping.