One of the greatest aspects of digital marketing channels is that they are almost entirely data-driven. This means marketers can gain constant insight into how campaigns are performing – seeing what is working and what isn't – by keeping up with key performance indicator (KPI) metrics. Marketers can even attribute ROI to campaigns up and down the sales funnel.
With the deluge of marketing data now available, marketers can also test – via A/B split tests or multivariate tests – almost any element of a digital marketing campaign.
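Mechanically, an A/B split test just needs a stable way to sort each user into a variant. A minimal sketch (hypothetical function and experiment names; it assumes each user has a stable ID) hashes the user ID so a given user always sees the same variant across sessions:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a test variant.

    Hashing the user ID together with the experiment name keeps
    assignment stable across sessions without storing any state,
    and gives each experiment an independent split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
print(assign_variant("user-42", "subject_line_test"))
```

Because the split is keyed on the experiment name as well as the user ID, running a second test reshuffles users independently of the first.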
Testable marketing campaign elements include, but are by no means limited to, email subject lines, landing page design, and layout elements and copy such as call-to-action language and buttons. Messaging elements such as the marketing copy itself, personalization (think: dynamic content and geo-targeting), timing and frequency can all be tested as well.
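To see why multivariate testing scales up quickly, consider crossing just three of the elements above. The values below are made-up examples, not from any real campaign:

```python
from itertools import product

# Hypothetical values for three testable campaign elements
subject_lines = ["20% off ends tonight", "Your picks are waiting"]
cta_buttons = ["Buy now", "See my deals"]
send_times = ["9am", "6pm"]

# A full multivariate test crosses every combination of elements,
# whereas an A/B test would vary only one element at a time.
variants = list(product(subject_lines, cta_buttons, send_times))
print(len(variants))  # 2 x 2 x 2 = 8 cells to fill with traffic
```

Each added element multiplies the number of cells, and every cell needs enough traffic to produce a readable result – which is why multivariate tests demand far larger audiences than simple A/B splits.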
The mobile marketing channel also offers ample opportunity for testing. In fact, mobile experience management firm Apptimize's CEO Nancy Hua recently told Marketing Dive, "Mobile requires the most testing because mobile is the most complicated and critical experience for users."
Earlier this week, Marketing Dive wrote about an A/B testing program implemented by Paktor – a mobile dating app targeting the Asia market – that involved looking at how in-app pop-up messages impacted subscriptions. Paktor found the messaging test was overall very successful, increasing subscriptions 10.35%. Even though the pop-up messages resulted in a decrease in a la carte purchases, Paktor still ended up with a 17% increase in average revenue-per-user.
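Results like Paktor's are only meaningful if the lift clears statistical noise. A common check is a two-proportion z-test; the sketch below uses only the Python standard library, and the subscriber counts are hypothetical placeholders, not Paktor's actual data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    return z, p_value

# Hypothetical counts (NOT Paktor's data): a ~10% relative lift,
# 500/10,000 subscribing in control vs 552/10,000 in the test group.
z, p = two_proportion_z(500, 10_000, 552, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

At these made-up sample sizes, the roughly 10% relative lift does not reach the conventional 0.05 significance level – a reminder that sample size matters as much as the size of the lift itself.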
While positive testing results are the ideal, what many marketers too often overlook is the real goal of testing: to learn something that can be applied to future tests and processes. A "win" might be great, especially in the moment, but if you zoom out and look at the big picture, a testing loss can be just as powerful if something of value is learned. And perhaps most importantly, keep in mind that testing is an iterative process.
When a testing 'loss' is actually a valuable lesson
Looking to improve its user interface (UI) and better understand, through data, how users were interacting with its app, News360 recently launched a series of A/B tests.
The news app tested an onboarding flow that adapted to users' topic selections, which lifted overall engagement and drove a fourfold lift in the number of topics added. News360 CEO Roman Karachinsky told Marketing Dive that a new, simpler look for the app helped the onboarding test succeed, and the larger number of interests selected during onboarding gave the team additional information about its users.
Though this onboarding test returned positive results, News360 also tested a more content-dense home screen with a higher number of news stories, which turned out to be a flop. In a way, even the test "loss" gave the team insight into its audience, confirming its original belief that the app's users preferred big images.
"We were pretty sure it would do well, but having the exact data to validate those expectations, and being able to compare the two interfaces in detail on the same audience was immensely helpful," Karachinsky said.
With these results in hand, the team ran its second big UI test, which compared two interfaces for presenting story lists: the existing interface (the test control) and a denser UI that squeezed more stories onto users' home feeds. Users had suggested the denser look, and the team wanted to find out whether more stories per screen translated into more stories read per session.
As for the result? Squeezing more stories onto the screen did not lead to more stories being read per session. "It turned out to not be the case," he said. "The alternative UI performed marginally worse compared to the original on a few metrics. The number of stories read remained exactly identical, which confirmed our suspicion that it was brought about by external factors and not anything we could significantly affect by just changing the UI."
Though positive results are generally what you should aim for when testing, he added that he believes both negative and positive test results are valuable. While positive results can help improve your metrics, less-than-stellar results can help you save resources going forward.
"Negative results are great because they mean you're not going to waste time and effort pursuing something that doesn't work, so they should be embraced and learned from, but you've still already wasted some effort getting there," Karachinsky said.
"That said," he continued, "the more multivariate testing you do, whatever the result, the better you understand your user and their motivations and that will make your product decisions better and will train your intuition, so as your product evolves the percentage of the correct bets will increase."
Perhaps the most important thing marketers should keep in mind when getting into a testing program, experts say, is that it's a journey and not a destination. There is always room for improvement and there’s no shortage of things to test in digital marketing.