3 Reasons Machine Learning Trumps A/B Split Testing


by BRADY EVAN WALKER

A/B Split Testing Only Tests One Variable

A/B testing can be useful on large websites with hundreds of thousands of daily hits. In that context, you can run many A/B tests simultaneously. But in cases like email marketing and direct mail, it would be unwise to split test your entire audience. So you test on a subset that’s just large enough for statistical significance.

But which variable needs to be tested? And even then, it’s hard to pin down whether some unaccounted-for variable caused consumers to respond: Tuesdays, tax refunds, or the unknown fact that everyone who received B was a divorced, left-handed train-set enthusiast.

Even if you knew with certainty that this one variable was the variable that, when optimized, would rain down ROI like never before, testing shouldn’t be restricted to two permutations of a single variable at the expense of thousands of other permutations. You could be choosing between mediocre and kinda bad.

With experimental design implemented through machine learning platforms, those thousands of variations feed into the system as data. As testing continues, data accrues, the mathematical decision-making is refined, and rather than settling for the better of two choices, you get the best of thousands.
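To make that concrete, here is a minimal sketch, in Python, of the kind of adaptive allocation such a system can use. Everything in it is invented for illustration (the variant count, the simulated conversion rates, and the choice of Thompson sampling); it shows the shape of the process, not any vendor’s actual algorithm.

import numpy as np

# Hypothetical setup: 1,000 message variants whose true conversion
# rates are unknown to the algorithm (simulated here).
rng = np.random.default_rng(0)
true_rates = rng.uniform(0.01, 0.05, size=1000)

# One Beta(1, 1) prior per variant: a success and a failure counter.
successes = np.ones(1000)
failures = np.ones(1000)

for _ in range(50_000):  # each loop = one email sent
    # Thompson sampling: draw a plausible rate for every variant,
    # then send this email using the variant that looks best right now.
    sampled = rng.beta(successes, failures)
    variant = int(np.argmax(sampled))

    # Observe whether the recipient converted (simulated) and update.
    if rng.random() < true_rates[variant]:
        successes[variant] += 1
    else:
        failures[variant] += 1

estimated = successes / (successes + failures)
best = int(np.argmax(estimated))
print(f"Leading variant: {best}, true rate {true_rates[best]:.3f}, "
      f"best possible {true_rates.max():.3f}")

Every send updates the model, so nothing learned on Monday is thrown away on Tuesday; the allocation decision is simply recomputed with more evidence behind it.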

A/B Split Testing Is Time-Consuming

Even if you’re doing an A/B/C/D test, in most contexts and most businesses, you cannot test dozens of discrete elements at the same time. Say you’re testing an email body. You can test a few message variations with one test. Then you can test background color. Then graphic elements. Then call to action. “That’s fine,” a defensive marketer might say.

But to test all of these things effectively, you’d need four sizeable audience subsets just to detect any meaningful differences between test variants. And A/B split testing requires man-hours, which can cost much, much more than computer hours.
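A quick back-of-the-envelope calculation shows what “sizeable” means. The numbers below are made up (a 3% baseline conversion rate and a half-point lift you want to detect), and the formula is the standard n ≈ 16 * p * (1 - p) / delta^2 rule of thumb for roughly 80% power at 5% significance, but the order of magnitude is the point:

# Rough per-variant sample size, using the common rule of thumb
# n ≈ 16 * p * (1 - p) / delta**2  (~80% power, 5% two-sided significance).
baseline = 0.03   # assumed 3% baseline conversion rate
lift = 0.005      # assumed minimum detectable lift: 3.0% -> 3.5%

n_per_variant = 16 * baseline * (1 - baseline) / lift**2
print(f"Recipients needed per variant: {n_per_variant:,.0f}")

# Four back-to-back A/B tests (message, color, graphics, call to action),
# each needing two variants' worth of audience.
total = 4 * 2 * n_per_variant
print(f"Recipients needed for four sequential tests: {total:,.0f}")

That works out to nearly 19,000 recipients per variant and close to 150,000 across four rounds, and the sequential design still tells you nothing about how those elements interact.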

Machine learning allows businesses to focus on their product and value proposition while relieving them of the labor- and time-intensive process of manually testing their marketing and branding practices.

A/B Split Testing Gives Limited Insight

As Tim Ash, CEO of SiteTuners.com, wrote at Target Marketing:

Conducting multiple split tests back to back is the most wasteful kind of data collection. None of the information from a previous test can be reused to draw conclusions about the other variables you may want to test in the future.

It’s impossible to home in on behavioral insight. When you test human-written messages, the only “data” behind the word choice and syntax is the explicit meaning and subjective connotation inside the copywriter’s head.

With a machine learning platform like Persado’s, the system can learn behavioral triggers and compose messages with limited human input based on a complex, data-driven linguistic ontology paired with a 17-permutation test. (For those not accustomed to testing life, I’ll translate: A/B/C/D/E/F/G/H/I/J/K/L/M/N/O/P/Q testing.)
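Persado’s actual ontology is proprietary, so the sketch below is only a toy illustration of the general idea: tag short pieces of copy with the behavioral lever they’re meant to pull, generate test permutations by combining them, and track response by tag rather than by finished message. Every label and phrase here is invented.

from itertools import product

# Toy "linguistic ontology": phrases tagged with the behavioral lever
# they are meant to pull. All labels and copy are invented examples.
emotions = {
    "urgency":   "Last chance:",
    "curiosity": "You won't guess what's inside:",
    "gratitude": "As a thank-you,",
}
offers = {
    "discount":  "20% off your next order",
    "exclusive": "early access to the new line",
}
calls_to_action = {
    "direct":   "Shop now.",
    "soft":     "Take a look when you have a minute.",
    "deadline": "Offer ends Friday.",
}

variants = [
    {"tags": (emo, off, cta), "subject": f"{e} {o}. {c}"}
    for (emo, e), (off, o), (cta, c) in product(
        emotions.items(), offers.items(), calls_to_action.items())
]

print(f"{len(variants)} tagged variants")  # 3 * 2 * 3 = 18
for v in variants[:3]:
    print(v["tags"], "->", v["subject"])

Because every winner and loser carries its tags, the results speak about levers (urgency versus gratitude, hard versus soft calls to action) rather than about one finished email, which is exactly the behavioral insight a one-off split test can’t give you.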

Similarly, for design decisions, the measured results offer no insight into why you got them, and so no guidance for future optimization.

As Jakob Nielsen of Nielsen Norman Group wrote:  

Say, for example, that you tested two sizes of Buy buttons and discovered that the big button generated 1% more sales than the small button. Does that mean that you would sell even more with an even bigger button? Or maybe an intermediate button size would increase sales by 2%. You don’t know, and to find out you have no choice but to try again with another collection of buttons.

Of course, you also have no idea whether other changes might bring even bigger improvements, such as changing the button’s color or the wording on its label. Or maybe changing the button’s page position or its label’s font size, rather than changing the button’s (sic) size, would create the same or better results.

In Conclusion

There’s too much data out there to ignore. Creating meaningful, multivariate experiments with machines that can quickly act on the results to deliver massive ROI is no longer science fiction. It’s a business imperative.

Machines will never take over entire marketing departments (or not any time soon, at least), but today they can help marketing departments scale their processes beyond human capacity.

A/B split testing served us well in the past, but online competition is stiffer, attention spans are shorter, and customer loyalty and patience are thinner than ever. With more data and the proper machine learning systems in place to interpret that data, you can inspire action in your audience with growing consistency.