AI can optimize email campaigns, but only if CMOs fix the testing model first

The news: AI tools are making it faster and cheaper to run email A/B tests, but that speed raises the risk of entrenching weak methodology.

AI-powered platforms like Phrasee, Persado, Salesforce Einstein, Adobe Journey Optimizer, and HubSpot’s AI tools can rapidly generate and test subject lines, creative, and send-time variations. 

However, "many email tests are called too early, run on too small an audience, or judged on short-term engagement metrics alone,” Kath Pay, co-founder of the Holistic Email Academy told EMARKETER. “Inbox placement, preview text, send timing, and competing messages all influence behavior before a recipient opens anything,” she added.

Zooming in: Email already delivers exceptional returns, averaging around $40 in revenues for every $1 spent; brands that consistently A/B test their email programs report closer to $48 per $1, per Increv. But some programs are structured in ways that guarantee misleading results, per Dynamic Yield:

  • Some platforms auto-select a “winner” after a small, underpowered test and send it to the rest of the list without checking statistical significance, a pattern experts have warned can systematically skew results and decisions (see the sketch after this list).
  • That winning variant can quietly drop conversion rates, average order value (AOV), and revenues per recipient. 
  • Forty-six percent of US B2B SMB marketers use built-in testing tools within their CMS, email, or ad platforms, which means much of their experimentation inherits whatever statistical defaults those platforms apply.
  • Winning variants from A/B tests routinely become default templates—locking in performance benchmarks built on conditions that no longer exist or that aren’t sustainable.
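
To see why calling a winner early is risky, here is a minimal sketch of the kind of significance check a platform should run before promoting a variant. It is a standard two-sided two-proportion z-test, not any vendor's actual logic, and the conversion counts are invented for illustration.

```python
from math import sqrt, erf

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the gap between two conversion rates."""
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Hypothetical early read: 1,000 recipients per arm, 26 vs. 22 conversions.
p_value = two_proportion_z_test(conv_a=26, n_a=1000, conv_b=22, n_b=1000)
print(f"p-value: {p_value:.2f}")  # ~0.56, nowhere near a conventional 0.05 bar
if p_value < 0.05:
    print("Difference is unlikely to be noise; promoting the leader is defensible.")
else:
    print("Too early to call a winner; keep testing or enlarge the sample.")
```

A split like 2.6% versus 2.2% looks decisive on a dashboard, but at this sample size a gap that large would arise by chance more than half the time; auto-promoting the leader here is exactly the underpowered pattern described above.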

Implications for marketers: “For marketers and CMOs who want more successful email A/B tests, the first adjustment is to shift from testing for a winner to testing for learning,” said Pay. “Every test should begin with a strategic hypothesis: What do we believe about the customer, and what behavior are we trying to influence?” 

  • Marketers should look beyond single-factor tests and align experiments with business-level success metrics like revenues, retention, lead quality, or subscription growth.
  • Document A/B test results to build a body of knowledge. Keep in mind that a version that gets more clicks may not produce more revenues, higher AOV, stronger retention, or better-quality conversions (see the scoring sketch after this list).
  • CMOs should audit whether their current methodology would produce reliable results at human speed. If it doesn't, AI will only make the wrong answers arrive faster. 
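
Picking up the clicks-versus-revenues point above, here is a small sketch, with invented numbers, of scoring the same send two ways. The variant labels and figures are hypothetical; the point is only that the metric chosen decides the “winner.”

```python
# Hypothetical results for one send, split evenly across two subject lines.
variants = {
    "A (urgent subject line)": {"sent": 50_000, "clicks": 2_400, "revenue": 31_000.0},
    "B (plain subject line)":  {"sent": 50_000, "clicks": 1_900, "revenue": 38_500.0},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["sent"]        # click-through rate
    rpr = v["revenue"] / v["sent"]       # revenue per recipient
    print(f"{name}: CTR {ctr:.2%}, revenue/recipient ${rpr:.2f}")

# Variant A "wins" on clicks (4.80% vs. 3.80%) but loses on revenue per
# recipient ($0.62 vs. $0.77), the metric that should decide the rollout.
```

An engagement-optimized program would roll out A; a revenue-aligned one would roll out B. Logging both numbers for every test is what turns individual results into the body of knowledge Pay describes.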
