Friday 22 January 2021

3 Proven Steps To A/B Testing Success in 2021

As 2021 kicks off, my top marketing resolution is to bring new rigor to the A/B tests that I manage for the Oracle.com digital marketing team. I’ve been managing tests for the past three years on Oracle’s most highly trafficked marketing page—the corporate home page—and am expanding my focus to landing pages, advertising campaigns, and other platforms that serve content to site visitors. As I do so, I’ll apply three proven steps to achieve A/B testing success.  

These practices—which I’ll summarize here and then spell out in more detail below—are applicable for B2B and B2C brands of all sizes, across industries:

  • Conduct disciplined upfront analysis so that you can understand performance baselines, develop a solid test hypothesis, and determine which measurements will maximize your test’s business value. 

  • Monitor tests continuously to ensure they’re functioning correctly and yield an outcome that informs future campaigns. 

  • Explain in business terms how fellow marketers can apply test results to drive better performance. 

1) Invest in thorough upfront planning and analysis to ensure stakeholder buy-in

The most important step before building and launching any test is to clearly define your business goal. 

Without an upfront understanding of how your test can impact goals, you lack the business grounding to deliver a valuable outcome. Note that I’m not using the term “successful” outcome. That’s because some tests may not produce a clear winner, or even a high-performing option, and that’s OK. Sometimes the value of a test comes from eliminating low performers.

Testing, after all, is a process that can run through many iterations, and it demands this long view.

If you’re testing a demand-gen offer on a primary product page, business goals can include conversion rate, marketing qualified leads (MQLs), sales qualified leads (SQLs), pipeline, or revenue. If you’re testing a new component design, business goals can include clicks and click rate, as well as time spent and depth of engagement, measured by time on page or page views per visit.  

Once your primary goal is defined, get a handle on the current performance levels of the site, page, or component you plan to test. Review past performance of your offer on the page you’d like to test, as well as that of the component hosting it. 

What was the promotional language? What was the click rate? What was the verbiage in the CTA? Where was it placed on the page? What image was used, if any? 

If it’s a new content asset, review historical performance of the page and component that will host the offer. How have previous, similar offers performed? Did that performance meet campaign objectives? What copy was used, what CTA, what images? 

Using baseline data, develop a hypothesis of how your test offer, or “variant,” will introduce a performance advantage. Keep in mind that the test outcome won’t always be consistent with your hypothesis. I’ve found this to be the case in roughly one-third of tests. 

Preconceived notions must be set aside when this occurs. Your test goal isn’t to prove anyone right or wrong. Instead, it’s to contribute to a digital marketing culture that uses data, rather than assumptions, to inform decisions.

Some common examples of test hypotheses (a quick sketch of how one could be evaluated follows the list): 

  • A short, punchy headline and subhead will drive better results than a detailed, lengthy headline and blurb.

  • A context-sensitive CTA (e.g., “Read the Complete Guide to Modern ERP”) will receive better engagement than a more generic one (“Learn more”).

  • A shorter registration form completed on a progressive basis will elicit more conversions than one long form presented immediately to the visitor.
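
To make a hypothesis like these testable, it helps to pin it to a single metric and a significance check before launch. Here’s a minimal sketch in Python, not tied to Oracle Maxymiser or any particular tool, that compares a hypothetical variant CTA’s click rate against a control using a two-proportion z-test; the function and the counts are illustrative assumptions, not real campaign data.

```python
# Minimal sketch (hypothetical counts): does the variant CTA's click rate
# beat the control's by more than chance would explain?
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return (relative_lift, two_sided_p_value) for variant B vs. control A."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (p_b - p_a) / p_a, p_value

# Control: generic "Learn more" CTA. Variant: context-sensitive CTA.
lift, p = two_proportion_z_test(clicks_a=420, views_a=21000,
                                clicks_b=510, views_b=20800)
print(f"Relative lift: {lift:.1%}, p-value: {p:.3f}")
```

A result like this supports the hypothesis only if the lift is positive and the p-value clears whatever significance threshold your team agreed on up front.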

Package the past performance data and test hypothesis into slides or whatever presentation model is used in your organization to obtain stakeholder/executive buy-in before you start a test. 

2) Monitor metrics to ensure the test is operating correctly, and quickly address any anomalies 

The baseline performance data you’ve gathered is the foundation of your test measurement plan. Key metrics when testing two versions of a demand-gen offer include the following (a brief calculation sketch follows the list):

  • Views of each offer

  • The click (engagement) rate

  • The conversion (form fill) rate

  • Time spent on page and with the asset

  • The bounce rate

  • Where and how visitors engage after they fill out the form, as well as the same data for those who do not complete it

  • The number of MQLs and SQLs, as well as pipeline and revenue generated
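
As a concrete illustration, here’s how those core rates fall out of raw counts pulled from your analytics system. The field names and numbers below are hypothetical, and your own definitions (for example, whether conversion rate is measured per view or per click) may differ.

```python
# Illustrative only: deriving headline rates from raw, hypothetical counts.
offer = {
    "views": 18500,      # impressions of the offer component
    "clicks": 740,       # clicks on the CTA
    "form_fills": 96,    # completed registrations
    "sessions": 15200,   # visits that included the offer page
    "bounces": 6100,     # single-page visits among those sessions
}

click_rate = offer["clicks"] / offer["views"]            # engagement rate
conversion_rate = offer["form_fills"] / offer["clicks"]  # form fills per click
bounce_rate = offer["bounces"] / offer["sessions"]

print(f"Click rate: {click_rate:.2%}")            # 4.00%
print(f"Conversion rate: {conversion_rate:.2%}")  # 12.97%
print(f"Bounce rate: {bounce_rate:.2%}")          # 40.13%
```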

A dashboard should be set up pre-launch to monitor performance in real time. It can be adjusted as the test plays out, but the basics should be in place from day one. 

In Oracle’s case, we have multiple systems (for example, Oracle Maxymiser Testing and Optimization, web metrics system, customer registration system) collecting data in our tests, and you may have similar complexity. 

In any case, it’s imperative that you designate your “source of truth,” which may vary depending on the metric. While it’s unlikely that all systems will align perfectly on data, the relative rankings they report must be consistent; otherwise, there’s likely a problem with the test itself.   

The numbers (think views of each offer) will build slowly. While that happens, watch for anomalies that warrant adjustments to the test. Oddities might include unusually high conversion or bounce rates, irregular engagement levels, and the like. The historical data you gathered will make suspicious numbers stand out. 
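
One way to keep an eye on this is a lightweight automated sanity check that compares live test metrics against the baselines gathered in step 1. The sketch below is a generic illustration, not a feature of any particular testing platform, and its thresholds and numbers are assumptions.

```python
# Rough sketch: flag live metrics that drift far outside historical baselines.
BASELINES = {              # historical averages for this page/component (hypothetical)
    "click_rate": 0.035,
    "conversion_rate": 0.12,
    "bounce_rate": 0.40,
}
TOLERANCE = 0.5            # flag anything more than +/-50% off its baseline

def flag_anomalies(live):
    """Return {metric: (baseline, live_value)} for values that look suspicious."""
    flagged = {}
    for name, baseline in BASELINES.items():
        value = live.get(name)
        if value is not None and abs(value - baseline) / baseline > TOLERANCE:
            flagged[name] = (baseline, value)
    return flagged

print(flag_anomalies({"click_rate": 0.034, "conversion_rate": 0.29, "bounce_rate": 0.41}))
# {'conversion_rate': (0.12, 0.29)} -- unusually high; worth investigating before trusting the data
```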

In my experience, roughly one in five tests requires some type of adjustment post-launch. Once a change is made, the data analysis should start over from the date of the revision so that you are working with “clean” data that stands up to scrutiny.    

3) The presentation—and communication—of test results is as important as the test itself

As your test draws closer to launch, create a plan for how you will present the results, both while the test is in progress and once it has completed. Use whatever mechanism (dashboard, PDF, slides) prevails in your organization and have that format ready early on. The more engaged your stakeholders are in the test, the more eager they will be for results after it launches. Any preliminary results should include the caveat that the performance ranking can change as the test progresses. 

Your messaging should review baseline performance before the test, the test hypothesis, test methodology and, most importantly, the performance of the winning and losing versions. Use concrete percentages and numbers for maximum clarity.
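
For instance, a short script can turn raw counts into the concrete figures a summary slide needs: each version’s rate, the relative lift, and a confidence interval on the difference. The numbers below are hypothetical, and the normal-approximation interval is just one reasonable choice.

```python
# Hypothetical results summary: rates, relative lift, and a 95% CI on the difference.
from statistics import NormalDist

def summarize(conv_a, n_a, conv_b, n_b, confidence=0.95):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return (f"Control converted at {p_a:.1%}, variant at {p_b:.1%} "
            f"(relative lift {diff / p_a:+.0%}; 95% CI on the difference "
            f"{diff - z * se:+.1%} to {diff + z * se:+.1%}).")

print(summarize(conv_a=96, n_a=740, conv_b=131, n_b=755))
```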

Avoid two things: 

  1. Terminology that is specific to individual systems 

  2. Presenting multiple systems’ results for the same metric; stick to your designated source of truth to shield stakeholders from unneeded complexity

Depict data in terms of how marketers can act on it, expressing results in straightforward terms such as “more time spent on page,” “more time spent with key asset,” “higher rate of conversion,” or “higher rate of form fill yielding MQLs.” 

Build your results communication into a reusable template that puts the greatest emphasis on the key takeaways and slots the numbers into a modular, actionable format.  

What’s on your testing agenda for the year to come? What practices have you found to be most effective? Please reach out with your feedback, and here’s to a successful year of A/B testing that contributes to core business results.


