Adaptive A/B Tests

Adaptive A/B Tests let you run experiments that compare different variants while accounting for predicted visitor behavior, ensuring you measure true impact across different intent levels.

Overview

The Adaptive A/B Tests page is where you create and manage experiments. Unlike traditional A/B tests that split traffic randomly, adaptive tests can factor in prediction scores and audience membership when analyzing results, giving you a clearer picture of what works for different visitor segments.

The page has two tabs:

  • Ongoing Adaptive A/B Tests: Experiments currently running on your site.

  • Archived Adaptive A/B Tests: Completed or stopped experiments stored for reference.

You can filter by Device Type, Operating System, and Channel.

Key Concepts

  • Variant: A specific version of an experience shown to a subset of visitors (e.g., Variant A vs. Variant B).

  • Control: The baseline experience against which variants are measured.

  • Statistical Significance: The confidence level that an observed difference in performance is not due to chance.

  • Adaptive Analysis: Breaking down results by predicted behavior segments to understand which variant works best for each visitor type.
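To make the significance concept concrete, here is a minimal sketch of a two-proportion z-test, the kind of calculation that commonly underlies a significance indicator. The function names, the normal-CDF approximation, and the 95% threshold are illustrative assumptions, not the platform's documented statistics.

```typescript
// Sketch: two-proportion z-test for a difference in conversion rate.
// Illustrative only; the platform's actual statistics may differ.

interface VariantStats {
  visitors: number;    // visitors assigned to the variant
  conversions: number; // visitors who converted
}

// Standard normal CDF via the error-function approximation
// (Abramowitz & Stegun 7.1.26).
function normalCdf(z: number): number {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
    0.284496736) * t + 0.254829592) * t;
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// Two-sided p-value for the difference in conversion rate.
function twoProportionPValue(control: VariantStats, variant: VariantStats): number {
  const p1 = control.conversions / control.visitors;
  const p2 = variant.conversions / variant.visitors;
  const pooled = (control.conversions + variant.conversions) /
                 (control.visitors + variant.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) *
                       (1 / control.visitors + 1 / variant.visitors));
  const z = (p2 - p1) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Example: significant at the 95% level if p < 0.05.
const p = twoProportionPValue(
  { visitors: 5000, conversions: 400 },  // Control: 8.0%
  { visitors: 5000, conversions: 465 },  // Variant B: 9.3%
);
console.log(p < 0.05 ? "significant" : "not yet significant");
```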

Getting Started

Step 1: Create a New Adaptive A/B Test

Click Create New Adaptive A/B Test to begin.

The setup flow covers:

  1. Schedule settings: Configure when the test starts and stops.

  2. Live QA: Trigger live QA to preview the test on your live site before launch.

  3. Validate JS code: Validate your JS code; validation must pass before you can move to the next section (a generic example of variant code follows this list).

  4. Choose targeting: Run the test for all visitors or limit it to specific audiences, devices, or channels.
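The platform's variant-code API is not documented here, so the snippet below is only a generic DOM-manipulation example (written as TypeScript) of the kind of JavaScript a variant might run. The selectors and copy are hypothetical.

```typescript
// Hypothetical Variant B code: swap the hero headline and CTA label.
// Generic DOM example, not the platform's actual variant API.
function applyVariantB(): void {
  const headline = document.querySelector<HTMLElement>(".hero-headline");
  const cta = document.querySelector<HTMLElement>(".hero-cta");

  if (headline) headline.textContent = "Start your free trial today";
  if (cta) cta.textContent = "Try it free";
}

// Run once the DOM is ready so the selectors can resolve.
if (document.readyState === "loading") {
  document.addEventListener("DOMContentLoaded", applyVariantB);
} else {
  applyVariantB();
}
```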

Step 2: Set Metrics and Traffic Split

  • Select success metrics: Pick the KPIs that determine which variant wins (conversion rate, revenue per visitor, engagement, etc.).

  • Set traffic allocation: Decide what percentage of traffic goes to each variant (a deterministic bucketing sketch follows this list).
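Traffic allocation is typically implemented with deterministic bucketing so a returning visitor always sees the same variant. The sketch below shows one common approach, hashing a visitor ID into a bucket; the FNV-1a hash, the function names, and the 50/50 split are illustrative assumptions, not the platform's documented mechanism.

```typescript
// Deterministic hash-based bucketing: the same visitor ID always
// lands in the same variant. Illustrative sketch only.

// FNV-1a: a simple, fast non-cryptographic string hash.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// allocation maps variant names to percentages that sum to 100.
function assignVariant(
  visitorId: string,
  testId: string,
  allocation: Record<string, number>,
): string {
  // Salt with the test ID so different tests split independently.
  const bucket = fnv1a(`${testId}:${visitorId}`) % 100;
  let cumulative = 0;
  for (const [variant, percent] of Object.entries(allocation)) {
    cumulative += percent;
    if (bucket < cumulative) return variant;
  }
  return "control"; // fallback if percentages sum to less than 100
}

// Example: a 50/50 split between Control and Variant B.
const variant = assignVariant("visitor-123", "hero-test", {
  control: 50,
  "variant-b": 50,
});
console.log(variant);
```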

Step 3: Monitor Results

Return to the Adaptive A/B Tests page to track performance. The platform reports results broken down by:

  • Overall conversion metrics per variant.

  • Performance by predicted behavior segment (e.g., high-intent vs. low-intent visitors), as illustrated in the sketch after this list.

  • Statistical significance indicators.
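Adaptive analysis is, conceptually, conversion accounting grouped by both variant and predicted segment. Here is a minimal sketch, assuming each visitor record carries a variant, a predicted-intent segment, and a converted flag (all hypothetical field names, not the platform's data model):

```typescript
// Sketch of adaptive analysis: conversion rate per variant, per
// predicted behavior segment. Field names are hypothetical.

interface VisitorRecord {
  variant: string;   // e.g., "control" or "variant-b"
  segment: string;   // e.g., "high-intent" or "low-intent"
  converted: boolean;
}

type Breakdown =
  Record<string, Record<string, { visitors: number; conversions: number }>>;

function breakDownBySegment(records: VisitorRecord[]): Breakdown {
  const result: Breakdown = {};
  for (const r of records) {
    const bySegment = (result[r.segment] ??= {});
    const cell = (bySegment[r.variant] ??= { visitors: 0, conversions: 0 });
    cell.visitors += 1;
    if (r.converted) cell.conversions += 1;
  }
  return result;
}

// Example: print the conversion rate for each segment/variant pair.
const breakdown = breakDownBySegment([
  { variant: "control", segment: "high-intent", converted: true },
  { variant: "variant-b", segment: "high-intent", converted: true },
  { variant: "variant-b", segment: "low-intent", converted: false },
]);
for (const [segment, variants] of Object.entries(breakdown)) {
  for (const [name, { visitors, conversions }] of Object.entries(variants)) {
    console.log(`${segment} / ${name}: ${(100 * conversions / visitors).toFixed(1)}%`);
  }
}
```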

Note: Let tests run until they reach statistical significance before making decisions. Stopping a test too early can lead to unreliable conclusions.

Step 4: Apply the Winner

Once a test reaches significance, you can promote the winning variant to become the default experience for all visitors, or for specific audience segments.
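Conceptually, promoting a winner collapses the traffic allocation so the winning variant receives 100% of traffic (or 100% within a specific segment). Continuing the hypothetical bucketing sketch above:

```typescript
// Promote the winner (continuing the earlier assignVariant sketch):
// every visitor now receives the winning variant.
const winner = assignVariant("visitor-123", "hero-test", { "variant-b": 100 });
console.log(winner); // "variant-b"
```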
