Adaptive A/B Tests
Adaptive A/B Tests let you run experiments that compare different variants while accounting for predicted visitor behavior, ensuring you measure true impact across different intent levels.
Overview
The Adaptive A/B Tests page is where you create and manage experiments. Unlike traditional A/B tests that split traffic randomly, adaptive tests can factor in prediction scores and audience membership when analyzing results, giving you a clearer picture of what works for different visitor segments.
The page has two tabs:
Ongoing Adaptive A/B Tests: Experiments currently running on your site.
Archived Adaptive A/B Tests: Completed or stopped experiments stored for reference.
You can filter by Device Type, Operating System, and Channel.
Key Concepts
Variant
A specific version of an experience shown to a subset of visitors (e.g., Variant A vs. Variant B).
Control
The baseline experience against which variants are measured.
Statistical Significance
The confidence level that an observed difference in performance is not due to chance.
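For intuition, the significance of a difference between two conversion rates is commonly assessed with a two-proportion z-test. A minimal sketch in JavaScript, with illustrative counts that are not drawn from the platform:

```js
// Hypothetical example: two-proportion z-test for a variant vs. the control.
// |z| > 1.96 corresponds to roughly 95% confidence.
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se;
}

// e.g., 150/2000 conversions for Variant B vs. 120/2000 for the control:
const z = zTest(150, 2000, 120, 2000);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant at 95%' : 'not yet significant');
// → 1.89 not yet significant
```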
Adaptive Analysis
Breaking down results by predicted behavior segments to understand which variant works best for each visitor type.
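To make this concrete, the sketch below tallies conversion rates per predicted segment and variant. The event shape and field names are assumptions for illustration, not the platform's export format:

```js
// Hypothetical sketch: tally conversions per (segment, variant) so each
// predicted-behavior segment gets its own variant comparison.
function adaptiveBreakdown(events) {
  const stats = {};
  for (const { segment, variant, converted } of events) {
    const key = `${segment} / ${variant}`;
    stats[key] ??= { visitors: 0, conversions: 0 };
    stats[key].visitors += 1;
    if (converted) stats[key].conversions += 1;
  }
  for (const [key, s] of Object.entries(stats)) {
    const rate = ((s.conversions / s.visitors) * 100).toFixed(1);
    console.log(`${key}: ${rate}% (${s.conversions}/${s.visitors})`);
  }
  return stats;
}

// Assumed event shape: { segment, variant, converted }.
adaptiveBreakdown([
  { segment: 'high-intent', variant: 'A', converted: true },
  { segment: 'high-intent', variant: 'B', converted: false },
  { segment: 'low-intent', variant: 'A', converted: false },
]);
```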
Getting Started
Create a New Adaptive A/B Test
Click Create New Adaptive A/B Test to begin.
The setup flow covers:
Schedule settings: Set when the test starts and ends.
Live QA: Trigger live QA to check each variant on your live site before launch.
Validate JS code: Validate the JavaScript attached to each variant; validation must pass before you can move to the next section (a sketch of such a snippet follows this list).
Choose targeting: Run the test for all visitors or limit it to specific audiences, devices, or channels.
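As referenced in the Validate JS code step above, here is a minimal example of the kind of variant snippet that step would check. The selector and copy are placeholders, not part of the platform:

```js
// Hypothetical variant code: swap the hero headline for Variant B.
// The guard avoids throwing if the element is missing — the sort of
// runtime error that validation and live QA are meant to catch early.
const headline = document.querySelector('.hero-headline');
if (headline) {
  headline.textContent = 'Try it free for 30 days';
}
```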
Monitor Results
Return to the Adaptive A/B Tests page to track performance. The platform reports results broken down by:
Overall conversion metrics per variant.
Performance by predicted behavior segment (e.g., high-intent vs. low-intent visitors).
Statistical significance indicators.
Let tests run until they reach statistical significance before making decisions. Stopping a test too early can lead to unreliable conclusions.
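To see why patience matters, the hedged sketch below compares a variant to the control within each segment. With these illustrative counts, the variant's lead is significant only for high-intent visitors, so a decision based on overall numbers alone would be premature:

```js
// Hypothetical report: compare Variant B to the control within each segment.
// zTest() is the two-proportion z statistic from the earlier sketch.
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA, pB = convB / totalB;
  const p = (convA + convB) / (totalA + totalB);
  return (pA - pB) / Math.sqrt(p * (1 - p) * (1 / totalA + 1 / totalB));
}

// Illustrative counts: [conversions, visitors] per segment and variant.
const results = {
  'high-intent': { control: [90, 1000], variantB: [130, 1000] },
  'low-intent': { control: [30, 1000], variantB: [32, 1000] },
};

for (const [segment, { control, variantB }] of Object.entries(results)) {
  const z = zTest(variantB[0], variantB[1], control[0], control[1]);
  console.log(`${segment}:`, z > 1.96 ? 'Variant B leads significantly' : 'no significant lead yet');
}
// high-intent: Variant B leads significantly
// low-intent: no significant lead yet
```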