# Adaptive A/B Tests

Adaptive A/B Tests let you run experiments that compare different variants while accounting for predicted visitor behavior, so you can see how each variant performs for visitors at different intent levels.

### Overview <a href="#overview" id="overview"></a>

The Adaptive A/B Tests page is where you create and manage experiments. Unlike traditional A/B tests that split traffic randomly, adaptive tests can factor in prediction scores and audience membership when analyzing results, giving you a clearer picture of what works for different visitor segments.

The page has two tabs:

* **Ongoing Adaptive A/B Tests**: Experiments currently running on your site.
* **Archived Adaptive A/B Tests**: Completed or stopped experiments stored for reference.

You can filter by **Device Type**, **Operating System**, and **Channel**.

### Key Concepts <a href="#key-concepts" id="key-concepts"></a>

| Term                         | Meaning                                                                                                            |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| **Variant**                  | A specific version of an experience shown to a subset of visitors (e.g., Variant A vs. Variant B).                 |
| **Control**                  | The baseline experience against which variants are measured.                                                       |
| **Statistical Significance** | The confidence level that an observed difference in performance is not due to chance (see the sketch below the table). |
| **Adaptive Analysis**        | Breaking down results by predicted behavior segments to understand which variant works best for each visitor type. |
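
To make the significance concept concrete, here is a minimal sketch of a two-proportion z-test, a standard way to compare two conversion rates. It illustrates the general statistic; it is not necessarily how the platform computes its own indicators, and the function name and sample numbers are made up.

```typescript
// Two-proportion z-test for conversion rates (general statistics; not
// necessarily the platform's exact method).
function zScore(convA: number, visitorsA: number, convB: number, visitorsB: number): number {
  const rateA = convA / visitorsA;
  const rateB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (rateB - rateA) / standardError;
}

// |z| above roughly 1.96 corresponds to about 95% confidence in a two-sided test.
const z = zScore(120, 2400, 160, 2380);
console.log(`z = ${z.toFixed(2)}, significant at 95%: ${Math.abs(z) > 1.96}`);
```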

### Getting Started <a href="#getting-started" id="getting-started"></a>

{% stepper %}
{% step %}

### Create a New Adaptive A/B Test

Click **Create New Adaptive A/B Test** to begin.

<figure><img src="https://2350286830-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F6Yw9IRJ6KbbucQPwZUCZ%2Fuploads%2F6fMTDeFcSNjWRx4pP4TW%2Fimage.png?alt=media&#x26;token=35b2da47-3fc4-4be6-8cd6-afcacbecf73b" alt=""><figcaption></figcaption></figure>

<figure><img src="https://2350286830-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F6Yw9IRJ6KbbucQPwZUCZ%2Fuploads%2FD400ElPiPpIgUsiyISVb%2Fimage.png?alt=media&#x26;token=a25c18bb-3e76-4ff1-8148-a9e03ac0c781" alt=""><figcaption></figcaption></figure>

The setup flow covers:

1. **Schedule settings:** Configure when the test starts and ends.
2. **Live QA:** Trigger live QA to check each variant before launch.
3. **Validate JS code**: Validate your JS code; it must pass validation before you can move to the next section (an illustrative example of the kind of code involved appears after this list).
4. **Choose targeting**: Run the test for all visitors or limit it to specific audiences, devices, or channels.
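
Assuming the JS in question is code your variants run on the page, here is an illustrative snippet of the kind of thing the validation step checks. The selector, copy, and data attribute are placeholders, not anything the platform requires.

```typescript
// Hypothetical variant code: swap the hero call-to-action copy.
// '#hero-cta' and the new text are placeholders for your own page.
const cta = document.querySelector<HTMLElement>('#hero-cta');
if (cta) {
  cta.textContent = 'Start your free trial';
  cta.dataset.abVariant = 'variant-b'; // tag the element so QA can spot the change
}
```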
   {% endstep %}

{% step %}

### Target and Split

* **Select success metrics**: Pick the KPIs that determine which variant wins (conversion rate, revenue per visitor, engagement, etc.).
* **Set traffic allocation**: Decide what percentage of traffic goes to each variant (a sketch of the resulting split appears after this list).
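
The exact fields live in the setup UI; as a rough sketch of what the configuration amounts to (all names below are illustrative, not the platform's API), the allocation should cover every variant and sum to 100%.

```typescript
// Illustrative shape only; field names are assumptions, not the platform's API.
interface AdaptiveTestConfig {
  successMetrics: string[];           // KPIs that decide the winner
  allocation: Record<string, number>; // percentage of traffic per variant
}

const config: AdaptiveTestConfig = {
  successMetrics: ['conversion_rate', 'revenue_per_visitor'],
  allocation: { control: 50, variantA: 25, variantB: 25 }, // must total 100
};

// Sanity check: the split should add up to 100%.
const total = Object.values(config.allocation).reduce((sum, pct) => sum + pct, 0);
if (total !== 100) {
  throw new Error(`Traffic allocation sums to ${total}%, expected 100%`);
}
```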
  {% endstep %}

{% step %}

### Monitor Results

Return to the Adaptive A/B Tests page to track performance. The platform reports results broken down by:

* Overall conversion metrics per variant.
* Performance by predicted behavior segment (e.g., high-intent vs. low-intent visitors), illustrated in the sketch below.
* Statistical significance indicators.
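
As a rough illustration of reading such a breakdown (the data shape below is hypothetical; the real report lives in the UI), the useful unit is the conversion rate per segment and variant rather than the blended average, since a variant can win for one segment and lose for another.

```typescript
// Hypothetical per-segment results, not the platform's export format.
const results = [
  { segment: 'high-intent', variant: 'control',  visitors: 900,  conversions: 90 },
  { segment: 'high-intent', variant: 'variantA', visitors: 880,  conversions: 110 },
  { segment: 'low-intent',  variant: 'control',  visitors: 1500, conversions: 30 },
  { segment: 'low-intent',  variant: 'variantA', visitors: 1520, conversions: 28 },
];

// Conversion rate per (segment, variant) pair.
for (const row of results) {
  const rate = ((row.conversions / row.visitors) * 100).toFixed(1);
  console.log(`${row.segment} / ${row.variant}: ${rate}%`);
}
```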

{% hint style="info" %}
Let tests run until they reach statistical significance before making decisions. Stopping a test too early can lead to unreliable conclusions.
{% endhint %}
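
To gauge how long "until significance" might take, a common statistical rule of thumb (Lehr's approximation for roughly 80% power at a 5% significance level; the platform does not necessarily use it) estimates the visitors needed per variant.

```typescript
// Lehr's rule of thumb: n per variant ≈ 16 * p * (1 - p) / delta^2, where p is the
// baseline conversion rate and delta is the smallest absolute lift worth detecting.
function visitorsPerVariant(baselineRate: number, minDetectableLift: number): number {
  return Math.ceil((16 * baselineRate * (1 - baselineRate)) / minDetectableLift ** 2);
}

// Example: 5% baseline conversion rate, aiming to detect a 1-point absolute lift.
console.log(visitorsPerVariant(0.05, 0.01)); // roughly 7,600 visitors per variant
```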
{% endstep %}

{% step %}

### Apply the Winner

Once a test reaches significance, you can promote the winning variant to become the default experience for all visitors, or for specific audience segments.
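
If you promote different winners for different segments, the end state is conceptually a mapping from segment to default variant. The names below are purely illustrative; the promotion itself happens in the UI.

```typescript
// Hypothetical outcome of a per-segment rollout decision.
const defaultExperience: Record<string, string> = {
  'high-intent': 'variantA', // variant A won for likely buyers
  'low-intent': 'control',   // control stayed ahead for low-intent traffic
};
console.log(defaultExperience);
```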
{% endstep %}
{% endstepper %}
