Experiments ROI dashboard


This tutorial guides you through the Experiments ROI Dashboard: AB Tasty's strategic planning tool for measuring and projecting the financial impact of your experimentation program. You'll learn how to use its advanced features including accuracy settings, quarterly analysis, and revenue projections powered by RevenueIQ™.

What's new

The Experiments ROI Dashboard uses an optimized calculation method that ensures every euro/dollar of revenue is counted exactly once, even when customers are exposed to multiple experiments simultaneously. This gives you a true picture of your experimentation campaigns' financial impact.

Accessing the dashboard

  1. In the main AB Tasty navigation bar, click on Strategy

  2. Select Impact Dashboard, then Experiments ROI


By default, the report displays data from the last quarter, with ROI optimized data as the Testing phase impact calculation method and Over 3 months as the expected uplift period.

The three main indicators of the dashboard are:

1) Global experiments ROI

It’s the sum of the testing phase revenue impact and the uplift that will be generated over time by your winning variations, from Web Experimentation campaigns only.

2) Testing phase revenue impact

It’s the sum of each quarter’s revenue impact, based on real production data.

3) Expected winners uplift overtime

It’s the sum of each quarter winning variations uplift over the selected month based on RevenueIQ most probable scenario.

ROI impact

  • A green display indicates that running the campaign positively impacted revenue generation.

  • A yellow display indicates a negative impact on revenue.

Understanding accuracy settings

Testing phase impact calculation

You can choose between two Testing phase impact calculation methods:

ROI optimized data

Data is attributed homogeneously between campaigns to deduplicate transactions and accurately represent the real business impact.

Report standard data

As in A/B test reports, it doesn’t use an attribution strategy to manage campaigns running in parallel. This approach is optimal for A/B test analysis but not suited for ROI calculation, as its results can’t be safely summed across campaigns.
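The difference between the two methods comes down to how a transaction shared by several concurrent campaigns is counted. A minimal sketch of the double-counting problem and a homogeneous split that deduplicates it (purely illustrative; AB Tasty's actual attribution logic is not published here):

```python
# Why standard report data can't be summed safely across campaigns, and
# how an even ("homogeneous") split counts every euro exactly once.

def standard_totals(transactions):
    """Report-style: each campaign gets full credit for every transaction
    its visitors made, so shared transactions are counted multiple times."""
    totals = {}
    for amount, campaigns in transactions:
        for c in campaigns:
            totals[c] = totals.get(c, 0.0) + amount
    return totals

def roi_optimized_totals(transactions):
    """ROI-optimized style: split each transaction evenly across the
    campaigns it touched, so every euro is counted exactly once."""
    totals = {}
    for amount, campaigns in transactions:
        share = amount / len(campaigns)
        for c in campaigns:
            totals[c] = totals.get(c, 0.0) + share
    return totals

# One 100-euro transaction from a visitor exposed to campaigns A and B:
txs = [(100.0, ["A", "B"])]
print(sum(standard_totals(txs).values()))       # → 200.0 (double-counted)
print(sum(roi_optimized_totals(txs).values()))  # → 100.0 (each euro once)
```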

Calculation rules

To ensure accuracy, these rules apply:

  • Campaign types: A/B tests and multipage campaigns

  • Statistical threshold: Only uplifts with 95% or higher chance to win are counted
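The two rules above act as filters on which campaigns contribute to the total. A short sketch with made-up campaign records (the field names and figures are assumptions for illustration only):

```python
# Sketch of the two eligibility rules: campaign type and the 95%
# chance-to-win threshold. Campaign records here are hypothetical.

CAMPAIGN_TYPES = {"ab_test", "multipage"}  # eligible campaign types
CHANCE_TO_WIN_THRESHOLD = 0.95             # statistical threshold

campaigns = [
    {"name": "Checkout CTA", "type": "ab_test", "chance_to_win": 0.97},
    {"name": "Home banner", "type": "ab_test", "chance_to_win": 0.80},
    {"name": "Reco widget", "type": "personalization", "chance_to_win": 0.99},
]

counted = [
    c for c in campaigns
    if c["type"] in CAMPAIGN_TYPES
    and c["chance_to_win"] >= CHANCE_TO_WIN_THRESHOLD
]
print([c["name"] for c in counted])  # → ['Checkout CTA']
```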

Expected uplift calculation

Uplift Estimated (Time Period)

This dropdown controls how far into the future you want to project the revenue uplift. You have three options:

  • Over 1 month - Projects uplift for the next month

  • Over 3 months (default) - Projects uplift for the next quarter

  • Over 6 months - Projects uplift for the next half-year
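The three options above differ only in how many months the estimated monthly uplift is projected over. A minimal sketch, assuming a simple monthly-uplift-times-months projection (the actual RevenueIQ model is not described here):

```python
# Sketch: the dropdown maps to a number of months, and the projection
# multiplies an estimated monthly uplift by that window. Figures are
# hypothetical.

PERIODS = {"over_1_month": 1, "over_3_months": 3, "over_6_months": 6}

def projected_uplift(monthly_uplift, period="over_3_months"):
    """Project a monthly uplift over the selected time period."""
    return monthly_uplift * PERIODS[period]

print(projected_uplift(5_000))                   # → 15000 (default quarter)
print(projected_uplift(5_000, "over_6_months"))  # → 30000
```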

Winner Selection Method

This dropdown controls which winning experiments are included in your projection calculations:

1) Best winner of each quarter only (default)

Only the single best-performing winner from each quarter is included in projections.

The system identifies the experiment with the highest projected monthly ROI in each quarter. Only that "best winner" is used in the projection calculation; the other winning experiments are visually dimmed in the campaign list. Use this option when you want to:

  • Focus on your top performers only

  • Plan budgets based on the most reliable single number

  • Explain to stakeholders which test had the biggest impact

2) All winners

Every winning experiment from each quarter is included in projections. The system sums the projected monthly ROI from ALL winning variations. All winners are displayed in green.
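In arithmetic terms, the two methods reduce to taking the maximum versus the sum of the winners' projected monthly ROI within a quarter. A small sketch over hypothetical figures:

```python
# Sketch of the two winner-selection methods for one quarter.
# Figures are hypothetical projected monthly ROI per winning variation.

quarter_winners = [12_000, 7_500, 3_000]

best_winner_only = max(quarter_winners)  # "Best winner of each quarter only"
all_winners = sum(quarter_winners)       # "All winners"

print(best_winner_only)  # → 12000
print(all_winners)       # → 22500
```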
