This section refers to a deprecated version of the product. The new version is FE&R. To access FE&R, contact your CSM.
πŸ“˜ To learn more, read the FE&R documentation.
Reporting - A/B Test

When viewing an A/B Test report, you can access several levels of information.

The reporting displays the experiment results in a clear and visual way for easy reading and interpretation. Highlighted information and color codes enable you to quickly identify the best-performing variations.

The metrics shown in the new Flagship reporting layout are those chosen during the KPI configuration of the Basic Information step. If you haven't configured any KPIs, none will be displayed in the reporting layout.

Accessing the reporting analysis

To access the reporting, click Reporting from an A/B Test on the dashboard.

A/B Test information

The new reporting features a summary of information on the experiment:

  • The experiment name;

  • A toggle for launching or pausing the experiment in the top right corner;

  • Date and Context key filters;

  • A Reliability Status, to make sure you can analyze your data;

  • Your primary and secondary metrics with various tabs (depending on the metrics selected during KPI configuration). For more information, refer to Flagship - A/B Tests.

Results displayed

The experiment results are based on the metrics you configured during the KPI configuration step. If you haven't configured any KPIs, none will be displayed in the reporting.

By default, results are calculated based on the original version of the experiment. If necessary, you can change the reference variation by selecting the one that interests you in the variation tab of the experiment configuration.

Metrics

The new reporting displays the primary metric of the experiment, followed by the secondary metrics.

The primary metric serves as a point of reference for the entire experiment and enables you to determine which variation takes precedence over the other(s). This is why it appears at the top of the reporting feature: all future decisions will be based on this metric.

For the primary metric, only one tab can be in focus at a time: either Transaction Rate/Conversion Rate or Transaction Total Revenue/Conversion Total Value.

Secondary metrics are displayed one after the other underneath the primary metric. You can choose to display all variations (by default) or one in particular. To do this, check the variation(s) you want to see in the chart on the list of variations to the right of the relevant goal.

You can display the two tabs for each secondary metric.

Types of metrics and data

Transaction and Conversion KPIs can have two tabs each.

Transaction:

  • Transaction Rate

  • Transaction Total Revenue

  • Average Basket (coming soon)

Conversion:

  • Conversion Rate

  • Conversion Total Value

  • Average Value (coming soon)

Each tab shows basic information such as:

  • The Variation name

  • The number of unique visitors

  • The number of unique conversions

Then, depending on the KPI, additional KPI-specific data is displayed:

Transaction Rate and Conversion Rate:

  • Transaction Rate / Conversion Rate, the number of unique transactions/conversions divided by the number of visitors

  • Uplift, the improvement compared to the reference variation, with the confidence interval calculated using the Bayesian algorithm.

  • Chances to win, the likelihood that you will gain more by putting that variation into production (illustrated in the sketch below).
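
A minimal sketch can make these definitions concrete. The example below uses hypothetical counts and a simple Beta-Bernoulli model to derive a conversion rate and a Bayesian "chances to win" estimate; it is an illustration under these assumptions only, not AB Tasty's production statistical engine.

```python
# Illustrative sketch: hypothetical counts and a flat-prior Beta-Bernoulli
# model; the exact algorithm used by the reporting may differ.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical raw counts per variation: (unique conversions, unique visitors)
counts = {
    "Original":    (120, 4_000),
    "Variation 1": (150, 4_050),
}

# Conversion rate = unique conversions / unique visitors
rates = {name: conv / visitors for name, (conv, visitors) in counts.items()}

# Posterior over each rate with a flat Beta(1, 1) prior
samples = {
    name: rng.beta(conv + 1, visitors - conv + 1, size=100_000)
    for name, (conv, visitors) in counts.items()
}

# "Chances to win" for Variation 1 vs. the reference (Original): the share
# of posterior samples in which Variation 1 converts better.
chances_to_win = (samples["Variation 1"] > samples["Original"]).mean()

print({name: f"{rate:.2%}" for name, rate in rates.items()})
print(f"Chances to win (Variation 1 vs Original): {chances_to_win:.1%}")
```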

Transaction Total Revenue and Conversion Total Value:

  • Transaction Total Revenue / Conversion Total Value, indicates how much revenue/value the variation generated

  • Revenue Projection / Value Projection, indicates the total revenue/value if all visitors had been assigned to the variation

  • Uplift, improvement compared to the reference variation. This is based on raw data only and should not be interpreted as a statistically valid result.

  • Potential Value / Potential Revenue, the difference between the total revenue/value registered for the experiment and the Revenue Projection / Value Projection.

Average Basket and Average Value:

  • Average Basket / Average Value, the sum of all transaction revenue divided by the number of transactions (illustrated in the sketch below).

  • Uplift, improvement made compared to the reference variation. This is based on raw data only and should not be interpreted as a statistically valid result.
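
Here is a minimal sketch of how these raw-data metrics could be derived from per-variation totals, following the definitions above literally. The numbers are hypothetical, and the exact formulas (including the sign convention for Potential Revenue) are assumptions for illustration, not the published implementation.

```python
# Illustrative sketch: hypothetical per-variation totals and formulas that
# follow the definitions above literally; the reporting's exact computation
# may differ.

# Hypothetical data: total revenue, transactions and unique visitors
variations = {
    "Original":    {"revenue": 10_000.0, "transactions": 110, "visitors": 4_000},
    "Variation 1": {"revenue": 12_500.0, "transactions": 140, "visitors": 4_050},
}

total_visitors = sum(v["visitors"] for v in variations.values())
experiment_revenue = sum(v["revenue"] for v in variations.values())

for name, v in variations.items():
    # Revenue Projection: total revenue if every visitor in the experiment
    # had been assigned to this variation (revenue per visitor x all visitors).
    projection = v["revenue"] / v["visitors"] * total_visitors
    # Potential Revenue: gap between the projection and the revenue actually
    # registered for the whole experiment (sign convention assumed here).
    potential = projection - experiment_revenue
    # Average Basket: sum of all transaction revenue / number of transactions.
    average_basket = v["revenue"] / v["transactions"]
    print(f"{name}: projection={projection:,.0f}  potential={potential:,.0f}  "
          f"average basket={average_basket:,.2f}")
```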

Chances to win

The Chances to win indicator enables you to quickly identify the leading variation. This information has three levels:

  • Green, your experiment is on track, we are 95% sure that it will have benefits

  • Orange, your experiment might be on track, but we are 95% sure that even if it has benefits, it may also have side-effects.

  • Red, your experiment isn’t on track at all, we are 95% sure that it will not have any benefits.

Whatever your Chances to win results are, you need to wait 2 business cycles before analyzing your data and having significant results. By default, a business cycle is 5 weekdays and 2 weekend days (7 days), so 2 cycles amount to 14 consecutive days in total. If you know your own business cycle, feel free to adapt the analysis of your test accordingly.
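
As a quick helper, the sketch below computes the earliest recommended analysis date from a launch date, assuming the default 7-day business cycle and the two-cycle waiting period described above; adapt the cycle length if yours differs.

```python
# Illustrative helper: assumes the default business cycle of 7 days
# (5 weekdays + 2 weekend days) and the 2-cycle waiting period above.
from datetime import date, timedelta

def earliest_analysis_date(launch_date: date, business_cycle_days: int = 7) -> date:
    """Return the first date on which the results can reasonably be analyzed."""
    return launch_date + timedelta(days=2 * business_cycle_days)

print(earliest_analysis_date(date(2025, 3, 3)))  # 2025-03-17
```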

As soon as your reliability status is Reliable, the data is statistically significant and ready to be analyzed.

Uplift data

The Uplift, on Transaction Rate and Conversion Rate goals, enables you to access advanced statistics. This data is based on the Bayesian approach and provides two measurements: the confidence interval and the improvement gain.

The improvement gain indicator enables you to manage uncertainty related to conversion rate measurements. It indicates what you may really hope to gain by replacing one variation with another.
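
The sketch below shows one way such figures can be obtained. It reuses the flat-prior Beta-Bernoulli assumption from the earlier example to derive a posterior for the relative uplift, a 95% interval, and a central "improvement gain"; it is an illustration under these assumptions, not the exact algorithm behind the reporting.

```python
# Illustrative sketch: same flat-prior Beta-Bernoulli assumption as above,
# with hypothetical counts; the reporting's exact algorithm may differ.
import numpy as np

rng = np.random.default_rng(7)

ref_conv, ref_visitors = 120, 4_000   # reference variation (Original)
var_conv, var_visitors = 150, 4_050   # candidate variation

ref_rate = rng.beta(ref_conv + 1, ref_visitors - ref_conv + 1, size=100_000)
var_rate = rng.beta(var_conv + 1, var_visitors - var_conv + 1, size=100_000)

# Relative uplift of the variation over the reference, per posterior sample
uplift = (var_rate - ref_rate) / ref_rate

low, high = np.percentile(uplift, [2.5, 97.5])  # 95% interval on the uplift
median_gain = np.median(uplift)                 # a central "improvement gain"

print(f"Uplift: {median_gain:+.1%} (95% interval: {low:+.1%} to {high:+.1%})")
```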

Last update information

The data displayed in the reporting is updated at a specific frequency. Here is the refresh schedule:

  • From 0 to 3 days after the last launch of the experiment: every hour

  • From 4 days to 7 days after the last launch of the experiment: every 4 hours

  • From 8 days to 14 days after the last launch of the experiment: every 8 hours

  • From 15 days to 30 days after the last launch of the experiment: every 24 hours

  • From 31 days to 60 days after the last launch of the experiment: every 48 hours

  • From 61 days after the last launch of the experiment: every 168 hours (1 week)

Note that during the first 12 hours of the experiment, the data is displayed in real time.
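
For reference, this schedule can be expressed as a small lookup. The function below is an illustrative helper (not part of any AB Tasty SDK) that returns the refresh interval for a given experiment age, following the list above literally.

```python
# Illustrative helper: maps days since the last launch to the refresh
# interval listed above; not part of any AB Tasty SDK or API.
from datetime import timedelta

def refresh_interval(days_since_last_launch: float) -> timedelta:
    """Return how often the reporting data refreshes for this experiment age."""
    if days_since_last_launch < 0.5:     # first 12 hours: real-time data
        return timedelta(0)
    if days_since_last_launch <= 3:
        return timedelta(hours=1)
    if days_since_last_launch <= 7:
        return timedelta(hours=4)
    if days_since_last_launch <= 14:
        return timedelta(hours=8)
    if days_since_last_launch <= 30:
        return timedelta(hours=24)
    if days_since_last_launch <= 60:
        return timedelta(hours=48)
    return timedelta(weeks=1)            # every 168 hours from day 61 onwards

print(refresh_interval(10))  # 8:00:00 -> refreshed every 8 hours
```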

The new report is only available for A/B Tests and Progressive Rollouts. If you would like to share feedback, please send an e-mail to product.feedback@abtasty.com.
