This section refers to a deprecated version of the product. The new version is Feature Experimentation & Rollouts (FE&R). To access FE&R, contact your CSM.
📘 To learn more, read the FE&R documentation.
Understanding the "Chances to win" indicator

The Chances to win is a statistical indicator that expresses the odds of a strictly positive gain on a variation compared to the original version. It is displayed as a percentage for any selected KPI.

⭐ Good to know

We recommend following these business rules before making a decision on an experiment:

- waiting until you have recorded at least 5,000 unique visitors per variation

- letting the test run for at least 14 days (two business cycles)

This measurement is based on the number of conversions collected. The Chances to win also gives you the risk percentage (100% minus the Chance to win). It allows non-experts to analyze results quickly and simplifies the decision-making process.
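
As a minimal sketch (a hypothetical helper, not part of the AB Tasty product), the risk percentage is simply the complement of the Chance to win:

```python
def risk_percentage(chance_to_win: float) -> float:
    """Risk percentage (100 minus the Chance to win), both expressed in %."""
    return 100.0 - chance_to_win

# Example taken from Case #1 below: a 99.95% Chance to win leaves a 0.05% risk.
print(round(risk_percentage(99.95), 2))  # 0.05
```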

Interpretation

The Chances to win indicator takes values between 0% and 100%, rounded to the nearest hundredth, and should be interpreted only if the business rules above are complied with.

  • Green: the Chance to win is equal to or greater than 95%. This means the variation can be implemented with what is considered to be a low risk (5% or less).

  • Orange: the Chance to win is between 5% and 95%. In this case, the variation is either neutral or lacks data. You can check the confidence intervals: the further apart they are, the longer you will have to wait to collect enough data. The variation is roughly as likely to underperform the original as it is to outperform it.

  • Red: the Chance to win is equal to or lower than 5%. This means it is very likely that the variation is underperforming compared to the original version. The variation must not be implemented, as the risk is very high (95% or more).

As soon as the reliability status of your test is reliable, the data is statistically relevant and ready to be analyzed.
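
As a minimal sketch of the thresholds above (a hypothetical helper name, not an AB Tasty API), the color coding could be expressed as:

```python
def chance_to_win_color(chance_to_win: float) -> str:
    """Map a Chance to win (in %) to the color shown in the reporting."""
    if chance_to_win >= 95.0:
        return "green"   # low risk (5% or less): the variation can be implemented
    if chance_to_win <= 5.0:
        return "red"     # very high risk (95% or more): do not implement the variation
    return "orange"      # neutral or not enough data: keep the experiment running
```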

Use case

Case #1: High Chance to win

In this example, the chosen goal is the Conversion rate. The experiment is made up of a single variation.

The conversion rate of variation 1 is 1.55%, compared to 1.49% for the original version.

The Chance to win displays 99.95% for variation 1, which means that variation 1 has a 99.95% chance of triggering a positive gain, and therefore of performing better than the original version. The odds of this variation performing worse than the original therefore equal 0.05%, which is a low risk.

Because the Chance to win is higher than 95%, variation 1 may be implemented without incurring a high risk.

Case #2: Low Chance to win

In this example, the chosen goal is the Conversion rate. The experiment is made up of a single variation.

The conversion rate of variation 1 is 18.07%, compared to 18.21% for the original version.

The Chance to win displays 0.93% for variation 1. This means that variation 1 has a 0.93% chance of triggering a positive gain, and therefore of performing better than the original version. The odds of this variation performing worse than the original therefore equal 99.07%, which is a very high risk.

Because the Chance to win is lower than 5%, variation 1 should not be implemented: the risk would be too high.
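
To tie both use cases together, here is an illustrative snippet (values taken from the cases above, decision labels assumed) that derives the risk and the resulting decision:

```python
# Case #1: Chance to win = 99.95% (green); Case #2: Chance to win = 0.93% (red).
for chance_to_win in (99.95, 0.93):
    risk = 100.0 - chance_to_win
    decision = "may be implemented" if chance_to_win >= 95.0 else "should not be implemented"
    print(f"Chance to win {chance_to_win}% -> risk {risk:.2f}% -> variation {decision}")
```

For values between 5% and 95%, the usual decision is to keep the experiment running rather than to implement or discard the variation.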

Need additional information?

Submit your request to product.feedback@abtasty.com

Always happy to help!
