This section refers to a deprecated version of the product. The new version is FE&R. To access FE&R, contact your CSM.
šŸ“˜ To learn more, read the FE&R documentation.

[Troubleshooting] How can I know my test is reliable and my data significant enough to be analyzed?




When running an experiment on your users, you must wait until Flagship has collected enough statistically significant data before analyzing your campaign's reporting, in order to get reliable insights.

⭐ Good to know

We recommend following three business rules before making a decision after running an experiment:

  • wait until you have recorded at least 5,000 unique visitors per variation;

  • let the test run for at least 14 days (two business cycles);

  • wait until you have reached 300 conversions on the primary KPI.
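The three rules above can be sketched as a simple pre-analysis check. This is an illustrative snippet, not part of Flagship: the thresholds come from the recommendations above, and the function name and inputs are hypothetical.

```python
# Thresholds taken from the three recommended business rules above.
MIN_VISITORS_PER_VARIATION = 5000
MIN_DURATION_DAYS = 14          # two business cycles
MIN_PRIMARY_KPI_CONVERSIONS = 300

def is_ready_for_analysis(visitors_per_variation, duration_days, primary_kpi_conversions):
    """Return True only if all three rules of thumb are satisfied.

    visitors_per_variation: list of unique-visitor counts, one per variation.
    """
    return (
        min(visitors_per_variation) >= MIN_VISITORS_PER_VARIATION
        and duration_days >= MIN_DURATION_DAYS
        and primary_kpi_conversions >= MIN_PRIMARY_KPI_CONVERSIONS
    )

# Example: one variation is still under 5,000 visitors, so not ready yet.
print(is_ready_for_analysis([6200, 4800], 21, 450))  # False
```

Note that all three conditions must hold: a test with plenty of visitors but too few conversions on the primary KPI is still not ready for analysis.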

Flagship’s reporting displays a statistical reliability index that lets you know whether your test is statistically reliable. We recommend waiting for the ā€˜Reliable’ status before making any final decision.

If this label is not displayed, it means that one or several of the rules mentioned above have not been met.
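For context on what such a reliability index measures, here is a classic two-proportion z-test, a common frequentist way of estimating how likely an observed lift is real. This is only an illustrative sketch: Flagship's actual ā€˜Reliable’ status and "Chances to win" indicator may be computed differently (for example, with Bayesian methods).

```python
from math import sqrt, erf

def z_test_confidence(conv_a, n_a, conv_b, n_b):
    """One-sided two-proportion z-test: confidence that variation B
    truly outperforms original A, given conversions and visitor counts.
    Illustrative only -- not Flagship's documented computation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.5                                # no information either way
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))           # standard normal CDF of z

# 6.0% vs 7.2% conversion over 5,000 visitors each:
print(round(z_test_confidence(300, 5000, 360, 5000), 3))  # 0.992
```

A value above 0.95 is the usual bar before calling a winner, which is consistent with waiting for a ā€˜Reliable’ status rather than reading early, noisy numbers.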

We also strongly recommend leaving a test active for at least the duration of your business cycle. This may be several days for classic e-commerce websites (browsing, verification, purchasing), but may be several weeks for less traditional websites (e.g., B2B activities, large purchases).

Not every test yields reliable results: sometimes you may have to pause a test because of low statistical reliability, meaning the tested hypothesis has no measurable impact on your conversion rate.

The following elements indicate a low reliability rate:

  • A very small difference between the original's and the variation's conversion rates;

  • Too much overlap between the two confidence intervals;

  • Chaotic results over time (the average conversion rate curves overlap regularly from the start of the test).
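The first two warning signs can be checked by hand from the figures shown in the reporting. As an illustration, here is a sketch assuming a standard normal-approximation confidence interval, which may differ from the interval Flagship actually displays:

```python
from math import sqrt

def conversion_ci(conversions, visitors, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate.
    Illustrative sketch; the reporting may use a different interval."""
    rate = conversions / visitors
    margin = z * sqrt(rate * (1 - rate) / visitors)
    return rate - margin, rate + margin

def intervals_overlap(ci_a, ci_b):
    """Overlapping intervals are one warning sign of low reliability."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

ci_original = conversion_ci(300, 5000)   # 6.0% conversion
ci_variation = conversion_ci(320, 5000)  # 6.4% conversion
print(intervals_overlap(ci_original, ci_variation))  # True: too close to call
```

When the intervals overlap like this, the observed lift could easily be noise; letting the test collect more data narrows both intervals and may separate them.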

Need additional information?

Submit your request at product.feedback@abtasty.com

Always happy to help!
