[Troubleshooting] How can I know my test is reliable and my data significant enough to be analyzed?
When running an experiment on your users, you must wait until Flagship has collected enough statistically significant data before analyzing your campaign's reporting, so that the insights you draw are reliable.
Good to know
We recommend following three business rules before making a decision after running an experiment:
- wait until you have recorded at least 5,000 unique visitors per variation;
- let the test run for at least 14 days (two business cycles);
- wait until you have reached 300 conversions on the primary KPI.
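As an illustration, the three rules above can be expressed as a simple readiness check. This is a minimal sketch: the function and parameter names are hypothetical and are not part of the Flagship API, which computes reliability for you.

```python
# Sketch of the three recommended business rules described above.
# All names here are hypothetical, for illustration only.

def is_ready_for_analysis(visitors_per_variation, days_running, primary_kpi_conversions):
    """Return True when the experiment meets all three recommended rules."""
    return (
        min(visitors_per_variation) >= 5000   # at least 5,000 unique visitors per variation
        and days_running >= 14                # at least 14 days (two business cycles)
        and primary_kpi_conversions >= 300    # at least 300 conversions on the primary KPI
    )

print(is_ready_for_analysis([5200, 5100], 15, 420))  # -> True
print(is_ready_for_analysis([5200, 4100], 15, 420))  # -> False: one variation is below 5,000
```

If any rule fails, keep the test running and check the reporting again later rather than drawing early conclusions.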
Flagship's reporting displays a statistical reliability index that lets you know whether your test is statistically reliable. We recommend waiting for the "Reliable" status before making any firm, definitive decision.
If this label is not displayed, one or more of the rules above has not been met.
We also strongly recommend leaving a test active for at least the length of your business cycle. This may be a few days for classic e-commerce websites (browsing, verification, purchasing), but several weeks for less traditional businesses (e.g., B2B activities, large purchases).
Not every test yields reliable results: you may sometimes have to pause a test with persistently low statistical reliability because the tested hypothesis has no impact on your conversion rate.
The following elements indicate a low reliability rate:
- a very small difference between the original's conversion rate and the variation's;
- too wide a gap between the two confidence intervals;
- chaotic results over time (the average conversion rate curves regularly overlap from the start of the test).
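To make the confidence-interval indicators concrete, here is a rough way to compute a 95% confidence interval for each variation's conversion rate and check whether the two intervals overlap. This is a hypothetical sketch using a simple normal approximation; Flagship's actual statistics engine may use a different method.

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """95% confidence interval for a conversion rate (normal approximation)."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)
    return (max(0.0, p - margin), min(1.0, p + margin))

def intervals_overlap(ci_a, ci_b):
    """Overlapping intervals suggest the observed difference may not be reliable yet."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

original = conversion_ci(300, 5000)   # 6.0% conversion rate
variation = conversion_ci(310, 5000)  # 6.2% conversion rate
print(intervals_overlap(original, variation))  # -> True: too close to call
```

With such a small difference between the rates, the intervals overlap heavily, which matches the first and third indicators above: the test needs more data (or the hypothesis simply has no effect).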
Need additional information?
Submit your request at product.feedback@abtasty.com
Always happy to help!