Sample ratio mismatch
To ensure accurate and reliable results from A/B testing experiments, it is important to use the right sampling approach and to understand its implications. Statistical sampling is the process of selecting a subset of a larger population in order to make inferences about that population. In the context of A/B testing, sampling means randomly assigning visitors from the larger population to the different variations of a test. Random assignment allows an unbiased comparison of the variations, as well as a more accurate assessment of the impact of the changes.
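To make the idea of random assignment concrete, here is a minimal sketch of hash-based bucketing, a common way to split traffic deterministically. This is an illustration only, not AB Tasty's actual implementation; the function name and weights dictionary are assumptions.

```python
import hashlib

def assign_variation(visitor_id: str, weights: dict) -> str:
    """Deterministically map a visitor to a variation according to the
    expected traffic allocation (weights must sum to 1)."""
    # Hash the visitor id to a uniform number in [0, 1).
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    point = int(digest[:8], 16) / 0x100000000
    # Walk the cumulative weights until the point falls inside a bucket.
    cumulative = 0.0
    for variation, weight in weights.items():
        cumulative += weight
        if point < cumulative:
            return variation
    return variation  # guard against float rounding at the upper edge

# Example: a 50/50 split; the same visitor always gets the same variation.
split = {"original": 0.5, "variation_1": 0.5}
assert assign_variation("visitor-42", split) == assign_variation("visitor-42", split)
```

Because the assignment depends only on the visitor id, a returning visitor always sees the same variation, while the hash spreads the population across buckets in the expected proportions.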
Once the sampling approach has been determined, it's important to monitor the progress of the experiment and keep an eye out for any potential sample ratio mismatches (SRMs).
Explanation
Sample ratio mismatch is an issue that can occur in A/B testing experiments, where the expected traffic allocation between variations does not match the observed visitor numbers. This mismatch can be caused by several different factors, including the technical issues described below.
SRM causes
Redirection test
The redirection might take too long or crash at some point, so the visitor never lands on the variation
Performance differences for users who incur the extra loading time of the redirection
Bots that leave just after being redirected
Direct link to the variation URL shared across media (email, social media, etc.)
This cause does not apply to AB Tasty solutions, as the service verifies whether or not the user comes from a redirection.
Complex tests with JavaScript code that crashes the browser
Since only your variation is affected, this can cause an SRM
Allocation setup changed by the user over time
Detecting SRM
The impact of SRM depends on the size of the difference between the expected ratio and the observed ratio, as well as the total number of visitors observed. When an SRM problem is detected, it's important to understand the size of the issue and the cause of the problem, in order to be able to correct the issue before restarting the experiment.
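The classic way to detect an SRM is a chi-square goodness-of-fit test between the observed visitor counts and the expected allocation. The sketch below handles the two-variation case (one degree of freedom, where the p-value has an exact closed form); for more variations you would need the chi-square survival function with k−1 degrees of freedom (e.g. `scipy.stats.chi2.sf`). The function name and the 0.01 threshold are illustrative assumptions.

```python
from math import erfc, sqrt

def srm_check(observed, expected_ratios, alpha=0.01):
    """Chi-square goodness-of-fit test between observed visitor counts and
    the expected allocation ratios.  Valid for two variations: with one
    degree of freedom, the survival function is exactly erfc(sqrt(x/2))."""
    total = sum(observed)
    expected = [r * total for r in expected_ratios]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = erfc(sqrt(chi2 / 2))
    return p_value, p_value < alpha  # (p-value, SRM detected?)

# A 50/50 split where the variation lost part of its traffic:
p, srm = srm_check([10000, 9600], [0.5, 0.5])  # p ≈ 0.004 → SRM detected
```

A deviation of 400 visitors out of ~20,000 looks small in relative terms, yet the test flags it: the larger the sample, the smaller the imbalance that becomes statistically detectable.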
To enhance efficiency, a Sequential algorithm layer has been integrated into the SRM, resulting in the S-SRM. This addition enables real-time alert detection, allowing you to take immediate action.
This feature is currently part of an Early Adopters program. Please contact your CSM to benefit from it.
The Sequential approach follows a frequentist methodology, relying on p-value principles for detection. The chosen significance threshold is 0.01.
This advancement addresses two critical limitations of traditional SRM systems:
Elimination of manual monitoring delays — With classic SRM, relying on self-initiated checks often results in delayed detection, potentially allowing issues to escalate beyond manageable levels.
Prevention of post-experiment disappointment — The S-SRM eliminates the scenario where problems are discovered only during final analysis, preventing the frustration of having to restart experiments from scratch.
By implementing the S-SRM, you ensure that anomalies are detected as early as possible, minimizing delays and optimizing decision-making.
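AB Tasty does not publish the internals of the S-SRM, but the principle of sequential detection can be illustrated with a simple always-valid test: a Bayes factor comparing "the allocation equals the expected ratio" against "the allocation is something else" (uniform prior on the true ratio). Because this quantity is a test martingale under the null, it can be checked after every visitor without inflating the false-alarm rate, unlike naively re-running a chi-square test. Everything below (names, threshold, prior) is an assumption for illustration, not AB Tasty's algorithm.

```python
from math import comb, log

def sequential_srm(stream, expected_ratio=0.5, threshold=100.0):
    """Sequential SRM sketch.  After each visitor, compute the Bayes factor
    of 'ratio differs' (uniform prior) vs 'ratio == expected_ratio'.
    Flagging when it exceeds `threshold` keeps the false-alarm probability
    below 1/threshold (~1% for 100), no matter how often we check."""
    n = k = 0
    for in_variation in stream:  # True if the visitor hit the variation
        n += 1
        k += in_variation
        # Marginal likelihood under a uniform prior is 1/(n+1);
        # under H0 it is C(n,k) * p0^k * (1-p0)^(n-k).
        log_bf = -log(n + 1) - (log(comb(n, k))
                                + k * log(expected_ratio)
                                + (n - k) * log(1 - expected_ratio))
        if log_bf > log(threshold):
            return n  # SRM flagged after n visitors
    return None  # stream ended without an alert
```

On a stream where only 40% of visitors reach the variation instead of the expected 50%, this flags the problem after a few hundred visitors, while a balanced stream never triggers the alert.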
Debugging SRM issues
Debugging Sample Ratio Mismatch (SRM) issues can be challenging. However, based on our experience and insights from Trustworthy Online Controlled Experiments by Ron Kohavi, Diane Tang, and Ya Xu, here are some key steps to help you diagnose and resolve SRM problems effectively:
Check your redirection performance
Verify experiment allocation consistency Confirm that the allocation of users has remained stable and hasn’t been altered over time.
Analyze the period before the SRM alert Identify whether the issue has been present since the beginning of the test or emerged later. Then review your website/product metrics for any performance degradation that could explain the anomaly.
Assess whether the SRM issue affects all your experiments or is limited to this particular test.
Once you have identified the root cause, you can confidently proceed with your experimentation roadmap without disruptions.
The Truth About SRM Alerts: AB Tasty’s Role Explained
This feature is currently part of an Early Adopters program. Please contact your CSM to benefit from it.
At AB Tasty, we proactively monitor Sample Ratio Mismatch (SRM) alerts across all active campaigns—over 15,000 at any given time—to detect trends and ensure system reliability. Our analysis focuses on whether the total number of SRM alerts is increasing over time. Over the past six months, our data has consistently shown that the SRM alert rate remains below 1%, confirming that SRM issues are rare.
These findings indicate that SRM discrepancies are not inherently linked to AB Tasty’s system but rather stem from external factors beyond our control. By continuously tracking and analyzing these alerts, we ensure that our platform remains robust, reliable, and optimized for accurate experimentation.
Still want to calculate it manually?
SRM Calculator
To help A/B testers with their experiments, AB Tasty provides, in addition to its S-SRM service, an online service for SRM analysis. This service can help identify potential SRM issues so that they can be corrected before the experiment is restarted.
Overall, SRM can significantly affect the results of an A/B test experiment, and it's important to take steps to identify potential SRM issues and take corrective action before starting or restarting the experiment.
How does it work?
Input: Enter the expected allocation ratios (numbers that sum to 1) and the observed allocation (visitor counts, as integers). If your test has more than two variations, hit the “+” button to create additional variation slots.
Output: After hitting ‘enter’, the calculator tells you whether there is an SRM. If there is, a confidence interval of the deviation is shown as a relative gain, with a vertical dotted line marking the expected value (corresponding to 0 deviation).
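The "relative gain" view of the deviation can be reproduced by hand. The sketch below (function name and 99% z-value are assumptions, chosen to match the 0.01 significance threshold) computes, for each variation, the observed share's relative deviation from its expected share together with a normal-approximation confidence interval; an interval that excludes 0 is consistent with an SRM.

```python
from math import sqrt

def srm_deviation_ci(observed, expected_ratios, z=2.576):
    """Relative deviation of each variation's observed traffic share from
    its expected share, with a ~99% confidence interval (z = 2.576).
    A deviation of 0 means the split matches the expected allocation."""
    total = sum(observed)
    results = []
    for count, expected in zip(observed, expected_ratios):
        share = count / total
        se = sqrt(share * (1 - share) / total)  # normal approximation
        results.append((share / expected - 1,            # relative deviation
                        (share - z * se) / expected - 1,  # lower bound
                        (share + z * se) / expected - 1)) # upper bound
    return results

# Same imbalanced example: the variation's interval sits entirely below 0.
for rel, low, high in srm_deviation_ci([10000, 9600], [0.5, 0.5]):
    print(f"deviation {rel:+.2%}  CI [{low:+.2%}, {high:+.2%}]")
```

Reading the output: the second variation received about 2% less traffic than expected, and since the whole interval is below 0, the shortfall is unlikely to be random noise.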