AB Tasty's reporting system offers insights into campaign performance by specific goals. It visually presents campaign results, utilizing various data representations and color codes to highlight the top-performing variations and help you make data driven decisions to boost your impact.
Key Features and Functionality:
Confidence Intervals and Winning Chance: Utilizes Bayesian statistics for more reliable analysis.
Real-Time Reporting: Provides up-to-the-minute insights and data.
Audience Comparison: Offers a double-filtered view for specific audience analysis.
Detailed Filtering Options: Includes filters by device, geolocation, browser, and user type (new vs. returning) to assess campaign success and trends.
Data Representation: Uses various forms and color coding to easily identify the best-performing variations.
Clear Visualization: Campaign results are displayed clearly for straightforward interpretation.
Goal-Based Performance Analysis: Allows users to analyze campaign performance by specific goals.
Evi Explore powered by RevenueIQ is an advanced feature that makes it easier to interpret the results of transactional A/B tests. By introducing a new set of revenue-focused metrics, Evi Explore gives you a clear, statistically sound view of the revenue impact of each test variation. This empowers you to make faster, more confident, and more profitable decisions based on real revenue projections, rather than relying solely on traditional metrics like conversion rate or average order value.
Log in to AB Tasty
Go to your AB Tasty dashboard and log in with your credentials.
Navigate to Your Campaign Reports
From the main menu, select the campaign (A/B test) you want to analyze.
Open the report for a campaign that uses a transaction goal (i.e., a goal that tracks purchases or revenue).
Open the Evi Explore Insights
In the campaign report, look for the Transaction metrics section.
Click on the Insights tab to access our smart analysis and recommendations.
The left panel provides a summary and recommendations based on the campaign's revenue statistics.
Below are the report details:
This metric shows the most probable incremental revenue that a winning variation could generate per visitor. It helps you understand the direct financial value of each user and assess your potential gain or loss if the variation is launched.
How it's calculated: The median value of the Revenue Uplift Confidence Interval, divided by the number of unique visitors in the variation.
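As a rough illustration of that formula, here is a minimal sketch in Python. The visitor count below is hypothetical, and the interval simply reuses the figures from the real case described later in this article:

```python
# Hypothetical figures for illustration only.
revenue_uplift_ci = (-6_514, 50_000, 107_027)   # (lower, median, upper) revenue uplift in €
unique_visitors_in_variation = 120_000

# Revenue uplift per user = median of the uplift confidence interval
# divided by the variation's unique visitors.
median_uplift = revenue_uplift_ci[1]
revenue_uplift_per_user = median_uplift / unique_visitors_in_variation
print(f"Revenue uplift per user ≈ €{revenue_uplift_per_user:.2f}")   # ≈ €0.42
```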
Bayesian Only:
revenueIQ stats are powered by our new Bayesian statistical engine.
They are not available in frequentist mode.
Baseline Traffic is Key: If a report shows a "0" for a metric in a variation even if transactions occurred, it is because there was insufficient traffic in the baseline (the original version). It is essential to ensure a consistent flow of traffic to your baseline to get accurate and actionable results.
Understanding the differences between risky and best practices is crucial.
Using metrics for decision-making can be risky due to their inherent variance. Variance refers to the potential instability of data points, indicating that performance variations may not be consistent once deployed. The statistics provided by AB Tasty are specifically designed to minimize the noise caused by variance.
Decision making on | Risky practice (based on metrics) | Best practice (based on stats)
Revenue (with traffic perspective) | RPU | Revenue uplift per user (RevenueIQ)
Revenue (with business cycle perspective) | Revenue uplift (potential) | Revenue uplift per month (RevenueIQ)
Revenue | Revenue | Revenue uplift chance to win and confidence interval
Conversion | Conversion rate growth | Conversion uplift chance to win and confidence interval
Average Order Value | AOV growth | AOV uplift chance to win and confidence interval
The right panel displays visualizations of key revenue stats and projections.
This metric shows the most probable incremental revenue that a winning variation could generate per month. It is a revenue projection per month based on median value of revenue uplift confidence interval. It helps you understand the direct financial impact of each variation per month if you ship it live on your page traffic.
How it's calculated: The median value of the Revenue Uplift Confidence Interval, divided by the number of days the campaign ran, multiplied by 30, and projected onto all traffic.
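A minimal sketch of that projection, assuming (hypothetically) a €50,000 median uplift measured over 45 days on a variation that received half of the traffic. Reading "projected on all traffic" as a simple scaling by the traffic share is an assumption of this sketch:

```python
# Hypothetical figures for illustration only.
median_revenue_uplift = 50_000       # median of the Revenue Uplift Confidence Interval, in €
campaign_days = 45                   # number of days the campaign ran
variation_traffic_share = 0.5        # share of total traffic exposed to the variation (assumption)

# Median uplift per day, scaled to a 30-day month...
monthly_uplift_on_variation_traffic = median_revenue_uplift / campaign_days * 30
# ...then projected onto all of the page traffic.
monthly_uplift_all_traffic = monthly_uplift_on_variation_traffic / variation_traffic_share
print(f"Projected monthly revenue uplift ≈ €{monthly_uplift_all_traffic:,.0f}")   # ≈ €66,667
```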
This metric provides a range of potential revenue gains, offering a clear view of the most likely, pessimistic, and optimistic scenarios. This allows you to quantify your risk and potential reward more precisely.
This metric provides the percentage probability that a specific variation will generate a positive revenue uplift compared to the original. A high percentage (e.g., >95%) indicates a high probability of a profitable outcome.
This metric provides a range of potential AOV gains (in currency), offering a clear view of the most likely, pessimistic, and optimistic scenarios. This allows you to quantify your risk and potential reward more precisely.
This existing metric has been fully integrated into the Bayesian engine, improving its accuracy.
RevenueIQ will be available for all tests that started after the 1st of August.
RevenueIQ will not be available if the traffic allocation has been changed during the campaign lifecycle.
Evi Analysis is an AI-powered assistant designed to help you analyze campaign data within AB Tasty reports.
Instead of manually sifting through data tables and charts, you can simply type your questions, and the AI will process the underlying metrics, statistical significance, and objective performance to deliver concise and relevant answers.
Evi leverages CRO best practices and statistical data to provide clear, data-backed answers and recommendations.
The assistant can answer with text, table, and/or chart based on your needs.
Use case examples:
Explain the winning variation
Challenge my hypothesis
Give me CRO best practices based on my campaign results
To get the most accurate and helpful responses from the AI, consider the following:
Wait for readiness: if you want to make a decision, wait until readiness is reached, at least on the primary goals.
Be Specific: The more precise your question, the better the AI can understand your intent.
Good: "Which variation performed best for the 'Conversion Rate' objective?"
While powerful, the Evi Analysis has some limitations:
It does not work in Frequentist mode
Relies on Available Data: The AI can only analyze the data presented in your report. It cannot infer information not present in the underlying metrics and statistics.
Statistical Interpretation: The AI provides statistical interpretations, but it's crucial for you to apply domain expertise and strategic context to those interpretations.
Complex Scenarios: For highly complex or multi-variate statistical modeling, you may still need to consult with a data analyst.


Reference Objectives and Metrics: Clearly state the objectives and metrics you are interested in.
"What is the statistical significance for the 'Click-Through Rate' of Variation B versus Control?"
"Show me the difference in average revenue per user between all variations."
Ask for Comparisons: The AI is excellent at comparing performance between variations.
"Compare the bounce rate of Variation A and Variation B."
"Which variation had the highest increase in sign-ups compared to the control?"
Inquire About Statistical Significance:
"Is the difference in [Metric] between [Variation X] and [Variation Y] statistically significant?"
"What is the p-value for the 'Add to Cart' objective?"
Ask for Summaries:
"Summarize the overall performance of this A/B test."
"What are the key takeaways from this report?"
Identify Top/Worst Performers:
"Which variation had the highest [Metric]?"
"Which objective performed the worst for Variation C?"
Troubleshooting & Clarification:
"Can you explain the 'Confidence Interval' for the 'Purchase Rate'?"
No Predictive Capabilities (currently): The AI focuses on analyzing past performance; it does not predict future outcomes or design new experiments (unless specifically integrated in a future release).
Discover how to accurately measure and optimize revenue in your experiments thanks to our patented feature. For a deeper dive, download our whitepaper.
The most important KPI in e-commerce is revenue. In an optimization context, this means optimizing two axes:
Conversion: “Turning as many visitors as possible into customers.”
Average Order Value (AOV): “Generating as much value as possible per customer.”
However, CRO often remains focused on optimizing conversion. AOV is often neglected in analysis due to its statistical complexity. AOV is very difficult to estimate correctly with classic tests (t-test, Mann-Whitney) because of highly skewed purchase distributions with no upper bound. RevenueIQ offers a robust test that directly estimates the distribution of the effect on revenue (via a refined estimation of AOV), providing both probability of gain (“chance to win”) and consistent confidence intervals. In benchmarks, RevenueIQ maintains a correct false positive rate, has power close to Mann-Whitney, and confidence intervals four times narrower than the t-test. By combining the effects of AOV and CR, it delivers an RPV impact and then an actionable revenue projection.
In CRO, we often optimize CR due to a lack of suitable tools for revenue. Yet, Revenue = Visitors × CR × AOV; ignoring AOV distorts the view.
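A quick worked example of that identity, with made-up numbers, shows why the two levers weigh equally:

```python
# Made-up numbers for illustration only.
visitors = 100_000
conversion_rate = 0.03          # 3% of visitors place an order
aov = 80.0                      # average order value in €

revenue = visitors * conversion_rate * aov
print(revenue)                                      # 240000.0 €

# A 5% lift in AOV adds exactly as much revenue as a 5% lift in conversion rate.
print(visitors * conversion_rate * (aov * 1.05))    # 252000.0 €
print(visitors * (conversion_rate * 1.05) * aov)    # 252000.0 €
```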
AOV is misleading:
Unbounded (someone can buy many items).
Highly right-skewed (many small orders, a few very large ones).
A few “large and rare” values can dominate the average.
In random A/B splits, these large orders can be unevenly distributed → huge variance in observed AOV.
t-test: Assumes normality (or relies on the Central Limit Theorem for the mean). On highly skewed e-commerce data, the CLT variance formula is unreliable at realistic volumes. Result: very low power (detects ~15% of true winners in the benchmark) and gives very wide confidence intervals → slow and imprecise decisions.
Mann-Whitney (MW): Robust to non-normality (works on ranks), so much more powerful (~80% detection in the benchmark). But only provides a p-value (thus only trend information), not an estimate of effect size (no confidence interval) → impossible to quantify the business case.
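You can reproduce this contrast on simulated, heavily right-skewed order values. The sketch below uses SciPy; the distribution parameters and sample sizes are arbitrary choices, not the benchmark's exact setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Right-skewed, unbounded order values (log-normal), with a known +€5 effect in B.
orders_a = rng.lognormal(mean=3.5, sigma=1.2, size=2_000)
orders_b = rng.lognormal(mean=3.5, sigma=1.2, size=2_000) + 5.0

t_stat, p_ttest = stats.ttest_ind(orders_a, orders_b, equal_var=False)          # Welch t-test
u_stat, p_mw = stats.mannwhitneyu(orders_a, orders_b, alternative="two-sided")  # rank-based test

print(f"t-test p-value:       {p_ttest:.3f}")
print(f"Mann-Whitney p-value: {p_mw:.3f}")
# On data like this the Mann-Whitney p-value is typically far smaller,
# but it still gives no confidence interval for the size of the effect in €.
```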
It uses and combines two innovative approaches:
Uses a bootstrap technique to study the variability of a measure with unknown statistical behavior.
Instead of measuring the difference in average baskets, it measures the average of basket differences. It compares sorted order differences between variants (A and B), with weighting by density (approx. log-normal) to favor “comparable” pairs. This bypasses the problem of very large observed value differences in such data.
And it deduces:
The Chance to win (probability that the effect is > 0), readable for decision-makers.
Narrow and reliable confidence intervals on the AOV effect as well as on revenue.
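To picture how a resampling approach can yield both outputs at once, here is a deliberately simplified bootstrap on the difference in mean order value. It is an illustrative sketch only, not the patented weighted-pair estimator described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_uplift(orders_a, orders_b, n_boot=5_000):
    """Bootstrap the difference in mean order value (B - A).
    Returns a chance to win and a 95% interval. Illustrative only."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        resample_a = rng.choice(orders_a, size=orders_a.size, replace=True)
        resample_b = rng.choice(orders_b, size=orders_b.size, replace=True)
        diffs[i] = resample_b.mean() - resample_a.mean()
    chance_to_win = (diffs > 0).mean()                        # P(effect > 0)
    low, median, high = np.percentile(diffs, [2.5, 50, 97.5])
    return chance_to_win, (low, median, high)

# Hypothetical order values with a known +€5 effect in B.
orders_a = rng.lognormal(3.5, 1.2, size=1_500)
orders_b = rng.lognormal(3.5, 1.2, size=1_500) + 5.0

ctw, interval = bootstrap_uplift(orders_a, orders_b)
print(f"Chance to win: {ctw:.1%}, 95% interval on the AOV effect: {interval}")
```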
Alpha validity (on AA tests): good control of false positives. Using a typical 95% threshold exposes only a 5% false positive risk.
Statistical power measurement: 1000 AB tests with a known effect of +€5
MW Test: 796/1000 winners, ~80% power.
t-test: 146/1000, only 15% power.
Beyond techniques and formulas, just remember that RevenueIQ uses a Bayesian method for AOV analysis, allowing this metric to be merged with conversion. Our competitors use frequentist methods, at least for AOV, making any combination of results impossible. Under the hood, RevenueIQ combines conversion and AOV results into a central metric: visitor value (RPV). With precise knowledge of RPV, revenue (€ or other currency) is then projected by multiplying by the targeted traffic (for a given period).
Real Case (excerpt) Here is a textbook case for RevenueIQ:
Conversion gain is 92% CTW, encouraging but not “significant” by standard threshold.
AOV gain is at 80% CTW. Similarly, taken separately, this is not enough to declare a winner.
The combination of these two metrics gives a CTW of 95.9% for revenue, enabling a simple and immediate decision, where a classic approach would have required additional data collection while waiting for one of the two KPIs (CR or AOV) to become significant.
For an advanced business decision, RevenueIQ provides an estimated average gain of +€50k, with a confidence interval [-€6,514; +€107,027], allowing identification of minimal risk and substantial gain.
Without RevenueIQ: “inconclusive” results (or endless tests) ⇒ missed opportunities.
With RevenueIQ: faster, quantified decisions (probability, effect, CI), at the revenue level (RPV then projected revenue).
Stop interpreting observed AOV without safeguards: it is highly volatile.
Avoid filtering/Winsorizing “extreme values”: arbitrary thresholds ⇒ bias.
Measure CR & AOV jointly and reason in RPV to reflect business reality.
Use RevenueIQ to obtain chance to win + CI on AOV, RPV, and revenue projection.
RevenueIQ brings a robust and quantitative statistical test to monetary metrics (AOV, RPV, revenue), where:
t-test is weak and imprecise on e-commerce data,
Mann-Whitney is powerful but not quantitative. RevenueIQ enables faster detection, quantification of business impact, and prioritization of deployments with explicit confidence levels.
A reporting is a single and unique page that is generated for each campaign you execute with AB Tasty, whether that’s a test or a personalization campaign. The reporting is a valuable tool that assists you in making decisions to increase your conversion rates.
You can access the reporting by clicking on the reporting button on any campaign in your Web experimentation or Personalization list.
The green button indicates that the campaign has enough data to be fully analyzed
The yellow button indicates that the campaign does not have enough data to fully analyze the primary goal.
A campaign reporting is composed of the following information and components
The reporting header shows when the campaign was created and for how long it has been live. It also indicates the total number of unique visitors on the campaign.
You can also trace back any modifications done to the campaign from the Open campaign history button.
If you have opted in for AI features, a banner will automatically provide a quick, AI-generated summary of the A/B test campaign's results. The goal is to make campaign analysis more accessible, playful, and surprising with minimal effort.
The reporting shows by default a monitoring graph of the Primary goal
Users can switch between Day to Day and Cumulative graph views. This is particularly beneficial for users who wish to monitor the stabilization of conversion rates over time at the goal level.
Day-to-day view:
Cumulative view:
The graph switcher enables toggling between these views, with "Day to Day" as the default. The cumulative graph displays conversion rates accumulated over time, providing insights into long-term trends.
The availability of the cumulative graph depends on the selected data perspective: it is unavailable when Sessions is selected, but available when Visitors is chosen.
The cumulative graph can display data for up to 45 days, with a maximum of six variations, and users can interact with the graph to view detailed conversion statistics by hovering over the lines. Additionally, users can customize the displayed period using a date-picker, ensuring the graph reflects the desired timeframe.
You can select from either Unique visitors or Unique sessions for both the Original and variation(s) for the Primary goal of the campaign. Conversion rates are also graphed.
The comparative graph is based on all goals and shows the number of Unique visitors per goal.
Both graphs can be customized using the following filters:
Select a date range
Select goals to compare
The primary goal is also displayed in table view. The table view shows the name of the Primary goal and the type of tracking used. The table view also allows you to sort data and export them. The data can be viewed generally by either Visitor or Session. However, the options available will depend on the type of metric chosen.
The Variation Name column width can also be adjusted to ease reading. The Variation ID can be copied to the clipboard by clicking on it.
Users who have subscribed to the EmotionsAI feature can apply EmotionsAI filter templates to the reporting:
To learn more about using EmotionsAI filter templates, read the dedicated article.
You can choose the type of metric view you want for each goal. The metric available depends on the type of goal tracking used.
The following data can be visualized in the table:
Growth: indicated by trending arrows
Chance to win
Confidence interval
Opportunities
Below are a few examples:
Opportunities aim at identifying losing, neutral, and winning statuses. Two subviews are available here:
Highlights: lists sure opportunities based on the following rules:
Winner: chance to win above 95%, AND more than 5,000 visitors AND more than 300 conversions (these rules apply whether or not the data is filtered or segmented)
Loser: chance to win under 5%, AND more than 5,000 visitors AND more than 300 conversions (these rules apply whether or not the data is filtered or segmented)
Best practice: ideally, the campaign should have run for more than 14 days in a row before you make a decision
Detailed view: lists all the variations' performances with detailed metrics
Horizontal bars in the opportunities column indicate:
How big the growth is: length of the bar
Positive conversion growth: bar pointing right
Negative conversion growth: bar pointing left
How statistically reliable they are: red = sure loser, grey = neutral, green = sure winner
The view includes several columns for in-depth analysis and is sorted by audience size by default.
Bounce rate:
The color logic is inverted for bounce rate: the higher the bounce rate, the lower the chance to win.
The following screenshot shows a low bounce rate (trend arrow going down) displayed in green.
The default table view displayed when the reporting is open.
The statistics view shows the growth, chance to win and Confidence Interval:
Growth: The relative difference in conversion between the variation and the baseline (the measured uplift)
Chance to win: Probability that visitors will perform more tracked actions compared to baseline
Confidence interval: Indicator that assures (with 95% confidence) that the variation growth rate will be between the lower and upper bounds if all traffic is allocated to the variation. The median value indicates the most probable growth rate
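For readers who want to see the mechanics, the chance to win and the interval can be illustrated with a simple Beta-posterior simulation on conversion counts. This is a sketch with made-up numbers, not the exact engine used in the reporting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up campaign counts.
visitors_orig, conv_orig = 10_000, 500    # 5.0% conversion on the baseline
visitors_var, conv_var = 10_000, 560      # 5.6% conversion on the variation

# Posterior conversion rates with a uniform Beta(1, 1) prior.
samples_orig = rng.beta(1 + conv_orig, 1 + visitors_orig - conv_orig, size=100_000)
samples_var = rng.beta(1 + conv_var, 1 + visitors_var - conv_var, size=100_000)

growth = samples_var / samples_orig - 1            # relative uplift versus the baseline
chance_to_win = (growth > 0).mean()                # probability the variation beats the baseline
low, median, high = np.percentile(growth, [2.5, 50, 97.5])

print(f"Chance to win: {chance_to_win:.1%}")
print(f"95% interval for growth: [{low:.1%}; {high:.1%}], median {median:.1%}")
```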
These segments show performance based on growth and chance to win. Segments on the right indicate strong performance, while those on the left indicate weaker performance. The length of a segment correlates with the amount of growth: longer segments on the right signify greater growth, and longer segments on the left signify lesser growth. Green segments denote the highest probability of success, whereas red segments indicate the lowest probability of success.
To learn more about the analysis process, read the dedicated article.
When enough data has been gathered for the campaign, the CAMPAIGN READY status is displayed on the table. Hovering it will give you a quick summary of the campaign and its Readiness.
Refer to the dedicated article for more details.
Cell Values in Reporting tables
When reviewing a reporting table in AB Tasty, you may encounter three types of cell values:
Figures
This represents an aggregated figure related to your metrics or statistics. A value of "0" is a possible figure and should be interpreted accordingly.
N/A (Not Available)
This indicates that the figure is intentionally not provided for that cell. This usually occurs because the campaign configuration doesn't allow for analysis.
Data can be exported. You can either export the Reporting data or the Raw data. The Reporting data export contains only data displayed in the reporting, whereas the Raw data export contains hits for all types of goals. For more information on data export, please refer to the dedicated article.
The secondary goals are listed just after the primary goal table view and offer the same features as the primary goal table view.
To learn more about campaign goals, read the dedicated article.
You can now access and view campaign reporting and full analysis of your data.
To learn more about advanced features of the reporting, read the dedicated article.
The Custom Views feature allows you to personalize your reporting experience by creating and selecting the metrics most relevant to your needs.
You can switch to different views by clicking on the labelled views button, on top of the table.
Click on the edit icon to manage your metric views. You can add or remove metrics, as well as create new views. You can also select which view is displayed by default for each goal at your user level.
When your campaign has one or more variations, you can choose a baseline against which your data will be compared. By default, the baseline is set to the original website. However, you can set it to any variation.
To change the baseline, select any variation from the Baseline drop-down list in the reporting header.
This will set the baseline indicator to the variation you chose and compare data to it.
The reporting can also be shared with non-AB Tasty users and will be available for 30 days.
To share the reporting, click on the Share reporting button in the reporting header.
Next, click on Create link
The reporting is refreshed in realtime if:
The campaign has fewer than 1,000 visitors (beyond that number of visitors, the real-time report no longer processes data)
The campaign was created no longer than 7 days ago
The campaign has started within the last 12 hours (so it only works for the first 12 hours after the start)
After that, the Reporting data update frequencies are as follows:
Scheduling is performed according to the "live" status, regardless of the QA status. A good practice is to clear the data after the QA to reset the scheduling.
Any live reporting that has not been opened for 20 days is no longer updated until it is reloaded.
You can reset the data at any time from the reporting page:
Click the three dots icon next to the campaign status, in the header.
Click the "Clear data" option.
Confirm your choice in the warning popup
In this section, you'll find useful information to learn how to read the different metrics you'll find in the AB Tasty reports, as well as the events on which they are based:
For more information and definitions about metrics, events and session/ visitor scopes, please refer to the article Data & Reports: generalities and definitions.
RevenueIQ: 793/1000 (≈ equivalent to MW). ~80% power.
Confidence interval (CI): RevenueIQ produces CIs of €8 width, which is reasonable and functional in the context of a real effect of €5. With an average CI width of €34, the t-test is totally ineffective.
CI coverage: The validity of the confidence intervals was verified. A 95% CI indeed has a 95% chance of containing the true effect value (i.e., €0 for AA tests and €5 for AB tests).
Decide via projected revenue (average gain, lower CI bound) rather than isolated p-values.

To learn more, read our RevenueIQ White paper
Example: If there is no traffic allocated to the original variant, metrics like "Chances to Win," "Confidence Interval," and "Growth" can still be calculated, but their values are not meaningful. In such cases, comparing the performance of two variants is not possible if one variant has no traffic.
No Data
If you encounter this value, try reloading the reporting page. If the issue persists, please contact your Customer Success Manager (CSM).
Campaign Live | Frequency
Up to 96 hours | Every hour
4 to 7 days | Every 4 hours
8 to 14 days | Every 8 hours
15 to 30 days | Every 24 hours
31 to 60 days | Every 48 hours
61 days and more | Every 168 hours (1 week)
After 181 days | No more updates

Various filters can be applied to the campaign reporting. They can be separated into several categories:
Date range and goals customizations, covered in the Campaign Reporting article
Data can be sorted per column. Sorting is done per variation, and then per filter segment.
As shown in the screenshot below, sorting the highest Unique visitors lists the variation with the highest Unique visitors first, and then sorts the segments inside this variation.
The same applies for filter groups:
This functionality enables you to export data related to your reporting goals. This gives you the ability to export the data into a CSV file to be combined with other pieces of data and imported into other tools.
For more information on the reporting, please refer to the dedicated article. You can export 2 types of data: Reporting data and Raw data.
The Export data button is available for each of the goals displayed in your reporting. 2 types of data can be exported: Raw data and Reporting data, including the following pieces of information:
The "Refresh on Demand" feature allows users to manually refresh their report statistics, providing up-to-date data without waiting for the next scheduled refresh. This feature is particularly useful for campaigns that have been running for more than 14 days and need immediate data updates.
This feature complements the real-time mode, which becomes active as soon as the campaign goes live and continues until either 12 hours have elapsed or 1,000 users have interacted with the campaign.
Evi Feedback helps you analyze customer feedback from NPS (Net Promoter Score®️) or CSAT (Customer Satisfaction Score) campaigns. It processes large volumes of comments, identifies key themes, and provides actionable insights. To ensure meaningful analysis, the system requires at least 100 comments.
Reporting data vs. Raw data:
Type of data
Reporting data: Aggregated performance statistics displayed for each goal in your reporting. Applying filters will also drive the data that is exported into the CSV file.
Raw data: List of hits used for the computation of the reporting data.
Goal
Reporting data: You can choose to export the data only for the goal you selected OR for all goals.
Raw data: All types of goals that are similar to the one you selected (if you export the data of an Action tracker, you will also see those of the other Action trackers).
Export method
Reporting data: CSV file downloaded directly into your browser.
Raw data: Sent as a CSV file to your email address (or several files for heavy extracts).
Type of goals included
Reporting data: All.
Raw data: Transaction indicators, Action Tracker, Page Tracker and Custom tracker.
Type of goals excluded
Reporting data: None.
Raw data: Browsing metrics (Bounce rate, Pages per session, Revisit rate).
Export frequency
Reporting data: Anytime.
Raw data: 10 extracts per goal per week (per account).
Filter(s) applied
Reporting data: The CSV file will only contain the filtered data.
Raw data: The CSV file will not take the applied filters into account and will contain only the non-filtered data.
For legal reasons (GDPR compliance), to be able to export data, you must give your consent for data export. To do so, enable the option via Settings > Advanced settings > Data export.


Log in to your AB Tasty account. Navigate to the reporting section, where your campaign data is displayed.
Ensure your campaign is older than 14 days. The refresh button is only available for campaigns that meet this criterion.
Click the "Refresh Now" button. A confirmation modal will appear asking if you want to refresh the data.
In the modal, click "Refresh Data Anyway" to proceed. This action will trigger the refresh process.
You can only manually refresh your data one time per campaign.
The modal will display a loading status while the refresh is in progress. Once completed, a success message will appear, and the report stats will be updated.
Check the "Results last updated" section to ensure the data has been refreshed. The updated time should reflect the recent refresh.
The "Refresh on Demand" feature is ideal for users who need to
Quickly access the latest data for decision-making without waiting for the next scheduled refresh.
Ensure data accuracy before presenting reports to stakeholders.
Evi Feedback begins by calculating the median score of the campaign. It then divides the comments into two groups:
Positive comments, which have scores above the median
Negative comments, which have scores below or equal to the median.
This separation helps in understanding the overall sentiment of the feedback.
Once the comments are categorized, Evi Feedback converts them into vectors, which are numerical representations that allow for efficient processing of large volumes of text. It then groups similar comments into clusters based on sentiment. Positive comments are clustered separately from negative comments, and each cluster represents a specific theme that emerges from the feedback.
To make the themes more understandable, Evi Feedback extracts key terms from each cluster, such as "price" or "expensive." It then assigns a theme name and a brief description based on the 10 most representative comments in each cluster. This ensures that each theme accurately reflects the main concerns or praises expressed by customers.
After identifying and naming the themes, Evi Feedback calculates the number of comments linked to each theme. The interface then displays the top three positive themes and the top three negative themes, providing a clear overview of the most significant feedback trends. Any remaining comments that do not fit into a specific theme are grouped under "uncategorized comments."
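To make the pipeline concrete, here is a heavily simplified sketch of the same steps (median split, vectorization, clustering, key-term extraction) using scikit-learn. It is illustrative only; the actual models, thresholds, and theme-naming step used by Evi Feedback are not shown here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize_feedback(scores, comments, n_themes=3):
    """Split comments by median score, then cluster each sentiment group
    into rough themes. Illustrative sketch only."""
    scores = np.asarray(scores)
    median = np.median(scores)
    groups = {
        "positive": [c for s, c in zip(scores, comments) if s > median],
        "negative": [c for s, c in zip(scores, comments) if s <= median],
    }
    themes = {}
    for sentiment, docs in groups.items():
        if len(docs) <= n_themes:
            continue  # not enough comments to cluster
        vectorizer = TfidfVectorizer(stop_words="english")
        vectors = vectorizer.fit_transform(docs)              # numerical representation of each comment
        labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)
        terms = np.array(vectorizer.get_feature_names_out())
        themes[sentiment] = []
        for k in range(n_themes):
            mask = labels == k
            centroid = np.asarray(vectors[mask].mean(axis=0)).ravel()
            themes[sentiment].append({
                "size": int(mask.sum()),                      # number of comments in the theme
                "key_terms": list(terms[centroid.argsort()[-3:][::-1]]),
            })
    return themes
```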
The initial processing of feedback can take up to two minutes. When you reload the page, the latest results are displayed automatically. If you want to update the analysis with the most recent comments, you can manually refresh the analysis, which will process all campaign comments again.
Each theme includes a name and a brief description summarizing the feedback. Additionally, the system provides the number of comments associated with each theme, helping you gauge the importance of different topics. The breakdown of positive and negative themes allows you to quickly identify areas of satisfaction and concern.
Evi Feedback helps you quickly identify key customer concerns and positive feedback trends, making it easier to take action based on real customer insights.
Net Promoter®, NPS®, NPS Prism®, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld. Net Promoter ScoreSM and Net Promoter SystemSM are service marks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld.
A metric is based on an event and helps to analyze the number of collected events (or their mean/average) and compare it to a baseline, generally the total number of unique visitors or the total number of sessions. For more information about definitions, please refer to the following article.
This event is sent via our tag from every page where the tag is displayed.
This event is universal and automatically sent: You don’t have to set up anything in your campaign to use a metric based on pageviews because these events are stored in the session history in our database.
This event is sent by the tag: The higher the generic tag is placed on the page, the faster the event is sent to our database. It means that some events can be sent even if the visitor changes their mind and doesn't wait for the end of the page load (goes back, clicks elsewhere, or closes the tab).
It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning “page seen” when the type of data is “visitor”.
In this case, it represents the number of visitors who have visited a certain page at least one time (if a unique visitor has seen it three times, the total number of pageviews will still be one).
It’s displayed as “total” in the reports, conversions meaning “page seen” when the type of data is “session”.
In this case, it represents the total number of visits to a certain page (if a unique visitor visits the same page three times, the total number of pageviews will be three).
It can be composed of:
A single page (e.g. homepage, basket page), declared with a specific and unique URL
Several pages (e.g. three specific product pages), declared with the three specific URLs
A series of equivalent pages (e.g. all the product pages), declared by a specific rule (regex or other)
etc.
The pageview conversion rate represents the percentage of unique visitors who have visited a specific page at least one time, versus the total traffic on the variation.
88 unique visitors have seen the page
Total traffic is 880
Conversion rate = 88 / 880 * 100 = 10%
The pageview conversion rate represents the percentage of pageviews (total of impressions of the page) versus the total number of sessions on the variation.
100 impressions have been performed
The total number of sessions is 900
Pageviews conversion rate = 100 / 900 * 100 = 11.11%
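The visitor/session distinction comes down to counting unique converters versus total hits. A small sketch on hypothetical hit data (all figures are made up):

```python
# Hypothetical pageview hits on the tracked page: (visitor_id, session_id).
hits = [("v1", "s1"), ("v1", "s1"), ("v1", "s2"), ("v2", "s3")]

total_visitors = 40     # unique visitors on the variation
total_sessions = 45     # sessions on the variation

unique_converters = len({visitor for visitor, _ in hits})    # v1 and v2 -> 2
total_pageviews = len(hits)                                   # 4

visitor_rate = unique_converters / total_visitors * 100       # 5.0 %
session_rate = total_pageviews / total_sessions * 100         # ~8.9 %
print(visitor_rate, session_rate)
```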
In a testing campaign, this metric compares two pageview conversion rates (at a visitor level or a session level) and helps to identify the best performer between the two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this article to learn how to change the baseline in a report.
EmotionsAI filter breakdowns allow you to filter the campaign reports based on 10 emotion segments. These segments will help you decide which types of elements/widgets you can add to increase your conversion rates.
You can apply filters and breakdowns together in the reporting tab. This enables deeper exploration, such as crossing EmotionsAI scores with visitor segments (e.g. "new visitors on Safari").
Filters can be applied before or after selecting a breakdown
Reports now support preloading with EmotionsAI segments, filters, or both
Breakdowns now highlight positive and negative segments when statistical confidence and readiness criteria are met
This unlocks a more flexible and insight-driven analysis workflow using EmotionsAI data.
To apply the breakdowns, click on any available:
You can also add filters within the breakdown
When you apply a breakdown, the opportunity view with segment highlights is selected by default so that you can access the opportunities data infographics. It highlights segmented results that have a high level of statistical significance, allowing you to quickly identify winners and losers at a glance.
Significance is reached at segment level when all 3 criteria are met: Conversion's Chance to win ≥ 95% for winners (or ≤ 5% for losers), number of unique visitors ≥ 5,000, and number of conversions ≥ 300.
You can switch to Detailed View to gain deeper insights into your campaigns. These insights include Audience Size and Transaction Gain (when the goal is transactional). They can help you optimize your campaigns, even when there are no winning or losing variations.
You can display a description of each column by clicking on the info icon in the column headers.
You can clear the applied breakdown by hovering on the breakdown and clicking on the Clear button.
To learn more about EmotionsAI segments criteria, read the dedicated article.
The Net Promoter Score®️ (NPS) Report in AB Tasty helps you analyze customer feedback collected through the NPS widget. This tutorial will guide you through accessing the report, understanding responses, and using Feedback Analysis Copilot.
After launching an NPS campaign, you can access the report to analyze the collected data.
Steps to Access the NPS Report:
Go to the Reporting section in AB Tasty.
In the left sidebar, click on the "NPS" tab (smiley icon), to view the report.
The NPS report categorizes responses into three groups:
Promoters (9-10): Loyal customers who are likely to recommend your brand.
Passives (7-8): Neutral customers who are satisfied but not enthusiastic.
Detractors (0-6): Unhappy customers who may spread negative feedback.
The NPS is calculated as: NPS = %Promoters − %Detractors. This score helps you gauge overall customer satisfaction and loyalty.
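The score itself is straightforward to compute from the raw 0-10 answers, for example:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 answers: %promoters - %detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Made-up answers: 5 promoters, 3 passives, 2 detractors -> NPS = 50 - 20 = 30
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))   # 30.0
```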
To help analyze qualitative feedback, AB Tasty uses AI-powered Evi Feedback (BETA). This feature groups comments based on sentiment and key themes.
How Evi Feedback works:
AI scans customer comments and identifies key terms.
Comments are categorized into positive, neutral, or negative sentiments.
The system groups similar feedback to highlight common themes.
This feature is more efficient than simple keyword matching because it considers the context of each comment.
To ensure meaningful analysis, the system requires at least 100 comments.
Click on Launch the analysis to access the Evi Feedback analysis.
It provides the top 3 satisfying and unsatisfying themes of the campaign. Evi Feedback also provides key takeaways: a list of meaningful keywords, the most relevant comments, and the feedback attributed to each theme. NB: Comments that are not considered valuable for the analysis are grouped under the "uncategorized comments" theme. This allows you to skip reading them, but still have the option to review their content if needed.
For more details on comment categorization, refer to the dedicated article.
To get a detailed view of customer feedback:
Go to the NPS Report in AB Tasty.
Scroll down to the "NPS®️ comments" section.
This allows you to quickly search for comments by variation, by specific text they contain, and by the period in which they were posted.
Net Promoter®, NPS®, NPS Prism®, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld. Net Promoter ScoreSM and Net Promoter SystemSM are service marks of Bain & Company, Inc., NICE Systems, Inc., and Fred Reichheld.

A metric is based on an event and helps to analyze the number of collected events (or their mean/average) and compare it to a baseline, generally the total number of unique visitors or the total number of sessions. For more information about definitions, please refer to the following article.
This event is sent via our tag from every page where the tag is displayed and an “action tracking” has been set up.
It’s displayed as “unique” in the column “unique conversions” in the reports, conversions meaning “click done” when the type of data is “visitor”.
The live hits screen enables you to instantly display hits on a campaign. This provides a global view of the actions carried out by users who visit your website. This service is only available on demand for a specific campaign and helps you perform the QA of your campaign. We recommend that you enable the on your campaign before clicking See live hits in your campaign report. This way, you can make sure information is forwarded correctly before launching your campaign and starting data collection.
The live hit feature is available in the left navigation panel, on the bottom.
When clicking See live hits, the request can take up to 30 seconds to be approved. The button then changes to Live hits ready.
When optimizing your conversion rate (CRO), it’s easy to rely on Average Order Value (AOV) as a key metric. However, AOV can be misleading and often results in incorrect conclusions if misinterpreted. Let's explore why this happens and how to use AOV effectively.
Average Order Value (AOV) is calculated by dividing the total value of orders by the number of orders. It’s a convenient metric to track how much customers are spending on average. Many CRO professionals use it to compare the performance of different A/B test variations. Unfortunately, focusing on AOV alone can lead to inaccurate insights.

In this case, it represents the number of clickers, the number of unique visitors who have clicked at least one time (if a unique visitor clicks three times, the total number of clicks will still be one)
It’s displayed as “total” in the reports, conversions meaning “click done” when the type of data is “session”.
In this case, it represents the total number of clicks (if a unique visitor clicks three times, the total number of clicks will be three)
The click rate or conversion rate represents the percentage of clickers (unique visitors who performed at least one click) on a certain element vs. the total traffic on the variation.
88 unique visitors have clicked
Total traffic is 880
Click rate = 88 / 880 * 100 = 10%
The click rate or conversion rate represents the percentage of clicks (all the clicks) on a certain element vs. the total number of sessions on the variation.
100 clicks have been performed
Total number of sessions is 900
Click rate = 100 / 900 * 100 = 11.11%
In a testing campaign, this metric compares two conversion rates (at a visitor level or at a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this article to learn how to change the baseline in a report.
You need to click on "View live hits" to access the window displaying the current hits. The live hits feature is only effective for 60 minutes. After this time, if your QA is not completed yet, you need to relaunch the service. Hits are displayed in the window for 10 minutes, after which they disappear.
The following types of hits can be displayed on the live hits screen:
Type of hit | Explanation
Event | Hits related to actions set up previously, during the goal configuration step (Action tracking, for instance)
Pageview | Hits related to all pages viewed by users
Transaction | Hits related to transaction actions carried out by users
Item | Hits related to each product purchase (one hit per product in a transaction)
NPS | Hits related to actions carried out on the NPS widget
Segment | Hits related to segments previously configured in your source code
Each type of hit is easily identified thanks to a visual label. When it is detected, the primary goal is highlighted on the page. Tracking down your primary goal should be your main point of interest when analyzing a campaign.
The live hits screen only displays up to 500 simultaneous hits on the campaign.
The live hits screen features a summary of the campaign information:
the campaign ID;
the campaign status (live or paused);
the number of hits received since the date of the first displayed hit;
a data refresh button;
The collection of the newest data is automated. When new hits have been received, the refresh button is available in the interface.
The table displaying hits includes the following information:
Information | Explanation
Hit | The type of hit received. You can get additional information about the value of the hit by hovering over its label (e.g. transaction revenue, event label, segment key)
Date and time | The date (dd/mm/yyyy format) and time the hit was received
Variation ID | The ID of the variation the action was carried out on
URL | The URL of the page the action was carried out on
Device | The type of device the user carried out the action on (PC, mobile or tablet)
Visitor ID | The ID of the visitor who carried out the action
Hits are displayed in chronological order, which means the most recent hit is found at the top of the screen. Once the variation has recorded 500 hits, the oldest hits must be deleted in order for more to be displayed. The system automatically deletes the oldest hits (particularly those carried out by visitors over 10 minutes ago). This means that when the number of hits received within a 10-minute period exceeds 500, this screen only displays a sample of the actual activity.
Imagine running an A/B test to compare two versions of your website. If just one customer makes an unusually large purchase, the AOV for that variation could spike significantly. This single outlier can make it seem like the variation is outperforming the other, even though it might not be true in a broader context. This issue is exacerbated in small sample sizes, but even large datasets are not immune.
Consider an A/B test where two subgroups are being compared:
Variation A has an AOV of €56.6, with 5 orders worth €50, €51, €30, €75, and €77.
Variation B has an AOV of €56.8, with 5 orders worth €52, €50, €40, €62, and €80, which is a minor difference.
At first glance, this small difference seems insignificant. Now, imagine that a new customer places an order worth €202 in Variation B. This dramatically increases the AOV of Variation B to €81, with 6 orders worth €52, €50, €40, €62, €80, and €202.
This spike could falsely suggest that Variation B is significantly better, even though it is due to just one high-value purchase. Similarly, the opposite can also occur: an outlier could make it appear that a variation is underperforming when there is no real effect, or even obscure a winning variation, making it look like a loser. This kind of misleading result is more likely to occur with smaller datasets, but even larger datasets can still be influenced by similar outliers. It's important to recognize that these effects can lead to incorrect conclusions in both directions, emphasizing the need for careful analysis.
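The example above can be replayed in a few lines to see how far a single order moves the average:

```python
from statistics import mean

variation_a = [50, 51, 30, 75, 77]
variation_b = [52, 50, 40, 62, 80]

print(mean(variation_a))     # 56.6
print(mean(variation_b))     # 56.8 -> only €0.2 apart

variation_b.append(202)      # one unusually large order arrives in Variation B
print(mean(variation_b))     # 81.0 -> now roughly €24 apart, driven by a single customer
```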
We will break down why relying on Average Order Value (AOV) can be misleading in conversion rate optimization (CRO). There are two key factors that impact the reliability of AOV as a metric:
The number of customers: This is a factor of measurement reliability. Generally, we expect that as the number of customers increases, our metrics should become more stable.
The magnitude of the maximum value: This is a factor of measurement imprecision. As the number of customers increases, the highest values in the dataset are also likely to increase, which can introduce instability into the metric.
The "law of large numbers" is often mentioned in this context as a reason to trust AOV when the number of customers is large enough. However, it is crucial to understand that increasing the number of customers does not always solve the problem. In fact, as the customer base grows, the maximum values in order size are also likely to grow, which means that these two factors effectively counterbalance each other. As a result, the AOV can remain unstable even with a larger dataset.
To illustrate this, let’s look at data collected from an e-commerce site. In the graph showing basket values:
The horizontal axis represents the number of basket values collected.
The vertical axis represents the maximum value found in the list of collected basket values.
From the graph, we see that as more data is collected, the maximum values tend to increase. This behavior can affect the stability of AOV. Next, let’s consider the evolution of the average value as we collect more data:
The horizontal axis represents the number of baskets collected, and the vertical axis represents the average value of the baskets collected.
At first glance, it may seem that the average value stabilizes quickly. However, upon closer inspection, it takes 20,000 customers to reach what seems like stabilization. Achieving 20,000 customers actually requires significantly more visitors due to conversion rates. For instance, with a typical conversion rate of 5%, you would need 400,000 visitors (calculated as 400,000 * 0.05 = 20,000). Therefore, to conduct an A/B test effectively, you would require a traffic volume equivalent to 800,000 visitors.
Even when it appears that AOV is stabilizing, this stabilization is often deceptive. If we zoom in on the data beyond 20,000 customers, we notice fluctuations of -€1 to +€1 as more data is collected.
This means that, even with a large number of data points, AOV can still vary significantly—by +/- €2 in total. In an A/B test context, this implies that a seemingly stable AOV could indicate a false difference between two variations, even when no real effect exists.
In other words, an A/A test—where there is no difference between the variations—could still indicate a difference in AOV of +/- €2 simply due to fluctuations. This makes it impossible to confidently measure any effect smaller than +/- €2 just by observing AOV.
This problem extends beyond AOV:
Revenue per Visitor (RPV): Since it divides the total order value by the number of visitors, RPV can also show large, misleading jumps, especially influenced by big spenders.
Total Revenue: Summing all cart values can be misleading, even without explicitly averaging. Comparing these large figures can lead to similar effects as averaging, particularly when the number of visitors to each variation is similar. Rare, high-value orders may not be evenly distributed across variations, creating an illusion of significant differences.
To make informed decisions, use statistical tools like the Mann-Whitney U test, specifically designed for data like AOV. This test helps you determine whether a difference in AOV between two groups is statistically significant by comparing the distributions of order values in each group, rather than relying solely on raw AOV numbers, which can be misleading due to outliers. Unlike a simple average comparison, the Mann-Whitney U test evaluates whether the overall ranking of values differs significantly between the two groups, making it more robust in handling variations caused by extreme values.
It’s not uncommon for the results from the Mann-Whitney test to contradict the simple AOV measurements—for example, indicating a loss even when the AOV appears to show a gain. This is why statistical validation is crucial.
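A quick way to check this yourself is to run the test on the raw order values instead of comparing averages. The sketch below uses SciPy and made-up data in which both variations receive the same orders except for one exceptional €5,000 purchase in B:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

orders_a = rng.lognormal(mean=4.0, sigma=1.0, size=400)     # made-up order values
orders_b = np.append(orders_a.copy(), 5_000.0)              # same orders plus one €5,000 outlier

print(orders_a.mean(), orders_b.mean())   # the single outlier pushes B's raw AOV up by ~€12

stat, p_value = mannwhitneyu(orders_a, orders_b, alternative="two-sided")
print(p_value)   # well above 0.05: no evidence of a real difference between the distributions
```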
While AOV is a useful metric, relying solely on it in A/B testing can lead to erroneous conclusions due to its susceptibility to outliers and inherent fluctuations. Always back AOV analysis with appropriate statistical testing to ensure your CRO decisions are based on stable, meaningful data.

Readiness is an indicator that lets you know when your campaign or goal has reached statistical reliability and is therefore ready to be analyzed.
Readiness is available in the reporting of both your test and personalization campaigns.
The readiness status is also displayed in the campaign dashboards to help you identify more easily which campaigns you can pause and analyze. The color of the access report CTA changes according to the readiness level.
Campaign readiness is always based on the readiness of the Primary goal you’ve chosen for your campaign.
Readiness is available for each goal you have selected for your campaign: goal readiness
Readiness for goals is also used in our reporting to help you analyze your results.
The campaign readiness indicator is based on the campaign's primary goal performance. When the primary goal is ready, meaning that it has reached the required number of days, conversions, and visitors, the campaign is considered ready and reliable as well. We recommend waiting for your campaign to be ready before analyzing its results.
This indicator is displayed in the campaign dashboards to help you identify more easily which campaigns you can pause and analyze.
The readiness indicator is based on the 3 following metrics: campaign duration, number of unique visitors, and number of conversions.
Good to know 💡
For browsing metrics (Revisit rate, Pages per session, and Bounce rate), the conversion metric is not taken into account in readiness calculation, meaning that goal readiness is based on days and the number of unique visitors only.
The readiness is available for each reporting indicator, on the top right.
The label informs you about the readiness status. Hovering over the (i) displays more details about the readiness calculation and the status of each metric.
We recommend waiting for each goal to be ready before analyzing its results.
There are 4 readiness statuses, which enable you to know whether the campaign is ready for analysis at a glance.
Variations that have less than 1% of the traffic won’t be taken into account in the readiness calculation.
When filtering your reporting data, the readiness of the filtered data is displayed in addition to the goal readiness, to let you know if the reporting data are also ready to be analyzed.
The calculation is the same and based on campaign duration, traffic volumetry, and number of conversions.
Once these criteria have been reached, the readiness of the filtered data turns blue, meaning that filtered data is ready to be analyzed. You will also see a blue-striped banner at the left of the Unique visitors card to inform you that filters have been applied to the reporting.
The navigation metrics are based on browsing indicators. They help you understand your visitor journey.
The navigation metrics are available for tests and personalizations and must be selected in the Goals step of the campaign flow to appear in the reports.
A “bounce” is recognized and sent via our tag each time a visitor lands on a targeted page and decides to leave the website immediately after having seen the tested page.
A visitor can only bounce once.
This event is sent via our tag from every targeted page(s).
The bounce rate represents the percentage of unique visitors who have bounced in a campaign.
It's available on the type of data: session only.
100 visitors have bounced on the targeted page(s) during their session(s).
Total number of unique visitors in the test variation is 1,000
Bounce rate = 100 / 1,000 * 100 = 10%
The Exit Rate metric lets you measure how often users leave your site during or after exposure to a campaign. This helps you assess the real impact of your A/B tests on user retention and engagement.
It helps you:
Spot variations that drive users to exit prematurely
Monitor exit behavior on key pages like /cart, /pricing, or landing pages.
Add behavioral depth to your A/B test results — beyond conversions.
A pageview is counted each time a tracked page is viewed (based on your setup using contains, exact match, or regex).
A pageview is flagged as isExit = true when the user leaves the website from that page, without navigating to another internal page.
Let’s say you create an Exit Rate tracker with the condition:
URL contains /cart
Then the following happens:
You visit the /cart page 3 times during different sessions
You leave the site from that page once
The result will be: 3 pageviews on /cart, 1 of them flagged as an exit, i.e. an exit rate of 1/3 (about 33%).
Later, if you return directly to /cart and exit again, it becomes: 4 pageviews and 2 exits, i.e. an exit rate of 2/4 (50%).
This gives you a clear view of how likely users are to abandon the site from a critical page like the cart — which is particularly useful for e-commerce optimization.
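Under that definition, the exit rate for a tracked page is simply the number of exit-flagged pageviews divided by all pageviews of that page. A small sketch on a hypothetical log (the isExit field name follows the description above):

```python
# Hypothetical pageview log for pages matching "URL contains /cart".
pageviews = [
    {"url": "/cart", "isExit": False},
    {"url": "/cart", "isExit": False},
    {"url": "/cart", "isExit": True},    # left the site from /cart
    {"url": "/cart", "isExit": True},    # came back directly to /cart later and exited again
]

exits = sum(1 for pv in pageviews if pv["isExit"])
exit_rate = exits / len(pageviews) * 100
print(f"Exit rate on /cart: {exit_rate:.0f}%")   # 2 exits / 4 pageviews = 50%
```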
Each time a page is loaded on the visitor's browser, AB Tasty counts it. All the browsing data is stored in the session history in our database.
The metric average number of viewed pages per visit is the average quantity of pages that have been viewed per session on the entire perimeter where the AB Tasty tag is placed, starting from when the visitor is assigned to a campaign.
Visitor A has visited the website 3 times during the campaign: session #1 for 5 viewed pages, session #2 for 6 viewed pages, and session #3 for 7 viewed pages.
Visitor B has visited the website 2 times during the campaign: session #1 for 10 viewed pages, session #2 for 12 viewed pages
Average viewed pages = (5+6+7+10+12) / 5 = 8
In a campaign, this metric compares two amounts of average viewed pages and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this article to learn how to change the baseline in a report.
The number of revisits represents the number of unique visitors who have triggered the campaign at least two times in at least two distinct sessions.
The revisit rate is the percentage of unique visitors who have triggered the campaign at least two times in at least two unique sessions.
100 unique visitors have visited the targeted page(s)
80 unique visitors have visited the targeted page(s) only once
20 unique visitors have visited the targeted page(s) at least two times
Revisit rate = 20 / 100 * 100 = 20%
In a campaign, this metric compares two percentages of revisit rates to help identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this article to learn how to change the baseline in a report.
To deep dive into campaign reporting data analysis and get refined results based on certain criteria, you can apply filters to your reporting, by choosing from a list of filters. The AB Tasty reporting lets you easily compare filtered data with non-filtered data, which enables you to make statistically accurate decisions. This concerns all types of goals and metrics.
You can select which filter you want to apply by clicking on the Filter button in the reporting header.
You can then select from a list of filter categories.
Once applied, your reporting will update, indicating:
The number of filters applied
The name of the filter. If several filters have been applied, they will be grouped
The percentage of visitors concerned by this filter.
You can add up to two filter groups per report. The percentage of visitors per filter group is indicated as a pie chart in the reporting header.
To add filter groups, click on the Filters button and select your first filter(s). You can then click on Add filter group to create the second filter group that will be used to compare against the first.
This is ideal when you need to compare your campaign performance per goal.
Common filters used for comparisons are:
Device: Mobile Vs Desktop
Browser: Chrome Vs Firefox
Loyalty: New visitors Vs Returning visitors
You can create filter presets that can be reused later. This will help you save time and get the same filters applied to different data.
In the filter menu, click on the Filter presets tab and click on New preset filter. The Filter presets tab will also show you the number of existing presets.
You can apply up to two filter groups by default in a report.
To set a default Filter preset, click on the star icon that appears when hovering over the Filter preset.
Clicking the star icon again will remove the Filter preset as the default preset.
You can clear all filters by clicking on Filters -> Clear and close
Once a filter is applied, you can expand the results' breakdown by clicking on the Expanding arrow icon (pointing down):
You can also collapse the breakdown using the Collapsing arrow (pointing up).
Here is a common case where you may need to filter your reporting data:
Let’s say you implement an A/B test to analyze the performance of a new user flow for all users, with no device-specific design. Once the test is ready to be analyzed, you notice that one variation is performing better than the other. However, you want to know if this is also the case for mobile users. In this case, you can filter by Device > Mobile and compare the filtered data side by side.
While our primary framework is Bayesian, some users prefer Frequentist analysis. This option uses the same core data and offers a different statistical perspective.
AB Tasty recommends avoiding Frequentist analysis for A/B testing because its results are harder to interpret. Nevertheless, we've created a feature allowing users to use it if necessary.
Please note that some views, including the opportunities view, are not available in Frequentist mode.
Ask your CSM to activate the feature on your account.
Switch from Bayesian stat engine to Frequentist stat engine
Access your campaign.
In AOV gain p-value and in Conversion gain p-value, you will see:
A blue artifact and stats font if the p-value ≤ 0.05, highlighting that the test results provide potential learnings.
An orange artifact and stats font when the p-value ≥ 0.95, highlighting that the test results provide no learnings.
Frequentist analysis uses the exact same conversion metrics as Bayesian tests, including inverse metrics like bounce rate.
Confidence Intervals remain a core output. They are computed differently but maintain the same format and meaning, indicating the range within which the true effect likely lies.
The p-value is the central concept in Frequentist analysis, replacing the Bayesian "Chance To Win (CTW)".
Definition: The p-value is the probability of observing a difference at least as large as the one measured in your experiment if, in reality, there were no difference between your variations (A & B).
Interpretation:
A lower p-value indicates a higher chance that a real difference exists between A & B.
Unlike Bayesian CTW, the p-value does not use red/green color coding for uplift/loss.
Example: A low p-value suggests a high chance of a real difference, but this could be either an uplift or a loss.
Standard Significance: The industry standard significance threshold is a p-value of <0.05. This indicates a statistically significant difference has been found.
This is equivalent to a Bayesian CTW >95% for an uplift, or <5% for a loss.
Emotion AI Segments: For Emotion AI segments, a p-value of <0.10 combined with a Confidence Interval showing a positive gain (i.e., the measured gain is >0) corresponds to a 90% Bayesian CTW.
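For readers who want to reproduce the logic, here is a hedged sketch of a classic two-proportion z-test together with the 0.05 significance threshold mentioned above. It is only an illustration: the test AB Tasty's Frequentist engine runs internally may differ.

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value of a two-proportion z-test (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_p_value(conv_a=100, n_a=1000, conv_b=130, n_b=1000)
print(f"p-value = {p:.4f}, significant at 5%: {p < 0.05}")
# The p-value alone does not tell you the direction: check the confidence interval
# (or the sign of the observed difference) to know whether it is an uplift or a loss.
```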
In order to ensure accurate and reliable results from A/B testing experiments, it is important to use the right sampling approach and understand the implications of different samples. Statistical sampling is the process of selecting a subset of data from a larger population of data in order to make inferences about the population. In the context of A/B testing, sampling involves randomly selecting visitors from the larger population and directing them to the different variations of a test. This allows for an unbiased comparison of the different variations, as well as a more accurate assessment of the impact of the changes.
Once the sampling approach has been determined, it's important to monitor the progress of the experiment and keep an eye out for any potential sample ratio mismatches (SRMs).
Sample ratio mismatch is an issue that can occur in A/B testing experiments, where the expected traffic allocation between variations does not match the observed visitor numbers. This mismatch can be caused by several factors, including the technical issues described below.
The redirection might take too long or crash at some point, preventing the visitor from landing on the variation
Performance differences for users who suffer from the extra loading of the redirection
Bots that are leaving just after being redirected
Direct link to the variation URL shared across media (email, social media, etc.)
This cause isn’t applicable with AB Tasty, as the service verifies whether the user comes from a redirection or not.
Otherwise, only your variation would be impacted, which could cause an SRM
The impact of SRM depends on the size of the difference between the expected ratio and the observed ratio, as well as the total number of visitors observed. When an SRM problem is detected, it's important to understand the size of the issue and the cause of the problem, in order to be able to correct the issue before restarting the experiment.
To enhance efficiency, a Sequential algorithm layer has been integrated into the SRM, resulting in the S-SRM. This addition enables real-time alert detection, allowing you to take immediate action.
This feature is currently part of an Early Adopters program. Please contact your CSM to benefit from it.
The Sequential approach follows a frequentist methodology, relying on p-value principles for detection. The chosen significance threshold is 0.01.
This advancement addresses two critical limitations of traditional SRM systems:
Elimination of manual monitoring delays — With classic SRM, relying on self-initiated checks often results in delayed detection, potentially allowing issues to escalate beyond manageable levels.
Prevention of post-experiment disappointment — The S-SRM eliminates the scenario where problems are discovered only during final analysis, preventing the frustration of having to restart experiments from scratch.
By implementing the S-SRM, you ensure that anomalies are detected as early as possible, minimizing delays and optimizing decision-making.
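To make the idea concrete, here is a sketch of a classic (non-sequential) SRM check based on a chi-square goodness-of-fit test, using SciPy. It is only an illustration of the principle, not AB Tasty's S-SRM algorithm; the 0.01 threshold is borrowed from the significance level mentioned above.

```python
from scipy.stats import chisquare

def srm_check(observed_visitors: list[int], expected_ratios: list[float], alpha: float = 0.01) -> bool:
    """Return True when a Sample Ratio Mismatch is suspected."""
    total = sum(observed_visitors)
    expected = [total * r for r in expected_ratios]
    _, p_value = chisquare(f_obs=observed_visitors, f_exp=expected)
    return p_value < alpha

# A 50/50 test that received 5,210 vs 4,790 visitors: the imbalance is unlikely to be random.
print(srm_check([5210, 4790], [0.5, 0.5]))  # True -> investigate before trusting the results
```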
Debugging Sample Ratio Mismatch (SRM) issues can be challenging. However, based on our experience and insights from Trustworthy Online Controlled Experiments by Ron Kohavi, Diane Tang, and Ya Xu, here are some key steps to help you diagnose and resolve SRM problems effectively:
Check your redirection performance
Verify experiment allocation consistency: confirm that the allocation of users has remained stable and hasn’t been altered over time.
Investigate specific segments: filter your report by key dimensions (e.g., browser, operating system, language) to determine if the issue is isolated to a particular segment. To learn more about filters and how to use them, please read our dedicated article on reporting filters.
Analyze the period before the SRM alert: identify whether the issue has been present since the beginning of the test or if it emerged later. Then, review your website/product metrics to check for any performance degradation that could explain the anomaly.
Check other experiments: assess whether the SRM issue affects all your experiments or is limited to this particular test.
Once you have identified the root cause, you can confidently proceed with your experimentation roadmap without disruptions.
At AB Tasty, we proactively monitor Sample Ratio Mismatch (SRM) alerts across all active campaigns—over 15,000 at any given time—to detect trends and ensure system reliability. Our analysis focuses on whether the total number of SRM alerts is increasing over time. Over the past six months, our data has consistently shown that the SRM alert rate remains below 1%, confirming that SRM issues are rare.
These findings indicate that SRM discrepancies are not inherently linked to AB Tasty’s system but rather stem from external factors beyond our control. By continuously tracking and analyzing these alerts, we ensure that our platform remains robust, reliable, and optimized for accurate experimentation.
The Tracker Status feature provides real-time visibility into the health of your trackers (goals/indicators) across all campaigns both at the reporting level and at the tracker level.
Tracker Status automatically monitors tracker activity and alerts you to potential issues. This feature helps you quickly identify and resolve issues with trackers, ensuring reliable data and trustworthy test results.
At the reporting level: each goal in a campaign report displays a status tag, indicating if it is collecting data as expected.
At the tracker level: the tracker list page shows a status for each tracker, summarizing its health across all campaigns.
Tracker Details View: When you click on a tracker to open a detailed view, the “Tracker Activity” tab lists all campaigns using the tracker, with a status for each campaign. It shows:
The current health status of the tracker
To help users quickly identify which trackers need attention, we’ve added cumulative status indicators directly in the tracker listing page.
Next to each tracker name, you’ll now see a status tag that reflects its global health, aggregated across all the campaigns where it’s currently used.
Campaign duration (days): the campaign must be live for at least 14 days.
Traffic volumetry (visitors): at least 5,000 unique visitors must see each variation.
Number of conversions: at least 300 unique conversions must take place on each variation.
👀 What you see / 🏁 Status / 📝 Explanation
No data (No data): your campaign and/or goal hasn’t collected any data yet, is in QA, or has never been launched.
Ready (Ready to be analyzed): your campaign and/or goal is statistically reliable. That is to say, it has been live for more than 14 days and has had enough visitors and conversions on the primary goal. You can start analyzing the results of your campaign.
Not ready (Not ready to give reliable results yet): your campaign and/or goal is not statistically reliable yet, either because there have not been enough visitors and/or conversions, or because your campaign has been live for less than 14 days. We recommend waiting before analyzing the results of your campaign.
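As a recap of the readiness rule above (14 days, 5,000 visitors and 300 conversions per variation), here is a minimal sketch; the function and thresholds simply restate the criteria listed above, and the exact checks AB Tasty performs may differ.

```python
def campaign_status(days_live: int, visitors_per_variation: list[int], conversions_per_variation: list[int]) -> str:
    """Rough readiness check mirroring the thresholds listed above."""
    if not visitors_per_variation or sum(visitors_per_variation) == 0:
        return "No data"
    ready = (
        days_live >= 14
        and all(v >= 5000 for v in visitors_per_variation)
        and all(c >= 300 for c in conversions_per_variation)
    )
    return "Ready to be analyzed" if ready else "Not ready to give reliable results yet"

print(campaign_status(21, [6200, 6150], [340, 310]))  # Ready to be analyzed
print(campaign_status(10, [2100, 2050], [120, 95]))   # Not ready to give reliable results yet
```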
Select the Frequentist engine.
If you want to apply it to all reports of the account, check "Apply to all reportings". If you don't, this statistical analysis will only apply to the campaign you are currently browsing.
Click on "Apply".
“Confidence interval” switches to “Confidence interval” (Frequentist one)
“Chance to win” switches to “p-value”
AOV changes (transaction goals only):
“Chance to win” switches to “p-value” (Frequentist one)
A high (non-significant) p-value typically occurs when:
The observed difference is based on a very small number of visitors.
The observed difference is very small, even with a large number of visitors.
Significance vs. Direction:
The p-value measures only the significance of a difference, not its direction (whether it's an uplift or a downlift).
The direction of the effect is determined by the Confidence Interval.
The list of campaigns where the tracker is used
Hits received or not, broken down by campaign
Non-live campaign: the campaign is not live (paused, stopped, or draft), so no hits are expected. Play your campaign to collect the data.
Hits received: the tracker received data in the last 24 hours (healthy). You're all set!
No hits in last 24h: the tracker has not received data in the last 24 hours and might be broken or misconfigured. To troubleshoot your goal, review the tracker's configuration and the selector it is based on. Don't forget to update the tag after any changes on a live campaign!
No hits on X campaign(s): the tracker is used in at least one live campaign, but at least one campaign using it shows "No hits in last 24h". Review the tracker's configuration and the selector it is based on. It's possible that the source code of your page has changed and that the selector no longer targets an existing element. Don't forget to update the tag after any changes on a live campaign!
No hits yet: the campaign is live, but this tracker hasn’t received a hit since launch (new or misconfigured tracker).
Cumulative statuses on the tracker listing page:
Hits received: the tracker is used in at least one live campaign and all live campaigns show "Hits received". 🟩 Everything’s healthy!
No hits on X campaign(s): at least one campaign using this tracker shows "No hits in last 24h". 👉 The status indicates the number of campaigns affected by a dead tracker. 🟥 This is the highest-priority warning.
No hits yet: no campaign has received hits and all campaigns show "No hits yet". 👉 Useful for new campaigns or trackers awaiting traffic.
Non-live campaign: the tracker is only used in non-live campaigns, so no hit checks are performed.
If the event is firing, the data should be displayed soon in the reporting. If the event is not firing, review the tracker's configuration and the selector it is based on.
A metric is based on an event and helps to analyze the number of collected events (or their mean/average) and compare it to a baseline, generally the total number of unique visitors or the total number of sessions. For more information about definitions, please refer to the following article.
In this article, you'll find detailed information about all AB Tasty transactional metrics:
You need to set up the transaction tag on your website; it collects transactions from your checkout page. For more information, please refer to the dedicated article.
It’s triggered each time a visitor performs a transaction on the website and is normally sent from your checkout/validation page.
AB Tasty receives the event transaction but also all the information added in the transaction tag:
transactionRevenue
affiliations
paymentMethods
currencies
customVariableNames
productCategories
All this information is useful to calculate metrics and also to filter your report on specific purchases.
This data is displayed twice in the reports, with different definitions and calculations:
Total transactions: total number of transactions/single purchases performed during the campaign for each variation
Unique transactions: total number of different buyers (unique visitors who have performed at least one transaction). In this example, as the unique transactions and total transactions columns are not equal, we conclude that some unique visitors have purchased more than once during the campaign, in both the original and variation groups. In the reporting, select metric view: Raw data
AB Tasty variable : transactions
In the reporting, select:
metric view: Overview or
metric view: Transaction detailed data
The transaction rate is always automatically calculated at the visitor level:
AB Tasty variable: transactionUserConversionRate
In the reporting, select: metric view: Transaction detailed data
In a test campaign, this metric compares two transaction rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See the Campaign reporting article to learn how to change the baseline in a report.
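A minimal sketch of the visitor-level transaction rate and its relative growth against the baseline, using made-up numbers:

```python
def transaction_rate(unique_buyers: int, unique_visitors: int) -> float:
    """Visitor-level transaction rate: unique visitors with at least one transaction / unique visitors."""
    return unique_buyers / unique_visitors

original = transaction_rate(unique_buyers=230, unique_visitors=10000)   # 2.30%
variation = transaction_rate(unique_buyers=246, unique_visitors=10000)  # 2.46%

relative_growth = (variation - original) / original * 100
print(f"{original:.2%} -> {variation:.2%}, growth = {relative_growth:+.2f}%")  # about +6.96%
```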
In the reporting, select: metric view: Overview
The average order value is calculated on all the recorded purchases in the variation.
20 different transactions recorded, for a total amount of $10,000.
Average order value = $10,000/ 20 = $500
In a testing campaign, this metric compares two average order values and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
In the reporting, select: metric view: Average order value detailed data
Average order value variation - average order value baseline
Average order value original = $154.20
Average order value variation = $153.90
Average order value growth = $153.90 - $154.20 = -$0.30
In the reporting, select: metric view: Average order value detailed data
Average product quantity is calculated on all the recorded purchases in the variation.
Total number of items purchased in all transactions / number of transactions
Total number of transactions = 153
Number of purchased items = 298
Average product quantity = 298 / 153 ≈ 1.95
In the reporting, select: metric view: Average order value detailed data
Average price of a purchased item per variation
Total revenue: $10,000
Number of purchased items: 298
Average product price = $10,000 / 298 ≈ $33.56
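The averages above can be sketched from a list of transactions as follows; the Transaction structure and sample data are hypothetical and only illustrate the formulas.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    revenue: float   # transactionRevenue sent by the transaction tag
    item_count: int  # number of items in the purchase

def order_metrics(transactions: list[Transaction]) -> dict[str, float]:
    revenue = sum(t.revenue for t in transactions)
    items = sum(t.item_count for t in transactions)
    return {
        "average_order_value": revenue / len(transactions),
        "average_product_quantity": items / len(transactions),
        "average_product_price": revenue / items,
    }

# 20 transactions totalling $10,000 -> average order value = $500 (as in the example above)
sample = [Transaction(revenue=500.0, item_count=2) for _ in range(20)]
print(order_metrics(sample))
```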
In the reporting, select: metric view: Revenue detailed data
This is the revenue generated by each variation (turnover = sum of all transaction values).
Please consider the variable you use to capture the amount of a purchase when you installed your transaction tag: this amount does not have to include delivery fees or taxes.
In the reporting, select: metric view: Revenue detailed data
This is the difference between the revenue of a variation compared to the revenue of the baseline (original)
Revenue variation - revenue original
In the reporting, select: metric view: Revenue detailed data
This is the hypothetical amount that could have been earned if 100% of the campaign traffic had been assigned to the variation (assuming the same behavior in terms of transaction rate and average order value).
For an A/B Test with only one variation:
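As an illustration (the exact formula used in the report may differ), the projection can be sketched by applying the variation's transaction rate and average order value to the total traffic of the campaign; the numbers below are made up.

```python
def potential_revenue_at_full_traffic(total_campaign_visitors: int,
                                      variation_transaction_rate: float,
                                      variation_average_order_value: float) -> float:
    """Hypothetical revenue if all campaign traffic had seen the variation."""
    projected_transactions = total_campaign_visitors * variation_transaction_rate
    return projected_transactions * variation_average_order_value

# 20,000 visitors in the whole campaign, variation converts at 2.46% with a $154 average order value
print(round(potential_revenue_at_full_traffic(20000, 0.0246, 154.0), 2))  # 75768.0
```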
AB Tasty gives you the capability to create different types of campaigns, and regarding their type, you’ll need to follow metrics to make the right decisions.
Test campaigns are based on a hypothesis: is the evolution idea I have in mind better for my website (whatever the decision metric) than the current product version?
They need at least one primary goal - one main metric to follow - to make a decision.
This is the purpose of a test: to be able to base the final decision on specific and reliable data. Secondary goals are there to double-check that there are no critical collateral impacts.
📎 For testing activities, implementing events/trackers and following metrics based on them is mandatory. 📎 The more goals you select for your campaign, the more detailed information you will have, but the harder the decision-making will be.
Personalization campaigns are not based on testing a hypothesis. Their objective is to push what you think is the best message to the best audience segment.
As you already expect your customized content to help your visitors convert more and adopt the behavior you want to boost, following the related metrics is not mandatory, even if it is advised.
For Personalization activities, implementing events/trackers and following metrics based on them is recommended to keep an eye on the general performance of your website. Personalization initiatives might also be the result of a deeper analysis of an A/B Test campaign's results: the filter features might highlight higher performance for certain traffic attributes (device, loyalty, etc.).
Patch campaigns are designed to push a fix to your website in seconds. The objective is to deploy fast, for all the traffic, while waiting for a hardcoded and more definitive fix.
For Patch activities, the follow-up of the performance is not relevant.
In the AB Tasty platform, especially in the reporting, you’ll encounter different terms that need to be defined.
An event is a simple interaction between a visitor and your website.
It can be:
A click
A hover
A pageview
A transaction
A scroll
The number of seconds on a page
A form-filling
A validation
An upload/download
An element that arrives on the visible screen area (above the fold), etc.
By gathering and adding up the various events, at the level of a single session, several sessions, and the traffic as a whole, we can build specific trackers and metrics.
Tracking events are the base of every analytics tool and constitute the primary material to build metrics.
There are two ways to count events:
At a unique visitor level - unique count. This means we count a visitor only once, even if they trigger a specific event twice or more. In this case, we remember that the visitor did the action, versus the other visitors who didn’t. It’s a boolean way to count events.
At a session level - multiple count. This means we count every occurrence, so a visitor who triggers a specific event N times counts N times. In this case, we can follow the frequency of an event and calculate an average of performed events per session.
These will be useful to know if you need to check your metrics at a unique visitor level (to track the percentage of visitors that have done a certain action vs. those who did nothing) or at a session level (to track the frequency of an event).
A metric is based on an event and helps to analyze the number of collected events (or their mean/average) and compare it to a baseline, generally the total number of unique visitors or the total number of sessions.
A metric is a calculation, specifically:
Metric = number or average number of events that occurred / number of visitors or sessions
Metrics are useful to challenge a recorded number of events relative to the total number of occasions to perform them. You’ll find:
Click rates aka Action Tracking
Pageviews
Scroll rate
Average time spent on page
Bounce rate
Average number of viewed pages, etc.
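A minimal sketch of the two counting methods applied to the same click events, with hypothetical identifiers and totals:

```python
# (visitor_id, session_id) pairs for each recorded "click" event (hypothetical data)
click_events = [("v1", "s1"), ("v1", "s1"), ("v1", "s2"), ("v2", "s3")]

unique_visitors_total = 10
sessions_total = 12

# Unique count: a visitor who clicked twice or more still counts once
unique_converters = len({visitor for visitor, _ in click_events})
click_rate_visitor_level = unique_converters / unique_visitors_total

# Multiple count: every event counts, giving a frequency per session
click_rate_session_level = len(click_events) / sessions_total

print(f"{click_rate_visitor_level:.0%} of visitors clicked, "
      f"{click_rate_session_level:.2f} clicks per session on average")
```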
A goal is a metric that you will follow as a priority throughout your campaign, guiding you to make a decision at the end of your campaign. This is one of your main objectives.
To set up your campaign goals, please refer to the dedicated article.
When you create a campaign, you will have a hypothesis: “Changing this element will positively impact the visitor’s behavior by helping them to perform more of this specific action.”
e.g. Changing the color of a CTA from red to blue will be more calming, so visitors will click more.
The Primary Goal should be the metric based on the event that will be most impacted by your change. For example, any change on a specific block can have a direct effect on the click event on this element, or on the time spent on the page, depending on the nature of the change (add some digest content, highlight an action, etc.).
So you will have to create a tracker based on this event, in order to generate the calculation of a metric you will choose as your primary goal for your reporting.
Tip: Choose the metric that seems obvious in terms of cause and effect. A change on a button > possibly more events "Click" > Click Rate
Your final decision should not be based on a secondary goal, especially since the link between the change on the website and its effect on an indirect event is not proven.
For example, we can’t be certain that a modification on a CTA on the product page will have a direct impact on the transaction rate, as the event "transaction" is too far removed from the modification (it might be 3 or 4 pages away from the event and the goal, which is not close enough to be certain). Still, it can be interesting to create and follow relevant secondary goals, including:
Keeping track of the most important metrics for your business, such as the transaction rate if your business is an e-commerce website
Deciding between two variations in a test campaign: if the two variations are the same in terms of Primary Goal results, the Secondary Goals can help to find the best option
Definition: grouping of hits received from the same unique visitor.
The session is closed after 30 minutes of inactivity, or every night at 2am depending on the time zone.
The session view calculates and displays metrics based on all collected events, through all sessions.
In the reporting, select type of data: session:
AB Tasty variable: sessionID
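A minimal sketch of the 30-minute inactivity rule described above (the nightly reset at 2 a.m. is not modeled here; the function is purely illustrative):

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

def is_same_session(last_hit: datetime, new_hit: datetime) -> bool:
    """A new hit belongs to the current session only if it arrives within 30 minutes of the last one."""
    return new_hit - last_hit < SESSION_TIMEOUT

last = datetime(2024, 5, 1, 14, 0)
print(is_same_session(last, datetime(2024, 5, 1, 14, 20)))  # True: still the same session
print(is_same_session(last, datetime(2024, 5, 1, 14, 45)))  # False: a new session starts
```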
To be recognized as a unique visitor, each website visitor is given a unique visitorID by AB Tasty. The visitorID is then stored in the AB Tasty cookie.
The visitor view calculates and displays metrics based on visitors that have performed an event linked to the tracker at least once throughout their sessions.
In the reporting, select type of data: visitor:
AB Tasty variable: visitorID
Some widgets enable you to add trackers to your campaigns:
Track if your visitors have reached a pre-defined scroll percentage on a page.
Track the time your visitors spend on a page by creating a duration goal.
Track your visitor’s scroll progression on a page based on elements made visible on the visitor's screen (viewport).
Statistical indicators characterize the observed eligible metrics for each variation, as well as the differences between variations for the same metrics. They allow you to make informed decisions for the future based on a proven Bayesian statistical tool.
When you observe a raw growth of X%, the only certainty is that this observation has taken place in the past in a context (time of year, then current events, specific visitors, …) that won’t happen in the future in the exact same way again.
By using statistical indicators to reframe these metrics and associated growth, you get a much clearer picture of the risk you are taking when modifying a page after an A/B test.
Statistical indicators are displayed with the following metrics:
All “action trackers” growth metrics (click rate, scroll tracking, dwell time tracking, visible element tracking)
Iframe Click Tracking widget: Track clicks in an iframe. Track clicks on Facebook Like buttons, Google AdSense ads, YouTube videos, or any other iframe. Clicks must not be consecutive in order to be counted. This means that in order to count a second click, the user must click elsewhere before clicking on the iframe again. This prevents reporting results from being distorted by repetitive clicks and spam.
In this article, we provide more details on the trackers built using these widgets.
This event is sent if you have added the widget scroll rate tracking to your campaign and a visitor reaches the defined percentage of scrolling during their navigation on the targeted page(s).
This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting article).
AB Tasty considers this type of event as an action tracker (label in the report/same type of hit stored in the database).
It’s displayed as “unique” in the column “unique conversions” when the type of data is “visitor”. It represents the number of unique visitors that have reached the percentage of scroll on the targeted page(s) at least once throughout their session(s).
In this case, it represents the number of visitors who have reached the defined percentage of scroll during their sessions at least one time, on the targeted page(s) of the campaign (if a unique visitor has reached the scroll level three times, the total will still be one).
It’s displayed as “total” in the column "Total conversions" when the type of data is “session”. It represents the total number of events “percentage of scroll reached” on the targeted page(s) on all session(s).
In this case, it represents the total number of events when the percentage of scroll has been reached, on the targeted pages(s) of the campaign (if a unique visitor has reached the scroll level three times, the total will be three).
In this case, the scroll conversion rate represents the percentage of scrollers (unique visitors who performed the level of scroll) on the targeted page(s) versus the total traffic on the variation.
Calculation
Number of unique visitors who reached the scroll percentage / Total number of unique visitors x 100
Example
88 unique visitors have scrolled
Total traffic is 880
Percentage of scroll rate = 88 / 880 * 100 = 10%
In this case, the scroll conversion rate represents the percentage of sessions where the percentage scroll has been reached on the targeted page(s) versus the total number of sessions on the variation.
Calculation
Number of sessions where the scroll percentage was reached / Total number of sessions x 100
Example
The scroll percentage has been reached 120 times
Total number of sessions is 1,000
Percentage of scroll rate = 120 / 1,000 * 100 = 12%
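Both calculations above follow the same pattern, sketched below with the numbers from the examples; the same logic applies to the dwell time, visible element, and iframe click trackers described in the next sections. The helper function is hypothetical.

```python
def conversion_rate(conversions: int, population: int) -> float:
    """Generic tracker rate: conversions / population * 100."""
    return conversions / population * 100

# Visitor level: 88 unique scrollers out of 880 unique visitors
print(conversion_rate(88, 880))    # 10.0

# Session level: the scroll percentage was reached 120 times across 1,000 sessions
print(conversion_rate(120, 1000))  # 12.0
```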
In a testing campaign, this metric compares two percentages of scroll conversion rates (at a visitor level or a session-level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See the Campaign reporting article to learn how to change the baseline in a report.
This event is sent if you have added the widget “dwell time tracking” to your campaign and a visitor reaches the defined number of seconds on a targeted page during their navigation.
This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting: Where section). AB Tasty considers this type of event action tracker (label in the report or the same type of hit stored in the database).
It’s displayed as “unique” in the column “unique conversions” when the type of data is “visitor”. It represents the number of unique visitors that have reached the minimum time defined on the targeted page(s) at least once throughout their session(s).
In this case, it represents the number of visitors who have reached the defined time on the page during their sessions at least one time, on the targeted page(s) of the campaign (if a unique visitor reaches the time on the page three times, it will only count as one time).
It’s displayed as “total” in the column "Total conversions" when the type of data is “session”. It represents the total number of events “minimum time reached” on the targeted page(s) on all session(s).
In this case, it represents the total number of events when the defined time on the page has been reached on the targeted page(s) of the campaign (if a unique visitor reaches the defined time on the page three times, it counts as three times).
In this case, the time-on-page conversion rate represents the percentage of visitors (unique visitors who performed the time-on-page objective) on the targeted page(s) versus the total traffic on the variation.
Calculation
Number of unique visitors who reached the time-on-page objective / Total number of unique visitors x 100
Example
88 unique visitors have reached the time on page objective
Total traffic is 880
Time-on-page conversion rate = 88 / 880 * 100 = 10%
In this case, the time on page rate represents the percentage of sessions where the time on page has been reached on the targeted page(s) versus the total number of sessions on the variation.
Calculation
Number of sessions where the time on page was reached / Total number of sessions x 100
Example
The time on page has been reached 120 times
Total number of sessions is 1,000
Percentage of time on page = 120 / 1,000 * 100 = 12%
In a testing campaign, this metric compares two “time on page reached” conversion rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See the campaign reporting article to learn how to change the baseline in a report.
This event is sent if you have added the widget “visible element tracking” to your campaign and a visitor has seen (in the visible part of their screen) the defined element during their navigation.
This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting article).
AB Tasty considers this type of event as an action tracker (label in the report/same type of hit stored in the database).
It’s displayed as “unique” in the column “unique conversions” when the type of data is “visitor”. It represents the number of unique visitors that have seen the defined element on the targeted page at least once throughout their session(s).
In this case, it represents the number of visitors who have seen the defined element on the page during their sessions at least 1 time, on the targeted page(s) of the campaign (if a unique visitor has viewed the element on the page three times, it will only count as one view).
It’s displayed as “total” in the column "Total conversions" when the type of data is “session”. In this case, it represents the total number of events when the defined element on the page has been seen, on the targeted page(s) of the campaign (if a unique visitor has viewed the defined element on the page three times, it will count as three views).
In this case, the visible element conversion rate represents the percentage of viewers (unique visitors who have seen the element on the screen) on the targeted page(s) versus the total traffic on the variation.
Calculation
Number of unique visitors who have seen the element / Total number of unique visitors x 100
Example
88 unique visitors have seen the element
Total traffic is 880
Visible Element conversion rate = 88 / 880 * 100 = 10%
In this case, the visible element conversion rate represents the percentage of sessions where the element has been seen on the targeted page(s) versus the total number of sessions on the variation.
Calculation
Number of sessions where the element was seen / Total number of sessions x 100
Example
The element has been seen 120 times
Total number of sessions is 1,000
Visible element conversion rate = 120 / 1,000 * 100 = 12%
In a testing campaign, this metric compares two percentages of element visible conversion rates (at a visitor level or a session-level) and helps to identify the best performer between 2 variations (the variation is compared to the baseline aka original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See this article to learn how to change the baseline in a report.
The Iframe Click Tracking widget enables you to record clicks in an iframe (an HTML element that lets you display content from another web page - the same website or another one).
For example, YouTube videos are displayed in an iframe when you embed them in your website. It is pretty much the same thing for any web app that lets you embed a part of or the full content of a page. Generally speaking, except for websites that forbid it, any web page can be embedded in an iframe.
This event is sent via our tag from every page where the campaign containing the widget is triggered (see Targeting step article).
AB Tasty considers this type of event as an action tracker (label in the report/same type of hit stored in the database).
It’s displayed as “unique” in the column “unique conversions” when the type of data is “visitor”. It represents the number of unique visitors that have clicked on the iframe at least once throughout their session(s) (if a unique visitor clicks several times, it will only count as one).
It’s displayed as “total” in the reports, in the column “total conversions” when the type of data is “session”. In this case, it represents the total number of clicks in the iframe (if a unique visitor clicks 3 times, it will count as three).
In this case, the Iframe tracking rate represents the percentage of unique visitors who performed at least one event Iframe tracking on the targeted page(s) versus the total traffic on the variation.
Calculation
Number of unique visitors who clicked in the iframe / Total number of unique visitors x 100
Example
88 unique visitors have clicked in the Iframe
Total traffic is 880
Iframe tracking rate = 88 / 880 * 100 = 10%
In this case, the Iframe tracking rate represents the ratio between the number of events on the Iframe versus the total number of sessions on the variation.
Calculation
Total number of click events on the iframe / Total number of sessions x 100
Example
We have recorded 120 clicks on iframes across sessions
Total number of sessions is 1,000
Iframe tracking rate = 120 / 1,000 * 100 = 12%
In a testing campaign, this metric compares two “Iframe tracking” conversion rates (at a visitor level or a session level) and helps to identify the best performer between two variations (the variation is compared to the baseline, which is the original version).
The growth metrics are always displayed on all variations except on the one which is used as the baseline. See the Campaign reporting article to learn how to change the baseline in a report.
Transaction growth metrics (except average product quantity, price, and revenue)
Bounce rate growth
Revisit rate growth
Statistical indicators are not displayed with the following metrics:
Transaction growth metrics for average product quantity, price, and revenue
Number of viewed pages growth
Lastly, statistical indicators are only displayed on visitor metrics and not on the session metrics. The former are generally the focus of optimizations and, as a consequence, our statistical tool was designed with them in mind and is not compatible with session data.
These indicators are displayed on all variations, except on the one used as the baseline. See this Campaign reporting guide to learn how to change the baseline in a report.
The confidence interval indicator is based on the Bayesian test. The Bayesian statistical tool calculates the confidence interval of a gain (or growth), as well as its median value. They enable you to understand the extent of the potential risk related to putting a variation into production following a test.
Where to find the confidence interval
How to read and interpret the confidence interval
Our Bayesian test stems from the calculation method developed by mathematician Thomas Bayes. It is based on known events, such as the number of conversions on an objective in relation to the number of visitors who had the opportunity to reach it, and provides as we have seen above a confidence interval on the gain as well as its median value. Bayesian tests enable sound decision-making thanks to nuanced indicators that provide a more complete picture of the expected outcome than a single metric would.
In addition to the raw growth, we provide a 95% confidence interval.
“95%” simply means that we are 95% confident that the true value of the gain is situated between the two values at each end of the interval.
👉 Why not 100%?
In simple terms, it would lead to a confidence interval of infinite width, as there will always be a risk, however minimal.
“95%” is a common statistical compromise between precision and the timeliness of the result.
The remaining 5% is the error, equally divided below and above the low and high bounds of the interval, respectively. Please note that, of those 5%, only 2.5% would lead to a worse outcome than expected. This is the actual business risk.
👉As seen previously, the confidence interval is composed of three values: the lower and higher bounds of the interval, and the median.
Median growth vs Average growth:
These values can often be very close to one another, while not matching exactly. This is normal and shouldn’t be cause for concern.
In the following example, you can see that the variation has a better transaction rate than the original: 2.46% vs. 2.3%. The average growth is about +6.89%.
Zooming in on confidence interval visualization, we see the following indicators:
Median growth: 6.88%
Lower-bound growth: 0.16%
Higher-bound growth: 14.06%
An important note is that every value in the interval has a different likelihood (or chance) to actually be the real-world growth if the variation were to be put in production:
The median value has the highest chance
The lower-bound and higher-bound values have a low chance
👉 Summarizing:
Getting a value between 0.16% and 14.06% in the future has a 95% chance of happening
Getting a value inferior to 0.16% has a 2.5% chance of happening
Getting a value superior to 14.06% has a 2.5% chance of happening
👉Going further, this means that:
If the lower-bound value is above 0%: your chances to win in the future are maximized, and the associated risk is low;
If the higher-bound value is under 0%: your chances to win in the future are minimized, and the associated risk is high;
If the lower-bound value is under 0% and the higher-bound value above 0%, your risk is uncertain. You will have to judge whether or not the impact of a potential future negative improvement is worth the risk, if waiting for more data has the potential to remove the uncertainty, or if using another metric in the report for the campaign to make a decision is possible.
Heads up⚡️ In any case, AB Tasty provides these Bayesian tests and statistical metrics to help you to make an informed decision, but can’t be responsible in case of a bad decision. The risk is never null in any case and even if the chance to lose is very low, it doesn’t mean that it can’t happen at all.
This metric is another angle on the confidence interval. It answers the question: “What are my chances of getting a strictly positive growth in the future with the variation I’m looking at?” (or a strictly negative growth for inverse metrics such as the bounce rate, which should be as low as possible).
The chance to win enables a fast result analysis for non-experts. The variation with the biggest improvement is shown in green, which simplifies the decision-making process.
The chance to win indicator enables you to ascertain the odds of a strictly positive gain on a variation compared to the original version. It is expressed as a percentage. When the chance to win is higher than 95%, the progress bar turns green.
As in any percentage of chances that is displayed in betting, it gives you a focus on the positive part of the confidence interval.
The chance to win metric is based on the same Bayesian test as the confidence interval metric. See the section about Bayesian tests in the confidence interval section.
This metric is always displayed on all variations except on the one which is used as the baseline. See the Campaign reporting guide to learn how to change the baseline in a report.
Where to find the chance to win
In the “Statistics” tab for non-transactional metrics
In the detailed view of transactional metrics
How to read and interpret the chance to win
This index assists with the decision-making process, but we recommend reading the chance to win in addition to the confidence intervals, which may display positive or negative values.
The chance to win can take values between 0% and 100% and is rounded to the nearest hundredth.
If the chance to win is equal to or greater than 95%, this means the collected statistics are reliable and the variation can be implemented with what is considered to be low risk (5% or less).
If the chance to win is equal to or lower than 5%, this means the collected statistics are reliable but the variation shouldn’t be implemented, as the risk of it underperforming is considered high (95% or more).
If the chance to win is close to 50%, it means that the results seem “neutral” - AB Tasty can’t provide a characteristic trend to let you make a decision with the collected data.
👉 What does this mean?
The closer the value is to 0%, the higher the odds of it underperforming compared to the original version, and the higher the odds of having confidence intervals with negative values.
At 50%, the test is considered “neutral”, meaning that the difference is below what can be measured with the available data. There is as much chance of the variation underperforming compared to the original version as there is of it outperforming the original version. The confidence intervals can take negative or positive values. The test is either neutral or does not have enough data.
The closer the value is to 100%, the higher the odds of recording a gain compared to the original version. The confidence intervals are more likely to take on positive values.
Good to know 💡
If the chance to win displays 0% or 100% in the reporting tool, these figures are rounded (up or down). A statistical probability can never equal exactly 100% or 0%. It is, therefore, preferable to display 100% rather than 99.999999% to facilitate report reading for users.
Bonferroni correction
The Bonferroni correction is a method that involves taking into account the risk linked to the presence of several comparisons/variations.
In the case of an A/B Test, if there are only two variations (the original and Variation 1), it is estimated that the winning variation may be implemented if the chance to win is equal to or higher than 95%. In other words, the risk incurred does not exceed 5%.
In the case of an A/B test with two or more variations (the original version, Variation 1, Variation 2, and Variation 3, for instance), if one of the variations (let’s say Variation 1) performs better than the others and you decide to implement it, this means you are favoring this variation over the original version, as well as over Variation 2 and Variation 3. In this case, the risk of loss is multiplied by three (5% multiplied by the number of “abandoned” variations).
A correction is therefore automatically applied to tests featuring one or more variations. Indeed, the displayed chance to win takes the risk related to abandoning the other variations into account. This enables the user to make an informed decision with full knowledge of the risks related to implementing a variation.
Good to know: When the Bonferroni correction is applied, there may be inconsistencies between the chance to win and the confidence interval displayed in the confidence interval tab. This is because the Bonferroni correction does not apply to confidence intervals.
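As an illustration of the principle (the exact correction AB Tasty applies may differ), a Bonferroni-style adjustment divides the accepted 5% risk by the number of abandoned variations, which raises the chance-to-win threshold required to call a winner:

```python
def bonferroni_threshold(base_risk: float = 0.05, abandoned_variations: int = 1) -> float:
    """Chance-to-win threshold once the 5% risk is shared across all abandoned variations."""
    corrected_risk = base_risk / abandoned_variations
    return 1 - corrected_risk

print(bonferroni_threshold(abandoned_variations=1))  # 0.95 for a simple A/B test
print(bonferroni_threshold(abandoned_variations=3))  # about 0.983 when three variations are abandoned
```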
Examples
✅ Case #1: High chance to win
In this example, the chosen goal is the revisit rate in the visitor view. The A/B Test includes three variations.
The conversion rate of Variation 2 is 38.8%, compared to 20.34% for the original version. Therefore, the increase in conversion rate compared to the original equals 18.46%.
The chance to win displays 98.23% for Variation 2 (the Bonferroni correction is applied automatically because the test includes three variations). This means that Variation 2 has a 98.23% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 1.8%, which is a low risk.
Because the chance to win is higher than 95%, Variation 2 may be implemented without incurring a high risk.
However, to find out the gain interval and reduce the risk percentage even more, we would need to also analyze the advanced statistics based on the Bayesian test.
✅ Case #2: Neutral chance to win
If the test displays a chance to win around 50% (between 45% and 55%), this can be due to several factors:
Either traffic is insufficient (in other words, there haven't been enough visits to the website and the visitor statistics do not enable us to establish reliable values)
In this case, we recommend waiting until each variation has clocked 5,000 visitors and a minimum of 500 conversions.
Or the test is neutral because the variations haven't shown an increase or a decrease compared to the original version: This means that the tested hypotheses have no effect on the conversion rate.
In this case, we recommend referring to the confidence interval tab. This will provide you with the confidence interval values. If the confidence interval does not enable you to ascertain a clear gain, the decision will have to be made independently from the test, based on external factors (such as implementation cost, development time, etc.).
✅ Case #3: Low chance to win
In this example, the chosen goal is the CTA click rate in visitor view. The A/B Test is made up of a single variation.
The conversion rate of Variation 1 is 14.76%, compared to 15.66% for the original version. Therefore, the conversion rate of Variation 1 is 5.75% lower than the original version.
The chance to win displays 34.6% for Variation 1. This means that Variation 1 has a 34.6% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 65.4%, which is a very high risk.
Because the chance to win is lower than 95%, Variation 1 should not be implemented: the risk would be too high.
In this case, you can view the advanced statistics to make sure the confidence interval values are mostly negative.
An AB Tasty session begins when a visitor first accesses a page on the website and a cookie named ABTastySession does not exist. To determine if a current session is active, the code checks for the presence of this cookie. If the cookie exists, a current session is active. If the cookie is not present, a new session is initiated.
A session ends when a visitor remains inactive on the website for 30 minutes or more. This inactivity is tracked regardless of whether the website is open in a tab or not. Once the session ends, the ABTastySession cookie is removed, and all data stored in the cookie is lost and will not be reused in the browser.
For example:
A visitor comes to the website, visits 2 pages, and closes their browser. 30 minutes later, the session will end.
A visitor comes to the website, visits 2 pages, and closes their tab. 30 minutes later, the session will end.
A visitor comes to the website, visits 2 pages, and stays on the second page for more than 30 minutes. The session will end.
The ABTastySession cookie contains useful information to assist the tag in functioning. The cookie stores:
mrasn data: data filled by the tag during a redirection campaign when the "Mask redirect parameters" feature is activated.
lp (landing page) data: the URL of the first page of the website viewed by the visitor during their current session.
sen (session event number) data: the number of ariane hits sent since the beginning of the session.
Referrer data: the value of the document.referrer variable on the first page viewed by the visitor during their current session. This data is only available when the targeting criteria "source" or "source type" is used in an active campaign.
The cookie is only added to the browser if the tag is permitted to do so based on the "restrict cookie deposit" feature. The cookie cannot be moved to another type of storage, unlike the ABTasty cookie.
Statistical indicators characterize the observed eligible metrics for each variation, as well as the differences between variations for the same metrics. They allow you to make informed decisions for the future based on a proven Bayesian statistical tool.
When you observe a raw growth of X%, the only certainty is that this observation has taken place in the past in a context (time of year, then current events, specific visitors, …) that won’t happen in the future in the exact same way again.
By using statistical indicators to reframe these metrics and associated growth, you get a much clearer picture of the risk you are taking when modifying a page after an A/B test.
Statistical indicators are displayed with the following metrics:
All “action tracking” growth metrics (click rate, scroll tracking, dwell time tracking, visible element tracking)
Pageviews growth metrics
Transaction growth metrics (except average product quantity, price, and revenue)
Bounce rate growth
Revisit rate growth
Statistical indicators are not displayed with the following metrics:
Transaction growth metrics for average product quantity, price, and revenue
Number of viewed pages growth
Lastly, statistical indicators are only displayed on visitor metrics and not on the session metrics. The former are generally the focus of optimizations and, as a consequence, our statistical tool was designed with them in mind and is not compatible with session data.
These indicators are displayed on all variations, except on the one used as the baseline.
The confidence interval indicator is based on the Bayesian test. The Bayesian statistical tool calculates the confidence interval of a gain (or growth), as well as its median value. They enable you to understand the extent of the potential risk related to putting a variation into production following a test.
In the reporting, the confidence interval is visible in the "statistics" metric view:
Our Bayesian test stems from the calculation method developed by mathematician Thomas Bayes. It is based on known events, such as the number of conversions on an objective in relation to the number of visitors who had the opportunity to reach it, and provides as we have seen above a confidence interval on the gain as well as its median value. Bayesian tests enable sound decision-making thanks to nuanced indicators that provide a more complete picture of the expected outcome than a single metric would.
In addition to the raw growth, we provide a 95% confidence interval.
“95%” simply means that we are 95% confident that the true value of the gain is situated between the two values at each end of the interval.
👉 Why not 100%?
In simple terms, it would lead to a confidence interval of infinite width, as there will always be a risk, however minimal.
“95%” is a common statistical compromise between precision and the timeliness of the result.
The remaining 5% is the error, equally divided below and above the low and high bounds of the interval, respectively. Please note that, of those 5%, only 2.5% would lead to a worse outcome than expected. This is the actual business risk.
👉As seen previously, the confidence interval is composed of three values: the lower and higher bounds of the interval, and the median.
Median growth vs Average growth:
These values can often be very close to one another, while not matching exactly. This is normal and shouldn’t be cause for concern.
In the following example, you can see that the variation performs better than the original, with an average growth of about 5.34%.
Zooming in on confidence interval visualization, we see the following indicators:
Median growth: 5.38%
Lower-bound growth: 0.13%
Higher-bound growth: 10.84%
An important note is that every value in the interval has a different likelihood (or chance) to actually be the real-world growth if the variation were to be put in production:
The median value has the highest chance
The lower-bound and higher-bound values have a low chance
👉 Summarizing:
Getting a value between 0.13% and 10.84% in the future has a 95% chance of happening
Getting a value inferior to 0.13% has a 2.5% chance of happening
Getting a value superior to 10.84% has a 2.5% chance of happening
👉Going further, this means that:
If the lower-bound value is above 0%: your chances to win in the future are maximized, and the associated risk is low;
If the higher-bound value is under 0%: your chances to win in the future are minimized, and the associated risk is high;
If the lower-bound value is under 0% and the higher-bound value above 0%, your risk is uncertain. You will have to judge whether or not the impact of a potential future negative improvement is worth the risk, if waiting for more data has the potential to remove the uncertainty, or if using another metric in the report for the campaign to make a decision is possible.
The smaller the interval, the lower the level of uncertainty: at the beginning of your campaign, the intervals will probably be spaced out. Over time, they will tighten until they stabilize.
In any case, AB Tasty provides these Bayesian tests and statistical metrics to help you to make an informed decision, but can’t be responsible in case of a bad decision. The risk is never null in any case and even if the chance to lose is very low, it doesn’t mean that it can’t happen at all.
This metric is another angle on the confidence interval. It answers the question: “What are my chances of getting a strictly positive growth in the future with the variation I’m looking at?” (or a strictly negative growth for inverse metrics such as the bounce rate, which should be as low as possible).
The chance to win enables a fast result analysis for non-experts. The variation with the biggest improvement is shown in green, which simplifies the decision-making process.
The chance to win indicator enables you to ascertain the odds of a strictly positive gain on a variation compared to the original version. It is expressed as a percentage. When the chance to win is higher than 95%, the progress bar turns green.
As in any percentage of chances that is displayed in betting, it gives you a focus on the positive part of the confidence interval.
The chance to win metric is based on the same Bayesian test as the confidence interval metric.
This metric is always displayed on all variations except on the one which is used as the baseline.
In the reporting, the chance to win is visible in the "statistics" metric view:
This index assists with the decision-making process, but we recommend reading it together with the confidence intervals, which may display positive or negative values.
The chance to win can take values between 0% and 100% and is rounded to the nearest hundredth.
If the chance to win is equal to or greater than 95%, this means the collected statistics are reliable and the variation can be implemented with what is considered to be low risk (5% or less).
If the chance to win is equal to or lower than 5%, this means the collected statistics are reliable and the variation shouldn’t be implemented: the chance of a positive gain is 5% or less, which is considered a high risk.
If the chance to win is close to 50%, it means that the results seem “neutral” - AB Tasty can’t provide a characteristic trend to let you make a decision with the collected data.
👉 What does this mean?
The closer the value is to 0%, the higher the odds of it underperforming compared to the original version, and the higher the odds of having confidence intervals with negative values.
At 50%, the test is considered “neutral”, meaning that the difference is below what can be measured with the available data. There is as much chance of the variation underperforming compared to the original version as there is of it outperforming the original version. The confidence intervals can take negative or positive values. The test is either neutral or does not have enough data.
The closer the value is to 100%, the higher the odds of recording a gain compared to the original version. The confidence intervals are more likely to take on positive values.
If the chance to win displays 0% or 100% in the reporting tool, these figures are rounded (up or down). A statistical probability can never equal exactly 100% or 0%. It is, therefore, preferable to display 100% rather than 99.999999% to facilitate report reading for users.
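As a complement, here is a hedged Python sketch (not AB Tasty's code) that turns the thresholds described above into a small helper; the function name and wording are ours.

def interpret_chance_to_win(ctw: float) -> str:
    """ctw is the chance to win in %, as displayed in the report."""
    if ctw >= 95:
        return "Reliable: the variation can be implemented with low risk (5% or less)."
    if ctw <= 5:
        return "Reliable: do not implement, the chance of a positive gain is 5% or less."
    if 45 <= ctw <= 55:
        return "Neutral or not enough data: no clear trend can be provided yet."
    return "Inconclusive: keep collecting data or read the confidence intervals."

print(interpret_chance_to_win(98.23))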
There are two kinds of statistical tools depending on the type of data analyzed:
For conversion data, corresponding to the notion of success and failure rates, we use a Bayesian framework. Typical data is the act of purchasing, reaching a given page, or consenting to subscribe to a newsletter. This framework gives us a chance-to-win index and a confidence interval for the estimated gain.
For transaction data, like the cart value, we use the Mann-Whitney U test which is robust to "extreme" values. This test does not provide a confidence interval, so it only tells if the average cart value goes up or down, but no information is given about the estimated gain.
For clicks data, we use a Bayesian framework where clicks are represented as binomial distributions, whose parameters are the number of tries and a success rate. In the digital experimentation field, the number of tries is the number of visitors and the success rate is the click or transaction rate. In this case, it is important to note that the rates we are dealing with are only estimates based on a limited number of visitors. To model this limited accuracy, we use Beta distributions (which are the conjugate priors of binomial distributions).
These distributions model the likelihood of a success rate measured on a limited number of trials.
Let’s take an example:
1,000 visitors on A with 100 successes
1,000 visitors on B with 130 successes
We build the model
Ma = beta(1+success_a,1+failures_a)
Where success_a = 100, and failures_a = visitors_a – success_a =900.
(Note: the 1+ comes from the fact that this distribution can also have another shape and then model a different type of process.)
For the three following graphs, the horizontal axis is the click rate while the vertical axis is the likelihood of that rate knowing that we had an experiment with 100 successes in 1,000 trials.
We observe that 10% is the most likely, 5% or 15% are doubtful, and 11% is half as likely as 10%.
The model Mb is built the same way with data from experiment B:
Mb = beta(1+130, 1+870)
For B, the most likely rate is 13%, and the width of the curve’s shape is close to the previous curve.
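If you want to reproduce this construction, here is a minimal sketch using SciPy (our own code, assuming the Beta(1 + successes, 1 + failures) models described above):

from scipy import stats

visitors_a, success_a = 1000, 100
visitors_b, success_b = 1000, 130

Ma = stats.beta(1 + success_a, 1 + (visitors_a - success_a))  # beta(101, 901)
Mb = stats.beta(1 + success_b, 1 + (visitors_b - success_b))  # beta(131, 871)

# The most likely rate (the mode of each curve) equals the observed rate.
print(success_a / visitors_a, success_b / visitors_b)  # 0.1 and 0.13
# The density shows how quickly the likelihood drops around the mode.
print(Ma.pdf(0.10), Ma.pdf(0.11))  # 11% is noticeably less likely than 10%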
Then we compare A and B rate distributions.
We see an overlapping area, around a 12% conversion rate, where both models have a similar likelihood.
To estimate the overlapping region, we need to sample from both models to compare them.
We draw samples from distributions A and B:
s_a[i] is the i-th sample from A
s_b[i] is the i-th sample from B
Then we apply a comparison function to these samples:
The relative gain: g[i] = 100 * (s_b[i] – s_a[i]) / s_a[i] for all i.
It is the difference between the possible rates for A and B, relative to A (multiplied by 100 for readability in %).
We can now analyze the samples g[i] with a histogram:
We see that the most likely value of the gain is around 30%.
The yellow line shows where the gain is 0, meaning no difference between A and B. Samples below 0 (to the left of this line) correspond to cases where A > B, and samples on the other side are cases where A < B.
We then define the gain chances to win as:
CW = (number of samples > 0) / total number of samples
With 1,000,000 (10^6) samples for g, we have 982,296 samples that are > 0, making B > A approximately 98% probable.
We call this the “chances to win” or the “gain probability” (the probability that you will win something).
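The whole sampling procedure can be sketched in a few lines of Python with NumPy (an illustration of the method described above, not AB Tasty's engine):

import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

s_a = rng.beta(1 + 100, 1 + 900, size=n_samples)  # samples from model Ma
s_b = rng.beta(1 + 130, 1 + 870, size=n_samples)  # samples from model Mb

g = 100 * (s_b - s_a) / s_a  # relative gain samples, in %

chance_to_win = (g > 0).mean()
print(round(100 * chance_to_win, 2))  # close to 98%, as in the example above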
Using the same sampling method, we can compute classic analysis metrics like the mean, median, percentiles, etc.
Looking back at the previous chart, the vertical red lines indicate where most of the blue area is; intuitively, they show which gain values are the most likely.
We have chosen to expose a best and worst-case scenario with a 95% confidence interval. It excludes 2.5% of extreme best and worst cases, leaving out a total of 5% of what we consider rare events. This interval is delimited by the red lines on the graph. We consider that the real gain (as if we had an infinite number of visitors to measure it) lies somewhere in this interval 95% of the time.
In our example, this interval is [1.80%; 29.79%; 66.15%], meaning that it is quite unlikely that the real gain is below 1.8 %, and it is also quite unlikely that the gain is more than 66.15%. And there is an equal chance that the real rate is above or under the median, 29.79%.
It is important to note that, in this case, a 1.80% relative gain is quite small, and is maybe not worth implementation, at least not yet, even if the best-case scenario is very appealing (66%). This is why, in practice, we suggest waiting for at least 5000 visitors per variation before one calls a test "ready", to obtain a smaller confidence interval.
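The interval itself is obtained from the same samples; this standalone sketch (our own code) reads the 2.5th, 50th and 97.5th percentiles of the gain distribution:

import numpy as np

rng = np.random.default_rng(0)
s_a = rng.beta(101, 901, size=1_000_000)  # Beta(1 + 100, 1 + 900)
s_b = rng.beta(131, 871, size=1_000_000)  # Beta(1 + 130, 1 + 870)
g = 100 * (s_b - s_a) / s_a               # relative gain samples, in %

low, median, high = np.percentile(g, [2.5, 50, 97.5])
print(round(low, 2), round(median, 2), round(high, 2))
# Roughly [1.8; 29.8; 66.2], in line with the [1.80%; 29.79%; 66.15%] example.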
For data like transaction values, we use the Mann-Whitney U test for its nice property of being robust to extreme values.
A few customers placing very large orders can raise a variation's average order value even though they are not significant in number. Imagine that an A/B test contains 10 extreme values (say, 10 customers who each spend 20 times the average order value). Since the assignment is purely random, the chance that these 10 visitors are not evenly split between A and B is quite high. This implies a noticeable difference between the average order values of A and B, but that difference may not be statistically significant because of the small number of visitors concerned.
So it is important to trust the chance to win provided by this statistical test. It's not uncommon to see an observed average order value going up while the statistics say that the chance to win is below 50%, showing the opposite trend. The reverse may also happen: an observed negative trend for the average cart value can belong to a winning variation if the chance to win is above 95%.
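For illustration, here is how such a comparison can be run with SciPy's Mann-Whitney U test on simulated cart values (a hedged sketch: the data is made up, and the exact decision rule AB Tasty derives from the test is not shown here):

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
carts_a = rng.lognormal(mean=4.00, sigma=0.5, size=2000)  # simulated cart values, original
carts_b = rng.lognormal(mean=4.05, sigma=0.5, size=2000)  # simulated cart values, variation

stat, p_value = mannwhitneyu(carts_b, carts_a, alternative="greater")
print(stat, p_value)  # a small p-value suggests B's cart values tend to be higher than A's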
The Bonferroni correction is a method that involves taking into account the risk linked to the presence of several comparisons/variations.
In the case of an A/B Test, if there are only two variations (the original and Variation 1), it is estimated that the winning variation may be implemented if the chance to win is equal to or higher than 95%. In other words, the risk incurred does not exceed 5%.
In the case of an A/B test with two or more variations (the original version, Variation 1, Variation 2, and Variation 3, for instance), if one of the variations (let’s say Variation 1) performs better than the others and you decide to implement it, this means you are favoring this variation over the original version, as well as over Variation 2 and Variation 3. In this case, the risk of loss is multiplied by three (5% multiplied by the number of “abandoned” variations).
A correction is therefore automatically applied to tests featuring one or more variations. Indeed, the displayed chance to win takes the risk related to abandoning the other variations into account. This enables the user to make an informed decision with full knowledge of the risks related to implementing a variation.
When the Bonferroni correction is applied, there may be inconsistencies between the chance to win and the confidence interval displayed in the confidence interval tab. This is because the Bonferroni correction does not apply to confidence intervals.
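To give an idea of the effect of the correction, here is a simplified Python sketch of the rule described above (our own reading, not AB Tasty's exact formula): the risk of loss is multiplied by the number of variations you abandon when favoring one of them.

def bonferroni_corrected_chance_to_win(raw_ctw: float, n_variations: int) -> float:
    """raw_ctw is the uncorrected chance to win in %; n_variations excludes the original.
    Choosing one variation means abandoning the original and the other variations,
    i.e. n_variations comparisons in total."""
    comparisons = max(n_variations, 1)
    corrected_risk = min(1.0, (1 - raw_ctw / 100) * comparisons)
    return 100 * (1 - corrected_risk)

print(bonferroni_corrected_chance_to_win(99.4, 1))  # 99.4: a classic A/B test is unchanged
print(bonferroni_corrected_chance_to_win(99.4, 3))  # 98.2: three variations, higher displayed risk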
In this example, the chosen goal is the revisit rate in the visitor view. The A/B Test includes three variations.
The conversion rate of Variation 2 is 38.8%, compared to 20.34% for the original version. Therefore, the increase in conversion rate compared to the original equals 18.46%.
The chance to win displays 98.23% for Variation 2 (the Bonferroni correction is applied automatically because the test includes three variations). This means that Variation 2 has a 98.23% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 1.8%, which is a low risk.
Because the chance to win is higher than 95%, Variation 2 may be implemented without incurring a high risk.
However, to find out the gain interval and reduce the risk percentage even more, we would need to also analyze the advanced statistics based on the Bayesian test.
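A quick check of the arithmetic in this example (plain Python):

original_rate, variation_rate = 20.34, 38.8
print(round(variation_rate - original_rate, 2))  # 18.46 points of conversion rate
chance_to_win = 98.23
print(round(100 - chance_to_win, 2))             # 1.77, i.e. the ~1.8% risk mentioned above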
If the test displays a chance to win around 50% (between 45% and 55%), this can be due to several factors:
Either traffic is insufficient (in other words, there haven't been enough visits to the website and the visitor statistics do not enable us to establish reliable values)
In this case, we recommend waiting until each variation has clocked 5,000 visitors and a minimum of 500 conversions.
Or the test is neutral because the variations haven't shown an increase or a decrease compared to the original version: This means that the tested hypotheses have no effect on the conversion rate.
In this example, the chosen goal is the CTA click rate in visitor view. The A/B Test is made up of a single variation.
The conversion rate of Variation 1 is 14.76%, compared to 15.66% for the original version. Therefore, the conversion rate of Variation 1 is 5.75% lower than the original version.
The chance to win displays 34.6% for Variation 1. This means that Variation 1 has a 34.6% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 65.4%, which is a very high risk.
Because the chance to win is lower than 95%, Variation 1 should not be implemented: the risk would be too high.
In this case, you can view the advanced statistics to make sure the confidence interval values are mostly negative.
In this case, we recommend referring to the confidence interval tab. This will provide you with the confidence interval values. If the confidence interval does not enable you to ascertain a clear gain, the decision will have to be made independently from the test, based on external factors (such as implementation cost, development time, etc.).
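Again, a quick check of the arithmetic in this example (plain Python):

original_rate, variation_rate = 15.66, 14.76
relative_change = 100 * (variation_rate - original_rate) / original_rate
print(round(relative_change, 2))      # -5.75: Variation 1 is 5.75% lower than the original
chance_to_win = 34.6
print(round(100 - chance_to_win, 1))  # 65.4: the chance of performing worse than the original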








This article explains the concepts of our conversion mechanics and everything you need to know to understand what happens under the hood.
At AB Tasty, every visitor who enters your website is identified with a unique identifier attributed by the AB Tasty tag.
This method lets us distinguish every visitor and attribute to each of them the events they have triggered on your website.
For more technical information about this, read this part of our developer portal.
Once a visitor enters your website, we will initiate a “session”.
The rule for a session is:
We create a session for a visitor from the start of the visit until 30 minutes of inactivity.
We store all the events the visitor triggers within a session, and for each new session, we increment the event counts.
For example, if the visitor comes to your website at 09:00 AM, and then leaves your website at 09:10 AM, we will end the session at 09:40 AM.
In this example, you can also see that the visitor came back at 10:00 AM. Since more than 30 minutes of inactivity had passed, this revisit is considered a whole new session.
On the reporting side, you’ll be able to split the data between unique visitors and sessions.
For more technical information about this, .
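As an illustration of the 30-minute rule, here is a minimal Python sketch (our own code, not AB Tasty's implementation) that splits a visitor's events into sessions:

from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

def split_into_sessions(event_times):
    """event_times: list of datetime objects for one visitor, in any order."""
    sessions = []
    for t in sorted(event_times):
        if sessions and t - sessions[-1][-1] <= SESSION_TIMEOUT:
            sessions[-1].append(t)   # activity within 30 minutes: same session
        else:
            sessions.append([t])     # more than 30 minutes of inactivity: new session
    return sessions

events = [datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 10),
          datetime(2024, 1, 1, 10, 0)]   # the 10:00 AM revisit starts a new session
print(len(split_into_sessions(events)))  # 2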
When a visitor completes an action, like clicking on a CTA, an event takes place.
At AB Tasty, we call these events HITS.
During a visitor journey, we collect hits.
Some hits are automatically collected, and some have to be set up manually.
Here is the list of all supported hits. If a hit is marked “Handled by tag”, it is sent automatically when it matches the trigger condition and the tag is installed on the page.
All these hits are gathered once during the visitor's session; if a new session occurs, we increase the number of hits in our database.
Hits are useful for 2 reasons:
Without them, you are technically blind and won’t see the impact of your campaigns.
You won’t be able to validate your hypotheses.
For more technical information about the hit system,
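To fix ideas, here is a generic Python sketch of the hit types used in the examples later in this article (Pageview, Action tracking, Transaction); the structure below is ours and does not reflect the actual payload sent by the tag.

from dataclasses import dataclass
from enum import Enum

class HitType(Enum):
    PAGEVIEW = "Pageview"
    ACTION_TRACKING = "Action tracking"
    TRANSACTION = "Transaction"

@dataclass
class Hit:
    hit_type: HitType
    session_number: int
    description: str

hit = Hit(HitType.PAGEVIEW, session_number=1, description="Visitor has seen the home page")
print(hit.hit_type.value, "-", hit.description)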
Our product lets you manage allocation for the original version and the variation(s) of a campaign.
We give you the choice between:
Case 1 - Put the visitor on the original version of your website and track the hits.
Case 2 - Put the visitor on one of the variations you have set up and track their steps.
Case 3 - Not tracking visitors, which means that they will see the original version of your website but won't be tracked at all.
For more information about the traffic allocation,
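The three cases above can be pictured as a weighted random split; this is an illustrative sketch only (the weights and names are ours, and the real allocation logic lives in the AB Tasty tag):

import random

WEIGHTS = {"original": 0.45, "variation_1": 0.45, "untracked": 0.10}  # illustrative split

def allocate():
    arms = list(WEIGHTS)
    return random.choices(arms, weights=[WEIGHTS[a] for a in arms], k=1)[0]

counts = {arm: 0 for arm in WEIGHTS}
for _ in range(10_000):
    counts[allocate()] += 1
print(counts)  # "untracked" visitors see the original but do not appear in the reporting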
If the visitor is on case 1 or 2 of the traffic allocation, we will track their steps and register every hit from the moment they have been targeted to the variation until the moment their session ends on the website.
We begin the calculation for a visitor once they enter a campaign and are tracked (original or variation).
This means that even if a visitor has already triggered 50 hits, we start the count only at the moment they enter the campaign.
In this example, we can see that the visitor, during their visit, has been allocated to a variation, which means that we begin the registration of all the hits from that time.
All the hits sent before the allocation are not attached to the campaign, which means that they won’t be shown in the reporting.
Keep in mind that, once they are targeted by a campaign, we keep tracking their steps until the campaign ends OR the AB Tasty cookie is removed.
Keep our calculation mechanism in mind: you may see differences between the figures in our reporting and those of other analytics tools, which base their calculations on other criteria. A dedicated resource explains the differences between our tools and GA4, for example.
Once the visitor is allocated to a variation, we will register all the hits from the moment they have been allocated to the moment the campaign ends.
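Here is a hedged sketch of that attribution rule (our own code): only hits sent between the allocation time and the end of the campaign are attached to it.

from datetime import datetime

def hits_attached_to_campaign(hits, allocation_time, campaign_end=None):
    """hits: list of (timestamp, hit_type) tuples for one visitor."""
    attached = []
    for timestamp, hit_type in hits:
        if timestamp < allocation_time:
            continue  # sent before the allocation: not attached to the campaign
        if campaign_end is not None and timestamp > campaign_end:
            continue  # campaign paused or ended: no longer gathered
        attached.append((timestamp, hit_type))
    return attached

hits = [(datetime(2024, 1, 1, 9, 0), "PAGEVIEW"),
        (datetime(2024, 1, 1, 9, 5), "ACTION TRACKING"),
        (datetime(2024, 1, 1, 9, 7), "TRANSACTION")]
print(len(hits_attached_to_campaign(hits, allocation_time=datetime(2024, 1, 1, 9, 3))))  # 2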
Use case for this example:
1 visitor
1 campaign with 1 variation
Result:
Number of unique visitors: 1
Number of sessions for your visitor: 2
Number of hits registered for the visitor: 6
In this example, we can see that from the moment the visitor has been allocated to the variation, all the following hits will be saved from the initial session until the end of the campaign (until the campaign is paused or the AB Tasty cookie is removed).
If multiple campaigns run during a visitor's visit, nothing changes: we save all the visitor's hits from the moment they are allocated to each campaign until the moment the campaigns end.
Use case for this example:
1 visitor
2 campaigns (A & B) with 1 variation on each campaign
Blue relates to TEST A
Red relates to TEST B
Results:
Number of unique visitors: 1
Number of sessions for your visitor: 2
Number of hits registered for the visitor in the campaign A: 6
Number of hits registered for the visitor in the campaign B: 4
In this example, we can see that the visitor is allocated to TEST A in session 1, and all the subsequent hits are allocated to it. Let’s say that you have created another test, while the “TEST A” is still ongoing. You can see that the hits will be attached to the visitor for both tests.
As you can see in these examples, when a visitor is allocated to a campaign, we start the count and add all the hits they trigger during their current and future sessions, until the end of the campaign (until the campaign is paused or the AB Tasty cookie is removed).
Let’s try to understand these principles with an example. Here is the setup:
1 visitor, who will have 3 sessions
Session 1: The visitor visits the website and goes through the whole journey (Homepage -> Search product page -> Product page) but, in the end, decides not to buy the product.
Session 2: The visitor revisits the website, goes through the whole journey, and buys the product.
Session 3: The visitor revisits the website and buys another product.
For the first visit, the visitor:
Enters the website by accessing the homepage
Looks for the item on the search product page
Visits the product page; on this page, there is a “purchase button” - [📌 VISITOR has seen the “TEST A” and is now allocated to a variation]
Decides to leave and buy the product later
Results:
Number of visitors: 1
Number of sessions from your visitor: 1
Number of hits registered for the visitor in the TEST A: 1
Hit 5:
For the second visit, let’s check the visitor flow:
Visitor enters the website by accessing the homepage
Looks for the item on the search product page
Visits the product page
Adds the product to the basket
Results:
Number of visitors: 1
Number of sessions from your visitor: 2
Number of hits registered for the visitor in the TEST A: 9
Hits from the previous session: 1
For this third session, let’s say that the visitor wants to buy another product from the website and completes the whole process in a single visit.
Results:
Number of visitors: 1
Number of sessions from your visitor: 3
Number of hits registered for the visitor in the TEST A: 17
Hits from the previous session: 9
AB Tasty works with sessions; a session ends 30 minutes after visitor inactivity.
AB Tasty works with hits; some hits are sent automatically, and some have to be set up manually.
The hits automatically sent do not require any setup.
We will create two A/B tests to understand the logic behind the allocation and the aggregation of hits.
TEST A
Modification of the wording of the purchase button (On the product page)
Original: “Add to cart”
Variation 1: “Order now”
TEST B
Modification of the wording to validate the payment (On the cart page)
Original: “Pay”
Variation 1: “Checkout”
Hit 1:
Hit type = Pageview
Visitor has seen the home page
Hit 2:
Hit type = Pageview
Visitor has seen the search product page
Hit 3:
Hit type = Action tracking
Event category = Action tracking
Visitor has selected an item from the product search list; by clicking on it, they access the product page
Hit 4:
Hit type = Pageview
Visitor has seen the product page
Hit 5:
Hit type = Action tracking
Event category = Action tracking
Visitor has put the product in the basket
Hit 6:
Hit type = Pageview
Visitor has seen the checkout page
Hit 7:
Hit type = Action tracking
Event category = Action tracking
Visitor has validated the basket and has bought the product
Hit 8:
Hit type = Transaction
AB Tasty sends the result of the transaction
Number of hits registered for the visitor in the TEST B: 3
Hits from the previous session: 0 (The test has not been seen by the visitor during the 1st session)
Hits from the current session:
Hit 6:
Hit type = Pageview
Visitor has seen the checkout page
Hit 7:
Hit type = Action tracking
Event category = Action tracking
Visitor has validated the basket and has bought the product
Hit 8:
Hit type = Transaction
AB Tasty sends the result of the transaction
Hit 1:
Hit type = Pageview
Visitor has seen the home page
Hit 2:
Hit type = Pageview
Visitor has seen the search product page
Hit 3:
Hit type = Action tracking
Event category = Action tracking
Visitor has selected an item from the product search list; by clicking on it, they access the product page
Hit 4:
Hit type = Pageview
Visitor has seen the product page
Hit 5:
Hit type = Action tracking
Event category = Action tracking
Visitor has put the product in the basket
Hit 6:
Hit type = Pageview
Visitor has seen the checkout page
Hit 7:
Hit type = Action tracking
Event category = Action tracking
Visitor has validated the basket and has bought the product
Hit 8:
Hit type = Transaction
AB Tasty sends the result of the transaction
Number of hits registered for the visitor in the TEST B: 11
Hits from the previous session: 3
Hits from current session: 8
Hit 1:
Hit type = Pageview
Visitor has seen the home page
Hit 2:
Hit type = Pageview
Visitor has seen the search product page
Hit 3:
Hit type = Action tracking
Event category = Action tracking
Visitor has selected an item from the product search list; by clicking on it, they access the product page
Hit 4:
Hit type = Pageview
Visitor has seen the product page
Hit 5:
Hit type = Action tracking
Event category = Action tracking
Visitor has put the product in the basket
Hit 6:
Hit type = Pageview
Visitor has seen the checkout page
Hit 7:
Hit type = Action tracking
Event category = Action tracking
Visitor has validated the basket and has bought the product
Hit 8:
Hit type = Transaction
AB Tasty sends the result of the transaction
The hits that have to be set manually must be set up by you, using our visual editor or custom code; otherwise, we cannot follow the steps you want to track for your goals.
If a tracker is set up after the launch of a campaign, the data won’t be retroactive.
AB Tasty lets you manage the traffic allocation for your variations, which is important for controlling the tracked population.
Original and variations are tracked.
If you have set a part of the traffic to “Untracked”, this part won't appear in the reporting.
AB Tasty gathers hits only after the visitor is targeted (by the original or a variation).
We stop gathering hits only when the campaign is paused or the AB Tasty cookie is removed.









