Statistical metrics


Statistical indicators characterize the observed eligible metrics for each variation, as well as the differences between variations for the same metrics. They allow you to make informed decisions for the future based on a proven Bayesian statistical tool.

When you observe a raw growth of X%, the only certainty is that this observation took place in the past, in a context (time of year, current events, specific visitors, and so on) that will never recur in exactly the same way.

By using statistical indicators to reframe these metrics and associated growth, you get a much clearer picture of the risk you are taking when modifying a page after an A/B test.

Statistical indicators are displayed with the following metrics:

  • All “action trackers” growth metrics (click rate, scroll tracking, dwell time tracking, visible element tracking)

  • Pageviews growth metrics

  • Transaction growth metrics (except average product quantity, price, and revenue)

  • Bounce rate growth

  • Revisit rate growth

Statistical indicators are not displayed with the following metrics:

  • Transaction growth metrics for average product quantity, price, and revenue

  • Number of viewed pages growth

Lastly, statistical indicators are only displayed on visitor metrics, not on session metrics. Visitor metrics are generally the focus of optimizations; as a consequence, our statistical tool was designed with them in mind and is not compatible with session data.

These indicators are displayed on all variations except the one used as the baseline. See this article to learn how to change the baseline in a report.

Confidence interval based on Bayesian tests

The confidence interval indicator is based on the Bayesian test. The Bayesian statistical tool calculates the confidence interval of a gain (or growth), as well as its median value. Together, these values help you understand the extent of the potential risk of putting a variation into production following a test.

Where to find the confidence interval

How to read and interpret the confidence interval

Our Bayesian test stems from the calculation method developed by the mathematician Thomas Bayes. It is based on known events, such as the number of conversions on an objective relative to the number of visitors who had the opportunity to reach it, and provides, as described above, a confidence interval on the gain as well as its median value. Bayesian tests enable sound decision-making thanks to nuanced indicators that give a more complete picture of the expected outcome than a single metric would.

In addition to the raw growth, we provide a 95% confidence interval.

“95%” simply means that we are 95% confident that the true value of the gain is situated between the two values at each end of the interval.

👉 Why not 100%?

In simple terms, it would lead to a confidence interval of infinite width, as there will always be some risk, however minimal.

“95%” is a common statistical compromise between precision and the timeliness of the result.

The remaining 5% is the error, split equally below the lower bound and above the upper bound of the interval. Note that, of that 5%, only the 2.5% below the lower bound would lead to a worse outcome than expected: this is the actual business risk.

👉 As seen previously, the confidence interval is composed of three values: the lower and higher bounds of the interval, and the median.
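To make these three values concrete, here is a minimal, illustrative sketch of how such an interval can be obtained with a Beta-Bernoulli model and Monte Carlo sampling. The visitor and conversion counts are hypothetical, and this is not AB Tasty's production model:

```python
# Illustrative only: Bayesian credible interval of the relative gain,
# assuming a Beta-Bernoulli model with a uniform Beta(1, 1) prior.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical observed data: conversions / visitors per variation
visitors_a, conversions_a = 10_000, 230   # original, ~2.3% rate
visitors_b, conversions_b = 10_000, 246   # variation, ~2.46% rate

# Posterior samples of each conversion rate
rate_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, size=200_000)
rate_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, size=200_000)

# Posterior distribution of the relative gain (growth) of the variation
gain = rate_b / rate_a - 1

lower, median, upper = np.percentile(gain, [2.5, 50, 97.5])
print(f"95% interval: [{lower:+.2%}, {upper:+.2%}], median {median:+.2%}")
```

The 2.5th and 97.5th percentiles give the lower and higher bounds, and the 50th percentile gives the median, matching the structure described above.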

Median growth vs Average growth:

These values can often be very close to one another, while not matching exactly. This is normal and shouldn’t be cause for concern.

In the following example, you can see that the variation has a better transaction rate than the original: 2.46% vs. 2.3%. The average growth is about +6.89%.

Zooming in on the confidence interval visualization, we see the following indicators:

  • Median growth: 6.88%

  • Lower-bound growth: 0.16%

  • Higher-bound growth: 14.06%

An important note is that every value in the interval has a different likelihood (or chance) of actually being the real-world growth if the variation were put in production:

  • The median value has the highest chance

  • The lower-bound and higher-bound values have a low chance

👉 Summarizing:

  • Getting a value between 0.16% and 14.06% in the future has a 95% chance of happening

  • Getting a value lower than 0.16% has a 2.5% chance of happening

  • Getting a value higher than 14.06% has a 2.5% chance of happening

👉 Going further, this means that:

  • If the lower-bound value is above 0%: your chances to win in the future are maximized, and the associated risk is low;

  • If the higher-bound value is under 0%: your chances to win in the future are minimized, and the associated risk is high;

  • If the lower-bound value is under 0% and the higher-bound value is above 0%, the outcome is uncertain. You will have to judge whether the impact of a potential future loss is worth the risk, whether waiting for more data could remove the uncertainty, or whether another metric in the campaign report can support the decision (see the sketch below).
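As a recap of these three cases, here is a minimal helper using the hypothetical example bounds from above; the wording is ours, not AB Tasty's interface:

```python
def interpret_interval(lower: float, upper: float) -> str:
    """Map the bounds of a gain interval to the three cases above."""
    if lower > 0:
        return "likely win: the whole interval is above 0%, low risk"
    if upper < 0:
        return "likely loss: the whole interval is below 0%, high risk"
    return "uncertain: the interval straddles 0%; wait for more data or use another metric"

print(interpret_interval(0.0016, 0.1406))  # example bounds from above -> likely win
```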

Good to know💡 The smaller the interval, the lower the level of uncertainty: at the beginning of your campaign, the intervals will probably be wide. Over time, they will tighten until they stabilize.

Heads up⚡️ AB Tasty provides these Bayesian tests and statistical metrics to help you make an informed decision, but cannot be held responsible for the outcome of that decision. The risk is never zero: even when the chance to lose is very low, a loss can still happen.

Chance to win

This metric offers another angle on the confidence interval. It answers the question “What are my chances of getting a strictly positive growth in the future with the variation I’m looking at?” (or a strictly negative growth for bounce rate, which should be as low as possible).

The chance to win enables a fast result analysis for non-experts. The variation with the biggest improvement is shown in green, which simplifies the decision-making process.

The chance to win indicator enables you to ascertain the odds of a strictly positive gain on a variation compared to the original version. It is expressed as a percentage. When the chance to win is higher than 95%, the progress bar turns green.

As with the odds displayed in betting, it focuses on the positive part of the confidence interval.

The chance to win is based on the same Bayesian test as the confidence interval. See the section about Bayesian tests in the confidence interval section above.
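Under the same illustrative Beta-Bernoulli assumptions as the interval sketch above (again, not AB Tasty's production model), the chance to win is simply the share of posterior samples in which the variation beats the original:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Same hypothetical counts as in the interval sketch above
rate_a = rng.beta(1 + 230, 1 + 10_000 - 230, size=200_000)  # original
rate_b = rng.beta(1 + 246, 1 + 10_000 - 246, size=200_000)  # variation

# Chance to win = posterior probability of a strictly positive gain
chance_to_win = (rate_b > rate_a).mean()
print(f"Chance to win: {chance_to_win:.2%}")

# For a metric that should be as low as possible (e.g. bounce rate),
# the comparison is inverted
chance_to_win_lower_is_better = (rate_b < rate_a).mean()
```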

Where to find the chance to win

  • In the “Statistics” tab for non-transactional metrics

  • In the detailed view of transactional metrics

How to read and interpret the chance to win

This index assists with the decision-making process, but we recommend reading the chance to win in addition to the confidence intervals, which may display positive or negative values.

The chance to win can take values between 0% and 100% and is rounded to the nearest hundredth.

  • If the chance to win is equal to or greater than 95%, this means the collected statistics are reliable and the variation can be implemented with what is considered to be low risk (5% or less).

  • If the chance to win is equal to or lower than 5%, this means the collected statistics are reliable and the variation shouldn't be implemented: the risk of loss is considered high (95% or more).

  • If the chance to win is close to 50%, it means that the results seem “neutral” - AB Tasty can’t provide a characteristic trend to let you make a decision with the collected data.

👉 What does this mean?

  • The closer the value is to 0%, the higher the odds of it underperforming compared to the original version, and the higher the odds of having confidence intervals with negative values.

  • At 50%, the test is considered “neutral”, meaning that the difference is below what can be measured with the available data. The variation is as likely to underperform the original version as to outperform it, and the confidence intervals can take negative or positive values. The test is either genuinely neutral or does not yet have enough data.

  • The closer the value is to 100%, the higher the odds of recording a gain compared to the original version. The confidence intervals are more likely to take on positive values.

Good to know 💡

If the chance to win displays 0% or 100% in the reporting tool, these figures are rounded (up or down). A statistical probability can never equal exactly 100% or 0%. It is, therefore, preferable to display 100% rather than 99.999999% to facilitate report reading for users.

Bonferroni correction

The Bonferroni correction is a method that takes into account the extra risk linked to the presence of several comparisons/variations.

In the case of an A/B Test, if there are only two variations (the original and Variation 1), it is estimated that the winning variation may be implemented if the chance to win is equal to or higher than 95%. In other words, the risk incurred does not exceed 5%.

In the case of an A/B test with two or more variations (the original version, Variation 1, Variation 2, and Variation 3, for instance), if one of the variations (let’s say Variation 1) performs better than the others and you decide to implement it, this means you are favoring this variation over the original version, as well as over Variation 2 and Variation 3. In this case, the risk of loss is multiplied by three (5% multiplied by the number of “abandoned” variations).

A correction is therefore automatically applied to tests featuring two or more variations: the displayed chance to win takes into account the risk related to abandoning the other variations. This enables the user to make an informed decision with full knowledge of the risks related to implementing a variation.

Good to know: When the Bonferroni correction is applied, there may be inconsistencies between the chance to win and the interval displayed in the confidence interval tab. This is because the Bonferroni correction does not apply to confidence intervals.
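The exact adjustment formula is not spelled out here, but a plausible reading of the above is that the displayed risk (1 minus the chance to win) is the per-comparison risk multiplied by the number of abandoned alternatives. A hypothetical sketch under that assumption, with input values chosen so the output matches Case #1 below:

```python
def bonferroni_adjusted_chance_to_win(raw_chance_to_win: float,
                                      abandoned_variations: int) -> float:
    """Multiply the per-comparison risk by the number of abandoned alternatives."""
    risk = (1.0 - raw_chance_to_win) * abandoned_variations
    return 1.0 - min(1.0, risk)

# Hypothetical raw 99.41% chance to win with three abandoned alternatives
# (the original plus two other variations) -> ~98.23%, as in Case #1 below
print(f"{bonferroni_adjusted_chance_to_win(0.9941, 3):.2%}")
```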

Examples

✅ Case #1: High chance to win

In this example, the chosen goal is the revisit rate in the visitor view. The A/B Test includes three variations.

The conversion rate of Variation 2 is 38.8%, compared to 20.34% for the original version. The increase compared to the original is therefore 18.46 percentage points.

The chance to win displays 98.23% for Variation 2 (the Bonferroni correction is applied automatically because the test includes three variations). This means that Variation 2 has a 98.23% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 1.8%, which is a low risk.

Because the chance to win is higher than 95%, Variation 2 may be implemented without incurring a high risk.

However, to find out the gain interval and reduce the risk percentage even more, we would need to also analyze the advanced statistics based on the Bayesian test.

✅ Case #2: Neutral chance to win

If the test displays a chance to win around 50% (between 45% and 55%), this can be due to several factors:

  • Either traffic is insufficient (in other words, there haven't been enough visits to the website and the visitor statistics do not enable us to establish reliable values)

    • In this case, we recommend waiting until each variation has clocked 5,000 visitors and a minimum of 500 conversions.

  • Or the test is neutral because the variations haven't shown an increase or a decrease compared to the original version: This means that the tested hypotheses have no effect on the conversion rate.

    • In this case, we recommend referring to the confidence interval tab. This will provide you with the confidence interval values. If the confidence interval does not enable you to ascertain a clear gain, the decision will have to be made independently from the test, based on external factors (such as implementation cost, development time, etc.).

✅ Case #3: Low chance to win

In this example, the chosen goal is the CTA click rate in visitor view. The A/B Test is made up of a single variation.

The conversion rate of Variation 1 is 14.76%, compared to 15.66% for the original version. Therefore, the conversion rate of Variation 1 is 5.75% lower than the original version.

The chance to win displays 34.6% for Variation 1. This means that Variation 1 has a 34.6% chance of triggering a positive gain, and therefore of performing better than the original version. The chance of this variation performing worse than the original equals 65.4%, which is a very high risk.

Because the chance to win is lower than 95%, Variation 1 should not be implemented: the risk would be too high.

  • In this case, you can view the advanced statistics to make sure the confidence interval values are mostly negative.
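Note that the two cases express the difference differently: Case #1 reports an absolute difference in percentage points (38.8% − 20.34% = 18.46 points), while Case #3 reports a relative difference (14.76% is 5.75% lower than 15.66% in relative terms). A quick check of both conventions:

```python
def diffs(original_rate: float, variation_rate: float) -> str:
    """Compute the absolute (percentage-point) and relative differences."""
    absolute_points = (variation_rate - original_rate) * 100
    relative = variation_rate / original_rate - 1
    return f"{absolute_points:+.2f} points absolute, {relative:+.2%} relative"

print("Case #1:", diffs(0.2034, 0.3880))  # +18.46 points, +90.76% relative
print("Case #3:", diffs(0.1566, 0.1476))  # -0.90 points, -5.75% relative
```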

User session

An AB Tasty session begins when a visitor first accesses a page on the website and a cookie named ABTastySession does not exist. To determine if a current session is active, the code checks for the presence of this cookie. If the cookie exists, a current session is active. If the cookie is not present, a new session is initiated.

A session ends when a visitor remains inactive on the website for 30 minutes or more. This inactivity is tracked regardless of whether the website is open in a tab or not. Once the session ends, the ABTastySession cookie is removed, and all data stored in the cookie is lost and will not be reused in the browser.

For example:

  • A visitor comes to the website, visits 2 pages, and closes their browser. 30 minutes later, the session will end.

  • A visitor comes to the website, visits 2 pages, and closes their tab. 30 minutes later, the session will end.

  • A visitor comes to the website, visits 2 pages, and stays on the second page for more than 30 minutes. The session will end.
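For illustration, here is a conceptual sketch of the session rule described above. The real logic runs in JavaScript inside the AB Tasty tag; the class and function names below are ours:

```python
# Conceptual sketch of the session rule above, not AB Tasty's actual tag code.
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

class Session:
    """Stands in for the ABTastySession cookie."""

    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.last_activity = self.started_at

    def touch(self) -> None:
        # Called on visitor activity, e.g. a pageview
        self.last_activity = datetime.now()

    def is_active(self) -> bool:
        # A session ends after 30 minutes of inactivity, tab open or not
        return datetime.now() - self.last_activity < SESSION_TIMEOUT

def get_or_create_session(current: "Session | None") -> Session:
    # Mirrors the cookie check: reuse an existing active session,
    # otherwise start a new one (the old cookie's data is lost)
    if current is not None and current.is_active():
        current.touch()
        return current
    return Session()
```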

The ABTastySession cookie contains useful information to assist the tag in functioning. The cookie stores:

  • mrasn data: data filled by the tag during a redirection campaign when the "Mask redirect parameters" feature is activated.

  • lp (landing page) data: the URL of the first page of the website viewed by the visitor during their current session.

  • sen (session event number) data: the number of ariane hits sent since the beginning of the session.

  • Referrer data: the value of the document.referrer variable on the first page viewed by the visitor during their current session. This data is only available when the targeting criteria "source" or "source type" is used in an active campaign.

The cookie is only added to the browser if the tag is permitted to do so based on the "restrict cookie deposit" feature. The cookie cannot be moved to another type of storage, unlike the ABTasty cookie.

