Reporting Copilot


The Reporting Copilot is a module that simplifies the analysis of your campaign data so you can make the best decisions. It indicates whether the campaign has a winning variation, which variation is winning, and why.

The Reporting Copilot is available only if your campaign meets all of the following conditions (summarized in the sketch after this list):

  • is an A/B Test or a Multipage test

  • has a transaction goal set as the primary goal

  • has reached its readiness threshold

  • has traffic on the original version and more than 1,000 unique visitors
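
These checks are performed automatically by AB Tasty. Purely as an illustration, the conditions above could be expressed as follows; the field names and campaign-type values are hypothetical and do not reflect AB Tasty’s actual data model:

```python
# Hypothetical sketch of the eligibility rules listed above.
# Field names and values are illustrative, not AB Tasty's actual data model.
from dataclasses import dataclass

@dataclass
class Campaign:
    campaign_type: str          # e.g. "ab_test", "multipage_test", "split_test"
    primary_goal: str           # e.g. "transaction", "click"
    readiness_reached: bool     # readiness of the primary goal
    original_visitors: int      # unique visitors on the original version
    total_unique_visitors: int  # unique visitors on the whole campaign

def copilot_available(c: Campaign) -> bool:
    """Return True if the Reporting Copilot can analyze this campaign."""
    return (
        c.campaign_type in {"ab_test", "multipage_test"}
        and c.primary_goal == "transaction"
        and c.readiness_reached
        and c.original_visitors > 0
        and c.total_unique_visitors > 1000
    )
```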

The Reporting Copilot is statistically sound because it is based on the confidence interval and chance to win of both the Average Order Value and the Conversion Rate of all variations in the report. However, you must still consider the results of your other goals to make the right decision. For more information on the reporting, refer to Campaign reporting.
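
AB Tasty’s statistical engine is not detailed on this page. Purely to illustrate what “chance to win” means for a conversion rate, here is a minimal Bayesian sketch using Beta posteriors and Monte Carlo sampling; the visitor and conversion counts are made up, and this is not AB Tasty’s actual computation (which also covers the Average Order Value):

```python
# Illustrative "chance to win" computation for a conversion rate, using
# Beta posteriors and Monte Carlo sampling. NOT AB Tasty's actual engine;
# the visitor and conversion counts below are made up.
import numpy as np

rng = np.random.default_rng(seed=42)

# (unique visitors, conversions) for the original and one variation
original = (5_000, 250)   # 5.0% observed conversion rate
variation = (5_000, 290)  # 5.8% observed conversion rate

def chance_to_win(baseline, challenger, draws=100_000):
    """P(challenger's conversion rate > baseline's), under Beta(1, 1) priors."""
    vb, cb = baseline
    vc, cc = challenger
    samples_baseline = rng.beta(1 + cb, 1 + vb - cb, draws)
    samples_challenger = rng.beta(1 + cc, 1 + vc - cc, draws)
    return (samples_challenger > samples_baseline).mean()

print(f"Chance to win: {chance_to_win(original, variation):.1%}")
```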

The Copilot elements

The Copilot is made up of the following three elements:

  • The analysis status

  • The variation highlight

  • The analysis dialog box

Analysis status

This is the result of your campaign, displayed under the main readiness element for the primary goal. It lets you see the outcome of your campaign at a glance.

Variation highlight

Depending on the campaign result, the winning variation (or the one with the best trend) is highlighted in the goal spreadsheet with a specific background color and caption:

  • When there is a winning variation: the variation is highlighted in green with the caption “winning variation”.

  • When no winning variation has been identified: the original version is highlighted in red with the caption “baseline is winning” to show that the campaign is “losing”.

  • When the campaign is neutral: the variation with the best performance so far is highlighted in yellow with the caption “trending variation”.

Analysis dialog box

The campaign analysis is automatically displayed in a dialog box over the primary goal spreadsheet of the reporting. It can give seven different analysis results.

You can choose not to display the analysis dialog box again by checking the “Don’t show analysis anymore” box (the preference applies only to this reporting). You can display the dialog box again by clicking the Report Copilot status button (“Open analysis”).

Campaign trends

The reporting falls into one of three trend categories:

  • Positive trends

  • Neutral trends

  • Negative trends

Those trends are calculated based on the Average order value and the Transaction rate.

Positive trends

Positive trends indicate that there is a winning variation; they occur in two situations.

Well performing campaigns

This indicates that the variation has great optimization potential and can be implemented in production without major risk: either the average order value or the transaction rate has a good trend while the other has a balanced trend.

Over-performing campaigns

This indicates that both the average order value and the transaction rate have good trends.

Neutral and opposite trends

Neutral trends indicate that there is no variation recommendation: either both the average order value and the transaction rate have balanced trends, or they have opposite trends (one performing well while the other is not).

Neutral campaigns due to lack of data

Even though the campaign has reached the readiness threshold, the Copilot cannot yet designate a winning variation. If the campaign has been running for an extended period, further monitoring may be necessary. If the campaign is relatively new, we suggest letting it run for a few additional days to gather more data for a better evaluation. If readiness was reached more than 30 days ago, the Reporting Copilot no longer displays the “needs more data” status; you will see either a positive (winning variation), negative (baseline winning), or neutral (opposite trends) status instead.

Opposite trends

Although a specific variation may be highlighted as trending, we do not recommend implementing it, as there is no proper basis for a decision: the variations show opposite trends, and these fluctuations do not align with the objectives of the test.

Negative trends

Negative trends indicate that the original version (baseline) is performing better than the variations; in other words, no winning variation has been identified.

No winning variation

In such cases, we recommend pausing the campaign and launching a new one with a different approach, typically by testing a new hypothesis.
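
To recap the categories described above, the mapping from the two metric trends to a Copilot status could be sketched as follows. This is a simplified, hypothetical summary of the documented behavior, not AB Tasty’s implementation; “good”, “balanced”, and “bad” are shorthand labels:

```python
# Hypothetical recap of the trend categories described in this section.
# "good" / "balanced" / "bad" are simplified labels for a metric's trend.

def campaign_status(aov_trend: str, transaction_trend: str) -> str:
    """Map the Average Order Value and transaction-rate trends to a status."""
    trends = {aov_trend, transaction_trend}
    if trends == {"good"}:
        return "over-performing (winning variation)"
    if trends == {"good", "balanced"}:
        return "well performing (winning variation)"
    if trends == {"balanced"}:
        return "neutral (balanced trends)"
    if trends == {"good", "bad"}:
        return "neutral (opposite trends)"
    return "negative (baseline is winning)"

print(campaign_status("good", "balanced"))  # well performing (winning variation)
print(campaign_status("good", "bad"))       # neutral (opposite trends)
```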

Campaign readiness not reached

When the readiness of your campaign (readiness of your primary goal) has not been reached yet, the Copilot can’t analyze your reporting data.

When switching the baseline, the Copilot’s logic remains the same: it keeps identifying the potential of the variations compared to the baseline. Keep in mind that the Copilot’s advice can change if the baseline changes.

In this case, you must wait for the three readiness indicators to be reached. For more information on reporting readiness, refer to Understanding reporting readiness.
