The Advanced options step is a mandatory step during your campaign configuration. It enables you to configure advanced settings related to integrations, tag performance, and your campaign's future performance.
You can link your campaign to your analytics tool. This way, you'll be able to analyze your campaign data in your own dashboards. We're integrated with many analytics tools.
When you create a connector for a third-party tool, you can check a box to automatically add this connector to the Advanced step of all your future campaigns (it's not retroactive). In this case, when you arrive on the Advanced step of any new campaign you create, the connector is selected automatically.
If you leave this box unchecked (the default behavior), the connector won't be preselected in the Advanced step. To send campaign data to your connector, go to the Advanced step of your campaign and select the desired connector from the dropdown. Some analytics tools may require additional settings, which appear in a form panel after you select the tool. Once launched, your campaign starts sending data to the connectors you have selected.
You can also decide to send campaign data to your connector from the connector list of a third-party tool. To do so, go to the Integration Hub, click the tool card, open the Setup tab, unfold the desired connector, tick the box, and save your changes.
For more information on third-party tools, refer to .
In this section, you can set up advanced functions to optimize tag performance on the campaign. By default, your campaign is set to deferred loading, but you can change this using the toggle button.
To learn more about this option, refer to .
The Sequential Testing algorithm enables you to detect early whether your experiment is unlikely to be successful, based on your Primary goal results.
Your Primary goal must be based on a Conversion/Transaction Rate to feed the Sequential Testing algorithm.
Only Users and upper roles receive email alerts, and only if they haven't disabled them in the Notification center section of their profile.
To learn more about our Sequential Testing Alerts, refer to .





Sequential Testing Alerts offers a sensitivity threshold to detect underperforming variations early and trigger alerts or pause experiments. It enables you to make data-driven decisions swiftly and optimize user experiences effectively. Sequential Testing saves time and resources by stopping experiments that are unlikely to yield positive results.
This alerting system is based on the percentage of conversions lost per variation.
Like any alerting system, it needs a sensitivity setting to trigger the alarm. And, as with any alerting system, choosing a sensitivity means making a tradeoff between two extremes: with a high sensitivity you get many alerts, so useful alarms may be lost among false ones; with a low sensitivity, some problems may be missed or detected later.
Please refer to this article to configure the Sequential Testing Alerts feature.
Sequential Testing configuration for Feature Experimentation & Rollout is available here.
#️⃣ Takeaway N° 1
Each service, website, or application is unique, so you might launch several experiments with different sensitivity levels before finding the one adapted to your business. The lower the level at which an alert is triggered, the higher the chance that there is a real problem (and that an alarm should be raised).
#️⃣ Takeaway N° 2
If too many false alarms are raised, lower the sensitivity.
If alarms are missed, raise it.
Sensitivity levels will compare the variations with the reference and trigger an alert if one of the sensitivity levels is reached.
They are based on the number of false alarms you would get if all your experiments were in fact neutral tests.
The High sensitivity level raises the maximum number of alerts and can even detect neutral experiments. It enables you to optimize your resources by allocating traffic to the experiments that are more worthwhile.
The Low sensitivity level detects only the most impactful losses, but you won't avoid losing some traffic, as a loss may be detected later than with the High level.
The Balanced sensitivity level is a compromise between the High and the Low levels.
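To make the tradeoff between the sensitivity levels concrete, here is a minimal Python sketch. It is an illustrative stand-in, not the actual Sequential Testing algorithm: the `loss_alert` function, the one-sided two-proportion z-test, and the critical values mapped to each level are all assumptions chosen for the example.

```python
from math import sqrt

def loss_alert(ref_users, ref_conv, var_users, var_conv, sensitivity="balanced"):
    """Return True if the variation's conversion rate is significantly
    below the reference's at the chosen sensitivity level.

    Simplified one-sided two-proportion z-test; the real product logic
    (sequential testing) is more elaborate.
    """
    p_ref = ref_conv / ref_users
    p_var = var_conv / var_users
    # Pooled conversion rate and standard error of the difference.
    p_pool = (ref_conv + var_conv) / (ref_users + var_users)
    se = sqrt(p_pool * (1 - p_pool) * (1 / ref_users + 1 / var_users))
    if se == 0:
        return False
    z = (p_var - p_ref) / se  # negative z => variation underperforms

    # Hypothetical one-sided z critical values per sensitivity level:
    # High fires easily (more false alarms), Low only on strong evidence.
    z_crit = {"high": -1.2816, "balanced": -1.6449, "low": -2.3263}
    return z < z_crit[sensitivity]

# Variation converts at 6% vs 10% for the reference: a clear loss,
# flagged even at the strictest (Low) level.
print(loss_alert(1000, 100, 1000, 60, "low"))    # prints True
# 9.5% vs 10% is roughly neutral: not flagged even at High sensitivity.
print(loss_alert(1000, 100, 1000, 95, "high"))   # prints False
```

Running the same underperforming data through all three levels shows the ordering described above: High triggers first, Low last, Balanced in between.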
At this point in the article, you know that A/A experiments are edge cases. There are two cases where alerting may not be a good idea, or where alerts should be interpreted with care:
When running an A/A test: any alarm triggered here is obviously useless.
When running a "neutrality test": testing the value of a feature by hiding it. In this case you hope for neutrality, which would allow you to get rid of the feature. There is a high risk of false alarms, but it can also be valuable to know whether the business is harmed. In this specific case, the sensitivity should be set quite low, and/or you should expect some false alarms.
Sequential Testing Alerts will have the following benefits:
Detects harmful variations as soon as possible, protecting your business.
Detects useless variations as soon as possible, helping you to get the best out of your website/app traffic.
Frees you from daily experimentation checks.