Flagship + Fastly Compute@Edge Integration
📘 GitHub Repository
https://github.com/flagship-io/flagship-fastly-worker-example
Overview
This guide shows how to:
Use KV storage or direct integration for caching bucketing data to improve performance
Initialize the Flagship SDK in a Fastly Compute@Edge application
Create a visitor object with context data from request headers or any other source
Fetch feature flags assigned to this visitor
Retrieve specific flag values for use in the application
Send analytics data back to Flagship
Ensure analytics are sent before the worker terminates
Prerequisites
Node.js (v18 or later)
Yarn (v4 or later)
A Fastly account with Compute@Edge access
A Flagship account with API credentials
Setup
Create a Fastly Compute@Edge project:
Follow the Fastly developer documentation to set up a Compute@Edge project
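If you are starting from scratch, the Fastly CLI can scaffold the project for you. A minimal sketch (pick the JavaScript starter kit when prompted; see the Fastly docs for the current list of starter kits):

```sh
# Scaffold a new Compute@Edge project in the current directory
fastly compute init
```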
Install dependencies:
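For example, with Yarn (listed in the prerequisites):

```sh
# Install the project dependencies
yarn install
```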
Configure your Flagship credentials as Fastly secrets:
Where FLAGSHIP_CONFIG.json contains:
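A minimal sketch of what this file could contain; the field names envId and apiKey are assumptions based on the credentials the Flagship SDK requires:

```json
{
  "envId": "YOUR_FLAGSHIP_ENV_ID",
  "apiKey": "YOUR_FLAGSHIP_API_KEY"
}
```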
Create a KV store for caching:
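For example, with the Fastly CLI (the store name flagship_bucketing is an example; check your CLI version for the exact syntax):

```sh
# Create a KV store to hold the Flagship bucketing file
fastly kv-store create --name=flagship_bucketing
```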
Update the fastly.toml file with your service configuration
Use KV storage or direct integration for bucketing data
Bucketing data contains information about your Flagship campaigns and variations, allowing the worker to make flag decisions at the edge without calling the Flagship API for every request.
Development Approach
Option 1: KV Storage
Fetch bucketing data directly from the Flagship CDN:
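For example (the bucketing file is typically served at a CDN URL derived from your environment ID; verify the exact URL in the Flagship documentation):

```sh
# Download the bucketing file for your environment and save it locally
curl -o bucketing.json "https://cdn.flagship.io/<YOUR_ENV_ID>/bucketing.json"
```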
Configure your local development environment to use this data by adding the following to your fastly.toml file:
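A minimal sketch, assuming a KV store named flagship_bucketing and a local bucketing.json file; the exact [local_server] syntax depends on your Fastly CLI version:

```toml
# fastly.toml — serve the local bucketing file through the KV store when running `fastly compute serve`
[local_server]
  [[local_server.kv_stores.flagship_bucketing]]
    key = "bucketing"
    path = "bucketing.json"
```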
Option 2: Direct Integration
For direct integration, you'll need to:
Fetch the bucketing data during your build process
Save it as a JSON file in your project
Import it directly in your edge application code
Then import in your code:
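For example, assuming the file was saved as bucketing.json at the project root during the build step (the path is an example):

```js
// Bundlers used by the Fastly JavaScript starter kit (e.g. webpack) can inline JSON imports
import initialBucketing from "../bucketing.json";
```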
Production Approach
For production environments, there are two recommended approaches. Both require setting up webhooks in the Flagship platform that trigger your CI/CD pipeline when campaigns are updated.
See the Production Approach section below for details.
Initialize the Flagship SDK in a Fastly Compute@Edge application
The first step to using Flagship in your Fastly Compute@Edge application is to initialize the SDK. This sets up the connection with your Flagship project and configures how feature flags will be delivered.
With KV Storage
The KV storage approach involves retrieving the bucketing data from Fastly KV at runtime:
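A minimal sketch, assuming the Flagship JS SDK (@flagship.io/js-sdk), a KV store named flagship_bucketing with the data stored under the key bucketing, and credentials already resolved from your Fastly secrets:

```js
import { KVStore } from "fastly:kv-store";
import { Flagship, DecisionMode } from "@flagship.io/js-sdk";

async function startFlagship(envId, apiKey) {
  // Read the bucketing file from Fastly KV at runtime (store and key names are assumptions)
  const store = new KVStore("flagship_bucketing");
  const entry = await store.get("bucketing");
  const initialBucketing = entry ? JSON.parse(await entry.text()) : undefined;

  Flagship.start(envId, apiKey, {
    decisionMode: DecisionMode.BUCKETING_EDGE, // make flag decisions locally at the edge
    initialBucketing,                          // pre-loaded campaign data from KV
    fetchNow: false,                           // defer any fetching until explicitly requested
  });
}
```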
With Direct Integration
The direct integration approach involves importing the bucketing data directly:
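A minimal sketch, assuming the same SDK and the bucketing.json file imported as shown earlier; the placeholder constants stand in for credentials loaded from your secrets:

```js
import { Flagship, DecisionMode } from "@flagship.io/js-sdk";
import initialBucketing from "../bucketing.json"; // embedded at build time (path is an example)

const FLAGSHIP_ENV_ID = "<YOUR_ENV_ID>";   // placeholder; load from your Fastly secrets
const FLAGSHIP_API_KEY = "<YOUR_API_KEY>"; // placeholder

Flagship.start(FLAGSHIP_ENV_ID, FLAGSHIP_API_KEY, {
  decisionMode: DecisionMode.BUCKETING_EDGE,
  initialBucketing,
  fetchNow: false,
});
```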
Configuration Options
decisionMode:
BUCKETING_EDGE is recommended for edge workers because it makes flag decisions locally using bucketing data.
API mode would call the Flagship servers for every decision (not recommended for edge workers).
initialBucketing:
Pre-loaded campaign data used to make local decisions without API calls.
Retrieved from KV storage or embedded in your code.
fetchNow:
false defers fetching campaign data until it is explicitly needed.
Create a visitor object with context data from request headers or any other source
The visitor object represents a user of your application. You need to create one for each request, providing a unique ID and relevant context data that can be used for targeting.
You can include any information in the context object that might be useful for targeting. Fastly Compute@Edge provides access to geolocation information, request headers, and more. Common examples include:
Demographics: age, gender, location
Technical: device, browser, OS, screen size
Behavioral: account type, subscription status
Custom: any application-specific attributes
This context is used by Flagship for targeting rules, so include any attributes that might be useful for segmenting your users.
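A sketch of what this could look like inside the fetch handler, assuming the standard Flagship newVisitor API; the x-visitor-id header and the geolocation lookup are illustrative:

```js
import { getGeolocationForIpAddress } from "fastly:geolocation";
import { Flagship } from "@flagship.io/js-sdk";

function createVisitor(event) {
  const req = event.request;
  const geo = getGeolocationForIpAddress(event.client.address);

  return Flagship.newVisitor({
    visitorId: req.headers.get("x-visitor-id") || "anonymous", // use a stable ID in production
    hasConsented: true, // adjust to your consent management
    context: {
      userAgent: req.headers.get("user-agent") || "",
      country: geo ? geo.country_code : "",
      // ...any other attribute useful for targeting
    },
  });
}
```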
Fetch feature flags assigned to this visitor
Once you have a visitor object, you need to fetch the feature flags assigned to them based on targeting rules:
This operation evaluates all campaign rules against the visitor's context and assigns flag variations accordingly.
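For example:

```js
// `visitor` is the object returned by Flagship.newVisitor() above
await visitor.fetchFlags();
```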
Retrieve specific flag values for use in the application
After fetching flags, you can retrieve specific flag values for use in your application. The SDK provides a type-safe way to access flag values with default fallbacks.
Always provide a default value that matches the expected type. This ensures your application works even if the flag isn't defined or there's an issue fetching flags.
Note: calling getValue automatically activates the flag, meaning it will be counted in the reporting.
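A sketch assuming the getFlag(key, defaultValue) / getValue() form of the API (the exact signature varies between SDK versions); the flag key btn-color is an example:

```js
// Returns the assigned value, or the default if the flag is missing; getValue() also activates the flag
const flag = visitor.getFlag("btn-color", "#3566af");
const btnColor = flag.getValue();
```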
Send analytics data back to Flagship
To measure the impact of your feature flags, you need to send analytics data back to Flagship. This includes page views, conversions, transactions, and custom events.
Analytics data is crucial for measuring the impact of your feature flags in A/B testing scenarios. You can track page views, events, transactions, and more.
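A sketch of sending a page view and a custom event, assuming the SDK's sendHit API with the HitType and EventCategory enums (field names may differ slightly between SDK versions):

```js
import { HitType, EventCategory } from "@flagship.io/js-sdk";

// Page view hit
await visitor.sendHit({
  type: HitType.PAGE,
  documentLocation: "https://example.com/checkout",
});

// Custom event hit
await visitor.sendHit({
  type: HitType.EVENT,
  category: EventCategory.ACTION_TRACKING,
  action: "add-to-cart",
});
```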
Ensure analytics are sent before the worker terminates
Fastly Compute@Edge applications can terminate quickly, potentially before analytics data is sent. To prevent this, use waitUntil:
This ensures that all pending analytics are sent before the worker terminates, giving you accurate reporting data.
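A sketch of the fetch handler, assuming Flagship.close() flushes any queued hits (as in the Flagship JS SDK batching model):

```js
addEventListener("fetch", (event) => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  // ...create the visitor, fetch flags, read values, send hits...

  // Keep the worker alive until pending hits have been flushed, without delaying the response
  event.waitUntil(Flagship.close());

  return new Response("Hello from the edge", { status: 200 });
}
```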
Production Approach to retrieve and update bucketing data
For production environments, there are two recommended approaches. Both require setting up webhooks in the Flagship platform that trigger your CI/CD pipeline when campaigns are updated:
Common Setup for Both Approaches
Set up a webhook in the Flagship Platform that triggers whenever a campaign is updated
Configure the webhook to call your CI/CD pipeline or serverless function
The primary difference between the approaches is where the bucketing data is stored:
Option 1: Webhook + KV Storage
This approach stores bucketing data in Fastly KV:
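A sketch of the CI/CD step triggered by the webhook; the CDN URL pattern, store ID, and CLI flags are assumptions to adapt to your pipeline:

```sh
# 1. Download the latest bucketing file from the Flagship CDN
curl -o bucketing.json "https://cdn.flagship.io/<YOUR_ENV_ID>/bucketing.json"

# 2. Push it into the Fastly KV store that the worker reads at runtime
fastly kv-store-entry create --store-id=<STORE_ID> --key=bucketing --value="$(cat bucketing.json)"
```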
Option 2: Direct Integration via Deployment
This approach embeds bucketing data directly in your worker code:
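A sketch of the corresponding CI/CD step; the paths and CDN URL pattern are assumptions:

```sh
# 1. Download the latest bucketing file into the project before building
curl -o bucketing.json "https://cdn.flagship.io/<YOUR_ENV_ID>/bucketing.json"

# 2. Rebuild and redeploy the worker with the new data embedded in the bundle
fastly compute build
fastly compute deploy
```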
Trade-offs between approaches:
KV Storage Approach:
Performance: Adds KV read latency to each request
Flexibility: Allows updating flags without redeploying code
Reliability: If KV is unavailable, flags might not work correctly
Cost: Incurs KV read costs for each worker invocation
Debugging: Easier to inspect current bucketing data separately from code
Isolation: Clearer separation between code and configuration
Direct Integration Approach:
Performance: Faster initialization with no external calls during startup
Deployment: Requires redeployment for each flag configuration change
Reliability: Fewer runtime dependencies, more predictable behavior
Bundle size: Larger worker bundle due to embedded bucketing data
Caching: Better cold start performance since data is bundled
Choose the approach that best fits your deployment frequency and performance requirements.