Self-Hosted Decision API

This guide explains the concept, architectural considerations, benefits, and implementation patterns for deploying FE&R's Decision API in your own infrastructure.

Although we provide a fast and scalable cloud-hosted Decision API, you can choose to host the Decision API on your own premises instead of using the cloud-hosted one.

The data collection part of the AB Tasty FE&R platform remains hosted on our side; the self-hosted Decision API simply batches the hit-tracking calls and sends them to our cloud-hosted data collection at regular intervals.

The Decision API that you can install on-premise is almost identical to the cloud-based Decision API. Because the cloud-based Decision API is multi-tenant while the on-premise version is single-tenant, the cloud-based version includes a few additional connectors, such as a storage interface for visitor assignments, but the decision business logic is exactly the same.

Example Implementation

Architecture Overview

The Decision API is a Go-based binary or Docker image that synchronizes with the use cases you created on the Flagship platform. The decision logic (context targeting and visitor assignment) runs locally, which makes it very fast. A typical self-hosted Decision API implementation follows this architecture:

(Diagram: self-hosted.png — typical self-hosted Decision API architecture)
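
As a rough illustration of how a service inside your network could call its self-hosted instance directly, here is a minimal Go sketch. The host name, port, endpoint path (/v2/campaigns) and payload field names are assumptions borrowed from the cloud Decision API conventions and may differ on your version; in practice you would typically point a Flagship SDK at your self-hosted endpoint instead.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// decisionURL points at the Decision API running inside your own network.
// Host, port and path are placeholders — adapt them to your deployment.
const decisionURL = "http://decision-api.internal:8080/v2/campaigns"

// decisionRequest mirrors the kind of payload the Decision API expects:
// a visitor ID plus a context used for targeting (field names assumed).
type decisionRequest struct {
	VisitorID string                 `json:"visitor_id"`
	Context   map[string]interface{} `json:"context"`
}

func main() {
	body, _ := json.Marshal(decisionRequest{
		VisitorID: "visitor-123",
		Context:   map[string]interface{}{"plan": "premium"},
	})

	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Post(decisionURL, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("decision call failed:", err)
		return
	}
	defer resp.Body.Close()

	// The response lists the campaigns/variations assigned to this visitor.
	var decisions map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&decisions); err != nil {
		fmt.Println("could not decode response:", err)
		return
	}
	fmt.Println("decisions:", decisions)
}
```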

Reasons to Use Self-Hosted

SOA (Service-Oriented Architecture)

Microservices Integration

Self-hosting the Decision API aligns perfectly with Service-Oriented Architecture principles:

Benefits:

  1. Service Independence

    • The Decision API runs as an independent service within your architecture

    • Can be deployed, scaled, and maintained independently of other services

    • Follows the single responsibility principle

  2. Decoupled Communication

    • Services communicate through well-defined APIs

    • Decision API doesn't directly depend on external SaaS endpoints

    • Reduces coupling with external dependencies

  3. Service Discovery

    • Dynamic endpoint resolution

    • Health checks and automatic failover
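
A simple way to wire the Decision API into service discovery and failover is a periodic health probe that your load balancer or sidecar evaluates. The /health path below is an assumption — use whatever liveness endpoint or TCP check your deployment exposes. Minimal Go sketch:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthURL is a placeholder liveness endpoint on your self-hosted instance;
// the actual path depends on your deployment (a plain TCP check also works).
const healthURL = "http://decision-api.internal:8080/health"

// healthy reports whether the Decision API instance answered with HTTP 200.
func healthy(client *http.Client) bool {
	resp, err := client.Get(healthURL)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	for {
		if healthy(client) {
			fmt.Println("decision-api: healthy")
		} else {
			// On failure, a load balancer would remove the instance from
			// rotation and route traffic to the remaining replicas.
			fmt.Println("decision-api: unhealthy, fail over")
		}
		time.Sleep(10 * time.Second)
	}
}
```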

Cloud-Based API Call Limits

Avoiding External API Throttling

By reducing reliance on external APIs, you gain more consistent performance, prevent unexpected slowdowns or outages caused by throttling, and keep better control over scalability as your traffic grows.

Issues with Cloud-Based APIs:

  • Rate limits

  • Unpredictable throttling during traffic spikes

  • Additional costs for exceeding quotas

  • Potential service degradation during high load

Self-Hosted Advantages:

  1. Unlimited Internal Calls

    • No rate limiting on requests to your own infrastructure

    • Handle traffic spikes without external throttling

    • Predictable performance under load

  2. Cost Optimization: The self-hosted option offers unlimited requests with no per-request charges, resulting in predictable monthly costs.

  3. Scalability Control

    • Scale horizontally based on your needs

    • No external quotas or limits

    • Add capacity during peak periods

  4. Batch Operations

    • Process bulk operations without API limits

    • Import/export data freely

    • Run analytics without restrictions

Network Centralization and Isolation

Security and Network Benefits:

  1. Network Isolation

Network isolation ensures your traffic stays securely contained within your own infrastructure or dedicated environment, minimizing exposure to external networks.

  2. Limited External Dependencies for Core Operations

    • Campaigns are cached locally after initial sync

    • Continue serving requests even if Flagship CDN is temporarily unreachable

    • External connectivity is needed solely for configuration polling and analytics collection.
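
The snippet below is a simplified illustration of that behaviour (not the Decision API's actual code): the last successfully synced configuration stays in memory and keeps serving decisions when a refresh from the CDN fails.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// campaignCache keeps the last successfully synced campaign configuration in
// memory so decisions can still be served when the CDN is unreachable.
type campaignCache struct {
	mu       sync.RWMutex
	config   []byte    // last known-good configuration
	syncedAt time.Time // when it was fetched
}

// refresh tries to pull a new configuration; on failure it keeps the old one.
func (c *campaignCache) refresh(fetch func() ([]byte, error)) {
	cfg, err := fetch()
	if err != nil {
		fmt.Println("config sync failed, keeping cached version:", err)
		return
	}
	c.mu.Lock()
	c.config, c.syncedAt = cfg, time.Now()
	c.mu.Unlock()
}

// current returns the configuration currently used to serve decisions.
func (c *campaignCache) current() ([]byte, time.Time) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.config, c.syncedAt
}

func main() {
	cache := &campaignCache{}
	// Simulate one successful sync followed by a CDN outage.
	cache.refresh(func() ([]byte, error) { return []byte(`{"campaigns":[]}`), nil })
	cache.refresh(func() ([]byte, error) { return nil, errors.New("CDN unreachable") })

	cfg, at := cache.current()
	fmt.Printf("serving config synced at %s: %s\n", at.Format(time.RFC3339), cfg)
}
```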

Observability

Enhanced Visibility:

  1. Metrics Collection

    • Request rates and latencies

    • Cache hit/miss ratios

    • Error rates by type

    • Resource utilization (CPU, memory, network)

  2. Distributed Tracing

Distributed tracing lets you follow the full request flow across services, from the Decision API to your custom key/value store and down to the FE&R CDN. This end-to-end visibility helps you quickly identify where latency, errors, or bottlenecks occur in your infrastructure.

  3. Integration with Observability Stack

Integration with your existing observability stack ensures you can monitor, trace, and debug API usage.
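
For example, a thin client-side wrapper can time every call made to your Decision API instance and feed the measurements into whatever metrics or tracing backend you already run (Prometheus, OpenTelemetry, Datadog, and so on). A standard-library-only Go sketch, with a placeholder endpoint:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// instrumentedTransport times every call made to the Decision API so the
// measurements can be exported to your metrics/tracing stack.
type instrumentedTransport struct {
	next http.RoundTripper
}

func (t *instrumentedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	elapsed := time.Since(start)

	status := 0
	if resp != nil {
		status = resp.StatusCode
	}
	// Replace this log line with a histogram observation or span attribute.
	fmt.Printf("decision-api %s %s -> %d in %s (err=%v)\n",
		req.Method, req.URL.Path, status, elapsed, err)
	return resp, err
}

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &instrumentedTransport{next: http.DefaultTransport},
	}
	// Placeholder endpoint on your self-hosted instance.
	if resp, err := client.Get("http://decision-api.internal:8080/health"); err == nil {
		resp.Body.Close()
	}
}
```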

Latency

Latency Improvements:

  1. Network Hops Reduction

    • Fewer network hops between services

    • Direct internal communication

    • No internet routing overhead

  2. Caching Strategy

    • In-memory caching for hot paths

    • Redis for persistent cache (sub-millisecond)

    • CDN polling only for configuration updates (default: 1-minute interval)

Considerations for Decision API Self-Hosted

Important Limitations and Trade-offs

1. Configuration Sync Dependency

⚠️ Important: The Decision API requires periodic connectivity to AB Tasty FE&R's CDN to sync configuration updates.

  • Polling Interval: Default 1 minute (configurable)

  • Impact of CDN Unavailability:

    • Existing cached configurations continue to work

    • New campaign launches won't be reflected immediately

    • Changes to targeting rules may be delayed

Mitigation: cached configurations keep serving decisions during a CDN outage; monitor configuration sync failures (see Observability above) and alert when the configuration has not refreshed within the expected polling window.

2. No Built-in Analytics Dashboard

The self-hosted Decision API doesn't include FE&R's web dashboard. Analytics data is sent to FE&R via the activate endpoint.

  • Campaign activation hits are sent to FE&R

  • View reports in the FE&R dashboard

  • No local analytics storage
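
If your integration calls the API directly (rather than through an SDK), the activation hit is a small POST to the activate endpoint of your self-hosted instance, which then forwards it to FE&R. The path and field names below follow the cloud Decision API's conventions and are assumptions here — check the API reference for your version.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// activateURL is a placeholder for the activate endpoint on your instance.
const activateURL = "http://decision-api.internal:8080/v2/activate"

// activationHit tells FE&R that a visitor was actually exposed to a
// variation, so it shows up in the reporting dashboard (field names assumed).
type activationHit struct {
	VisitorID        string `json:"vid"`
	EnvironmentID    string `json:"cid"`
	VariationGroupID string `json:"caid"`
	VariationID      string `json:"vaid"`
}

func main() {
	body, _ := json.Marshal(activationHit{
		VisitorID:        "visitor-123",
		EnvironmentID:    "your_env_id",
		VariationGroupID: "variation_group_id",
		VariationID:      "variation_id",
	})

	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Post(activateURL, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("activation hit failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("activation status:", resp.Status)
}
```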

3. Infrastructure Management Responsibility

You are responsible for:

  • Server provisioning and scaling

  • Cache management (Redis/Memory/Local/Custom)

  • Monitoring and alerting

  • Security updates and patches

  • Backup and disaster recovery

  • Network configuration

4. No Automatic Updates

Unlike the cloud-based version, you must manually update to new versions.

5. Initial Setup Complexity

  • Requires infrastructure knowledge

  • Cache setup and configuration

  • Load balancer configuration

  • Monitoring stack setup

6. Cache Consistency

With multiple Decision API instances, each instance that uses an in-memory or local cache maintains its own state. To keep visitor assignments consistent across instances, use a shared cache such as Redis (see "Visitor assignments caching" below).

7. Feature Parity

Some FE&R platform features may require additional configuration:

  • Multi-environment management

  • A/B test allocation changes

  • Advanced features like 1V1T, XPC and Dynamic Allocation

8. Visitor assignments caching

Thanks to our hashing algorithm, a unique visitor (with a unique ID) will always see the same variation of a use case, as long as the traffic allocation of the variations stays the same. If you're using a higher-level Flagship feature such as Experience Continuity, 1 visitor 1 experiment, or Dynamic Allocation, or if you manually change the traffic allocation of your variations, the hashing algorithm is no longer enough: you need an assignment cache that stores each visitor's assignment to a variation. The Flagship cloud-hosted Decision API provides such a caching mechanism by default, using our own high-velocity key-value store. If you want to use those features when hosting the Decision API on-premise, you need to configure your own cache to store visitor assignments.
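
As a rough sketch of what such a cache could look like, here is a Redis-backed key/value store keyed by visitor and campaign. This is purely illustrative — it is not the Decision API's actual cache connector interface, and the key layout, TTL, and Redis address are assumptions; follow the product's cache configuration documentation for the real integration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// AssignmentCache stores which variation a visitor was assigned to for a
// given campaign, so the assignment survives traffic-allocation changes.
type AssignmentCache interface {
	Get(ctx context.Context, visitorID, campaignID string) (string, error)
	Set(ctx context.Context, visitorID, campaignID, variationID string) error
}

type redisAssignmentCache struct {
	client *redis.Client
	ttl    time.Duration
}

func (c *redisAssignmentCache) key(visitorID, campaignID string) string {
	return fmt.Sprintf("assignment:%s:%s", visitorID, campaignID)
}

func (c *redisAssignmentCache) Get(ctx context.Context, visitorID, campaignID string) (string, error) {
	val, err := c.client.Get(ctx, c.key(visitorID, campaignID)).Result()
	if err == redis.Nil {
		return "", nil // no assignment stored yet
	}
	return val, err
}

func (c *redisAssignmentCache) Set(ctx context.Context, visitorID, campaignID, variationID string) error {
	return c.client.Set(ctx, c.key(visitorID, campaignID), variationID, c.ttl).Err()
}

func main() {
	// Hypothetical Redis address — replace with your own instance.
	cache := &redisAssignmentCache{
		client: redis.NewClient(&redis.Options{Addr: "redis.internal:6379"}),
		ttl:    90 * 24 * time.Hour, // illustrative retention period
	}

	ctx := context.Background()
	_ = cache.Set(ctx, "visitor-123", "campaign-abc", "variation-2")
	assigned, _ := cache.Get(ctx, "visitor-123", "campaign-abc")
	fmt.Println("stored assignment:", assigned)
}
```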
