The Anatomy of an Experimentation Platform at Achieve Debt Relief
Overview
Online controlled experiments, often referred to as A/B tests, are fundamental to the success of data-driven companies. They enable systematic, confident product improvements, sharpening decision-making and driving product innovation. This case study is a high-level overview of the A/B experimentation framework at Achieve Debt Relief.
The Product
Achieve Debt Relief’s client dashboard is designed to enhance the client experience by providing greater insight into the debt settlement process, offering valuable information and resources, and simplifying administrative control. The dashboard helps clients stay informed and engaged with their debt resolution efforts, ultimately supporting their path to financial freedom.
About Achieve Debt Relief
Achieve Debt Relief is a personalized program aimed at helping Americans reduce debt and achieve greater financial stability. Achieve has served over 1.5 million customers and has consolidated or resolved over $24 billion in debt. Achieve works with over 3,000 creditors each month to settle around 45,000 debts totaling over $140 million. The average debt resolution program lasts 24 to 36 months and requires a single low monthly payment, typically less than the combined monthly payments across all of a client's debts.
My Role
Client churn in the debt relief business is high: 50% of exits occur within the first 10 days of joining the program, and retention is only 55% at the end of the first three months. As the sole product owner for the Client Dashboard, I was responsible for leading product optimization to increase post-enrollment client retention. The obvious tactic for increasing retention was A/B testing, and for that I needed an experimentation platform. So I set out to build one for the client dashboard at Achieve Debt Relief!
Objectives for the Experimentation Platform
System Architecture

Core Components
The Experimentation Portal is the user interface through which experiment owners manage their experiments. Its critical features include:
Experiment Management:
Creation and Configuration: Provides tools for setting up experiments, defining variants, and configuring feature flags (a configuration sketch follows this list).
Life-Cycle Management: Supports starting, stopping, and monitoring experiments.
Results Visualization & Exploration:
Dashboards and Analytical Tools: Equips users with comprehensive dashboards and analytical tools for visualizing experiment results.
Integration with Analysis Templates: Integrates with predefined analysis templates and schedules.
Metric Definition & Validation:
Ensures metrics are rigorously defined and validated before they are used in experiments.
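To make this concrete, here is a minimal sketch of the experiment definition such a portal might capture. All names here (Experiment, Variant, the onboarding-checklist example) are illustrative assumptions, not Achieve's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an experiment definition as the portal might
# capture it; names and fields are assumptions, not Achieve's API.

@dataclass
class Variant:
    name: str                 # e.g. "control" or "treatment"
    traffic_pct: float        # share of eligible traffic, 0.0-1.0
    feature_flags: dict = field(default_factory=dict)

@dataclass
class Experiment:
    key: str                  # unique experiment identifier
    hypothesis: str           # what the owner expects to change and why
    primary_metric: str       # must be validated in the Metric Repository
    variants: list[Variant] = field(default_factory=list)
    status: str = "draft"     # draft -> running -> stopped

    def start(self):
        assert abs(sum(v.traffic_pct for v in self.variants) - 1.0) < 1e-9, \
            "variant traffic allocations must sum to 100%"
        self.status = "running"

# Example: a two-variant test of a hypothetical onboarding checklist.
exp = Experiment(
    key="dashboard-onboarding-checklist",
    hypothesis="A guided checklist reduces early-program churn",
    primary_metric="retention_day_10",
    variants=[
        Variant("control", 0.5, {"onboarding_checklist": False}),
        Variant("treatment", 0.5, {"onboarding_checklist": True}),
    ],
)
exp.start()
```

Tying each experiment to a validated primary metric and a written hypothesis at creation time is what makes the life-cycle and results tooling downstream trustworthy.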
The Metric Repository is a centralized store for all metric definitions and validated results (a sketch of a repository entry follows this list). It integrates with:
Metric Definition & Validation:
Ensures metrics are accurately defined and validated.
Reporting Service:
Generates detailed reports based on stored metrics.
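As an illustration, a repository entry might resemble the sketch below; the schema and the retention_day_10 metric are assumptions for this example, not the actual implementation.

```python
from dataclasses import dataclass

# Hypothetical metric-repository entry; the schema is illustrative.

@dataclass(frozen=True)
class MetricDefinition:
    name: str             # stable identifier referenced by experiments
    description: str      # plain-language meaning for experiment owners
    numerator_sql: str    # event/value being measured
    denominator_sql: str  # unit of analysis (clients, sessions, ...)
    direction: str        # "increase" or "decrease" is the desired movement
    validated: bool = False  # only validated metrics may gate decisions

retention_day_10 = MetricDefinition(
    name="retention_day_10",
    description="Share of newly enrolled clients still active on day 10",
    numerator_sql="SELECT COUNT(DISTINCT client_id) FROM activity WHERE day <= 10",
    denominator_sql="SELECT COUNT(DISTINCT client_id) FROM enrollments",
    direction="increase",
)
```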
The Analysis Service performs various analyses on the experiment data to ensure the integrity and usefulness of results. It includes:
Pre-Experiment Analysis:
Conducts initial analyses to verify the experiment setup (a sample ratio mismatch sketch follows this list).
Scorecard Generation:
Creates scorecards that summarize experiment performance.
Alerting:
Sends alerts based on analysis results to notify experiment owners of significant findings.
Ad-hoc/Deep-Dive Post-Experiment Analyses:
Enables detailed investigation of metric changes and their causes.
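One standard check of this kind, used both to verify setup and as an in-flight guardrail, is the sample ratio mismatch (SRM) test: it verifies that the observed traffic split matches the configured allocation. The sketch below uses SciPy's chi-squared test with hypothetical counts; it is a common industry pattern, not necessarily the exact test Achieve runs.

```python
from scipy.stats import chisquare

def srm_check(observed_counts, expected_ratios, alpha=0.001):
    """Flag a sample ratio mismatch: observed variant counts deviating
    from the configured split by more than chance would allow."""
    total = sum(observed_counts)
    expected = [total * r for r in expected_ratios]
    _, p_value = chisquare(f_obs=observed_counts, f_exp=expected)
    return p_value < alpha, p_value

# Hypothetical counts for a 50/50 test: a split this lopsided would
# trigger an alert to the experiment owner before results are trusted.
mismatch, p = srm_check([50_912, 49_088], [0.5, 0.5])
print(f"SRM detected: {mismatch} (p={p:.2e})")
```

An SRM alert usually means the assignment or logging pipeline is broken, so downstream scorecards are suppressed until the cause is found.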
The Log Processing Service handles the collection, processing, and preparation of the data logs generated by experiments. It includes:
Log Collection and Cooking:
Gathers raw logs and prepares ("cooks") them for processing (a toy sketch follows this list).
Batch Processing System:
Processes logs in batches for large-scale data handling.
Near Real-Time Processing System:
Processes logs in near real time for timely analysis and feedback.
Data Quality Monitoring:
Ensures the integrity and accuracy of the collected data.
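For intuition, here is a toy sketch of cooking raw assignment and event logs into an analysis-ready table with pandas; the column names, events, and join logic are assumptions for illustration, not Achieve's actual schema.

```python
import pandas as pd

# Toy "cooking" sketch: join raw variant-assignment logs with raw event
# logs into one analysis-ready row per client.

assignments = pd.DataFrame({
    "client_id": [1, 2, 3, 4],
    "experiment_key": ["dashboard-onboarding-checklist"] * 4,
    "variant": ["control", "treatment", "control", "treatment"],
})

events = pd.DataFrame({
    "client_id": [1, 2, 2, 4],
    "event": ["login", "login", "payment_setup", "login"],
})

# One indicator column per event of interest, one row per client.
event_flags = (
    events.assign(seen=1)
          .pivot_table(index="client_id", columns="event",
                       values="seen", aggfunc="max", fill_value=0)
          .reset_index()
)

cooked = assignments.merge(event_flags, on="client_id", how="left").fillna(0)
print(cooked)
```

The same cooked shape (one row per randomization unit, one column per metric input) feeds both the batch and near real-time paths.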
The Experiment Execution Service is responsible for running the experiments and includes:
Variant Assignment:
Manages the assignment of user segments to different experiment variants (a hash-based bucketing sketch follows this list).
Configuration Service:
Handles the distribution and updating of experiment configurations.
Experiment Assignment Library:
Interfaces with client/server applications to implement and manage feature flags.
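A common way to implement variant assignment, sketched below as an assumption rather than Achieve's actual library, is deterministic hash-based bucketing: hashing the (experiment, client) pair yields a stable position in [0, 1), so the same client always lands in the same variant without any stored state.

```python
import hashlib

def assign_variant(client_id: str, experiment_key: str,
                   variants: list[tuple[str, float]]) -> str:
    """Deterministically bucket a client into a variant.

    Hashing (experiment_key, client_id) gives each client a stable
    position in [0, 1); cumulative traffic shares carve that interval
    into one slice per variant. A common industry pattern, shown here
    as an illustrative assumption.
    """
    digest = hashlib.sha256(f"{experiment_key}:{client_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    cumulative = 0.0
    for name, share in variants:
        cumulative += share
        if bucket < cumulative:
            return name
    return variants[-1][0]  # guard against float rounding

# The same client and experiment always yield the same answer.
print(assign_variant("client-123", "dashboard-onboarding-checklist",
                     [("control", 0.5), ("treatment", 0.5)]))
```

Salting the hash with the experiment key keeps bucketing independent across experiments, so a client's variant in one test does not correlate with their variant in another.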

Lessons Learnt
01. Testing the A/B Testing Framework Requires Significant Time and Deliberate Discussion
Cross-Functional Collaboration: The core team of data analysts, engineers, data engineers, and product managers met for a cross-functional standup every other day. These meetings were crucial for discussing ongoing workstreams, addressing blockers, and planning next steps.
Importance of Deliberation: Thorough and deliberate discussions were essential to ensure the accuracy and reliability of the testing framework.
02. Procurement Takes Time to Sign Up with a Vendor
Third-Party Software: The project involved using third-party software to create the experimentation stack. We encountered significant delays in obtaining appropriate licenses, permissions, and accounts for various software components.
Time Management: It was a learning experience in managing procurement timelines and setting realistic expectations for project milestones.
03. Transitioning to a Central Data Platform Team Is Crucial for Long-Term Success
Sustainability and Focus: Over the long term, it is imperative that the core product team for the client dashboard transitions the experimentation platform to a central data platform team at Achieve.
Avoiding Resource Strain: Without this transition, the core product team, focused on customer and business priorities, could lack the bandwidth to nurture and improve the existing platform. Handing it off to a central team ensures the experimentation platform's sustained growth and optimization.