
A/B/C Split Testing

From release 8.5.3, A/B/C Split Testing is supported.

If your organization defines its business decision-making through a complex set of rule packages and rules, and you want to compare alternative outcomes and scenarios, then rather than simply changing a condition and hoping for the best, you probably want to phase changes in gradually so that you can measure their impact over time.

A/B/C Split Testing (hereafter, Split Testing) allows you to compare the business outcomes of alternative rule scenarios before rolling out significant changes to the way you make your business decisions. With Split Testing you can make, test, and review changes incrementally, assessing their effects before committing to a particular change set.

For example, you might want to test several subtle changes to the way credit card applications are treated, in order to identify which has the best business outcome. You could split test one income level against another, one age range against another, different customer segments, or indeed any conditions in your rule package.
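
The sketch below is purely illustrative and is not part of any product API; it assumes a hypothetical SplitAllocator class that divides incoming evaluations among three scenario variants by configurable weights, which is the general idea behind splitting traffic so that each variant's business outcome can be measured independently.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative only: assigns each incoming evaluation to scenario
    // variant A, B, or C according to configured weights, so outcomes
    // can later be compared per variant.
    public class SplitAllocator {
        // Variant name -> share of traffic (shares sum to 1.0).
        private final Map<String, Double> weights = new LinkedHashMap<>();

        public SplitAllocator() {
            weights.put("A", 0.80);  // current rules (control)
            weights.put("B", 0.10);  // e.g. alternative income threshold
            weights.put("C", 0.10);  // e.g. alternative age range
        }

        // Picks a variant for one request according to the weights.
        public String pickVariant() {
            double roll = ThreadLocalRandom.current().nextDouble();
            double cumulative = 0.0;
            for (Map.Entry<String, Double> entry : weights.entrySet()) {
                cumulative += entry.getValue();
                if (roll < cumulative) {
                    return entry.getKey();
                }
            }
            return "A";  // fallback for rounding edge cases
        }
    }

A request assigned to variant B or C would then be evaluated against the corresponding alternative rule scenario, and its outcome recorded against that variant so the results can be compared over time.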
