
Journey Optimization Platform Release Notes

Journey Optimization Platform was renamed to AI Core Services.

AI Core Services is part of the 9.x release stream.
Release Date   Release Type   Restrictions             AIX   Linux   Solaris   Windows
12/22/17       General        Under Shipping Control         X


What's New

This release contains the following new features and enhancements:

  • The product name has changed from Genesys Predictive Matching to Genesys Predictive Routing. This change is not yet reflected in the application interface or in the documentation.
  • Genesys Predictive Routing now supports both single-site and multi-site HA architectures.
  • Genesys Predictive Routing now supports historical reporting, provided by the Genesys Reporting solution. The following reports are available in Genesys Interactive Insights: Predictive Routing AB Testing Report, Predictive Routing Agent Occupancy Report, Predictive Routing Detail Report, Predictive Routing Operational Report, and Predictive Routing Queue Statistics Report. For details, including a list of the attached KVPs and the associated Info Mart tables, see Deploying: Integrating with Genesys Reporting in the Genesys Predictive Matching Deployment and Operations Guide.
    • This functionality requires Genesys Info Mart or higher, Reporting and Analytics Aggregates 8.5.002 or higher, and Genesys Interactive Insights 8.5.001 or higher.
    • Historical reporting is enabled in Predictive Routing by the following two new options: send-user-event and vq-for-reporting.
  • Two new real-time reporting templates are available for use in Pulse dashboards: Agent Group KPIs by Predictive Model and Queue KPIs by Predictive Model.
  • Two new analysis reports have been added to the Genesys Predictive Routing application: Agent Variance and Lift Estimation.
    • The Lift Estimation analysis report uses simulation to estimate the lift in agent performance that the predictive model might achieve. The evaluation method uses a technique called Doubly Robust Evaluation that accounts for possible errors when using a predicted value as compared to achieved results.
    • The Per Agent Variance analysis report identifies the presence of variance in agent performance for a target metric, which is important for successful deployment of Predictive Routing.
  • The Model creation interface now includes additional model quality and agent coverage reporting. The new model quality report for classification models evaluates quality using the area under the curve (AUC) method. You can analyze model effectiveness using a Receiver Operating Characteristic (ROC) Curve.
  • The Feature Analysis report, the model creation and training functionality, and the dataset import functionality have been improved to handle large datasets.
  • You can now combine simple predictors to create composite predictors. You can use composite predictors for the following use cases:
    • Making routing decisions based on composite metrics rather than just one.
    • Using different simple predictors alternately depending on a context variable passed in a scoring request.
    For details, see About Composite Predictors.
  • Health checks and monitoring have been improved for both Journey Optimization Platform (JOP) and Agent State Connector (ASC). Among the new functionality and improvements are the following:
    • ASC now enables you to set alarms if there are persistent connection issues with Configuration Server or Stat Server.
    • Improved logging for the JOP Tango container when you train a model. The relevant log message now includes the model ID and feature size.
  • The behavior of the agent occupancy control feature has been modified. This update introduces a new configuration option, agent-occupancy-factor. In addition, the descriptions of the use-agent-occupancy and agent-occupancy-threshold options have been updated to reflect the new behavior.
  • The behavior of the time-sliced A/B testing mode (the prr-mode option set to ab-test-time-sliced) has been improved. Previously, the alternation of time periods during which Predictive Routing interaction processing is on or off restarted each midnight. Now the periods are counted from midnight, January 1, 1970, GMT (the epoch time). This change enables you to run Predictive Routing at different times during a day or to run a test over multiple days. In addition, the default value of the ab-test-time-slice option is now 1741 seconds (approximately 29 minutes). Previously, the default value was 60 seconds, which is far shorter than the period recommended for production environments.
  • You can now set a timeout value that enables Genesys Predictive Routing to tell whether URS is overloaded, at which point Predictive Routing turns itself off. This functionality is controlled by the new overload-control-timeout option.
  • When you are training a model, new status indicators inform you of the progress of model training, from "IN QUEUE", to a blinking "IN TRAINING" while the training job is being processed, to "TRAINED" after the job has completed.
  • Predictive Routing now enforces conversion of non-string ID values into strings in the Agent and Customer Profile schemas.
  • The Predictive Routing strategy integration with URS now automatically deletes interaction scoring data stored in the URS global map once the interaction is routed or abandoned. As a result, the PrrIxnCleanup subroutine is no longer needed. This change in Predictive Routing subroutines is supported in URS version 8.1.400.37 or higher.
  • Apache Kafka is no longer used for triggering the execution of model training or analysis jobs. This functionality has been taken over by MongoDB. As a result, the kafka container is no longer part of the JOP installation package.
  • Labels in the Predictive Routing interface on the Predictor Settings tab and the Predictors tab (which enables you to view and run analysis of your predictors) have been changed to improve usability: Action Features is now Agent Features, Context Features is now Customer Features, Action Type is now Agent Identifier, and Context Type is now Customer Identifier.
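The AUC method mentioned above for evaluating classification model quality can be illustrated with a short, self-contained sketch. This is a generic illustration of how AUC works, not Predictive Routing code; the labels and scores are invented sample data.

```python
# Illustration of the area-under-the-curve (AUC) metric for a
# classification model. AUC equals the probability that a randomly
# chosen positive example receives a higher score than a randomly
# chosen negative one (ties count as half a win).
def auc(labels, scores):
    positives = [s for l, s in zip(labels, scores) if l == 1]
    negatives = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# A perfect model scores every positive above every negative (AUC = 1.0);
# random scoring yields AUC near 0.5.
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.7, 0.2]
print(auc(labels, scores))  # -> 1.0
```

The ROC curve mentioned in the same item plots the true-positive rate against the false-positive rate as the score threshold varies; the AUC value above is the area under that curve.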
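The epoch-anchored time slicing described for the ab-test-time-sliced mode can be sketched as follows. The helper function name and the choice of which slice counts as "on" are illustrative assumptions, not product code; the slice length matches the new 1741-second default.

```python
# Sketch of epoch-anchored A/B time slicing: fixed-length periods
# alternate between "Predictive Routing on" and "off", counted from
# 00:00:00 January 1, 1970 GMT (the epoch) rather than from each
# midnight, so slices can span midnight and tests can run for days.
AB_TEST_TIME_SLICE = 1741  # new default, in seconds (~29 minutes)

def predictive_routing_on(epoch_seconds, slice_seconds=AB_TEST_TIME_SLICE):
    """Return True if the timestamp falls in an even-numbered slice
    (the even/odd assignment here is an arbitrary assumption)."""
    return (epoch_seconds // slice_seconds) % 2 == 0

# Slice 0 covers seconds 0..1740 after the epoch ("on"),
# slice 1 covers 1741..3481 ("off"), and so on.
print(predictive_routing_on(0))     # -> True
print(predictive_routing_on(1741))  # -> False
print(predictive_routing_on(3482))  # -> True
```

Because the slice index depends only on the absolute epoch timestamp, the on/off schedule never resets at midnight, which is what allows a single test to alternate consistently across multiple days.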

Resolved Issues

This release contains the following resolved issues:

Running feature analysis on large datasets has been improved, optimizing memory and CPU usage and changing the way Predictive Routing calculates feature importance. If you are upgrading to Predictive Routing 9.0.007 and need to work with large (100 columns) datasets, execute the following script in a Python shell (python, ipython, or "python --mode=prod", depending on your needs) to recalculate cardinalities for your existing datasets:

from solariat_bottle import dbconnect
from solariat_bottle.jop.datasets.models import Dataset

# Recalculate cardinalities for every existing dataset
[d.compute_cardinalities() for d in Dataset.objects.find()]


Agent and Customer Profile API GET requests now support batch sizes and start indexes. (PRR-1453)

Added support for updating, deleting, and reading the indexes on Agent and Customer Profile collections through the Predictive Routing API. Detailed documentation is available in the Predictive Routing API Reference. (PRR-1393)

Error handling for Predictive Routing jobs has been improved. These jobs include dataset analysis, predictor analysis, and model training. If these jobs fail for any reason, an informative error message is generated in the Predictive Routing application. (PRR-1306)

The scoring response functionality provides the following additional fields used for Genesys Reporting:

  • median_score
  • mean_score
  • min_score
  • max_score
  • scores_count
  • context_matched

More details are available in the Predictive Routing API Reference. (PRR-1232)
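As a sketch of what these aggregate fields represent, the snippet below computes them from a hypothetical list of per-agent scores. The field semantics here are inferred from the field names rather than taken from the Predictive Routing API Reference, and context_matched is shown only as a literal placeholder.

```python
import statistics

# Hypothetical per-agent scores from a single scoring response.
scores = [0.42, 0.77, 0.61, 0.90, 0.55]

reporting_fields = {
    "median_score": statistics.median(scores),
    "mean_score": statistics.mean(scores),
    "min_score": min(scores),
    "max_score": max(scores),
    "scores_count": len(scores),
    # context_matched presumably flags whether the scoring request's
    # context matched a predictor; a placeholder value is used here.
    "context_matched": True,
}
print(reporting_fields["mean_score"])  # -> 0.65
```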

Predictive Routing user data is now correctly populated. (PRR-1181)

Versioning for models has been improved. When a model is trained or retrained, the integer in the Versions column is incremented. In addition, when an activated trained model is copied, the copy keeps the name of the original model, with a suffix indicating that it is a copy and a version number appended to the name. (PRR-1041)

Upgrade Notes

In this release, feature analysis on big datasets has been improved, optimizing memory and CPU usage. The new functionality changes how data is collected for the feature importance calculation. To use this new functionality on existing datasets, you must execute the following commands to recalculate the cardinalities of your existing data:

# Open a shell inside the tango container
docker exec -it tango /bin/bash

# Start a Python shell in production mode
MODE=prod python

# In the Python shell, recalculate cardinalities for every existing dataset:
from solariat_bottle import dbconnect
from solariat_bottle.jop.datasets.models import Dataset

[d.compute_cardinalities() for d in Dataset.objects.find()]
This page was last modified on June 7, 2018, at 09:38.

