Creating and Interpreting Analysis Reports

Predictive Routing provides the following reports within the application interface:

  • Feature Analysis reports: Analyze how the characteristics and performance of agents and customers affect the metric you are targeting.
  • Agent Variance report: Analyzes how much difference there is among agents for the different business processes they work on. More variance means better opportunities for optimization.
  • Lift Estimation report: Analyzes how much impact Predictive Routing can have in the specified environment and circumstances.
  • Model Quality report: Provides an analysis of how well the model is performing.
  • Agent Coverage report: Indicates how many agent models were built, as a function of the total agents available.
Important
  • Predictive Routing supports report generation that includes up to 250 features (columns).
  • For all reports, mandatory fields are marked with an asterisk.

Feature Analysis Report Overview

You can run Feature Analysis reports from the Predictors tab and the Datasets tab. The Feature Analysis report is designed to enable you to determine which aspects of agent and customer skills, characteristics, and behavior have the most impact on the target metric.

The Feature Analysis report returns only those features or attributes that are most likely to contribute to the desired outcome (the target metric). As a result, the report might not include all attributes you selected during report configuration. Use this aspect of the functionality to winnow down the number of attributes you need to focus on, leaving you with only those attributes that strongly affect the target metric.

Tip
When a dataset has many fields, you can hide some to view the most relevant fields more easily. Hiding fields only removes them from your view. Hidden fields are still used in Feature Analysis reports for predictors and datasets.

Generate a Feature Analysis Report

Use the following procedure to create a Feature Analysis report:

  1. Click the Predictors tab or Datasets tab on the top navigation bar. To analyze a model, click Predictors.
  2. Click Analysis. This button is located on the right side of the top navigation bar.
  3. Select Feature Analysis from the drop-down Report menu.
  4. Choose the parameters you want to include using the selectors on the left side of the window.
    When you create a Feature Analysis report for a predictor, Predictive Routing extracts input data from the predictor, such as the target metric. When you run the report for a dataset, you must manually select the target metric. However, when you click Run Analysis, the algorithm used is identical.
    • Target Metrics <metric_name> Range: After the target metric is selected (automatically, if this is a report on a predictor; by you, if this is a report on a dataset), choose the metric range to report on. For example, you might choose NPS as your target metric and focus only on low-range NPS values, to look for factors that might drive NPS down.
    • Attributes: When you are setting the report parameters, all features/attributes are available for selection. Selected attributes have a check mark next to the name. Click the attribute to toggle the check mark on or off. To add all or remove all, click Select All or Select None.
      • You can select up to 250 attributes.
      • For help configuring this parameter, refer to the section below for an explanation of how the report handles attributes.
  5. Click Run Analysis.

The result appears on the Reports tab for the object you are analyzing. That is, if you are running an analysis of a dataset, the result appears on the Reports tab on the Datasets tab window.

View a Feature Analysis Report

To view a report:

  1. Click a report to view it from the list in the Run Analysis window, or click the Reports tab and select it from the list on the left side of the tab.
    • By default, the report opens showing an Overall view of the data. All attributes (features) you selected for the report that have a relative weight greater than one percent (1%) are listed on tabs under the report name, so you can view analyses of the data for each feature.
    • The Overall view shows a graph listing the features ranked according to how strongly they affect the target metric. The feature that affects the metric most strongly is assigned a value of 1.0 and the remainder are assigned numbers that indicate how influential they are relative to the strongest feature.
      For example, you might have three features, ranked as follows: FeatureA = 1.0, FeatureB = 0.86, and FeatureC = 0.54. These numbers indicate that FeatureB has only 86% as much weight in affecting the target metric as FeatureA, and FeatureC only 54%. These values are relative to the most impactful feature, not an absolute measure of their impact on the target metric. A brief sketch of this normalization appears after this procedure.
    • In the Overall view, the second chart shows the target metric values over time.
  2. Hover over any chart to view a tooltip containing information about that exact data point.
  3. To toggle between a table view and a chart view of the report, click the icon at the top left corner of the top-most chart/table.
  4. To drill down to more granular data about that specific feature, click a feature name from the list above the graphical display. By default, tabs for feature sub-reports are visible only for features with a weight greater than 0.5%. To access sub-reports for features weighted less than 0.5%, click the corresponding bar in the bar chart.
    • The charts change to show data relevant to how that feature affects the target metric.
    • When you are viewing charts for a specific feature, the score for that attribute is provided in a gray oval next to the feature name.
  5. To export the results of a Feature Analysis report, click Export. The export contains all features and the weights determined for them. You can save the file in Excel format.
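
The relative weights in the Overall view can be thought of as raw importance scores normalized by the strongest feature. The following Python sketch illustrates only that normalization step; the raw importance values are hypothetical, and the actual feature scoring is performed internally by Predictive Routing.

# Hypothetical raw importance scores for three features (illustration only).
raw_importance = {"FeatureA": 0.50, "FeatureB": 0.43, "FeatureC": 0.27}

# Normalize so the most influential feature receives a relative weight of 1.0.
max_importance = max(raw_importance.values())
relative_weights = {
    name: round(score / max_importance, 2)
    for name, score in raw_importance.items()
}

print(relative_weights)  # {'FeatureA': 1.0, 'FeatureB': 0.86, 'FeatureC': 0.54}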

Agent Variance Report Overview

Success using Predictive Routing depends on the presence of variance in agent performance for a target metric. The more variance between agents, the greater the impact of choosing better agents. The amount of variance is shown using box plots. Note that the target metric for this report must be of a numeric or boolean type.

ExampleBoxPlot.png

A large variation in the mean or median values along the horizontal axis means that the agent performs very differently depending on the interaction/customer context (based on whatever parameters you enter). A large variation along the vertical axis means the agent's performance is noisy for those inputs. This might be because the agent is erratic, or because other factors of the input context that are important to the outcome are not captured in the data. For a single-agent dataset, tight vertical bounds and large horizontal variation are best, because that type of result means there is significant potential for optimization by matching each interaction to the agent best suited to handle it.
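
As a rough illustration of what the box plots summarize, the following Python sketch computes a median and interquartile range per agent from a flat interaction dataset. The column names and the use of pandas are assumptions made for illustration; they do not reflect the product's internal implementation.

import pandas as pd

# Hypothetical interaction records; column names are assumed for illustration.
interactions = pd.DataFrame({
    "agent_id": ["a1", "a1", "a1", "a1", "a2", "a2", "a2", "a2"],
    "target_metric": [0.2, 0.4, 0.7, 0.9, 0.50, 0.55, 0.60, 0.65],
})

# Per-agent box-plot statistics: a wide interquartile range (the vertical spread)
# indicates noisy performance; differences in medians indicate room for optimization.
stats = interactions.groupby("agent_id")["target_metric"].quantile([0.25, 0.5, 0.75]).unstack()
stats.columns = ["q1", "median", "q3"]
print(stats)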

PerAgentVarianceReport.png

Generate an Agent Variance Report

To create an Agent Variance report:

  1. Click Datasets on the top navigation bar.
  2. Click Analysis. This button is located on the right side of the top navigation bar.
  3. Choose the Per Agent Variance report type from the drop-down list.
  4. Select the settings you want to include in the report.
    • Target Metric: Choose the target metric from the drop-down list of metrics included in the selected dataset.
    • Agent ID: The Agent ID should be an identifier that uniquely and precisely distinguishes each agent included in the dataset.
    • Group By: The agent variance tool enables you to show variance between agents grouped by a categorical or a numeric variable, such as agent location or seniority. To create such a report, select the appropriate parameter in the Group By field. Alternatively, you can create an agent performance analysis. In this case, you are analyzing how a specific agent performs in various contexts. To create this type of report, the Group By value should be the same as the Agent ID value.
    • Min Interactions Per Agent: Set this parameter to filter out agents that have too few interactions in the dataset records to give a meaningful picture of their performance. Genesys recommends that the minimum number of interactions per agent should be at least 10.
    • Number of Agents: Specify the number of agents for which you want to run this report. The default is 50 agents. GPR determines which agents to include by first filtering the pool of agents using the parameters you set when configuring the report, then by ranking the agents in descending order according to the number of interactions they have handled. The number of agents you specify here is then selected from the top of the resulting list. A sketch of this filtering and ranking appears after this procedure.
  5. Click Run Analysis.

The result appears on the Reports tab. Information above the graph shows the parameters used to generate it. Use the buttons at the upper right of the graph to export or delete the results.
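
The following Python sketch illustrates how the Min Interactions Per Agent and Number of Agents settings work together, as described above: agents with too few interactions are filtered out, the remainder are ranked by interaction volume in descending order, and the top N are kept. The data and variable names are assumptions made for illustration only.

from collections import Counter

# Hypothetical agent ID per interaction record in the dataset.
interaction_agents = ["a1", "a2", "a1", "a3", "a1", "a2", "a3", "a3", "a3"]

MIN_INTERACTIONS_PER_AGENT = 3   # "Min Interactions Per Agent" setting
NUMBER_OF_AGENTS = 2             # "Number of Agents" setting (default is 50)

counts = Counter(interaction_agents)

# Filter out agents with too few interactions, then rank by volume (descending).
eligible = {agent: n for agent, n in counts.items() if n >= MIN_INTERACTIONS_PER_AGENT}
ranked = sorted(eligible, key=eligible.get, reverse=True)

# The report covers the top N agents from the ranked list.
selected_agents = ranked[:NUMBER_OF_AGENTS]
print(selected_agents)  # ['a3', 'a1']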

Lift Estimation Report Overview

Once you have created and activated a model, you can use simulation to estimate the lift in agent performance the model might achieve. The evaluation method accounts for possible errors when using a predicted value as compared to achieved results. When you create a model, you specify a percentage of the data to use for model training, with the remainder available for testing. These percentages are allocated using a time-based method, as described in the Settings: Configuring, Training, and Testing Models topic. The Lift Estimation report is generated using the test segment of the dataset.

The Lift Estimation Analysis report estimates the benefit you are likely to get with GPR and how that changes with various agent availability conditions. Since it uses historical data and its associated outcomes, GPR does a hindsight analysis to arrive at this estimate. Using a designated fraction of the dataset (the test dataset) that was not used for building the model, GPR estimates the lift in the KPI if the interactions in the test dataset had been routed by the predictive model instead of the baseline routing. It does assume that the baseline routing used to collect the interaction data chose agents randomly.

To recreate the original routing scenarios as closely as possible, GPR limits the choice of agents to those who handled calls on a given day as found in the test data collected through the existing baseline routing and further assumes the agent-specific features are stable for the given day.
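
As a simple illustration of the time-based split, the sketch below orders a dataset chronologically and reserves the most recent portion for testing. The 80/20 ratio and the column names are assumptions for illustration; the actual split percentage is whatever you configured when creating the model.

import pandas as pd

# Hypothetical dataset with a timestamp column; names are assumed for illustration.
dataset = pd.DataFrame({
    "timestamp": pd.date_range("2018-01-01", periods=1000, freq="D"),
    "target_metric": range(1000),
})

TRAIN_FRACTION = 0.8  # e.g. 80% of the data trains the model

ordered = dataset.sort_values("timestamp")
split_index = int(len(ordered) * TRAIN_FRACTION)

train = ordered.iloc[:split_index]   # older interactions train the model
test = ordered.iloc[split_index:]    # most recent interactions feed the Lift Estimation report
print(len(train), len(test))         # 800 200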

Agent Availability

Agent availability is defined as the fraction of the total number of agents who handled calls on the day of the interaction. To simulate 100% agent availability, for each interaction, GPR scores all agents who are part of the agent pool for the relevant day and picks the agent with the maximum score. 50% agent availability is simulated by randomly selecting half the agents from the daily agent pool, scoring them, and picking the maximum-scored agent among that half. To ensure that the lift produced by routing with GPR in a reduced agent-availability scenario does not happen by chance, GPR repeats the Lift Estimation analysis on a random selection of agents many times (100 runs, for example), and averages the estimate across the runs.

Adjusting the availability of agents shows how different operating constraints can yield different outcomes. As the contact center becomes busier, the ideal agent becomes less often available. Note that only agents with an adequate number of samples should be included.
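
The following Python sketch outlines the simulation logic described above: for each interaction, a random subset of that day's agent pool is drawn according to the availability fraction, the best-scoring available agent is chosen, and the result is averaged over many runs. The score() function and data structures are placeholders for illustration; in the product, scoring is performed by the trained model.

import random

def estimate_metric(interactions, daily_agent_pools, score, availability=0.5, runs=100):
    """Average the simulated target metric over `runs` random agent subsets."""
    run_means = []
    for _ in range(runs):
        outcomes = []
        for interaction in interactions:
            pool = daily_agent_pools[interaction["day"]]
            # Sample the available fraction of that day's pool (at least one agent).
            k = max(1, int(len(pool) * availability))
            available = random.sample(pool, k)
            # Route to the highest-scoring available agent and record its predicted outcome.
            best = max(available, key=lambda agent: score(agent, interaction))
            outcomes.append(score(best, interaction))
        run_means.append(sum(outcomes) / len(outcomes))
    return sum(run_means) / len(run_means)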

The diagram below shows the estimated outcome of running a simulation with different assumptions on availability of agents. It assumes we can choose the best agent from only a percentage of the available agents in the pool. The method selects the specified number of interactions, identifies target agent pools based on the day of interaction, and then runs multiple simulations for differing levels of availability.

LiftEstimationResult.png

The y-axis shows the target metric used in the predictor. The x-axis represents the agent availability factor, which is part of the simulation. Agent availability ranges from 0 (no agents available) to 1 (100% of agents available).

Important
Boolean true/false metrics are converted to the equivalent numeric values, with false = 0 and true = 1. Values between 0 and 1 can be interpreted as the percent chance of a condition being true.

If you select a Group By value when defining the report parameters, target candidate agents for scoring and lift estimation are defined within the group, and the tabs above the chart show Lift Estimation reports for each group. The Aggregate tab displays the weighted average for each availability based on the number of interactions for each group.
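
The value shown on the Aggregate tab at each availability level is a weighted average of the per-group estimates, weighted by each group's interaction volume. A minimal sketch with hypothetical group names and values:

# Hypothetical per-group lift estimates at one availability level,
# with the number of test interactions in each group.
groups = {
    "VQ1": {"estimate": 0.72, "interactions": 800},
    "VQ2": {"estimate": 0.65, "interactions": 200},
}

total = sum(g["interactions"] for g in groups.values())
aggregate = sum(g["estimate"] * g["interactions"] for g in groups.values()) / total
print(round(aggregate, 3))  # 0.706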

Reading a Lift Estimation Graph

The shape of the lift estimation curve depends on the following two factors:

  • The variation in the scores across agents for each interaction. If many agents have similar scores close to the maximum score, an increase in agent availability does not significantly impact the lift, and the curve stays relatively flat as availability changes.
  • The accuracy of the model. The position of the data points on the curve relative to the base line is determined by the reliability of the predicted scores against the true outcome.
Important
In some cases, overfitting of the underlying models might produce negative lift in scenarios with high agent availability. While this may look counterintuitive, it is a result of negative correlation between the predicted and the actual scores. To reduce overfitting, ensure that individual agent models are trained with enough samples and/or reduce the number of features.

Generate a Lift Estimation Report

The Lift Estimation analysis report is generated from the Predictors tab. It uses the metric and parameters as set in the predictor selected on the Trend or Details tab.

Important
To configure this report correctly, see the Lift Estimation Best Practice Recommendations, below.

To configure the report, complete the following fields:

  1. Click Analysis, located on the right side of the top navigation bar.
  2. Select Lift Estimation from the drop-down Report menu.
  3. Select a model from the drop-down list. This model can be active or inactive, but it must have been trained.
  4. In the Number of Simulations field, enter a figure for how many times to repeat subset selection of agents for the availabilities other than 1. By default, this is set to 100. The field accepts any value larger than 0 and less than or equal to 500.
  5. In the Number of Samples field, enter a value for how many samples to use from the test set. By default, this is set to 100. This value should be approximately 30 times the number of agents.
  6. In the Group By field, enter a parameter to use to constrain interactions. Only agents who handled interactions of the specified type are used in the estimation for that group.
    When a high-cardinality feature is specified as a grouping parameter, the top 20 feature values by interaction volume are automatically extracted and a report is generated for each one of those groups.
  7. If you want to use values other than the default top 20 group values for high-cardinality features, select the Advanced check box, and then choose up to 20 group values against which you want to run the Lift Estimation report. By default, the 20 features with the highest cardinalities are available for selection in the Group Values field when Advanced mode is on.
    If there are no interactions in the test part of the dataset for a selected Group By feature, that feature does not appear in the generated report.
  8. Click Run Analysis.

The result appears on the Reports tab.

Lift Estimation Best Practice Recommendations

  • Number of Simulations
    • This value should be higher for larger numbers of agents. 100 is an appropriate value for most environments. Increase this value to the maximum, 500, if you are scoring for a large number of agents (more than 5000).
  • Number of Samples
    • If you specify a number of samples larger than is available in the test section of your dataset, the Lift Estimation analysis is partially run against data used to train the model, which reduces the power of the analysis. For example, assume a dataset with 1,000 records and an 80/20 training/testing split. That is, the first 800 records are the training dataset and the final 200 records are the test dataset. If you run a lift estimate against this model with 500 samples (300 more than the 200 rows in the test dataset), the most recent 500 rows are used, which includes the 200 records in the test dataset plus 300 records from the training dataset.
  • The Group By Parameter
    • If you are using the Group By functionality, the Lift Estimation dataset is constructed from the latest Number of Samples rows for each unique value of the selected Group By feature. The rows are drawn from the predictor dataset, using rows that fit the criteria. If there are insufficient records for a certain Group By value, that set is undersampled.
      For example, if Group By is set to Queue, which has two values, VQ1 and VQ2, and Number of Samples is set to 1,000, GPR constructs two Lift Estimation datasets (VQ1_Dataset, VQ2_Dataset). Each dataset is built from the most recent 1,000 samples from the corresponding queue. However, if there are only 167 rows in the predictor dataset where Queue=VQ2, the VQ2_Dataset is undersampled to a maximum of 167 records. A sketch of this selection logic follows this list.
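
The sketch below illustrates the selection logic described in these recommendations: the most recent Number of Samples rows are taken per Group By value, and a group with fewer matching rows is simply undersampled. Column names and the use of pandas are assumptions made for illustration.

import pandas as pd

NUMBER_OF_SAMPLES = 1000

# Hypothetical predictor dataset, ordered oldest to newest; names assumed for illustration.
predictor_data = pd.DataFrame({
    "Queue": ["VQ1"] * 5000 + ["VQ2"] * 167,
    "target_metric": [0.5] * 5167,
})

lift_datasets = {}
for queue, rows in predictor_data.groupby("Queue"):
    # Take the most recent NUMBER_OF_SAMPLES rows for this group;
    # a group with fewer rows (VQ2 here) is undersampled to whatever is available.
    lift_datasets[queue] = rows.tail(NUMBER_OF_SAMPLES)

print({queue: len(df) for queue, df in lift_datasets.items()})  # {'VQ1': 1000, 'VQ2': 167}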

Lift Estimation vs A/B Testing

Although the Lift Estimation analysis helps to assess how well a model performs and the expected lift in a specified KPI, it does not replace A/B testing. The following table highlights key differences between the two types of reports.

Important
The A/B Testing report is generated from data written to the Genesys Info Mart database. For an explanation of how to create and view this report, see Deploying: Integrating with Genesys Reporting in the Genesys Predictive Routing Deployment and Operations Guide.
Lift Estimation | A/B Testing
Offline hindsight analysis using historical interaction data | Online testing in the live production environment; considered the definitive test
Estimates the scope for KPI improvement under various assumptions | Determines the real improvement in the KPI with minimal or no assumptions
As an offline analysis, it consumes no real-time processing resources that could slow down production | A poor model or incorrect routing strategy could have detrimental effects on live traffic
Assumes that the baseline routing selects agents randomly | Makes no assumption about baseline agent selection
Lift is estimated only for agent-surplus scenarios | Can measure performance for both caller-surplus and agent-surplus scenarios
Agent availability must be estimated from past data before determining the applicable lift estimate | Lift is calculated from real agent availability during the assessment period, and is therefore more accurate
Like any statistical estimate, it carries error, which can be higher when its assumptions are violated | When the Control and Target routing methods operate in similar conditions, the calculated lift is likely to be unbiased and reliable
