Architecture, Security, and Interaction Flows
- AI Core Services Architecture
- Agent State Connector Architecture
- Subroutines Architecture
- Interaction Flows
This topic presents Genesys Predictive Routing (GPR) architecture, beginning with a high-level overview, followed by more detailed views of the connections used by the AI Core Services (AICS), Agent State Connector (ASC), URS Strategy Subroutines, and Composer Subroutines components.
This topic covers Genesys Predictive Routing architecture, with some additional Genesys components included in the diagrams for completeness. For a full list of required components and versions, refer to System Requirements and Interoperability.
In addition, you need adequate data sources and a well-thought-out data pipeline.
AI Core Services Architecture
The following diagram presents an overview of the various components and connections required for GPR in a single-server deployment.
See Scaling AI Core Services for information on how to scale each type of container included in the AICS deployment.
The Tango Container
Contains the Genesys platform that provides the GPR scoring engine, the Predictive Routing REST API, and the web-based user interface.
The Workers Containers
Contain function-specific processes, identifiable by their descriptive container names.
The Minio Container
Contains Minio, which is a data-upload application. This container is available in releases 9.0.013.01 and higher, where it improves processing times for the initial Dataset upload.
AICS handles Dataset uploading without requiring you to configure Minio. However, if you are interested in more detailed information about this component, see the Minio web site and documentation.
The MongoDB Container
A highly scalable, highly available NoSQL database that is especially efficient at handling large batches of JSON-format data. It also supports fast, efficient queries of that data. Starting in MongoDB 3.2, WiredTiger is the default storage engine for MongoDB.
- Links to additional information about MongoDB:
- WiredTiger Storage Engine
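To illustrate why a document store suits this workload, the following sketch shows the kind of JSON document involved. The field names are invented for illustration and do not reflect the actual GPR schema.

```python
import json

# Hypothetical example of the kind of JSON document a NoSQL store such as
# MongoDB handles efficiently. Field names here are illustrative only,
# not the actual GPR schema.
agent_score_doc = {
    "interaction_id": "ixn-0001",
    "predictor": "customer_service",
    "scores": [
        {"agent_id": "agent-100", "score": 0.82},
        {"agent_id": "agent-101", "score": 0.57},
    ],
}

# Documents like this serialize to and from JSON without a fixed schema,
# which is what makes large batch inserts and ad hoc queries efficient.
serialized = json.dumps(agent_score_doc)
restored = json.loads(serialized)
```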
The NGINX Container
In AICS releases prior to 9.0.013.01, provides an optional reverse proxy (HTTP) for the Tango and workers containers. Can also optionally be used as a load balancer in non-production (test) environments.
Agent State Connector Architecture
Agent State Connector (ASC) is a Java application that reads data from Configuration Server and Stat Server. ASC can be monitored, started, and stopped in Solution Control Interface. It supports a warm-standby high availability architecture.
ASC pulls all configuration data from Configuration Server and saves it to the Predictive Routing database. From the data, you can create the Agent Profile schema.
Once you have set up your Agent Profile, ASC receives updates from Configuration Server (changes to agent configuration, such as a new location or a change to a skill level) and from Stat Server (updates on agent availability), and sends them to AICS.
Subroutines Architecture
Predictive Routing supplies out-of-the-box subroutines for environments running either Interaction Routing Designer (IRD) + Universal Routing Server (URS) or Composer + Orchestration Server (ORS) + URS.
- IRD requires you to use the Predictive Routing URS Strategy Subroutines component. Insert the strategy subroutines into the appropriate position in your strategy flow.
- Composer requires the use of the Predictive Routing Composer Subroutines. Insert the subroutines into the appropriate position in your workflow. If you are using Composer, you need Orchestration Server (ORS) as well as URS in your environment.
The Subroutines invoke Predictive Routing in real time. They send a request to AICS, which performs the scoring based on the information you configured in your Predictor and the Model or Models based on it. AICS returns a projected score for each agent in the target group, indicating how well that agent is expected to handle the specific interaction, given factors such as interaction type, customer intent, agent skill level, and any other factors you expect to be relevant. URS then chooses the optimal routing target.
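The request/response exchange described above can be sketched as follows. The payload structure, field names, and selection logic are illustrative assumptions, not the documented GPR REST API.

```python
import json

# Hypothetical sketch of the scoring exchange: a subroutine sends interaction
# context plus a target agent group to AICS, which returns per-agent scores.
# Field names are assumptions, not the actual GPR REST API contract.
def build_score_request(interaction, agent_ids):
    """Build the JSON body a subroutine might POST to the AICS scoring API."""
    return json.dumps({
        "context": interaction,   # e.g. interaction type, customer intent
        "agents": agent_ids,      # the target group to score
    })

def best_agent(score_response):
    """Pick the highest-scoring agent from a scoring response."""
    scores = json.loads(score_response)["scores"]
    return max(scores, key=lambda s: s["score"])["agent_id"]

# Simulated response -- in a live deployment this would come back from AICS
# over HTTP.
response = json.dumps({"scores": [
    {"agent_id": "a1", "score": 0.41},
    {"agent_id": "a2", "score": 0.88},
    {"agent_id": "a3", "score": 0.63},
]})
```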
Genesys Predictive Routing (GPR) is delivered as a set of Docker images. This ensures consistent environments from development to production, because Docker containers maintain all configurations and dependencies internally, without depending on software installed on the host server. With Docker, upgrades are easier and more predictable. Scaling across multiple hosts requires only starting the same Docker containers on multiple host servers. In addition, Docker provides isolation: every part of GPR can be scaled separately and has guaranteed access to hardware resources.
Genesys uses the following best practices when it comes to security:
- GPR uses a CentOS 7 Docker image as the base image.
- Genesys supports Security Enhanced Linux (SELinux) on CentOS 7. For a discussion of this functionality and how to configure it, see How to disable SELinux on the Linuxize web site.
- GPR Docker images containing Genesys software are continuously scanned for vulnerabilities as part of the build and test pipelines using anchore.io, among other tools.
- All GPR Docker containers run in unprivileged mode.
- Inside Docker containers, GPR software is executed as a non-root user.
- All ports and volumes that should be exposed by each container are specified in Required Ports for Firewall Configuration.
The measures listed above, combined with properly secured host servers, ensure that GPR deployed using Docker containers is as secure as a deployment using more conventional methods, such as delivery as a set of RPMs.
- GPR delivered as a set of Docker containers does not require any additional ports to be open.
GPR uses MongoDB as its database, which is also delivered as a Docker image. GPR uses the official MongoDB Docker image at https://hub.docker.com/_/mongo/.
- MongoDB inside the Docker container requires access to the same ports and same hardware resources as MongoDB running outside of a Docker container.
To understand how Docker containers comply with various security regulations and best practices, see the following pages on the Docker site:
To understand how MongoDB databases comply with various security regulations and best practices, see the following page on the MongoDB site:
Predictive Routing supports the following security and connection protocols:
- Transport Layer Security (TLS) 1.2
The following protocols are supported for the specified connections:
- Tango container to MongoDB container: SSL
- Workers containers to Tango: SSL
- Workers containers to MongoDB container: SSL
- ASC to Config Server: TLS 1.2; you can specify an upgrade-mode Configuration Server port by updating the -port command line parameter in the ASC Application object Start Info tab.
- ASC to Stat Server: TLS 1.2
- ASC to AICS: HTTP/S
- URS or ORS to AICS: HTTP/S
To use secure TLS connections between ASC and Stat Server/Configuration Server, you must configure such connections manually following the procedures described in the Genesys Security Deployment Guide.
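As an illustration of the TLS 1.2 floor listed above, the following Python sketch builds a client-side SSL context that refuses anything older than TLS 1.2. This is purely illustrative; actual GPR component connections are secured through configuration as described in the Genesys Security Deployment Guide, not in application code.

```python
import ssl

# Illustrative only: a client-side SSL context that enforces TLS 1.2 as the
# minimum protocol version, mirroring the protocol floor listed above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```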
Basic Predictive Routing Interaction Flow
The graphic below shows a very general interaction flow using Predictive Routing. Refinements to the flow depend greatly on the details of your environment. Key aspects that differ across environments include:
- Your data—That is, the interaction types supported and the applications that might have relevant information. Genesys Info Mart is a key data source, but CRM systems and other applications in your environment can also provide important data. See The Data Pipeline for more information.
- Your pre-routing data flow. This depends on the interaction type and the exact architecture in your environment. For example, is this a chat interaction or a call? Do you use an IVR, and if so, what information do you attach?
- The Genesys routing solution you are using. Predictive Routing supports routing with IRD/URS and with Composer/ORS/URS.
- Whether you are reporting on Predictive Routing and, if so, whether you are using GI2 or another solution to present the data stored in Genesys Info Mart.
Interaction Flows within Predictive Routing
Existing Composer or IRD strategies are modified to incorporate Predictive Routing subroutines. Instead of picking the agent with the required skills who has been waiting the longest, or using simple agent-group routing, Predictive Routing can predict the best results for an interaction, based on the customer's intent or other relevant information.
When you are using Predictive Routing to route interactions, there are two main scenarios that affect how this matching plays out:
- Agent Surplus: There are relatively few interactions, which means there could be a number of high-score agents available. You can configure a minimum threshold so that, if the agents available are not very highly ranked, the strategy keeps the interaction in queue until a better-scoring agent becomes available.
- Interaction Surplus: There are many interactions, so that most agents are busy and it might be more difficult to find an ideal agent for each interaction. In such a scenario, you can have agents matched to the interaction for which they have the highest probability of getting a positive result.
Agent Surplus Flow
In this case there are agents logged in and in the Ready state who can respond to interactions immediately. From a Virtual Agent Group that is defined by skill expression, URS first tries to route an interaction to an agent with the best score, using the following process to match agents and interactions:
- An interaction arrives at the routing strategy, which has a target group of agents.
- The ActivatePredictiveRouting subroutine sends a request to the Predictive Routing scoring server via HTTP request.
- Predictive Routing returns scores for each agent in the target group based on the criteria you selected in the active model.
- The ActivatePredictiveRouting subroutine updates a global cache in URS memory, which keeps agent scores for all interactions. When URS tries to route the current interaction to the agent group, it sorts the agents according to their scores, in descending order, and routes to the agent with the best score first.
When URS takes an interaction from the queue:
- URS calls the ScoreIdealAgent subroutine, which reads the agent scores for the target group from the global map and ranks the agents by score.
- URS calls the IsAgentScoreGood subroutine, which selects the available agent with the highest score, assuming the agent has a score high enough to be selected for this interaction.
- In an agent-surplus scenario, it is typically not a problem to route to an agent with a good score. For scenarios where this is not the case, see Interaction Surplus Flow, below.
- URS calls the PrrIxnCompleted subroutine, which updates user data with the scoring result for storage in Genesys Info Mart.
- URS calls the PRRLog macro, which logs the result in the URS log file.
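The ranking-and-threshold steps above can be sketched as follows. Function and variable names are illustrative, not the actual subroutine implementations.

```python
# Minimal sketch of the selection logic described above (ScoreIdealAgent /
# IsAgentScoreGood). Names are illustrative, not the real subroutines.
def pick_agent(score_map, available, threshold):
    """Return the best-scoring available agent at or above threshold, or None."""
    # Rank available agents by score, descending (ScoreIdealAgent).
    ranked = sorted(available, key=lambda a: score_map.get(a, 0.0), reverse=True)
    # Accept the top agent only if its score meets the threshold
    # (IsAgentScoreGood).
    if ranked and score_map.get(ranked[0], 0.0) >= threshold:
        return ranked[0]
    return None  # keep the interaction queued, waiting for a better agent

scores = {"a1": 0.35, "a2": 0.72, "a3": 0.51}
```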
Interaction Surplus Flow
This scenario covers situations when all agents are already busy handling interactions and new interactions are queued. When one of the agents becomes ready, the system selects the interaction for which the agent has the best score. This is not necessarily the interaction that has been in the queue longest.
Using Agent Hold-Out
Agent hold-out enables you to have an interaction wait a specified time, even when an agent has become available, if the available agent has a low score for the interaction and there is a chance that a better-matched agent might become available within the configured time window. The interaction flow is as follows:
- URS calls the IsAgentScoreGood subroutine, which determines whether any of the available agents meet the threshold for handling the interaction.
- If available agents have low scores for this interaction and the interaction spent only a short time in the queue, URS waits for a better agent to become ready.
- The minimum acceptable score required for an agent for the interaction is gradually reduced, so if no higher-scored agent becomes available, the lower-scored agent might finally be given the interaction.
After that determination occurs, the remainder of the flow is the same as that given in the agent-surplus flow above. Use the relevant Predictive_Route_DataCfg Transaction List Object configuration options to configure the agent hold-out behavior.
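A minimal sketch of the gradual threshold relaxation described above, assuming a linear decay for illustration (the actual relaxation schedule is governed by your configuration options):

```python
# Hedged sketch of agent hold-out: the minimum acceptable score starts high
# and is relaxed as the interaction waits in queue. Linear decay is an
# assumption for illustration; the real schedule comes from the
# Predictive_Route_DataCfg options.
def acceptable_score(initial, floor, rate, seconds_in_queue):
    """Threshold decays from `initial` toward `floor` at `rate` per second."""
    return max(floor, initial - rate * seconds_in_queue)
```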
Dynamic Interaction Priority Increments
To avoid having interactions lingering in a queue for an excessive amount of time, URS can trigger an escalation in interaction priority after a time delay that you set. To speed up interaction handling, you can incrementally relax the minimum skill level required for agents to handle the interaction or expand the pool of agents to consider.
Each time a routing strategy tries to route an interaction, it calls the ActivatePredictiveRouting subroutine. After each failed routing attempt, the strategy checks how long the interaction has been waiting in the queue and, if the time in queue is above a certain threshold, it routes the interaction to the next available agent, no matter their score for the interaction.
Use the relevant Predictive_Route_DataCfg Transaction List Object configuration options to set up the priority increments.
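The time-based escalation described above can be sketched as follows. The step length and increment values are illustrative assumptions; real values come from your Predictive_Route_DataCfg configuration options.

```python
# Illustrative sketch of time-based priority escalation: the interaction's
# priority is bumped once for each full step it spends in queue. Parameter
# values are assumptions, not documented defaults.
def escalated_priority(base, time_in_queue, step_seconds, increment):
    """Raise priority by `increment` for each full `step_seconds` in queue."""
    return base + (time_in_queue // step_seconds) * increment
```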