
New Features by Release

New in 8.5.3

Role-Based Access Control (8.5.302)

Support for Role-Based Access Control at the Rules Package Level

Important
GRS requires Genesys Administrator 8.1.305.04 (minimum) for configuring package level permissions.



Background

Previously, GRAT used Configuration Server roles to provide only global access control to all packages in a given node of the business hierarchy. Privileges such as Modify Rule Package, Delete Rule Package, Modify Rule, and Delete Rule were granted to users via roles. With this approach, a user granted the Modify Rule Package privilege (for example) could modify every rule package defined in a node of the GRAT business hierarchy.

Release 8.5.302 provides package-level overrides to these global roles: role privileges can be restricted to specific rule packages by applying Role-Based Access Control at the rule package level. The new Rule Package Level Roles (roles created specifically for use with rule packages) can be mapped to rule packages to override the global-level roles. A Rule Package Level Role has no effect until it is mapped to a rule package.


New Role Permission—View Rule Package

View access for specific rule packages can now be controlled by using the new role permission View Rule Package. The new permission applies only at the rule package level.

Existing Role Permissions

All existing role permissions, except Create Rule Package and the template-related permissions, also apply at the rule package level.

Example

In 8.5.302 you can now assign role permissions at both the global/node level and the rule-package level to achieve outcomes such as those shown below, given this example business hierarchy:

  • Department A
    • Rule package 1
    • Rule package 2
    • Sales
      • Rule package 3
  • Department B
    • Rule package 4


  • User A—Can see Department A but not Department B
  • User B—Can see Department B but not Department A
  • User C—Can see rule package 1, but rule package 2 is hidden

Location

To distinguish these new roles from global-level roles, they are placed in a new folder:

[Tenant] > Roles > GRS Rule Package Level Roles

Package-Level Overrides

Where package-level roles are mapped to a rule package, they override global-level roles.

RBAC.png


Managing the Mapping of Roles

The mapping of rule packages to Rule Package Level Roles is managed in Genesys Administrator or Genesys Administrator Extension, in the options under the Rule Package Level section of the \Scripts\GRS Access Control\GRS Role Mappings script. The example below is from Genesys Administrator.

Important
Because the delimiter in the list of roles is a comma, you cannot use commas in role names.
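For illustration only, the sketch below shows how such a mapping might look under the Rule Package Level section of the GRS Role Mappings script; the rule package and role names are hypothetical, and the exact option layout in your environment may differ:

Rule Package Level (section)
SalesRulePackage = Sales Package Authors, Sales Package Approvers
CollectionsRulePackage = Collections Package Authors

In this sketch, each option name is assumed to identify a rule package, and its value is the comma-separated list of Rule Package Level Roles that apply to that package.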


GA mappings1.png


Viewing GRAT User Permissions

To enable GRAT users to view their current list of permissions, a Check My Permissions button is now also available at the rule-package level; it shows the permissions in effect for the selected rule package.


CheckMyPermissions3.png



Support for Deployment to GWE Clusters (8.5.302)

In the case of GRE (Genesys Rules Engine) cluster deployment, GRAT gets the GRE nodes' information from the cluster and deploys the rule package individually to each of the cluster nodes.

However, a GWE (Genesys Web Engagement) cluster is different: GWE cluster nodes are not available as connections, so the deployment must be targeted at the cluster host itself, that is, a host in the Configuration Server information of the GWE cluster. See the Deployment Guide for more detail.


Support for GRAT Clustering (improved in 8.5.302)

Background

Before release 8.5.301, Genesys recommended maintaining a cold-standby backup GRAT server, connected to the same database repository as the primary, which could be initialized in the event of a primary GRAT server failure or disconnection.

A configuration option called clear-repository-cache could be set to force GRAT to clear and rebuild the local cache/search indices on startup. This allowed the backup server to synchronize with the repository, even if it had been shut down for months. Synchronization time depended on the number of rule packages deployed.

What's New?

In release 8.5.301, you can now configure clusters of GRAT servers, which deliver much greater resilience and availability, enabling almost instant switchovers between GRAT nodes that are members of the cluster. All cluster members connect to the same database repository.

No single GRAT node is considered primary—they are all equal partners in the n-node cluster.

Each node maintains a journal of changes in a separate repository database table. Nodes periodically poll the database to ensure that they mirror updates made on the other nodes. You can also add new nodes to the cluster and synchronize them with their peers.

  • New GRAT Cluster Application Template
  • The GRAT cluster configuration object must be based on a new application template—GRAT_Rules_Authoring_Tool_Application_Cluster_<version>.apd—that is delivered with this release. GRAT instances become members of the cluster when their Application object is added to the Connections tab in the cluster's Application object.

  • High Availability
  • An n-node cluster configuration can be used to deliver High Availability. For example, if a customer wants to use one GRAT server as the primary server and another as the secondary/backup warm-standby server, a load balancer can be configured to send all traffic to the primary server. The warm-standby instance periodically polls the journal table and keeps its cache/index files synchronized. In the event of a failure (or planned maintenance) of the primary GRAT server, the load balancer switches and sends traffic to the warm-standby secondary/backup GRAT server. The user would have to log in again, and any unsaved changes from their old session would be lost; however, they could resume work in seconds instead of waiting for a cold-standby GRAT to become available.

  • Load Balancing
  • A load balancer can evenly distribute the load across the available GRAT nodes in the cluster, or it can distribute the load based on other criteria, such as the geographic location of the browser. However, once a node has been selected, the load balancer must ensure that the same node is used for the duration of the session (session stickiness).

    For GRAT user-interface requests, the load balancer must send all requests pertaining to a session to the same GRAT node that initiated the session after successful login. Similarly, for the GRAT REST API, the load balancer must send all requests that follow a successful login to the same GRAT node that handled the login request. More information on the GRAT REST API Authentication mechanism is available here.

  • Scheduled Deployments of Rule Packages
  • Scheduled deployments of rule packages can now be viewed, edited, and cancelled by GRAT users on GRAT instances that did not originally perform the scheduling; previously, only the original GRAT user/instance that performed the scheduling could do this. Any GRAT instance that modifies a scheduled deployment takes over responsibility for handling that deployment. The deployment history, including a new Deployed From field that indicates which node last scheduled the deployment, is now available to all nodes in the cluster (visible in the Deployment History tab).

    GRATDepHistTab.png
Important
The new Deployed From field is also visible in standalone GRAT instances—it will display the ID of the standalone GRAT instance.

Deploying the GRAT Clustering Feature

  1. Import the GRAT_Rules_Authoring_Tool_Application_Cluster_<version>.apd template into your Genesys configuration environment.
  2. Create a GRAT cluster Application object based on the new template. Adjust the configuration options as needed.
  3. For each GRAT node to be added to the cluster, add its Application ID as a connection in the cluster's Application object.
  4. Add the DAP (Database Access Point) Application object to be used by the GRAT cluster as a connection in the cluster's Application object.
    Important
    An existing standalone GRAT database can be used.
  5. If you are re-using an existing standalone GRAT instance in the cluster, re-deploy the GRAT application to ensure that its local cache is re-initialized.
    Important
    • Any change in the cluster configuration takes effect only after the GRAT servers are restarted.
    • For high availability, GRAT instances must have a high-speed connection to the database. Slow connections may result in the type of issues commonly seen when the local repository cache is corrupted.

Removing a GRAT Node from the Cluster

  1. Remove the GRAT node's Application ID from the list of connections in the cluster's Application object.
  2. Manually remove the GRAT node's entry from the LOCAL_REVISIONS table in the GRAT database (this is a single row, identified by the GRAT Application's DBID in the JOURNAL_ID column); see the sketch after this list.
    Warning
    If this step is not performed, the clean-up thread (janitor) will not be effective.
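As a sketch only (the exact SQL syntax and identifier quoting depend on your RDBMS and schema naming), the manual cleanup in step 2 amounts to deleting that single row:

DELETE FROM LOCAL_REVISIONS WHERE JOURNAL_ID = <DBID of the removed GRAT Application>;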



Removing a GRAT Node from the Cluster (post-8.5.302)

From release 8.5.302, manual cleanup of the LOCAL_REVISIONS table after node removal is not required. The cleaning task is now automated by the clean-up thread (janitor) and is controlled by the following new configuration options in the settings section of the GRAT cluster application:

  • local-revisions-janitor-enabled
    • When this option is enabled, the clean-up task runs at regular intervals and removes from the LOCAL_REVISIONS table the entries for cluster nodes that are no longer part of the cluster. If this is not done, the journal table janitor will not be effective.
  • local-revisions-janitor-sleep
    • Specifies the sleep interval of the clean-up task, in days. Applies only when the clean-up task is enabled; the default is 15 days.
  • local-revisions-janitor-first-run-hour-of-day
    • Specifies the hour at which the clean-up task makes its first run. The default is 2, which means 02:00.
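For illustration, the sketch below shows how these options might be set in the settings section of the GRAT cluster Application object; the enable flag is assumed here to take true/false values, and the numeric values shown are the documented defaults:

local-revisions-janitor-enabled = true
local-revisions-janitor-sleep = 15
local-revisions-janitor-first-run-hour-of-day = 2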



Support for A/B/C Split Test Scenarios



New in 8.5.2


Support for GRE Clustering

Background

Before release 8.5.2, a successful deployment to a GRE cluster required successful deployment to every node in the cluster; otherwise, the deployment was rolled back and none of the nodes was updated.

What's New?

  • Partial deployments—The deployment process can now handle scenarios in which nodes are down or disconnected. GRAT still deploys directly to the clustered GREs, but the deployment now continues even if it fails on one or more of the cluster nodes. Such deployments receive a new deployment status, Partial. Users can see the failed/successful deployment status for each node by clicking the status in the GRAT deployment history.
  • A new "smart cluster" Application template—A new Application template—GRE_Rules_Engine_Application_Cluster_<version>.apd—is implemented to support the new functionality. To configure a cluster with the new features, use this template. Members of the cluster must be of the same type (Genesys Rules Engine applications—the new features are not applicable to Web Engagement engines) and must have minimum version numbers of 8.5.2. Genesys recommends not creating clusters of GREs with mixed 8.5.1/8.5.2 versions.
    A new shared deployment folder from which rule packages can be synchronized can also be defined. When the cluster is configured to auto-synchronize, the GREs will auto-synchronize when newer rule packages are detected in the shared deployment folder. Auto-synchronization is enabled or disabled using configuration options in the GRE_Rules_Engine_Application_Cluster object in the Genesys configuration environment.
  • Auto-synchronization of cluster nodes—Newly provisioned nodes in the cluster, or nodes that have disconnected and reconnected, can be auto-synchronized with other nodes in the cluster.
    For a clustered GRE:
    • Where the cluster has the new option auto-synch-rules set to true, a cluster shared folder is used to store rule package data. Each clustered GRE node has its own deployment folder within the cluster shared folder. The shared folder enables synchronization of the clustered GREs after a network or connection disruption, or when a new GRE is added to the cluster.
    • Where auto-synch-rules is set to false (the default), the deployed rule files are stored in the location defined by deployed-rules-directory. In this case, a manual redeployment is required if the deployment status is Partial or if a new node joins the cluster.

If HA is required, for example in cloud deployments, Customer/Professional Services must make sure that the shared folder is set up in HA mode.

Folder Sharing Schema

Below is an example of how clustered GREs see the other GRE nodes' deployed rules folders. In the example below, /sharedOnGre1, /sharedOnGre2 and /sharedOnGre3 all point to the same shared folder, but the shared folder is mapped/mounted differently on each machine.

GRE1

shared-root-directory = /sharedOnGre1
deployed-rules-directory = /GRE1_DEPLOYDIR

GRE2

shared-root-directory = /sharedOnGre2
deployed-rules-directory = /GRE2_DEPLOYDIR

GRE3

shared-root-directory = /sharedOnGre3
deployed-rules-directory = /GRE3_DEPLOYDIR

GRE1 sees the other GREs' (GRE2 and GRE3) deployed rules folders by using the following paths:

GRE2     /sharedOnGre1/GRE2_DEPLOYDIR
GRE3     /sharedOnGre1/GRE3_DEPLOYDIR

GRE2 sees the other GREs' (GRE1 and GRE3) deployed rules folders by using the following paths:

GRE1     /sharedOnGre2/GRE1_DEPLOYDIR
GRE3     /sharedOnGre2/GRE3_DEPLOYDIR

GRE3 sees the other GREs' (GRE1 and GRE2) deployed rules folders by using the following paths:

GRE1     /sharedOnGre3/GRE1_DEPLOYDIR
GRE2     /sharedOnGre3/GRE2_DEPLOYDIR




Configuration Notes

If GRAT’s CME Application ID is replaced (such as in the scenario in Important below), you must do one of the following for auto-synchronization to work correctly. Either:

  • Redeploy all the rule packages to the cluster; or
  • Update the configuration; this may be preferable to redeploying all rule packages (for example, when there is a large number of rule packages).
Important
GRAT’s configuration Application ID changes when you have a previous configuration using GRAT 8.5.1 with deployed rule packages and, as part of upgrading to GRAT 8.5.2, you create new Application objects in CME for GRAT 8.5.2.

Redeploy all the rule packages to the cluster

If auto-synchronization is enabled and deployment to the cluster cannot be performed, follow the steps below to deploy to the GREs individually:

  1. Temporarily disable auto-synchronization in the GREs by setting option deployed-rules-directory-is-relative-to-shared-root to false.
  2. Redeploy all the rule packages to the GREs.
  3. Once the rule packages have been deployed to all the GREs, reset deployed-rules-directory-is-relative-to-shared-root to true.
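For reference, the toggle in steps 1 and 3 amounts to changing a single option on each GRE (a sketch only; the option's section and the surrounding options are omitted):

deployed-rules-directory-is-relative-to-shared-root = false   (step 1: disable auto-synchronization)
deployed-rules-directory-is-relative-to-shared-root = true    (step 3: restore auto-synchronization)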

If auto-synchronization is disabled and deployment to the cluster cannot be performed, the rule packages can be deployed to all the GREs individually without requiring any additional settings.

Update the configuration

In the Tenant configuration, update the next-id option, which is available under the settings section in the Annex of the Schedule-XXXX script (where XXXX is GRAT’s configuration Application ID) corresponding to the new GRAT Application, with the value from the script corresponding to the previous GRAT Application.

Option path in Configuration Manager:

Configuration > [Tenant Name] > Scripts > Rule Deployment History > Schedule-[Id of GRAT App] > Annex > settings > “next-id”

Option path in Genesys Administrator:

PROVISIONING > [Tenant Name] > Scripts > Rule Deployment History > Schedule-[Id of GRAT App] > Options (with Advanced View (Annex)) > settings > “next-id”

Example

Suppose the Tenant name is Environment, the new GRAT configuration Application ID is 456, and the old GRAT configuration Application ID is 123.

Using Configuration Manager:

Copy the value of option:

Configuration > Environment > Scripts > Rule Deployment History > Schedule-123 > Annex > settings > next-id

into:

Configuration > Environment > Scripts > Rule Deployment History > Schedule-456 > Annex > settings > next-id

Using Genesys Administrator:

Copy the value of option:

PROVISIONING > Environment > Scripts > Rule Deployment History > Schedule-123 > Options (with Advanced View (Annex)) > settings > next-id

into:

PROVISIONING > Environment > Scripts > Rule Deployment History > Schedule-456 > Options (with Advanced View (Annex)) > settings > next-id

Limitations in the Initial 8.5.2 Release

  • The auto-synchronization feature does not include undeploy functionality.
  • A GRE cannot be a member of more than one cluster. This is because GRE checks all the clusters in the Genesys configuration environment to see which one has a connection to the GRE. If there are multiple such clusters, only the first one found is considered; any others are ignored.
  • GRE can operate either singly or as part of a "smart cluster", but not both.
  • High Availability (HA) for the cluster shared folder is not currently implemented. If HA is required, for example in multi-site deployments, Professional Services must make sure that the shared folder is set up in HA mode.



Support for WebSphere Clustering

Background

Before release 8.5.2 of GRS, it was not possible to configure multiple cluster nodes running on the same machine and controlled by the same cluster manager, because separate entries for the same host could not be created in bootstrapconfig.xml to represent different GRE nodes. The pre-8.5.2 format of bootstrapconfig.xml allowed a single node to be defined per host. The XML format was as follows:

<xs:complexType name="node">
  <xs:sequence>
    <xs:element name="cfgserver" type="cfgserver" minOccurs="1" maxOccurs="1"/>
    <xs:element name="lcaserver" type="lcaserver" minOccurs="0" maxOccurs="1"/>
    <xs:element name="application" type="application" minOccurs="1" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="host" type="xs:string"/>
  <xs:attribute name="ipaddress" type="xs:string"/>
</xs:complexType>

What's New?

In GRS 8.5.2, an additional attribute called servername has been added to the node definition. This makes it possible to define multiple nodes for the same host. The server name is defined via the WebSphere Application Server (WAS) Deployment Manager when the cluster node is created.

For example, you can replicate the node definition for each GRE that is running on the same host, and then make each entry unique by adding a servername attribute. Each entry then points to the corresponding Configuration Server application for that GRE instance. In this way, a single bootstrapconfig.xml file can be used to define all nodes in the WebSphere cluster, whether or not there are multiple GRE nodes defined on a given host.

To ensure backward compatibility, if no node is found in bootstrapconfig.xml that matches both the host name and the server name, then the node entry that matches the host name but has no server name defined serves as the default.
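For illustration, the sketch below shows the pre-8.5.2 node type from above with the new attribute added. The attribute type is assumed here to be xs:string, like the existing attributes; check the schema shipped with your installation for the authoritative definition:

<xs:complexType name="node">
  <xs:sequence>
    <xs:element name="cfgserver" type="cfgserver" minOccurs="1" maxOccurs="1"/>
    <xs:element name="lcaserver" type="lcaserver" minOccurs="0" maxOccurs="1"/>
    <xs:element name="application" type="application" minOccurs="1" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="host" type="xs:string"/>
  <xs:attribute name="ipaddress" type="xs:string"/>
  <xs:attribute name="servername" type="xs:string"/>
</xs:complexType>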

Editing the bootstrapconfig.xml file

To edit this file, manually extract bootstrapconfig.xml from the .war file, edit and save it, and then repackage it back into the .war file.

Sample bootstrapconfig.xml files

Important
Terminology—In the bootstrapconfig.xml files, the <node> element corresponds to an individual member of a WebSphere cluster.

For a cluster with one host and two server instances on that host

Below is a sample bootstrapconfig.xml definition for a GRE cluster running on one host, GenSrv1000, with server instances server01 and server02 on that host:

Webspherebootstrap1.png
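As a structural sketch only, using the element and attribute names defined in the schema above (the ipaddress values and the contents of the cfgserver and application elements are elided because they are installation-specific), such a definition contains two node entries for the same host, distinguished by servername:

<node host="GenSrv1000" ipaddress="..." servername="server01">
  <cfgserver ... />      <!-- Configuration Server connection for this GRE instance -->
  <application ... />    <!-- GRE Application used by server01 -->
</node>
<node host="GenSrv1000" ipaddress="..." servername="server02">
  <cfgserver ... />
  <application ... />    <!-- GRE Application used by server02 -->
</node>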

For a cluster with two hosts and two server instances on each host

Below is a sample bootstrapconfig.xml definition for a GRE cluster running on two hosts, GenSrv1000 and GenSrv2000, with server instances server01 and server02 on each host:

Webspherebootstrap2.png



New in 8.5.1


Support for Conversation Manager Templates in Test Scenarios

In the initial 8.5.001 release of GRS, the Test Scenario feature did not support rules that were created using the Conversation Manager (CM) template. This is because the Test Scenario feature in release 8.5.001 worked by taking the input data configured by the user (a set of one or more facts with different fields), building the appropriate Fact model, and then running the rules under GRAT using that set of data; this flat structure could not represent CM's hierarchical data. In release 8.5.1, the Test Scenario feature supports rules based on the CM template.

Data Structure in CM

With Conversation Manager, the data is in a hierarchical JSON format of Customer -> Service -> State -> Task. Any given Customer may have one or more Services. Each Service may be in at most one State at a time. Each State may have one or more Tasks. Tasks may also be associated directly with Services.


CRSSTModel.png

So the Customer, Services, States, and Tasks Facts have now been added to the lists of Facts that can be defined as Given fields, and the RulesResults Fact has been added to the list of Facts that can be defined as an Expectation.

Important
The current CM Template is only interested in the Type, Start Time, and Completion Time (if any) of Services, States, and Tasks.

Each of the new values is represented by a JSON string, which becomes the value for that field.

Now, when the rule for which you want to create a test scenario is a Conversation Manager rule (based on the Conversation Manager template), a set of values for the Given and Expectation elements that reflects these more complex data structures is available. In the example below, the Customer > Service > State > Task structure is reflected by the four @class entries in the drop-down list of Givens and by the @class:RulesResults entry in the drop-down list of Expectations.


CMTest1.png

When you select an @class entry, a new column is added. Click on a grid cell under the new column to bring up the edit dialog for that entry. The additional data listed below can be selected as either a Given or an Expectation.

Additional CM Template Objects

Givens

The list below shows the additional data that can be provided as Givens:

  • Available by selecting one of the @class entries:
    • Add Customer Attribute
    • Add Service
    • Add Service Type
    • Add Service Start Time
    • Add Service Completion Time
    • Add State
    • Add State Type
    • Add State Start Time
    • Add State Completion Time
    • Add Task
    • Add Task Type
    • Add Task Start Time
    • Add Task Completion Time
  • Available for direct selection from Givens:
    • Add Interaction Media Type
    • Add Contract End Date

Expectations

The list below shows the additional expected results:

  • Update Customer Attribute
  • Request Specific Agent
  • Request Agent Group
  • Request Place Group
  • Request Skill
  • Send Communication to Customer
  • Block Communication to Customer
  • Offer Service Resumption
  • Offer Survey to Customer

Edit Dialogs

To create entries for the Givens and Expectations of your Conversation Manager test scenario, select the relevant @class item and use the sample additional edit dialogs shown below.

Givens

CMtest3.png

CMtest4.png

CMtest5.png

CMtest6.png

Expectations

CMtest7.png



Nested Solution Business Hierarchy

In release 8.5.1 of the Genesys Rules Authoring Tool, if you have permission to create a rule package (Rule Package - Create), you can now add a new rule package at any node in the business hierarchy (a nested solution), rather than just at the first level.




New in 8.5.001.21


Business Calendar Enhancements

Business calendars have been enhanced in GRS release 8.5.2 to allow:

  • Dynamic Timezone Support
  • Differentiation between holidays and non-working days

