Welcome to UCS 9.1
Universal Contact Server (UCS) 9.1 interfaces with a cluster of Cassandra databases that store the following:
- Contact Information—Names, addresses, phone numbers
- Context Services—Profiles and extensions
- Contact History—Previous interactions with this contact
To handle large amounts of contact data in a scalable way, UCS uses three technologies in the database and reporting layers:
- Apache Cassandra is an open-source distributed database designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.
- Elasticsearch is a search server that provides a distributed, multi-tenant-capable full-text search engine.
- Apache Spark is an open-source cluster computing framework.
Cassandra and Elasticsearch clusters are used in the Operational Cluster that stores data for real-time processing. This Cassandra data is indexed by Elasticsearch to provide flexible searches and full-text search.
The Genesys Data Processing Server (GDPS) cluster uses a Spark cluster that transforms data from UCS 8.5.2+ in order to migrate it from the SQL database to UCS 9.1.
UCS 9.1 has the following new features and functionality:
High availability and scalability in UCS 9.1 are now provided by an active/active N+1 architecture, described in more detail below.
Genesys requires the installation of a Cassandra cluster and of the Genesys Elasticsearch plugin for Cassandra in your deployment architecture. This gives you dedicated management of memory and other hardware resources.
Clients can connect to any UCS node: all data is available on any node. Each UCS node connects to the Cassandra cluster and, depending on the consistency level and replication factor required, UCS can connect to one or several Cassandra nodes. The goal is to enhance both the reliability and the scalability of the UCS solution by providing an active/active N+1 architecture. UCS nodes keep a connection to UCS 8.5 for Knowledge Management and Personally Identifiable Information (PII) functionality. UCS 9.1 relies on Cassandra and Elasticsearch.
UCS Node Failure
If a UCS node fails, clients can reconnect to another live node and resume work. UCS clients reconnect to other live UCS nodes. With a PSDK cluster application block or a provisioned HTTP load balancer for the REST API, this is transparent, but the failed request must be re-sent.
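The reconnect-and-re-send behavior above can be sketched as follows. The node names, request shape, and send function here are illustrative, not real UCS API calls; in practice the PSDK cluster application block or an HTTP load balancer performs the reconnect for you.

```python
# Minimal sketch of client-side failover across UCS nodes. Node names and
# the request are hypothetical; only the retry pattern is the point here.

class NodeDown(Exception):
    """Raised when a UCS node cannot be reached."""

def send_with_failover(nodes, request, send):
    """Try each node in turn, re-sending the failed request to the next live node."""
    last_error = None
    for node in nodes:
        try:
            return send(node, request)
        except NodeDown as exc:
            last_error = exc  # this node is down; fall through to the next one
    raise RuntimeError("all UCS nodes are down") from last_error

# Simulated cluster: ucs1 has failed, ucs2 is live.
def fake_send(node, request):
    if node == "ucs1":
        raise NodeDown(node)
    return f"{node} handled {request}"

result = send_with_failover(["ucs1", "ucs2", "ucs3"], "GetContact", fake_send)
```

Note that the failed request is re-sent, not recovered: the client retries the same operation against the next node in its list.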
Cassandra Node Failure
Both UCS nodes and Cassandra nodes can fail individually without data loss or a long outage. If a Cassandra node fails and the downtime is less than 10 hours, the missed data is streamed from the other nodes when the node comes back up. If the downtime is more than 10 hours, a repair on that node is needed to restore consistent data.
If a Cassandra node fails, depending on the required consistency level, the UCS client might need to re-send one or more requests.
Cassandra is linearly (or horizontally) scalable, meaning that to increase capacity or scalability, you only need to add new nodes to the cluster, and they will automatically be utilized by the cluster without reconfiguration, downtime, or loss of performance.
Recommendations and resilience
Genesys' minimum recommendations are as follows:
- Cassandra replication factor set to RF=3.
- Elasticsearch replicas set to 1 (the default).
- Three UCS nodes, plus an additional node for monitoring.
This allows production to continue with:
- One Cassandra node down.
- One Elasticsearch node down.
- Two UCS nodes down.
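The tolerance for one Cassandra node down follows from quorum arithmetic: at QUORUM consistency, a read or write must reach a majority of the RF replicas. A short sketch of that arithmetic:

```python
# Quorum arithmetic behind the RF=3 recommendation.

def quorum(replication_factor):
    """Number of replicas that must respond at QUORUM consistency."""
    return replication_factor // 2 + 1

def tolerated_down(replication_factor):
    """Number of replica nodes that can be down while QUORUM still succeeds."""
    return replication_factor - quorum(replication_factor)

rf = 3
assert quorum(rf) == 2          # two of three replicas must answer
assert tolerated_down(rf) == 1  # so one Cassandra node can be down
```

With RF=3 this tolerates exactly one Cassandra node down, which is why RF=3 is the minimum recommendation; RF=5 would tolerate two.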
Support for Geo-Replication
UCS 9.1 supports geo-replication by using the Datacenter Replication feature in Cassandra. Geo-replication systems are designed to improve the distribution of data across geographically distributed data networks.
This feature enables the following:
- Better performance to geographically distributed clients of UCS.
- Instant disaster recovery (DR).
- Support for Workload Separation with specialized datacenters (for example, operational or analytical).
- Live backups.
A replication link is established between Cassandra nodes to replicate the data across different regions. Elasticsearch nodes are local to the data center—as the data is written to Cassandra, it is also written to the local Elasticsearch nodes, so there is no need for additional replication at the Elasticsearch level.
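In Cassandra, this per-datacenter replication is declared at the keyspace level using NetworkTopologyStrategy. The sketch below is illustrative only: the keyspace and datacenter names are assumptions, not the names UCS actually uses.

```sql
-- Sketch only: keyspace and datacenter names are illustrative.
CREATE KEYSPACE ucs_example
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC_EAST': 3,   -- RF=3 in the primary datacenter
    'DC_WEST': 3    -- RF=3 replicated to the remote datacenter
  };
```

Each datacenter keeps its own full replica set, which is what makes local Elasticsearch indexing sufficient without cross-datacenter replication at the Elasticsearch level.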
New search features
UCS uses the Elasticsearch plugin for Cassandra to execute complex queries or Full Text Search queries. In order to provide a better understanding of the plugin functionality and its safety, Genesys has open-sourced the plugin. While you can use the plugin for other projects, the plugin is only supported in the scope of its usage with UCS 9.1. Modifications of the plugin are not supported.
UCS 9.1 adds support for the following search features:
- A unified search API with distributed cursors and attribute selection.
- Custom attributes are no longer needed.
- Searchable user data with level 1 key/value pair support: AllAttributes.MyKey:ABDC*
- Full text search support on Interaction/Contact*ListGet, sorting and segmentation.
- Support for Lucene query syntax: ESQuery='Subject:hello*'.
- Support for Elasticsearch JSON query syntax.
- Full text search and database queries both support cursors (*ListNext) from all UCS nodes in all Data Centers.
- Cursors have a 1-hour time-to-live, so they are no longer a limited resource.
- Customizable text highlights in search results.
- The Index service RequestSearch has been deprecated in favor of *ListGet requests.
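The Lucene string form and the Elasticsearch JSON form in the list above express the same query. A small sketch of both forms; the payload key names here are illustrative, not the exact UCS request attributes:

```python
import json

# 1. Lucene query-string syntax, as in ESQuery='Subject:hello*':
lucene_query = "Subject:hello*"

# 2. The equivalent Elasticsearch JSON ("query_string") form:
es_json_query = {"query_string": {"query": "Subject:hello*"}}

# Hypothetical request payload carrying either form; the attribute
# names below are assumptions for illustration only.
payload = {
    "ESQuery": lucene_query,
    "ESJsonQuery": json.dumps(es_json_query),
}
```

The JSON form is more verbose but allows the full Elasticsearch query DSL; the Lucene string form is convenient for simple field-prefix searches.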
Learn more about new search features in UCS 9.1 in the following Genesys articles:
- Searching the UCS database
- Real-time searching using Lookup tables
- Near-real-time searching using Elasticsearch
- Sort order in interaction searches
There is a good introduction to this topic in the vendor documentation.
Genesys PSDK and RESTful API
Clients can connect to UCS 9.1 using either the Genesys PSDK or the new HTTP RESTful/JSON API. For more details refer to:
The UCS 9.1 API covers the full UCS feature set. Previously in UCS 8.5, API support was limited to only Context Services Profiles.
Support for Data Expiration
Data expiration allows you to keep contacts, interactions, and profiles for a limited period of time. It relies on a data retention policy that uses the time-to-live (TTL) parameter in Cassandra. When an entity is inserted, it is assigned a data retention policy. The data related to this entity (interaction attachments, contact attributes, lookup data, and extensions) shares the same data retention policy.
When an entity is updated, its data retention policy is not reset. During an update, some lookups are deleted and new ones are inserted; the new lookups are inserted with the remaining retention time. For example, if the contact retention policy is set to one month, the contact is deleted one month after it was first created, regardless of whether it was updated or is still in use.
When the data retention policy has expired, Cassandra deletes the entity and its lookups as well as related data in Elasticsearch. See the data retention configuration options here.
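Because an update does not reset the policy, re-inserted lookup rows receive only the remaining TTL. A sketch of that arithmetic, assuming retention is expressed in seconds:

```python
# Sketch of the retention arithmetic: an update does not reset the policy,
# so re-inserted lookup rows get only the *remaining* TTL.

def remaining_ttl(created_at, retention_seconds, now):
    """Seconds of retention left at `now` for an entity created at `created_at`."""
    return max(0, (created_at + retention_seconds) - now)

one_month = 30 * 24 * 3600
created = 1_000_000
# Ten days after creation, an update re-inserts lookups with ~20 days of TTL left:
left = remaining_ttl(created, one_month, created + 10 * 24 * 3600)
```

Once the remaining TTL reaches zero, Cassandra removes the rows regardless of any later updates.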
Support for Different TTL for Different Media
Interactions can now have different time-to-live (TTL) values depending on their media type. For each media type you want to use in this way, you need to create a media-specific variant of the retention-entity-interaction-<media-type> configuration in the [cassandra-keyspace] section of the UCS application or application cluster object.
Support for Data Segmentation
UCS still supports Tenants, but now also supports segments. Segments reduce the amount of data you need to view. Data segmentation provides support for the logical segmentation of data by type, such as transactional data (contacts, interactions) and operational data (standard responses). For example, you might segment your data by Line of Business (LoB) or subsidiary, or a combination of both.
Rather than use a formal "data type", each table in Cassandra has a Segment column of type text. This new column can store hierarchical segmentation by using the \ delimiter, for example: Finance\Banking\XYZ.
You can have as many segments as you need, and you can still use wildcard characters, for example: segment: accounting\europe\*.
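Matching a trailing wildcard against a \-delimited segment hierarchy can be sketched as follows; the segment values are illustrative, not taken from a real deployment:

```python
from fnmatch import fnmatchcase

# Illustrative hierarchical segment values using the \ delimiter.
segments = [
    "Finance\\Banking\\XYZ",
    "Finance\\Insurance\\ABC",
    "Retail\\Online",
]

# Select everything under Finance\Banking using a trailing wildcard,
# as in a query like segment: Finance\Banking\*.
matches = [s for s in segments if fnmatchcase(s, "Finance\\Banking\\*")]
```

`fnmatchcase` is used instead of `fnmatch` so the comparison stays case-sensitive on every platform.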
Support for Rolling Upgrade without Service Interruption
Rolling upgrade, including data migration, without service interruption is now supported. Each node must be stopped, upgraded, and restarted in sequence. Once a node is upgraded and started, it requests all missed changes from the other nodes. In a primary/backup pair, you can now upgrade the backup UCS node, perform a failover, then upgrade the other node in the pair. A Cassandra quorum must be maintained during the entire procedure.
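The one-node-at-a-time sequence can be sketched as below. The node records and quorum check are illustrative stand-ins for the real cluster state:

```python
# Sketch of a rolling upgrade: nodes are stopped, upgraded, and restarted
# one at a time, so a quorum is held throughout. Data shapes are illustrative.

def rolling_upgrade(nodes, new_version, quorum_ok):
    for node in nodes:
        node["up"] = False                 # stop this node
        if not quorum_ok(nodes):           # quorum must hold during the procedure
            node["up"] = True
            raise RuntimeError("quorum would be lost; aborting upgrade")
        node["version"] = new_version      # upgrade the binaries
        node["up"] = True                  # restart; node pulls missed changes

cluster = [{"name": f"ucs{i}", "up": True, "version": "9.0"} for i in (1, 2, 3)]
rolling_upgrade(cluster, "9.1", lambda ns: sum(n["up"] for n in ns) >= 2)
```

Because only one node is ever down at a time, a three-node cluster with a two-node quorum never loses availability during the upgrade.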
Flexible Data Extraction
UCS 9.1 enables you to do the following data extraction tasks:
- Create comprehensive data selection criteria and combinations of selection criteria.
- Schedule the data extraction.
- Protect data by utilizing stringent security, including:
  - Role-based authorization
  - Comprehensive audit logs
- Use the available data export format options.
- Use UCS in test mode to provide details of how many records and how much data will be exported.
Please see Flexible Data Extraction in the UCS 9.1 Administrators Guide for more details.
In release 9.1, migration from UCS 8.5.3 can now be interrupted and resumed from the point of interruption. Please see Pausing, Resuming and Canceling Migration in the UCS 9.1 Migration Guide for details.
Support for Cross-Origin Resource Sharing (CORS)
Support for CORS is controlled by several new configuration options in the [authentication] section of UCS.
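At its core, CORS support means checking a request's Origin header against a configured allow-list before emitting the response headers. A sketch of that check, assuming a hypothetical allow-list (the actual UCS option names in the [authentication] section are not shown here):

```python
# Sketch of the server-side origin check behind CORS, assuming a configured
# allow-list of origins. Origins below are examples, not real deployments.

def cors_headers(request_origin, allowed_origins):
    """Return CORS response headers for an allowed origin, or {} otherwise."""
    if request_origin in allowed_origins:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
        }
    return {}

allowed = {"https://agent.example.com"}
ok = cors_headers("https://agent.example.com", allowed)
blocked = cors_headers("https://evil.example.net", allowed)
```

Browsers enforce the policy client-side: when the response carries no Access-Control-Allow-Origin header, the cross-origin page cannot read the response.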
Support for CJK languages in search
UCS 9.1 now supports search for languages in the CJK (Chinese, Japanese, Korean) language group.
UCS 9.1 requires Oracle Java 8. Genesys advises using the latest Java 8 update for better performance and security.