Known Issues and Recommendations
AI Core Services
The Known Issues and Recommendations section is a cumulative list for all 9.0.x releases of AI Core Services. This section provides the latest information on known issues and recommendations associated with this product. It includes information on when individual items were found and, if applicable, corrected. The Resolved Issues section for each release describes the corrections and may list additional issues that were corrected without first being documented as Known Issues.
Models created and trained prior to AICS release 9.0.007.00 are obsolete and cannot be upgraded. If you still have old Models in your environment, the upgrade script that installs release 9.0.013.01 and higher generates an error message referring to sklearn Models failing to upgrade. You can safely ignore this message.
Some of the roles that the Predictive Routing interface might offer for selection are not supported. The supported roles are Staff, Reviewer, and Admin.
If you upload Agent Profile or Customer Profile data from a very large CSV file (around 10,000 records), GPR generates the following error message even though it uploads the data correctly: pymongo.errors.DocumentTooLarge: BSON document too large (50969283 bytes) - the connected server supports BSON document sizes up to 16793598 bytes. This error message appears because the audit functionality cannot handle the quantity of data. You can safely disregard this error message.
|ID: PRR-5284||Found In: 9.0.015.04||Fixed In:|
When you upload Datasets or the Agent or Customer Profile from a zipped .csv file, the total size of the zip archive plus all the extracted data must not exceed 10 GB, which is the size of the /tmp folder that the dataset_upload worker uses to process files. If you upload files that are too large, you might experience issues with that and subsequent data uploads, such as the dialog box that opens when you try to append data being inactive and unusable. If this happens, check the /tmp folder to see whether it contains more than one .csv file. If so, a previous upload job failed because it took up too much space, and the dataset_upload worker cannot process further uploads until you clean up the folder. To clean up the /tmp folder, restart the data_upload worker container.
|ID: PRR-5218||Found In: 9.0.015.03||Fixed In:|
If you set the use-action-filters option to false and send a scoring request for an agent whose name contains an opening or closing parenthesis [‘(’ or ‘)’], GPR returns an error response similar to the following: No valid operator found in node <node_name> from filter employeeId in <employee_ID>. Valid operators are: ' in ', '>=', '<=', '=', '>', '<'.
|ID: PRR-5168||Found In: 9.0.015.03||Fixed In:|
The same dataset_upload_worker that handles Dataset imports also handles the Apply Sync and Accept Sync operations for Agent and Customer Profiles. As a result, you cannot create an Agent or Customer Profile schema and upload agents to the Profile at the same time as a Dataset import.
Workaround: Genesys recommends that you plan data uploads so as to initialize the Dataset before or after creating Agent or Customer Profile schemas and uploading agents.
|ID: PRR-5160||Found In: 9.0.015.03||Fixed In:|
The introduction of a new numeric datatype for schemas that replaces both floats and integers represents a breaking change with earlier releases of AI Core Services and Agent State Connector (ASC). If you upgrade either component you MUST also upgrade the other. The following versions are compatible:
- AICS 9.0.015.03 and higher + ASC 9.0.015.04 and higher.
- AICS 9.0.015.00 and lower + ASC 9.0.015.01 and lower.
|ID: PRR-4834||Found In: 9.0.015.03||Fixed In:|
When you use the GPR API Reference, if you copy and paste the cURL example code provided there into a Windows-based code editor, you must remove the \ characters at the ends of the lines. This is a known issue in Windows-based software.

After upgrading to release 9.0.014.02, you might notice that old jobs (such as data uploads or analysis report creation) are stuck in the Pending state, preventing new jobs from running. To clear out the old jobs, use the following workaround:
- Connect to the MongoDB database using the following command:
$ docker exec -it mongo mongo --ssl --sslAllowInvalidCertificates localhost:27017/solariat_bottle
- Drop the jobs collection using the following command:
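The drop command itself is not shown above. Assuming the database and container names from the connection command, the standard mongo shell form would be similar to the following:

```shell
# Drops the stale jobs collection (collection name from the step above;
# container and database names assume the connection command shown earlier).
docker exec -it mongo mongo --ssl --sslAllowInvalidCertificates \
  localhost:27017/solariat_bottle --eval "db.jobs.drop()"
```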
|ID: PRR-4221||Found In: 9.0.014.02||Fixed In:|
Features with names that include Japanese characters do not appear in Feature Analysis report graphs if you generate the report using the GPR API. This issue does not occur if you create the report in the GPR web application.
|ID: PRR-4211||Found In: 9.0.014.02||Fixed In:|
If you try to open an older report from the Recent Reports list (available when you click Analysis on the Datasets or Predictors window) you receive a message saying No report. This issue affects reports created approximately in the previous month or earlier. Newer reports open correctly.
|ID: PRR-4191||Found In: 9.0.014.02||Fixed In:|
Appending a single, too-large chunk of records to the Agent Profile or Customer Profile causes the append job to fail. Genesys recommends that you upload appended data in chunks containing no more than 100,000 records.
|ID: PRR-4179||Found In: 9.0.014.02||Fixed In:|
When running the start.sh and restart.sh scripts to deploy an updated version of AI Core Services, you might receive a misleading error message instructing you to run the install.sh script first, even if you have already run it.
Workaround: This message actually indicates that you need to set the value of the S3_ENDPOINT environment variable, located in the tango.env file, to the public IP address used to access AI Core Services.
|ID: PRR-4124||Found In: 9.0.012.01||Fixed In: 9.0.015.00|
If an error occurs when syncing a Dataset with non-ASCII column names or column names with spaces, the error message displays the hash string used internally to manage the column name rather than the column name itself.
|ID: PRR-4031||Found In: 9.0.014.00||Fixed In: 9.0.014.02|
You cannot use the data filter to purge data from GPR if feature names contain non-ASCII characters or spaces. ASCII-character column names without spaces are purged correctly.
|ID: PRR-4026||Found In: 9.0.014.00||Fixed In: 9.0.015.00|
To add another simple predictor to an existing composite predictor, use this two-step process to avoid an error that occurs when compiling the expression for the composite predictor:
- Add the new simple predictor to the existing composite predictor and save it.
- Add the new simple predictor to the expression and save again.
|ID: PRR-4010||Found In: 9.0.014.00||Fixed In: 9.0.014.02|
When the GPR API receives malformed JSON in a request (for example, an extra quotation mark) or receives a request with non-UTF-8 encoding in the request header, it returns an Internal Server Error. When this happens, the logs for the AI Core Services application display a BadJsonException error.
|ID: PRR-3997||Found In: 9.0.014.00||Fixed In: 9.0.015.00|
In rare cases, a Dataset might have an IN SYNC status but its cardinalities are not computed. As a result, GPR generates an unclear error message containing only the metric name when you try to generate Predictor data based on that Dataset. To resolve this issue, send an API request to compute cardinalities on the Dataset.
|ID: PRR-3951||Found In: 9.0.014.00||Fixed In:|
In the GPR web application, Agent and Customer features with very long names can either extend beyond the margins of their text boxes or be truncated where they exceed the allotted text area.
|ID: PRR-3943||Found In: 9.0.014.00||Fixed In: 9.0.014.02|
The parameter to limit the number of scored agents to be returned by a scoring request, which should be called predictor_cutoff, is misspelled as predictor_cutoof.
|ID: PRR-3882||Found In: 9.0.006.08||Fixed In:|
If you upload a Dataset containing non-ASCII characters or spaces in the column name, GPR handles it in one of the following ways:
- When a column name contains a mix of ASCII and non-ASCII characters, or contains spaces, GPR removes the non-ASCII characters and spaces from the column name as though they had not been entered and correctly uploads all column values.
- When a column name contains only non-ASCII characters, the column name is entirely omitted. All the column values are preserved, but you cannot modify or save the schema. In this scenario, GPR generates the following error message: An unhandled exception has occurred: KeyError('name').
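As a preventive workaround, you can sanitize column names before uploading. The sketch below is a hypothetical helper, not part of GPR; it mirrors the stripping behavior described above and flags names that would be removed entirely:

```python
def sanitize_column_name(name):
    """Strip non-ASCII characters and spaces from a column name,
    mirroring how GPR rewrites mixed names on upload.

    Returns None when nothing survives (a purely non-ASCII name),
    so the caller can rename the column before uploading instead
    of hitting the KeyError('name') failure described above.
    """
    cleaned = "".join(c for c in name if c.isascii() and not c.isspace())
    return cleaned or None
```

Run this over your header row and rename (or reject) any column whose sanitized name is None or collides with another column.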
|ID: PRR-3763||Found In: 9.0.011.00||Fixed In:|
AI Core Services (AICS) does not support dots (periods) in Agent ID strings.
|ID: PRR-3750||Found In: 9.0.012.01||Fixed In: 9.0.015.03|
If you have uploaded and saved an Agent Profile schema but have not yet synced or accepted it, and you try to view an agent record from the Agents tab (located on the top navigation bar), a blank record window opens without any error message or explanation. To resolve this issue, sync and accept the Agent Profile schema, and then return to the Agents tab to view the agent record.
|ID: PRR-3622||Found In: 9.0.013.01||Fixed In:|
Cardinalities are displayed differently for Datasets and Agent Profile/Customer Profile schemas. Dataset cardinalities display the exact value up through 1001; higher cardinalities are capped at 1001. Agent Profile/Customer Profile schema cardinalities display the exact value only up through 19; 20 and higher are displayed as 20+.
|ID: PRR-3597||Found In: 9.0.013.01||Fixed In:|
You might notice that, if your dataset is extremely large (1,000,000 rows), it takes approximately one minute to open the window where you can create a new Predictor. This performance issue is not resolved by adding additional CPUs to your environment.
|ID: PRR-3510||Found In: 9.0.012.01||Fixed In:|
When you are using the GPR web application to upload a very large Dataset (1 million rows), the user interface status messages might not update to show the actual status of Dataset processing. For example, after you click Accept Schema, you might continue to see status messages that relate to earlier points in the Dataset creation process. This does not indicate a problem or delay in the actual Dataset processing. It is simply caused by a delay in updating the web application to reflect the actual status of Dataset processing.
|ID: PRR-3453||Found In: 9.0.013.01||Fixed In:|
When you are creating a Dataset using the GPR API, if you try to append data before you have saved and synchronized the dataset schema, no error message appears but data is not correctly appended. To avoid data issues, do not append data before syncing the Dataset.
|ID: PRR-3447||Found In: 9.0.013.01||Fixed In:|
If you receive an XGBRegressor has no attribute 'n_jobs' error when you try to create a Lift Estimation report, it indicates that your model was trained on a version of GPR prior to 9.0.008.00. To resolve this issue, retrain your model, and then re-run the Lift Estimation report.
|ID: PRR-3421||Found In: 9.0.013.01||Fixed In:|
If you run a Feature Analysis report on a dataset that uses a low cardinality field as the target metric, the report displays the sub-reports in alphabetical order. It should display them ordered by feature rank.
|ID: PRR-3356||Found In: 9.0.012.01||Fixed In: 9.0.013.01|
Non-ASCII characters are not properly displayed in the pop-up window that displays the actual cardinality values in Agent Profile and Customer Profile schemas if the feature type is "dictionary".
|ID: PRR-3334||Found In: 9.0.012.01||Fixed In: 9.0.013.01|
GPR supports only ASCII characters as Agent and Customer IDs.
|ID: PRR-3329||Found In: 9.0.012.01||Fixed In: 9.0.014.00|
If you run a Feature Analysis report using the GPR API and it fails for any reason, the spinning icon in the web application that indicates report generation is underway continues to spin indefinitely. To resolve this, remove the failed report manually from the Reports list in the web application.
|ID: PRR-3324||Found In: 9.0.012.01||Fixed In: 9.0.013.01|
When running a Lift Estimate report or a Feature Analysis report, either from the web application or the GPR API, the title field includes a timestamp showing when the report was run. This timestamp is the server time zone, which might differ from the user's time zone.
|ID: PRR-3303||Found In: 9.0.012.01||Fixed In:|
The Feature Analysis report requires that numeric values for the target metric in your dataset be in the form of integers. It does not support float values for the target metric.
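If your target metric arrives as floats that are in fact whole numbers, you can convert the column before uploading. The following is a minimal standard-library sketch; the helper name is illustrative, and it refuses to round non-integral values rather than silently distort the metric:

```python
def to_integer_metric(values):
    """Convert an iterable of numeric target-metric values to ints.

    Values that are whole numbers (e.g. 3.0 or "7") convert exactly;
    any other value raises ValueError so bad data is caught early.
    """
    result = []
    for v in values:
        f = float(v)
        if not f.is_integer():
            raise ValueError(f"target metric value {v!r} is not a whole number")
        result.append(int(f))
    return result
```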
|ID: PRR-3268||Found In: 9.0.011.00||Fixed In: 9.0.012.01|
If disk space usage on your AICS servers is continually growing, you might need to perform a clean up process on a regular basis. See Clean Up Disk Space to view instructions for maintaining your AICS servers.
|ID: PRR-3249||Found In: 9.0.008.00||Fixed In: 9.0.013.01|
When you create a login message, which is done by setting the LOGIN_MESSAGE environment variable in your tango.env file, you might experience various usability issues:
- Special characters must be "escaped" (converted to HTML symbolic codes). For details, see Set Values for Environment Variables.
- The text is not centered and might overlap the page footer.
- You must reset this message each time you install a new version of AI Core Services, because the installation process overwrites the tango.env file.
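For example, a tango.env entry might look like the following; the message text is purely illustrative, and &amp; is the HTML entity for the ampersand:

```
# tango.env -- escape special characters as HTML entities
LOGIN_MESSAGE=Authorized users only &amp; all activity is monitored
```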
|ID: PRR-2981||Found In: 9.0.011.00||Fixed In:|
The endpoint enabling you to generate predictor data using the Predictive Routing API does not accept a time range parameter.
|ID: PRR-2962||Found In: 9.0.011.00||Fixed In: 9.0.012.01|
If your environment contains a number of predictors based on large datasets, you might encounter an out-of-memory error message when you try to open the Predictors Settings page. For example, having six predictors, each based on a dataset with 500 columns, triggered this error message.
|ID: PRR-2945||Found In: 9.0.008.00||Fixed In: 9.0.013.01|
You must perform a hard refresh of the browser page to see updates to the cardinalities and number of agents or customers on the Agent and Customer Profiles windows.
|ID: PRR-2799||Found In: 9.0.011.00||Fixed In:|
The Read and Delete functionality implemented to enable users to comply with General Data Protection Regulation (EU) (GDPR) requirements has the following known issues:
- The filter cannot search by integers saved as strings. All IDs must be saved as strings in the Agent Profile and Customer Profile schemas, so you cannot use numeric IDs in filters. However, all other filter types, such as name and email, do work for data retrieval and deletion.
- Users and accounts support only soft removal via the DELETE request.
|ID: PRR-2513||Found In: 9.0.010.01||Fixed In: 9.0.011.00|
Accounts cannot be converted from non-LDAP authentication to LDAP authentication. To make the change, delete the non-LDAP account and add the desired LDAP account.
|ID: PRR-2504||Found In: 9.0.009.01||Fixed In:|
GPR generates an error stating that Actions data collection is not indexed when you use a composite predictor for scoring.
|ID: PRR-2461||Found In: 9.0.010.01||Fixed In: 9.0.013.01|
After you delete data from a dataset or from predictor data, a cached copy remains visible in the drop-down lists (facets) used to configure the Customer Details tab in the Predictive Routing application and in the Cardinality context menu that displays all of the unique values for a facet. This cache is updated every two hours, at which point the deleted data is entirely removed.
|ID: PRR-2440; PRR-2441||Found In: 9.0.010.01||Fixed In: 9.0.011.00|
AI Core Services does not support non-ASCII characters for use in passwords (when using a user name/password login) or External IDs (when using LDAP authentication).
|ID: PRR-2264; PRR-2260||Found In: 9.0.009.01||Fixed In:|
The Agent Variance report generates an error and terminates processing if the target metric contains any non-numeric (NaN) or NULL values.
|ID: PRR-2261||Found In: 9.0.008.00||Fixed In:|
Stress, performance, and load testing have not been done at full scale. The limited testing done showed that there was no degradation in response time when the override feature, configured with a simple expression, was used in a scoring request.
|ID: PRR-2149||Found In: 9.0.008.00||Fixed In: 9.0.012.01|
Existing models that you created and trained before installing release 9.0.008.00 might cause errors when running the Lift Estimation report.
Workaround: To resolve this issue, perform the following steps:
- Run the following upgrade script:
python jop/common/scripts/versioning/upgrade_38a_model_default_algo.py --mode=prod
- Retrain the models. To retrain an activated (locked) model, make a copy of it and then retrain it, as explained in the "Editing Models" section of Configuring, Training, and Testing Models in the Predictive Routing Help.
|ID: PRR-2148||Found In: 9.0.008.00||Fixed In: 9.0.009.01|
Because of the large amount of data to be processed, the ROC curve, which indicates model quality, takes time to display. If progress appears stalled, close and then reopen the ROC display window.
|ID: PRR-2121||Found In: 9.0.008.00||Fixed In:|
AICS does not support spaces in attribute label names; it ignores fields whose label names contain spaces and records each skipped field in the log. For example, if you create an Agent Profile field containing the expression skill > 6, and also use that expression as the field name, AICS disregards the field. To correct this issue, remove the spaces so that the field name becomes skill>6.
|ID: PRR-2110||Found In: 9.0.007.01||Fixed In:|
The format of reports changed in release 9.0.007.05 because of memory usage improvements. Reports generated in earlier releases, which use the old format, no longer display under the Reports tab, although their thumbnail previews are still visible in the Analysis panel. To resolve this misleading inconsistency, remove all invalid reports from the database by executing the following script:
|ID: PRR-2007||Found In: 9.0.007.05||Fixed In:|
All numeric data types are identified as "integers" in the AICS interface, including float values. Float values are handled correctly; only the label might appear misleading.
|ID: PRR-1777||Found In: 9.0.007.01||Fixed In: 9.0.008.00|
When a user whose credentials are used for an API request belongs to multiple accounts, AI Core Services (AICS) returns data for the account marked "current" for that user, regardless of the API key specified in the request. Genesys recommends that you create a separate user with the ADMIN role for making API requests, and not to make API calls with a user having the STAFF or SUPERUSER role.
|ID: PRR-1748||Found In: 9.0.007.03||Fixed In:|
In the Feature Analysis report, data is filtered based on the feature index. The report algorithm ignores all feature columns with an index greater than 200, irrespective of their importance. In addition, the top-N is hardcoded to 15.
|ID: PRR-1690||Found In: 9.0.007.03||Fixed In: 9.0.007.05|
The Export functionality in the Lift Estimation report produces only empty files.
|ID: PRR-1615||Found In: 9.0.006.13||Fixed In: 9.0.013.01|
The Predictive Routing Groups window is intended for sharing account objects between different user roles, but all currently supported roles (Staff, Reviewer, and Admin) have access to all account objects by default.
|ID: PRR-1106||Found In: 9.0.006.05||Fixed In:|
When drilling down from Trends graphs or Distribution charts, the results in the Details fields might be incorrect because filter values are not always applied properly.
|ID: PRR-1091||Found In: 9.0.006.05||Fixed In:|
In the Agent Variance report, you must select either numeric or boolean attributes for target metrics. The UI incorrectly allows you to select attributes of other types, which might result in an error or empty report results.
|ID: n/a||Found In: 9.0.007.05||Fixed In:|
Information in this section is included for international customers. Release numbers in the Found In and Fixed In fields refer to the English (US) release of AI Core Services unless otherwise noted in the issue description.
There are no internationalization issues for this product.