
Troubleshooting

Log

The simplest way to monitor BDS is to watch the log file on the Docker host.

All logs are located under the ./cloudbilling-prem.local/log/ directory.

Sample log file:

2015-08-04 22:20:54,133 ERROR 139991851017984 config.py:add_dict: Failed to add node metrics

2015-08-04 22:20:54,135 DEBUG 139991851017984 config.py:initialize: 

2015-08-04 22:20:54,137 DEBUG 139991851017984 config.py:initialize: Reading from existing file ./test_extract_cfg.pkl

The format is:

Date, Time, Log Level, Thread ID, Module Name, Function Name, Message

Possible log levels: CRITICAL, ERROR, WARNING, INFO, DEBUG. The log must be monitored for CRITICAL and ERROR level messages.
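
For example, a minimal watcher for such messages, assuming the format above; the file name bds.log is a placeholder for the actual file found under ./cloudbilling-prem.local/log/:

import re
import sys

# Matches the documented format:
# Date Time LEVEL ThreadID Module:Function: Message
LOG_LINE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>[\d:,]+) "
    r"(?P<level>CRITICAL|ERROR|WARNING|INFO|DEBUG) "
    r"(?P<thread>\d+) (?P<module>\S+?):(?P<function>\w+): ?(?P<message>.*)$"
)

def alert_on_errors(path):
    """Print every CRITICAL or ERROR entry in the given log file."""
    with open(path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match and match.group("level") in ("CRITICAL", "ERROR"):
                print(line.rstrip(), file=sys.stderr)

# bds.log is a hypothetical name; point this at the real log file.
alert_on_errors("./cloudbilling-prem.local/log/bds.log")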

File Storage

Important
By default, local data on the Docker host is located under the ./cloudbilling-prem.local/data/ directory.

BDS keeps both extracted and transformed data in the S3 bucket long-term to support billing inquiries. Thus, in addition to the logs, the system should be monitored to ensure that new files keep appearing in the S3 bucket, both for extract and transform storage.
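
A minimal freshness check, sketched with boto3; the bucket name, the prefixes, and the one-day threshold are placeholders for the values provisioned in your deployment:

from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

def newest_object_age(bucket, prefix):
    """Return the age of the most recently modified object under prefix,
    or None if no object exists there at all."""
    s3 = boto3.client("s3")
    newest = None
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if newest is None or obj["LastModified"] > newest:
                newest = obj["LastModified"]
    if newest is None:
        return None
    return datetime.now(timezone.utc) - newest

# Bucket name and prefixes are placeholders.
for prefix in ("extract/", "transform/"):
    age = newest_object_age("bds-billing-bucket", prefix)
    if age is None or age > timedelta(days=1):
        print(f"ALERT: no fresh files under {prefix!r} (age: {age})")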

Directory structure for extract data (the root directory for extract files is provisioned in gvars.py “./local_cache/premise_extract_path”):

For Genesys Info Mart (GIM) datasets, where one dataset results in one CSV file per day:

/<tenant_id>/<dataset_name>/<year>/<month> (MM)/<date_label>.csv.gz

For GVP CDRs there are as many files as there are locations. The path is the following: /<tenant_id>/<dataset_name> (gvp_cdrs)/<region>/<location>/<year>/<month> (MM)/<day> (DD)/<date_label>.csv.gz
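
The two layouts can be expressed as path builders; this is purely illustrative, the root comes from gvars.py, and the example tenant, dataset, and date_label values are placeholders:

import os

def gim_extract_path(root, tenant_id, dataset, year, month, date_label):
    # /<tenant_id>/<dataset_name>/<year>/<MM>/<date_label>.csv.gz
    return os.path.join(root, str(tenant_id), dataset,
                        f"{year:04d}", f"{month:02d}", f"{date_label}.csv.gz")

def gvp_cdr_extract_path(root, tenant_id, region, location,
                         year, month, day, date_label):
    # /<tenant_id>/gvp_cdrs/<region>/<location>/<year>/<MM>/<DD>/<date_label>.csv.gz
    return os.path.join(root, str(tenant_id), "gvp_cdrs", region, location,
                        f"{year:04d}", f"{month:02d}", f"{day:02d}",
                        f"{date_label}.csv.gz")

# Placeholder values for illustration only.
print(gim_extract_path("./local_cache/premise_extract_path",
                       1000, "some_gim_dataset", 2016, 10, "2016-10-20"))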

Directory structure for transformed data (the root directory for transformed files is provisioned in gvars.py “./local_cache/premise_transform_path”):

With region:

  • {region_part}_PES_{rms_file_prefixes}_{tenant ID}_{filename_base}Z.CSV
  • CNT_PES_{rms_file_prefixes}_{tenant ID}_{filename_base}ZT.CSV

Without region:

  • {file_specific}_{rms_file_prefixes}_{tenant ID}_{filename_base}Z.CSV
  • CNT_PES_{rms_file_prefixes}_{tier3_id}_{filename_base}ZT.CSV

Legend:

filename_base—the date/time section of the file name

The timestamp encoding in filename_base differs between the two cases:

Non-concurrent case

In filename_base the timestamp is 000000.

Concurrent case

In filename_base the timestamp is 000001 (plus one second).

Examples

  • US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV—US region, premise False, gvp_asr_ports metric, tenant ID 1000 (real IDs are more complex and unique for each tenant), and a time whose timestamp is 000000: not concurrent
  • US_PES_AGENT_COBROWSE_1000_2016_10_11T000001Z.CSV—US region, premise False, seats_cobrowse metric, tenant ID, and a time whose timestamp is 000001: concurrent
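
A sketch that parses names of the with-region PES shape shown above and flags the concurrent case by its 000001 timestamp; the regular expression is an assumption derived from these two examples, not an exhaustive grammar of every layout:

import re

# Matches e.g. US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV
TRANSFORM_NAME = re.compile(
    r"^(?P<region>[A-Z]+)_PES_(?P<prefix>[A-Z_]+?)_(?P<tenant_id>\d+)_"
    r"(?P<date>\d{4}_\d{2}_\d{2})T(?P<timestamp>\d{6})Z\.CSV$"
)

def describe(name):
    m = TRANSFORM_NAME.match(name)
    if not m:
        return f"{name}: does not match the with-region PES layout"
    concurrent = m.group("timestamp") == "000001"
    return (f"{name}: region={m.group('region')} prefix={m.group('prefix')} "
            f"tenant={m.group('tenant_id')} concurrent={concurrent}")

print(describe("US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV"))
print(describe("US_PES_AGENT_COBROWSE_1000_2016_10_11T000001Z.CSV"))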

DB configuration

BDS works with the following DB types (db_type): sql_server, postgre, and oracle.

The corresponding drivers from the configuration (driver_name) are used: FreeTDS, PostgreSQL, and OracleODBC-12.1.
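
As a sketch, an ODBC connection could be built from these values with pyodbc; the server, database, and credentials below are placeholders:

import pyodbc  # third-party ODBC bindings

# Maps the documented db_type values to their driver_name values.
DRIVERS = {
    "sql_server": "FreeTDS",
    "postgre": "PostgreSQL",
    "oracle": "OracleODBC-12.1",
}

def connect(db_type, server, database, user, password):
    """Open an ODBC connection for one of the supported DB types."""
    conn_str = (
        f"DRIVER={{{DRIVERS[db_type]}}};"
        f"SERVER={server};DATABASE={database};UID={user};PWD={password}"
    )
    return pyodbc.connect(conn_str)

# Connection parameters are placeholders for illustration.
# conn = connect("postgre", "db.example.com", "billing", "bds", "secret")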
