
Troubleshooting

Log

The simplest way to monitor the Billing Data Server (BDS) application is by reading the log file on the Docker host.

All logs are located under the ./cloudbilling-prem.local/log/ directory. The following is a sample main log file:

[Image: sample main log file]

There are three kinds of log files:

  • bds.log: the main log file; contains log records of the daily BDS runs
  • bds_stats.log: contains records with statistical information in key=value format
  • brsctl.log, brs_config_snapshotter.log, db_utils.log, control_validation.log, premise_loader.log, and sbc_brs_comparator.log: log records of the BDS utilities that are run manually

The log file format is:

Date Time, Log Level, Thread ID | Module Name, Function Name - <Processing date, Tenant_id, Tenant name> Message

The possible log levels are:

  • CRITICAL
  • ERROR
  • WARNING
  • INFO
  • DEBUG

Monitor the log for CRITICAL- and ERROR-level messages.
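The format above lends itself to simple scripted monitoring. The following sketch is hypothetical and not part of BDS: the sample lines, module names, and tenant values are invented, and only the comma/pipe layout follows the documented format.

```python
import re

# Parses: Date Time, Level, Thread | Module, Function - <Date, Tenant_id, Tenant> Message
LINE_RE = re.compile(
    r"^(?P<ts>\S+ \S+), (?P<level>\w+), (?P<thread>\d+) \| "
    r"(?P<module>\w+), (?P<func>\w+) - "
    r"<(?P<proc_date>[^,]+), (?P<tenant_id>[^,]+), (?P<tenant>[^>]+)> "
    r"(?P<message>.*)$"
)

ALERT_LEVELS = {"CRITICAL", "ERROR"}

def alerts(lines):
    """Yield parsed records for CRITICAL/ERROR entries only."""
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") in ALERT_LEVELS:
            yield m.groupdict()

# Hypothetical sample lines in the documented format:
sample = [
    "2018-06-27 13:26:00, INFO, 140023 | bds, run - <2018-06-26, 101, Tenant_A> Daily run started",
    "2018-06-27 13:27:12, ERROR, 140023 | premise_loader, load - <2018-06-26, 101, Tenant_A> Load failed",
]
for rec in alerts(sample):
    print(rec["level"], rec["message"])  # prints: ERROR Load failed
```

In practice such a filter would read the log file under ./cloudbilling-prem.local/log/ rather than an in-memory list.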

File Storage

BDS stores results of the extraction and transformation steps locally as defined by the following variables in the gvars.py file:

  • local_cache
  • premise_extract_path
  • premise_transform_path
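For illustration, a gvars.py excerpt might look like the following. The values shown are hypothetical placeholders; the real ones are deployment-specific.

```python
# Hypothetical gvars.py excerpt (illustrative values only).
local_cache = "./local_cache"                 # root of all locally cached data
premise_extract_path = "premise_extract"      # subdirectory for extracted data
premise_transform_path = "premise_transform"  # subdirectory for transformed data
```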

Directory structure for extract data

Extracted data is stored locally. The root directory for extract files is configured in gvars.py as ./<local_cache>/<premise_extract_path>.

For Genesys Info Mart (GIM) data sets, where one data set yields one CSV file per day, the path is:

/<tenant_id>/<dataset_name>/<year>/<month> (MM)/<date_label>.csv.gz

For GVP Call Detail Records (CDRs), there are as many files as there are locations. The path is:

/<tenant_id>/<dataset_name> (gvp_cdrs)/<region>/<location>/<year>/<month> (MM)/<day> (DD)/<date_label>.csv.gz
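The two path templates can be sketched as path builders. This is not BDS code: the function names, the root argument, and the assumption that <date_label> is a YYYY-MM-DD date are all hypothetical; only the directory layout follows the templates above.

```python
from datetime import date
from pathlib import PurePosixPath

def gim_extract_path(root, tenant_id, dataset, day):
    """GIM data set: one CSV per day under /<tenant_id>/<dataset>/<YYYY>/<MM>/."""
    return PurePosixPath(root, str(tenant_id), dataset,
                         f"{day:%Y}", f"{day:%m}",
                         f"{day:%Y-%m-%d}.csv.gz")  # date_label format is an assumption

def gvp_cdr_extract_path(root, tenant_id, region, location, day):
    """GVP CDRs: one file per location per day, with region/location/day levels."""
    return PurePosixPath(root, str(tenant_id), "gvp_cdrs", region, location,
                         f"{day:%Y}", f"{day:%m}", f"{day:%d}",
                         f"{day:%Y-%m-%d}.csv.gz")  # date_label format is an assumption
```

For example, gim_extract_path("/cache/extract", 101, "interactions", date(2016, 10, 20)) yields /cache/extract/101/interactions/2016/10/2016-10-20.csv.gz.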

Directory structure for transformed data

The root directory for transformed files is configured in gvars.py as ./<local_cache>/<premise_transform_path>.

For region-aware metrics:

  • Summary file: CNT_<US | EU | ...>_PES_<METRIC_NAME>_<GARNCODE_tenantID>_<datetime_with_timestamp>.CSV
  • Data file: <US | EU | ...>_PES_<METRIC_NAME>_<GARNCODE_tenantID>_<datetime_with_timestamp>.CSV

For global metrics:

  • Summary file: CNT_PES_<METRIC_NAME>_<GARNCODE_tenantID>_<datetime_with_timestamp>.CSV
  • Data file: PES_<METRIC_NAME>_<GARNCODE_tenantID>_<datetime_with_timestamp>.CSV
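The four naming patterns differ only in the optional CNT_ prefix (summary files) and region prefix (region-aware metrics), so they can be sketched as one builder. The function is hypothetical, and the exact date-time encoding (assumed here to be YYYY_MM_DDTHHMMSSZ, matching the examples below) is an assumption.

```python
from datetime import date

def transform_filename(metric, garncode_tenant_id, dt,
                       region=None, summary=False, concurrent=False):
    """Build a transformed-data file name per the patterns above.

    Region-aware metrics carry a region prefix (US, EU, ...); global metrics do not.
    Summary files carry a CNT_ prefix. Concurrent metrics use timestamp 000001,
    non-concurrent metrics 000000.
    """
    stamp = "000001" if concurrent else "000000"
    datetime_section = f"{dt:%Y_%m_%d}T{stamp}Z"
    parts = []
    if summary:
        parts.append("CNT")
    if region:
        parts.append(region)
    parts += ["PES", metric, garncode_tenant_id, datetime_section]
    return "_".join(parts) + ".CSV"

print(transform_filename("ASR_PORT", "1000", date(2016, 10, 20), region="US"))
# prints: US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV
```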

Legend:

filename_base: the date-time section of the file name.

The timestamp in filename_base encodes concurrency:

  • Non-concurrent case: the timestamp is 000000.
  • Concurrent case: the timestamp is 000001 (000000 plus one second).

Examples

  • US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV: US region, premise False, gvp_asr_ports metric, tenant ID 1000 (in practice a more complex value that is unique for each tenant), timestamp 000000 (non-concurrent)
  • US_PES_AGENT_COBROWSE_1000_2016_10_11T000001Z.CSV: US region, premise False, seats_cobrowse metric, tenant ID 1000, timestamp 000001 (concurrent)
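Because concurrency is encoded in the six-digit timestamp, it can be read back from a file name. This helper is hypothetical, not part of BDS, and assumes the T<HHMMSS>Z.CSV suffix seen in the examples above.

```python
import re

# Matches the trailing six-digit timestamp, e.g. ...T000001Z.CSV
TS_RE = re.compile(r"T(\d{6})Z\.CSV$")

def is_concurrent(filename):
    """Return True when the file name carries the 000001 (concurrent) timestamp."""
    m = TS_RE.search(filename)
    if not m:
        raise ValueError(f"unrecognized file name: {filename}")
    return m.group(1) == "000001"
```

For example, is_concurrent("US_PES_ASR_PORT_1000_2016_10_20T000000Z.CSV") returns False, while the AGENT_COBROWSE example above returns True.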

This page was last modified on 27 June 2018, at 13:26.