
Workbench Installation - Windows - Additional Node

As per the Sizing section, if Workbench data and configuration redundancy and service high availability are required, Genesys recommends a 3 (Multi/Cluster) Node/Host Workbench deployment.

Warning
  1. Before commencing these Additional Node instructions, ensure the Workbench Primary Node has been successfully installed
  2. Workbench only supports a 1 or 3+ Node architecture; deploying only a Workbench Primary and Workbench Node 2 architecture will cause future upgrade issues

Workbench Additional Node - Installation


Please use the following steps to install Workbench Additional Nodes on Windows Operating Systems.

  1. On the respective 2nd Workbench Additional Node/Host, extract the downloaded Workbench installation zip file.
  2. Within the extracted folder, right-click the install.bat file and select Run as Administrator; alternatively, open a command prompt as Administrator and run install.bat.
  3. Click Next on the Genesys Care Workbench 9.0 screen
  4. Review and click Accept on the Terms & Conditions screen
  5. Select Additional Node on the Workbench Installation Type screen
    1. If required, change the installation type from the default Default to Custom
  6. Click Next on the Base Workbench Properties screen
    1. If prompted, click Yes on the Create Folder dialog
  7. On the Additional Components To Be Installed screen:
    1. Ensure Workbench Elasticsearch and Workbench ZooKeeper options are checked (for Workbench data and Workbench configuration H/A)
    2. Workbench Agent is checked by default; it is a mandatory requirement for any host running Workbench 9 components
    3. Provide the Primary Node ZooKeeper IP and Port - e.g. 10.20.30.1:2181
Important
  • Provide the Primary Workbench ZooKeeper <IP Address>:<Port> - avoid using the Hostname
  • The ZooKeeper component prefers IP Address as opposed to Hostname resolution
  • A quick reachability check for this IP Address and Port is sketched after these installation steps
  8. Click Next
  9. Click Next on the Service Account screen
    1. Unless a Network Account is required
  10. Click Install
  11. Click OK on the Finished dialog
  12. Click Exit

Repeat the above for the respective 3rd Workbench Additional Node/Host
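
As referenced in the Important note above, a quick way to confirm the Primary Node ZooKeeper IP and Port is reachable from an Additional Node/Host is a Windows PowerShell port test; the IP Address and Port below are the examples used in this guide:

  # Confirm the Primary Workbench ZooKeeper node is reachable from this
  # Additional Node/Host; TcpTestSucceeded should report True.
  Test-NetConnection -ComputerName 10.20.30.1 -Port 2181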

Checkpoint

Important
Based on the instructions above, within Workbench Configuration there should now be 3 x Hosts and 11 x Applications:
    • 3 x Hosts in Workbench Configuration\Hosts
      • i.e.
        • WB-1
        • WB-2
        • WB-3
    • 11 x Applications in Workbench Configuration\Applications
      • i.e.
        • WB_IO_Primary
          • The initial Primary Workbench IO application on the 1st WB host
        • WB_Agent_Primary
          • The initial Primary Workbench Agent application on the 1st WB host
        • WB_Elasticsearch_Primary
          • The initial Primary Workbench Elasticsearch application on the 1st WB host
        • WB_Kibana_Primary
          • The initial Primary Workbench Kibana application on the 1st WB host
        • WB_ZooKeeper_Primary
          • The initial Primary Workbench ZooKeeper application on the 1st WB host
        • WB_Agent.2
          • The Additional Workbench Agent Node 2 application on the 2nd WB host
        • WB_Elasticsearch.2
          • The Additional Workbench Elasticsearch Node 2 application on the 2nd WB host
        • WB_ZooKeeper.2
          • The Additional Workbench ZooKeeper Node 2 application on the 2nd WB host
        • WB_Agent.3
          • The Additional Workbench Agent Node 3 application on the 3rd WB host
        • WB_Elasticsearch.3
          • The Additional Workbench Elasticsearch Node 3 application on the 3rd WB host
        • WB_ZooKeeper.3
          • The Additional Workbench ZooKeeper Node 3 application on the 3rd WB host

Workbench ZooKeeper Cluster - Configuration

  1. Determine and note the Unique Id of the:
    1. "WB_ZooKeeper_Primary" application (likely 1)
    2. "WB_ZooKeeper.2" application (likely 2)
    3. "WB_ZooKeeper.3" application (likely 3)
  2. Navigate to the Primary ZooKeeper application, i.e. WB_ZooKeeper_Primary
    1. Expand Configuration Section 4.Cluster Configuration
    2. In the Node 1 field enter the Primary Workbench ZooKeeper Hostname <IPAddress>:2888:3888
    3. In the Node 2 field enter the Workbench Additional ZooKeeper Node 2 Hostname <IPAddress>:2888:3888
    4. In the Node 3 field enter the Workbench Additional ZooKeeper Node 3 Hostname <IPAddress>:2888:3888
    5. Click Save
  3. Navigate to the 2nd Additional ZooKeeper application, i.e. WB_ZooKeeper.2
    1. Expand Configuration Section 4.Cluster Configuration
    2. In the Node 1 field enter the Primary Workbench ZooKeeper Hostname <IPAddress>:2888:3888
    3. In the Node 2 field enter the Workbench Additional ZooKeeper Node 2 Hostname <IPAddress>:2888:3888
    4. In the Node 3 field enter the Workbench Additional ZooKeeper Node 3 Hostname <IPAddress>:2888:3888
    5. Click Save
  4. Navigate to the 3rd Additional ZooKeeper application, i.e. WB_ZooKeeper.3
    1. Expand Configuration Section 4.Cluster Configuration
    2. In the Node 1 field enter the Primary Workbench ZooKeeper Hostname <IPAddress>:2888:3888
    3. In the Node 2 field enter the Workbench Additional ZooKeeper Node 2 Hostname <IPAddress>:2888:3888
    4. In the Node 3 field enter the Workbench Additional ZooKeeper Node 3 Hostname <IPAddress>:2888:3888
    5. Click Save
  5. Navigate to the Primary Workbench IO application, i.e. WB_IO_Primary
    1. If not already, expand Configuration Section General
    2. In the ZooKeeper Nodes field enter the associated <IP Address>:<Workbench ZooKeeper Port> of ALL ZooKeeper applications, e.g. 10.20.30.1:2181,10.20.30.2:2181,10.20.30.3:2181
    3. Click Save
    Important
    The ZooKeeper component prefers IP Address as opposed to Hostname resolution
  6. Stop the "WB_ZooKeeper_Primary" on the respective Workbench hosts.
  7. Stop the "WB_ZooKeeper.2" application Service on the respective Workbench host
  8. Stop the "WB_ZooKeeper.3" application Service on the respective Workbench host
  9. Start the "WB_ZooKeeper_Primary" application Service' on the respective Workbench host
  10. Start the "WB_ZooKeeper.2" application Service on the respective Workbench host
  11. Start the "WB_ZooKeeper.3" application Service on the respective Workbench host
Important
Workbench 9 should now have a Workbench ZooKeeper clustered environment providing HA of Workbench Configuration
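
To verify the ensemble has formed after the restarts, each node can be queried with ZooKeeper's srvr four-letter command over its client port. The Windows PowerShell sketch below is illustrative only; the function name is hypothetical, the IP Addresses are the examples used in this guide, and it assumes the default client port 2181 (srvr is whitelisted by default in ZooKeeper 3.5+):

  # Hypothetical helper: send ZooKeeper's "srvr" command over TCP and return the
  # response, which includes the node's Mode (leader or follower).
  function Get-ZooKeeperMode {
      param([string]$ZkHost, [int]$Port = 2181)
      $client = New-Object System.Net.Sockets.TcpClient($ZkHost, $Port)
      $stream = $client.GetStream()
      $bytes  = [System.Text.Encoding]::ASCII.GetBytes("srvr")
      $stream.Write($bytes, 0, $bytes.Length)
      $reader = New-Object System.IO.StreamReader($stream)
      $reader.ReadToEnd()
      $client.Close()
  }

  # One node should report "Mode: leader"; the other two "Mode: follower".
  "10.20.30.1", "10.20.30.2", "10.20.30.3" | ForEach-Object { Get-ZooKeeperMode $_ }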

Workbench Elasticsearch Cluster - Configuration

Update Workbench Elasticsearch Applications

  1. Navigate to the Primary Elasticsearch application, i.e. WB_Elasticsearch_Primary
    1. Expand Configuration Section 6.Workbench Elasticsearch Discovery
    2. In the Discovery Host(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Network Host] - values of ALL Elasticsearch applications, e.g. WB-1,WB-2,WB-3
    3. In the Initial Master Node(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Node Name] - values of ALL Elasticsearch applications, e.g. node-WB-1_9200,node-WB-2_9200,node-WB-3_9200
    4. Click Save
  2. Navigate to the 2nd Additional Elasticsearch application, i.e. WB_Elasticsearch.2
    1. Expand Configuration Section 6.Workbench Elasticsearch Discovery
    2. In the Discovery Host(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Network Host] - values of ALL Elasticsearch applications, e.g. WB-1,WB-2,WB-3
    3. In the Initial Master Node(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Node Name] - values of ALL Elasticsearch applications, e.g. node-WB-1_9200,node-WB-2_9200,node-WB-3_9200
    4. Click Save
    Important
    Please ensure the Discovery Host and Initial Master Node order matches the Primary Discovery Host and Initial Master Node order
  3. Navigate to the 3rd Additional Elasticsearch application, i.e. WB_Elasticsearch.3
    1. Expand Configuration Section 6.Workbench Elasticsearch Discovery
    2. In the Discovery Host(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Network Host] - values of ALL Elasticsearch applications, e.g. WB-1,WB-2,WB-3
    3. In the Initial Master Node(s) field enter the associated Section 5 - [Workbench Elasticsearch Identifiers/Node Name] - values of ALL Elasticsearch applications, e.g. node-WB-1_9200,node-WB-2_9200,node-WB-3_9200
    4. Click Save
Important
Please ensure the Discovery Host and Initial Master Node order matches the Primary Discovery Host and Initial Master Node order


Update the Workbench IO Application

  1. Navigate to the Primary Workbench IO application, i.e. WB_IO_Primary
    1. If not already, expand Configuration Section General
    2. In the Elasticsearch Nodes field enter the combination of <[Elasticsearch Application/Workbench ElasticSearch Identifiers/Network Host]:[Elasticsearch Application/Workbench ElasticSearch Identifiers/HTTP Port]> of ALL Elasticsearch applications, e.g. WB-1:9200,WB-2:9200,WB-3:9200
    3. Click Save

Update the Workbench Kibana Application

  1. Navigate to the Primary Workbench Kibana application, i.e. WB_Kibana_Primary
    1. Expand Configuration Section 4.Workbench Kibana Identifiers
    2. In the Workbench Elasticsearch Host field enter the combination of <[Elasticsearch Application/Workbench ElasticSearch Identifiers/Network Host]:[Elasticsearch Application/Workbench ElasticSearch Identifiers/HTTP Port]> of ALL Elasticsearch applications with an http:// prefix, e.g. http://WB-1:9200,http://WB-2:9200,http://WB-3:9200
    3. Click Save
Important
Note the "http://" prefix in the Workbench Elasticsearch Nodes field

Stop the Workbench IO and Workbench Kibana Applications Services

  1. Log out of Workbench and close the Workbench Chrome Browser session
  2. Stop the "WB_IO_Primary" application Service on the respective Workbench host (i.e. "WB-1").
  3. Stop the "WB_Kibana_Primary" application Service on the respective Workbench host (i.e. "WB-1").

Delete the ".kibana_task_manager" Index from the Primary Node/Host

  1. On the Primary Workbench Elasticsearch Node/Host execute a HTTP request to delete the .kibana_task_manager Index
    1. Use the Windows PowerShell or Linux CURL commands below to delete the .kibana_task_manager Index:
      1. Windows PowerShell curl
        1. Execute curl -Method DELETE -Uri "http://<WB-1>:9200/.kibana_task_manager"
      2. Linux CURL
        1. Execute curl -X DELETE "http://<WB-VM-1>:9200/.kibana_task_manager"
  2. Check the Index has been deleted using the commands below - there should be no listing of the .kibana_task_manager Index (a 404 / index_not_found_exception response confirms the deletion):
    1. Windows PowerShell curl
      1. Execute curl -Uri "http://<WB-1>:9200/.kibana_task_manager" | Select-Object -Expand Content
    2. Linux CURL
      1. Execute curl "http://<WB-1>:9200/.kibana_task_manager"

Stop the Workbench Elasticsearch Nodes

  1. Stop the Additional "WB_Elasticsearch.2" application Service on the respective Workbench host.
  2. Stop the Additional "WB_Elasticsearch.3" application Service on the respective Workbench host.

Delete Nodes Folders for Additional Elasticsearch Nodes

Important
  • The next 2 steps are for Additional Nodes ONLY
  • DO NOT DELETE the nodes folder on the Primary Elasticsearch Node/Host
  • If you delete the Primary Elasticsearch 'nodes' folder a complete Workbench re-install of ALL Nodes will be necessary.
  1. On the 2nd Additional Node/Host, delete the nodes folder located in the <WORKBENCH_HOME_INSTALL>\Elasticsearch\data\ path
  2. On the 3rd Additional Node/Host, delete the nodes folder located in the <WORKBENCH_HOME_INSTALL>\Elasticsearch\data\ path - an example PowerShell command is sketched below
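
As a hedged example, the folder deletion on the Additional Nodes can be scripted in Windows PowerShell; <WORKBENCH_HOME_INSTALL> is a placeholder for the Workbench installation directory on that host:

  # Run on the Additional Node 2 and Node 3 hosts ONLY - never on the Primary.
  Remove-Item -Recurse -Force "<WORKBENCH_HOME_INSTALL>\Elasticsearch\data\nodes"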

Stop and Start the Workbench Elasticsearch Primary Node

  1. Stop the "WB_Elasticsearch_Primary" application Service on the respective Workbench host.
  2. Start the "WB_Elasticsearch_Primary" application Service on the respective Workbench host.
Warning
It's recommended that Workbench Elasticsearch Primary be started, followed by a pause of approx. 3-4 minutes, before starting the Elasticsearch 2nd and 3rd Additional Nodes.
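
For a positive signal that the Primary Elasticsearch node is listening before the recommended pause and the start of the Additional Nodes, a minimal Windows PowerShell poll can be used (assuming <WB-1> is the Primary Elasticsearch host and the default HTTP port 9200):

  # Poll until the Primary Elasticsearch node is listening on port 9200.
  do {
      Start-Sleep -Seconds 15
      $probe = Test-NetConnection -ComputerName "<WB-1>" -Port 9200 -WarningAction SilentlyContinue
  } while (-not $probe.TcpTestSucceeded)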

Start the Workbench Elasticsearch Additional Nodes

  1. Start the Additional "WB_Elasticsearch.2" application Service on the respective Workbench host.
  2. Start the Additional "WB_Elasticsearch.3" application Service on the respective Workbench host, then confirm cluster membership as sketched below.
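
Once the Additional Nodes are started, cluster membership can be confirmed via the Elasticsearch _cat/nodes API; for example, in Windows PowerShell (replace <WB-1> with any of the Workbench Elasticsearch hosts):

  # All three Elasticsearch nodes should be listed once they have joined the cluster.
  curl -Uri "http://<WB-1>:9200/_cat/nodes?v" | Select-Object -Expand Content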

Start the Workbench IO and Workbench Kibana applications

  1. Start the "WB_IO_Primary" application Service on the respective Workbench host.
  2. Start the "WB_Kibana_Primary" application Service on the respective Workbench host.
  3. Login to Workbench
Important
Workbench Elasticsearch Clustering/HA is now available

Test Health of Workbench Elasticsearch Cluster Status


Check the health status of the Workbench Elasticsearch Cluster:

In a Chrome Browser navigate to:

http://<WB-VM-X>:9200/_cluster/health?pretty

or

  1. Windows PowerShell curl
    1. Execute curl -Uri "http://<WB-VM-X>:9200/_cluster/health?pretty"
  2. Linux CURL
    1. Execute curl "http://<WB-VM-X>:9200/_cluster/health?pretty"

Where <WB-VM-X> is the Workbench Primary, Node 2 or Node 3 Host.

Elasticsearch Cluster health should be reporting Green.

Typical expected output:

{

 "cluster_name" : "GEN-WB-Cluster",
 "status" : "green",
 "timed_out" : false,
 "number_of_nodes" : 3,
 "number_of_data_nodes" : 3,
 "active_primary_shards" : 29,
 "active_shards" : 58,
 "relocating_shards" : 0,
 "initializing_shards" : 0,
 "unassigned_shards" : 0,
 "delayed_unassigned_shards" : 0,
 "number_of_pending_tasks" : 0,
 "number_of_in_flight_fetch" : 0,
 "task_max_waiting_in_queue_millis" : 0,
 "active_shards_percent_as_number" : 100.0

}
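
For a scripted check, Invoke-RestMethod parses the cluster health JSON directly; a minimal Windows PowerShell sketch using the same <WB-VM-X> placeholder:

  # Fetch cluster health and surface just the status and node count.
  $health = Invoke-RestMethod -Uri "http://<WB-VM-X>:9200/_cluster/health"
  "Status: {0} ({1} nodes)" -f $health.status, $health.number_of_nodes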
