Configure a Cluster of Co-browse Servers
Genesys Co-browse supports load balancing with session stickiness: the load balancer must route all requests from a given session to the same Co-browse node.
Load balancing is enabled by configuring a cluster of Co-browse Servers. Cassandra is embedded in Genesys Co-browse Server, so when you set up a cluster of Genesys Co-browse servers, each server may also act as a Cassandra node. You configure the Cassandra nodes by setting configuration options in the cassandraEmbedded section of the Co-browse Server application.
Complete the following steps to implement load balancing:
You must set up a cluster of Co-browse Nodes to enable load balancing. To do this, create Application objects for a Co-browse Cluster and for each Co-browse Node, following the installation steps in the Creating the Co-browse Server Application Object in Genesys Administrator section.
If the Co-browse Nodes reside on the same machine, you must configure the applications to use different ports for Jetty and the embedded Cassandra. This will prevent port conflicts among your Co-browse Nodes.
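For example, two nodes on one machine might simply shift each port by one. The layout below is a hypothetical sketch: the values are Jetty and Cassandra defaults, rpcPort appears later in this article, and the storagePort option name is an assumption to verify against your deployment guide.

```
Node 1: Jetty port 8700, rpcPort 9160, storagePort 7000
Node 2: Jetty port 8701, rpcPort 9161, storagePort 7001
```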
Configure the Cassandra cluster
Prerequisite: You have completed Step 1.
External Cassandra Cluster Setup
External Cassandra cluster deployment is thoroughly described in the official Apache Cassandra documentation for your Cassandra version.
Embedded Cassandra Cluster Setup
An embedded Cassandra cluster is set up similarly to an external Cassandra cluster, except that embedded Cassandra node settings are provided either through Configuration Server options or through an external cassandra.yaml file.
Start of procedure
Complete the steps below for each Co-browse application you created in Set up 2 or more Co-browse Servers:
- Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
- Select the Co-browse application and click Edit.
- In the Options tab, locate the cassandraEmbedded section and update the following options:
- listenAddress — Set this value to the IP address of the node (listen_address in cassandra.yaml).
- rpcAddress — Set this value to the IP address of the node (rpc_address in cassandra.yaml).
- seedNodes — Set this value to the IP address of the first node (the seeds list under seed_provider in cassandra.yaml).
- clusterName (optional) — This name must be the same for each node (cluster_name in cassandra.yaml).
- Click Save & Close.
End of procedure
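For illustration, here is a hypothetical three-node cluster at 192.168.0.1 through 192.168.0.3, with the first node acting as the seed. The option names are those from the procedure above; the addresses and cluster name are examples only:

```
# cassandraEmbedded section, as configured on the node at 192.168.0.2
listenAddress = 192.168.0.2   # this node's own IP
rpcAddress    = 192.168.0.2   # this node's own IP
seedNodes     = 192.168.0.1   # the first node; same value on every node
clusterName   = cobrowse      # optional; identical on every node
```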
By default, Co-browse Server uses NetworkTopologyStrategy as its replication strategy. NetworkTopologyStrategy is recommended for production Cassandra deployments and works together with GossipingPropertyFileSnitch, which relies on the cassandra-rackdc.properties file. Make sure the data center names defined in this file (one for each Cassandra node) correspond to the data center names defined in the replicationStrategyParams option.
The cassandra-rackdc.properties file location depends on the type of Cassandra cluster:
- External Cassandra—see the conf subdirectory of the Cassandra installation directory. For more information, see http://docs.datastax.com/en/cassandra/2.0/cassandra/architecture/architectureSnitchGossipPF_c.html.
- Embedded Cassandra—see the resources subdirectory of the server directory. For example, <Co-browse Server Install Directory>/server/resources.
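For example, a node in a data center named DC1 could use the following cassandra-rackdc.properties. The dc and rack keys are the standard GossipingPropertyFileSnitch format; the names themselves are examples:

```
# cassandra-rackdc.properties
# dc must match a data center name used in replicationStrategyParams
dc=DC1
rack=RAC1
```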
The replication factor configures the number of copies of data kept in the cluster. Typically, three copies are enough for most scenarios, provided you have at least three nodes in your cluster; the factor can be increased to achieve higher redundancy. Set the replication factor in the replicationStrategyParams option to a number less than or equal to the number of nodes.
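In Cassandra terms, NetworkTopologyStrategy pairs each data center name with its own replication factor. The CQL below is only an illustration of that mapping, not a step to run; the keyspace name is hypothetical, and Co-browse manages its own keyspace:

```
-- Keep three copies of each row in data center DC1
ALTER KEYSPACE cobrowse
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```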
Verify the Cassandra cluster
Prerequisite: You have a separate installation of Cassandra 2.x (2.1.3 or later, the same version used in Co-browse), whose command-line tools are used below.
Start of procedure
- Start the first node and wait until it starts listening.
- Start all other nodes in your cluster.
- Open a command line and run <cassandra home>\bin\cassandra-cli.bat -h <ip of first node> -p <cassandra rpcPort>, where <ip of first node> is the IP of the first node in your cluster and <cassandra rpcPort> is the value you configured for rpcPort.
- Enter the following command: describe cluster. The output should look similar to the following:
    [default@unknown] describe cluster;
    Cluster Information:
       Snitch: org.apache.cassandra.locator.SimpleSnitch
       Partitioner: org.apache.cassandra.dht.RandomPartitioner
       Schema versions:
            6c960880-1719-11e3-0000-242d50cf1fbf: [192.168.00.1, 192.168.00.2, 192.168.00.3]
The list of IP addresses in square brackets ([192.168.00.1, 192.168.00.2 ...]) should match all the nodes in your cluster.
End of procedure
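As an alternative check not covered by this procedure, the nodetool utility that ships with every Cassandra installation also reports cluster membership; each node should appear with status UN (Up/Normal):

```
<cassandra home>\bin\nodetool -h <ip of first node> status
```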
You must modify the URLs in your Co-browse instrumentation scripts to point to your configured load balancer. See Website Instrumentation for details about modifying the script.
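For example, if the instrumentation script loads its JavaScript from a single node, repoint it at the load balancer. Both URLs below are hypothetical; take the exact path from your own instrumentation snippet:

```
Before: http://cobrowse-node1.example.com:8700/cobrowse/js/gcb.min.js
After:  http://cobrowse-lb.example.com/cobrowse/js/gcb.min.js
```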
If you are using the Co-browse proxy to instrument your site, you must modify the URLs in the proxy's map.xml file. See Test with the Co-browse Proxy for details about modifying the XML file.
Configure the Co-browse Server applications
See the cluster section for details.
You must also set up a similar configuration for the Genesys Co-browse Plug-in for Workspace Desktop Edition. To support this, consider setting up two load balancers (see the sketch after this list):
- public — This load balancer should expose only a limited set of Co-browse resources. For example, it should not include session history resources.
- private — This load balancer should expose all Co-browse resources and should be placed in the network so that it is accessible only from the corporate intranet. It should be used only by internal applications, such as Workspace Desktop Edition.
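As a sketch of this topology, the nginx configuration below routes both load balancers to the same node pool with ip_hash stickiness. nginx, the addresses, and the blocked path are all assumptions; the document does not prescribe a particular load balancer, and the actual resource paths to restrict are deployment-specific:

```
# Hypothetical nginx sketch: one Co-browse node pool, two entry points.
upstream cobrowse_nodes {
    ip_hash;                         # stickiness: same client -> same node
    server 192.168.0.2:8700;
    server 192.168.0.3:8700;
}

# public: exposed to the website; restrict internal-only resources
server {
    listen 80;
    location /cobrowse/history/ { deny all; }   # path is an example only
    location /cobrowse/ { proxy_pass http://cobrowse_nodes; }
}

# private: bound to an intranet-facing address for Workspace Desktop Edition
server {
    listen 10.0.0.5:80;
    location / { proxy_pass http://cobrowse_nodes; }
}
```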
Complete the procedure below to configure the plug-in to support the Co-browse cluster:
Configure the Co-browse Plug-in for Workspace Desktop Edition
If you use Workspace Web Edition on the agent side, you must configure it to work with Co-browse. For instructions, see Configure Genesys Workspace Web Edition to Work with Co-browse.
To test your setup, create a Co-browse session, join it as an agent, and do some co-browsing. If this works, your configuration is successful.