
Deploying: High Availability

Both AI Core Services (AICS) and Agent State Connector (ASC) support High Availability (HA).

High availability (HA) is configured differently for each Predictive Routing component:

  • AI Core Services (AICS) uses a multi-server architecture. It can be installed at a single site, or in a multi-site architecture. Genesys recommends that you install AICS on three to five servers. More servers mean higher availability: with three servers, the system can survive the failure of only one machine; with five servers, the system can survive the failure of two machines; and so on.
Important
  • AICS is installed in Docker containers. Genesys does not ship Docker as a part of AICS. You must install Docker in your environment before you can load the AICS containers.
  • You might need an active internet connection to download additional libraries when installing Docker.
  • Agent State Connector (ASC) is deployed in warm-standby mode, with primary and backup servers.
  • The strategy subroutines run as part of your routing solution, and therefore use the HA architecture established for that solution.

HA for AICS

The HA deployment and operating information for AICS is divided into the following sections:

Installing HA AICS - Single Data Center Architecture

Important
The following instructions enable you to set up a new AICS HA deployment in a single data center. If you already have a single-server deployment of AICS installed, contact Genesys Customer Care for help migrating to an HA architecture.

Hardware Requirements

  • AICS HA requires a cluster of at least three servers. Genesys recommends that you deploy an odd number of servers (3, 5, or 7) to host the highly available AICS system.
  • Every server must meet the preconditions stated in the single-host installation instructions. This is verified during installation.
  • All servers must have networking set up between them, with the ports listed in Required Opened Ports for Firewall Configuration open.
  • Servers must have host names configured in the following format: node-1, node-2, node-3 ... node-X. The numbers must start at 1 and increase consecutively.
    To change the host name, execute the following command on each server in the cluster (replacing X with the appropriate numerical value):
    sudo hostnamectl set-hostname --static node-X
  • On every target server, port 3031 must be reachable by the load balancer.
Important
If you are running VMWare VXLAN, you might encounter a port conflict between VMWare VXLAN and Docker, both of which require port 4789. If you encounter this issue, Genesys recommends that you use a networking application such as Weave Net to manage networking among Docker containers. For additional information, consult the documentation for the respective products.
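
Before you begin the installation, you may find it helpful to confirm the host name and port reachability prerequisites from each server. The following is a minimal verification sketch: it assumes the node-X naming convention described above, uses the bash /dev/tcp feature to test TCP connectivity so no extra tooling is required, and uses node-2 and the listed ports purely as illustrations (UDP port 4789 cannot be checked this way).

    # Confirm that this server's static host name follows the node-X convention.
    hostnamectl status | grep "Static hostname"

    # Check that the cluster and application TCP ports on a peer node are reachable.
    # Replace node-2 with each peer host name in turn.
    for port in 2377 7946 27017 3031; do
        if timeout 2 bash -c "</dev/tcp/node-2/${port}" 2>/dev/null; then
            echo "node-2:${port} reachable"
        else
            echo "node-2:${port} NOT reachable"
        fi
    done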

Installation Procedure

  1. Copy the installation binary file (*.tar.gz) to every server in the cluster. Make sure you follow recommendations about user and location as described in single-host installation.
  2. Unpack the installation binary file on every server in the cluster. To unpack, follow these steps:
    1. Copy the IP_JOP_PRR_<version_number>_ENU_linux.tar.gz installation binary file to the desired installation directory. Genesys recommends that you use the PR_USER home directory as the destination for the AICS installation package.
    2. From a command prompt, unpack the file using the following command to create the IP_JOP_PRR_<version_number>_ENU_linux directory:
      tar -xvzf IP_JOP_PRR_<version_number>_ENU_linux.tar.gz
      Note the following points:
      • All scripts for installing and operating AICS in an HA setup can be found in the IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/ directory.
      • You must have configured the host names for every server in the cluster (as detailed above, under "Hardware Requirements") before starting the installation procedure.
    3. Create a Docker Swarm cluster.
      AICS uses Docker Swarm technology to ensure high availability of all its components. For AICS to be deployed in a highly available manner, you must properly form the Docker Swarm cluster on your target servers.
      1. On the target server with the host name node-1, execute the following command to initiate the Docker Swarm cluster:
        docker swarm init
        Important
        If the system has multiple IP addresses, specify the --advertise-addr parameter so the correct address is chosen for communication between all nodes in the cluster. If you do not specify this parameter, an error similar to the following is generated: Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (10.33.181.18 on ens160 and 178.139.129.20 on ens192) - specify one with --advertise-addr.
        Example of the command to initiate the Docker Swarm cluster, specifying the address that is advertised to other members of the cluster:
        docker swarm init --advertise-addr YOUR_IP_ADDRESS
        You can also specify a network interface to advertise the interface address, as in the following example:
        docker swarm init --advertise-addr YOUR_NETWORK_INTERFACE
      2. Then, still on node-1, execute the following command:
        docker swarm join-token manager
        The output of this command should look similar to the following:
        docker swarm join --token SWMTKN-1-4d6wgar0nbghws5gx6j912zf2fdawpud42njjwwkso1rf9sy9y-dsbdfid1ilds081yyy30rof1t 172.31.18.159:2377
      3. Copy this command and execute it on all other nodes in the cluster. This ensures that all other nodes join the same cluster and coordinate the AICS deployment.
      4. Now execute the following command on node-1 to verify that the cluster has been properly formed and that you can continue with the installation:
        docker node ls
        The output of this command MUST show you all target servers in the cluster (node-1, node-2... node-X). If you do not see a complete list of servers, do not proceed with AICS installation. The following is an example of output where all nodes joined the cluster and are all reachable:
         ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
         vdxn4uzuvaxly9i0je8g0bhps * node-1     Ready    Active         Leader
         908bvibmyg9w87la6php11q96   node-2     Ready    Active         Reachable
         ersak4msppm0ymgd2y7lbkgne   node-3     Ready    Active         Reachable
         shzyj970n5932h3z7pdvyvjes   node-4     Ready    Active         Reachable
         zjy3ltqsp3m5uekci7nr06tlj   node-5     Ready    Active         Reachable
    4. Label MongoDB Nodes in the Cluster
      Follow the steps below to define your MongoDB nodes:
      1. Decide how many MongoDB instances to install in your deployment. You can install only three or five instances; a higher number means higher availability.
        Important
        Only one MongoDB instance can run per target server.
      2. On the server with the host name node-1, execute the following command to see all the nodes currently in the cluster:
        docker node ls
      3. Choose the servers where the MongoDB instances will run. In a single data center deployment, it does not matter which servers you choose, as long as they have fast disks (SSDs) and enough disk space.
        The examples assume you chose the servers with the host names node-1, node-2, and node-3 to run MongoDB instances.
      4. Label the selected nodes appropriately. To do this, execute the following commands on node-1:
        docker node update --label-add mongo.replica=1 $(docker node ls -q -f name=node-1)
        docker node update --label-add mongo.replica=2 $(docker node ls -q -f name=node-2)
        docker node update --label-add mongo.replica=3 $(docker node ls -q -f name=node-3)
      5. For a cluster with five MongoDB instances, you would also run these two additional commands (and you would have to have at least five servers in the cluster):
        docker node update --label-add mongo.replica=4 $(docker node ls -q -f name=node-4)
        docker node update --label-add mongo.replica=5 $(docker node ls -q -f name=node-5)
    5. Label the Worker Nodes in the Cluster
      Decide how many workers you want to run and on which servers.
      • The minimum number of worker instances to run in a cluster is two, but you can have more for increased scalability and high availability. This configuration is verified during AICS installation.
      • You cannot have more than one worker per server.
      • Workers can be co-located with other containers (such as MongoDB).
      1. Execute the following commands on node-1 to ensure that worker instances will run on nodes node-1, node-2, and node-3:
        docker node update --label-add worker=true $(docker node ls -q -f name=node-1)
        docker node update --label-add worker=true $(docker node ls -q -f name=node-2)
        docker node update --label-add worker=true $(docker node ls -q -f name=node-3)
        You can choose to label more nodes to make them available to run worker instances, but you cannot label fewer than two nodes with worker=true.
    6. Note the Tango Instances
      There is automatically one Tango instance running on every node (server) in the cluster. As you expand the cluster, new Tango instances are installed and started on the newly-created nodes.
    7. Install AICS in HA Mode
      Your Docker Swarm cluster is now ready for AICS installation.
      1. To make the Docker images needed by AICS available on every server in the cluster, execute the following command on every server in the cluster:
        bash ha-scripts/install.sh
      2. To start the HA AICS deployment, execute the following command on node-1:
        bash ha-scripts/start.sh -l -p YOURPASSWORD
        This command deploys the AICS Docker containers on the various nodes with proper labels. It also initializes the database and assigns the password value YOURPASSWORD to the newly-created default user, which has the username super_user@genesys.com.
    8. Access AICS in HA Mode
      Once your fully-installed AICS deployment has started up correctly, you can access AICS by using the IP address of any server in the cluster on port 3031.
      Important
      Genesys recommends that you install a load balancer in front of the cluster to make it easier to access AICS. See Load Balancing for HA AICS for details.
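
After the start script completes, you can sanity-check the deployment from node-1. This is a minimal verification sketch that uses standard Docker Swarm commands; the label names are the ones applied earlier in this procedure, and node-1 is used only as an example.

    # All nodes should show a STATUS of Ready.
    docker node ls

    # All services should report their full replica counts (see Health Checks for Your Deployment, below).
    docker service ls

    # Confirm that the labels applied earlier are present on a given node (node-1 shown here).
    docker node inspect --format '{{ .Spec.Labels }}' node-1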

(Optional) Map local volume on container

Local directories or files can be mapped into any of the containers used by the application: tango, workers, or mongo.

To mount a volume, edit the volumes declaration in the file that corresponds to the container you want to modify:

  • tango: IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/tango-swarm.yml
  • mongo: IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/mongo-swarm5.yml or IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/mongo-swarm.yml
  • workers: IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/worker-swarm.yml

Mapping a directory or file on a node makes it available only on that host. It doesn't imply any kind of file replication.

Assuming you want to mount the local directory /some_local_directory/ into /custom_mount_point of the mongo container on node-1, you would edit the IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/mongo-swarm.yml file as follows:

   volumes:
     - mongodata1:/data/db
     - mongoconfig1:/data/configdb
     - ../conf/mongodb.pem:/etc/ssl/mongodb.pem
     - /some_local_directory:/custom_mount_point


To make the changes take effect, restart the application:

 bash IP_JOP_PRR_<version_number>_ENU_linux/ha-scripts/restart.sh

Installing HA AICS - Multiple Data Center Architecture

Important
The following instructions enable you to set up a new AICS HA deployment in a multiple data center environment. If you already have a single-server deployment of AICS installed, contact Genesys Customer Care for help migrating to an HA architecture.

The basic procedure for installing AICS in multiple data centers is the same as installing AICS in a single data center. However, when deploying AICS in an environment with multiple data centers, there are some considerations and requirements in addition to those for a single data center.

  • Before starting, ensure that you have a fast LAN/WAN that connects all of the servers and that all ports are open.
  • Plan to spread all instances of the AICS components (Workers, MongoDB, Tango) evenly across your data centers to ensure that AICS continues to operate correctly if a single data center fails. This is most important for servers running MongoDB.

Special Considerations for MongoDB Instances

  • Spread labels evenly across the data centers when labeling servers to run MongoDB replica set members.
    Important
    The AICS installation procedure does not validate whether MongoDB instances are spread evenly across data centers. Failing to ensure this even distribution can compromise overall availability of the AICS deployment.
  • Every data center should have similar hardware capacity (RAM, CPU, disk).
  • When using three data centers, no single data center should host a majority of the MongoDB servers.
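
For example, with three data centers and three MongoDB instances, the labeling commands from Label MongoDB Nodes in the Cluster (above) might be applied to one server in each data center. This is an illustrative sketch only; the use of node-1, node-3, and node-5 as the servers in data centers 1, 2, and 3 is an assumption, not a requirement.

    # Data center 1
    docker node update --label-add mongo.replica=1 $(docker node ls -q -f name=node-1)
    # Data center 2
    docker node update --label-add mongo.replica=2 $(docker node ls -q -f name=node-3)
    # Data center 3
    docker node update --label-add mongo.replica=3 $(docker node ls -q -f name=node-5)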

Using Only Two Data Centers

You can use only two data centers when installing AICS in HA mode, but this reduces overall availability of AICS. In this scenario, one data center always has the majority of the MongoDB servers running in it. If that data center fails, the second data center goes into read-only mode. You must then execute a manual recovery action, using the following procedure:

Execute Manual Recovery

To recover if your system enters read-only mode:

  1. Find the current status of the MongoDB cluster by entering the following command:
    docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval "for (i=0; i<rs.status().members.length; i++) { member = rs.status().members[i]; print(member.name + \" : \" + member.stateStr) }"
    For example, you might enter:
    [pm@node-3 ha-scripts]$ docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval "for (i=0; i<rs.status().members.length; i++) { member = rs.status().members[i]; print(member.name + \" : \" + member.stateStr) }"
    And receive back the following:
    MongoDB shell version: 3.2.18
    connecting to: test
    mongo_mongo1:27017 : SECONDARY
    mongo_mongo2:27017 : SECONDARY
    mongo_mongo3:27017 : PRIMARY
    [pm@node-3 ha-scripts]$
    In this example, the primary MongoDB node is mongo_mongo3. The rs.status().members.length expression used in the command returns the number of members in the replica set.
  2. Remove any unreachable MongoDB members. If mongo_mongo3 is not the primary instance in your deployment, change the service name in the label filter (com.docker.swarm.service.name=mongo_mongo3) so that the command in the next step runs against the primary node.
  3. Run the following command on the primary MongoDB node:
    docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval "members = rs.status().members; cfgmembers = rs.conf().members; for (i=members.length; i>0; i--) { j = i - 1; if (members[j].health == 0) { cfgmembers.splice(j,1) } }; cfg = rs.conf(); cfg.members = cfgmembers; printjson(rs.reconfig(cfg, {force: 1}))"
    For example, you might enter:
    [pm@node-3 ha-scripts]$ docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval "members = rs.status().members; cfgmembers = rs.conf().members; for (i=members.length; i>0; i--) { j = i - 1; if (members[j].health == 0) { cfgmembers.splice(j,1) } }; cfg = rs.conf(); cfg.members = cfgmembers; printjson(rs.reconfig(cfg, {force: 1}))"
    And receive back the following:
    MongoDB shell version: 3.2.18
    connecting to: test
    { "ok" : 1 }
    [pm@node-3 ha-scripts]$

The minority members in the reachable data center can now form a quorum, which returns the running data center to read-write mode.

For other useful commands, including commands for checking node status and removing non-functional nodes, see Troubleshooting Your HA AICS Deployment, below.

Load Balancing for HA AICS

Once AICS has been installed and started, you can access it using the IP address of any node in the cluster on port 3031. To enable load balancing:

  1. Your load balancer should have its health-check functionality turned on.
  2. The load balancer should check for HTTP code 200 to be returned on IP:3031/login.
Important
Genesys recommends a third-party highly-available load balancer, such as F5, to ensure all requests to the AICS platform are spread evenly across all nodes in the AICS cluster.

If you need SSL, you can configure it on the third-party load balancer.
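
As a quick manual check of what the load balancer's health probe should see, you can request the login page directly from any node. This is a sketch only; NODE_IP is a placeholder for the address of any server in the cluster, and a healthy node returns HTTP code 200.

    curl -s -o /dev/null -w "%{http_code}\n" http://NODE_IP:3031/login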

Using the NGINX Load Balancer

Genesys ships the NGINX load balancer as part of AICS. It is intended for use in prototype scenarios.
Important
The NGINX load balancer is a single point of failure and should not be used in production deployments.
To use the NGINX shipped with AICS, follow the procedure below:
  1. Edit the ha-scripts/nginx/nginx.conf file, adding the IP addresses of all nodes in your cluster to the upstream tango section using syntax such as IP1:3031, IP2:3031, IP3:3031. For example, your configuration might look similar to the following:
    upstream tango {
    server 18.220.11.120:3031;
    server 18.216.235.201:3031;
    server 13.59.93.192:3031;
    }
  2. Execute the following command in order to start the NGINX container:
    bash ha-scripts/nginx/start.sh
  3. Verify that you can access AICS by pointing your browser to the IP address where NGINX is running.

To stop NGINX, run the following command:

bash ha-scripts/nginx/stop.sh

Scaling the AICS Deployment

The first step in increasing the size of the AICS deployment is to add new servers and configure them. You can then allocate instances of the various containers among the new servers. As with the initial deployment, give special consideration to how you distribute your MongoDB instances.
Important
There is no need to shut down your AICS deployment. You can add servers while AICS is running.

Add New Servers

To add servers, follow these steps:

  1. Complete the installation and unpacking steps given above on the new servers to make them available to the AICS deployment.
  2. On node-1 execute the following command:
    docker swarm join-token manager
    The output of this command should look something like this:
    docker swarm join --token SWMTKN-1-4d6wgar0nbghws5gx6j912zf2fdawpud42njjwwkso1rf9sy9y-dsbdfid1ilds081yyy30rof1t 172.31.18.159:2377
  3. Copy this command and execute it on the new servers. This adds the new nodes to your existing cluster.
  4. On the new servers, execute the following command:
    bash ha-scripts/install.sh

This completes the increase to your hardware capacity. You can now scale the services to start using it.

Scaling MongoDB

You can have only 3 or 5 MongoDB instances in HA deployments. You can scale MongoDB only if you currently have 3 MongoDB instances in the replica set.

  1. If you have not already done so, complete the steps in Adding New Servers (above) to provision additional hardware capacity.
  2. Label the newly-created servers so that they can run additional MongoDB instances. Use the procedure given in Label MongoDB Nodes in the Cluster (above).
  3. Execute the procedure for restarting Predictive Routing.

Scaling Workers

To increase the number of worker nodes, perform the following steps:

  1. If you have not already done so, complete the Add New Servers steps (above) to increase the capacity of your cluster.
  2. Label the newly-created nodes so that they can run worker instances, as described in Label the Worker Nodes in the Cluster (above).
  3. Update the number of replicas in the worker-swarm.yml file. By default there are two replicas per AICS deployment. After updating this number, run the following commands on node-1 to redeploy the workers (see the example after this list):
    cd ha-scripts
     docker stack deploy -c worker-swarm.yml workers
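
For example, after labeling a third node with worker=true, you might change the replicas value in ha-scripts/worker-swarm.yml from 2 to 3 and then redeploy. This is a sketch only; the exact position of the replicas key within the shipped file is assumed, so check the file before editing it.

    cd ha-scripts
    # Locate the replicas setting and edit it with your preferred editor (for example, change 2 to 3).
    grep -n "replicas" worker-swarm.yml
    # Redeploy the workers stack and confirm the new replica count.
    docker stack deploy -c worker-swarm.yml workers
    docker service ls | grep workers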

    Installing into an Existing HA AICS Deployment

    It is easy to install a different version of AICS on your servers (nodes). You can use the steps here to install either a newer or an older version of AICS.
    Important
    There is no downtime during this process and no data is lost.
    1. Copy the new AICS release package (the *.tar.gz file) to all servers in the cluster. Use the same user and location as when installing AICS for the first time; all of the recommendations about the user that installs and operates AICS still apply.
    2. After unpacking the new version of AICS in the PR_USER home directory of ALL target servers, you will have multiple subdirectories named IP_JOP_PRR_<version_number>_ENU_linux. For example, you might have two subdirectories:
      • IP_JOP_PRR_<old_version_number>_ENU_linux
      • IP_JOP_PRR_<new_version_number>_ENU_linux
    3. Assuming you are installing new_version of the application and removing old_version, execute the following command in the IP_JOP_PRR_<new_version_number>_ENU_linux directory on ALL target servers:
      bash scripts/upgrade_tango.sh
      This command loads a new Docker image for AICS.
    4. Find the newly-loaded image by executing the following command on any node in the cluster:
      docker images | grep tango
      This command lists all available versions of the Tango Docker image. You can then choose the one you want to run and upgrade (or downgrade) to.
    5. After choosing the Tango version, execute the following command on any node in the cluster:
      docker service update --image jop_tango:NEW_VERSION tango_tango

    This command executes the upgrade of Tango (AICS) on all nodes in the cluster, one by one, and rolls back the change if there is a problem. There is no downtime during this upgrade, and no data loss.
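
    While the update is in progress, you can watch it roll across the nodes. This is a sketch using standard Docker Swarm commands; the image tag reported is whichever NEW_VERSION you chose in the previous step.

      # Show the state of each tango_tango task as the rolling update proceeds.
      docker service ps tango_tango

      # Confirm the image version now in use by the service.
      docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' tango_tango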

    Checking the Logs for the AICS Containers

    To access AICS logs when it is running in an HA architecture, execute the following commands on any node in the cluster:

    • For Tango logs:
      docker service logs tango_tango
    • For MongoDB logs:
      docker service logs mongo_mongo1
      docker service logs mongo_mongo2
      docker service logs mongo_mongo3

    And so on, for however many MongoDB nodes you have configured.

    • For Workers logs:
      docker service logs workers_workers

    To return only the last N lines of a log file, use the same commands as above, appending the option --tail N, as in the following example:

    docker service logs workers_workers --tail 100
    To continuously stream the output of a log, use the same commands as above, appending the option -f, as in the following example:
    docker service logs workers_workers -f

    Troubleshooting Your HA AICS Deployment

    The following sections offer information that can help you identify issues within your deployment.

    Handling Server Failure

    If a server (node) restarts, the HA deployment recovers automatically as long as the server keeps its previous IP address and the data on the disk is not corrupted.

    The following command identifies a non-working node as an unreachable node:

    docker node ls

    If a server needs to be decommissioned and replaced with a new one, the following manual steps are necessary to preserve the health of the cluster. After shutting down the server that is to be decommissioned, execute the following two commands, where NODE_ID is the unique node identifier of the server to be decommissioned:

    docker node demote <NODE_ID>
    docker node rm <NODE_ID>

    After this, you can add a new server to your environment. Label it the same way as the decommissioned server and execute the procedure for joining that server to the cluster as described in Installation Procedure, above.
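
    If the decommissioned server carried labels (for example, a MongoDB replica label or worker=true), re-apply the same labels to its replacement so that the corresponding containers are rescheduled onto it. The following is a sketch only; it assumes the replacement server joined the cluster as node-2 and that the old server ran MongoDB replica 2 and a worker instance.

      docker node update --label-add mongo.replica=2 $(docker node ls -q -f name=node-2)
      docker node update --label-add worker=true $(docker node ls -q -f name=node-2)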

    Handling Failover

    When a server hosting MongoDB and the AICS application (the Tango container) experiences a failover, a certain number of API requests to AICS might fail during the few seconds it takes for the system to recover. The routing strategy attempts to resend any failed request, but Agent State Connector (ASC) does not have this capability. As a result, there is a risk of minor data loss.

    Note that error messages appear in the logs for both MongoDB and the AICS application when a failover occurs.

    Health Checks for Your Deployment

    To check the health of your Predictive Routing HA deployment, perform the following steps:

    1. Verify that all nodes are up and running. On any node in the cluster, execute the following command:
      docker node ls
      You should receive output similar to the following:
      [pm@node-2 ~]$ docker node ls
      ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
      mc0bgyueb3c0h9drsy3j0i2ty   node-1     Ready    Active         Leader
      vm1csljly66vwguxzaz8ly98r * node-2     Ready    Active         Reachable
      z2vlnldcyh0y57jwns0bz9jxe   node-3     Ready    Active         Reachable
      All nodes should be reachable.
    2. Check that all services are running by executing the following command on any node in the cluster:
      docker service ls
      You should receive output similar to the following:
      [pm@node-1 ~]$ docker service ls
      ID             NAME              MODE         REPLICAS   IMAGE                        PORTS
      jzjitn8lp78t   mongo_mongo1      replicated   1/1        mongo:3.2
      iqntp5eabfnw   mongo_mongo2      replicated   1/1        mongo:3.2
      whw05twosi9s   mongo_mongo3      replicated   1/1        mongo:3.2
      1jp3sgt16czw   tango_tango       global       3/3        jop_tango:2017_12_12_15_17
      hu3kvkzxn88r   workers_workers   replicated   2/2        jop_tango:2017_12_12_15_17
      • The important column here is REPLICAS.
      • The Tango service should always be global and reachable on port 3031 on every node in the cluster.
      • The MongoDB service is replicated, and should show 3/3 or 5/5 replicas (or however many are actually present in your environment). See Checking the Health of MongoDB (below) for how to check the health of the MongoDB database.
      • The Workers service is replicated and should show as many replicas as there are nodes labeled with the Workers label. See Label the Worker Nodes in the Cluster (above) for how to label nodes.

    Checking the Health of MongoDB

    All of the commands listed below should show your MongoDB cluster with one PRIMARY instance; all other instances should be healthy SECONDARY instances.

    • To check the health of the MongoDB cluster while logged into node-1, execute the following command:
      [pm@node-1 ~]$ docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo1) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval 'rs.status()'
    • To check the health of the MongoDB cluster while logged into node-2, execute the following command:
      [pm@node-2 ~]$ docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo2) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval 'rs.status()'
    • To check the health of the MongoDB cluster while logged into node-3, execute the following command:
      [pm@node-3 ~]$ docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval 'rs.status()'

    Similarly, you can check the health of the MongoDB cluster from any other node where a MongoDB replica is running.

    Other Useful Commands

    Here are a few more useful commands for troubleshooting MongoDB:

    To find out the status of all members in the replica set, use the following command:

    docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval "rs.status().members"

    To remove an unreachable member, execute the following command (this has to be repeated for each unreachable member in a failed data center):

    docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo3) mongo --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --eval 'rs.remove("HOST:PORT")'

    (Optional) Backing Up Your Data

    This section applies specifically to backing up and restoring in an HA environment. For instructions to back up and restore MongoDB in a single-site/single-server AICS deployment, see Backing Up and Restoring Your Data.

    Although HA greatly reduces the likelihood of data loss, Genesys recommends that you back up your data to safeguard it. This section explains how to back up and restore your data in an HA environment.

    Important
    All MongoDB backup and restore operations should be performed on the PRIMARY MongoDB instance.

    Backing Up

    On every server where MongoDB is running, there are two important directories:

    • The /data/db directory in every MongoDB container is mapped to the /var/lib/docker/volumes/mongo_mongodata1/_data/ directory on the server file system.
    • The /data/configdb directory in each MongoDB container, which is similarly mapped to a directory under /var/lib/docker/volumes/ on the server file system.

    Use the mongodump command from inside the container to back up your MongoDB data:

    mongodump --out /data/db/`date +"%m-%d-%Y"`
    This command backs up all databases into the /data/db/<date +"%m-%d-%Y"> directory inside the container. For example, you might back up to the /data/db/12-18-2017 directory.

    The backed-up data is located in the /var/lib/docker/volumes/mongo_mongodata1/_data/<date +"%m-%d-%Y"> directory on the server host computer. For the example backup command above, the output would be located in the /var/lib/docker/volumes/mongo_mongodata1/_data/12-18-2017 directory.
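
    If you prefer to run the backup from the host rather than from a shell inside the container, the same mongodump call can be issued through docker exec. This is a sketch only: it assumes that mongo_mongo1 is currently the PRIMARY instance and that SSL is enabled with the certificate path used elsewhere in this article; adjust both to match your deployment.

      docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=mongo_mongo1) mongodump --ssl --sslCAFile /etc/ssl/mongodb.pem --sslAllowInvalidHostnames --out /data/db/`date +"%m-%d-%Y"`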

    Restoring

    To restore data, you must first make the backup files available in the appropriate directory on the server host computer.

    Use the following command inside the container:

    mongorestore /data/db/<PATH_TO_SPECIFIC_BACKUP_DIRECTORY>
    For example, you might run the command:

    mongorestore /data/db/12-18-2017

    For extra information about backing up MongoDB and data preservation strategies, see the following site: https://docs.mongodb.com/manual/core/backups/.

    Disable MongoDB SSL for AICS

    MongoDB connections are SSL-encrypted by default. To disable encryption, start the cluster using the -x flag:

    cd IP_JOP_PRR_<version_number>_ENU_linux/
    bash ha-scripts/start.sh -x
    
    Important
    Disabling encryption is recommended only if you determine that encryption is affecting the performance of the application.

    (Optional) Turn on SSL on NGINX

    To turn on SSL on NGINX, perform the following steps:

    1. Stop the application using the following command:
      bash scripts/stop.sh
    2. Create a certificate, or add your existing certificate and key, using commands with the following syntax:
      openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 365 -out server.crt
      openssl dhparam -dsaparam -out dhparams.pem 4096
    3. Update the docker-compose.yml file with the following configuration:
      nginx:
      image: nginx:1.11.9-alpine
      container_name: nginx
      restart: always
      ports:
         - 80:80
         - 443:443
      volumes:
         - ./nginx-ssl.conf:/etc/nginx/nginx.conf
         - ./server.crt:/etc/nginx/server.crt
         - ./server.key:/etc/nginx/server.key
         - ./dhparams.pem:/etc/nginx/dhparams.pem
    4. Uncomment (remove the pound signs from) the entire second section of the nginx.conf file. This section contains the SSL configuration.
    5. To enable HTTPS on NGINX, replace the following line in the nginx.conf file:
      proxy_set_header X-Forwarded-Proto $scheme;
      with: proxy_set_header X-Forwarded-Proto https;
    6. Restart AICS using the following command. This is required to make the changes take effect:
      bash scripts/start.sh -n
    7. Verify that you can access Predictive Routing via HTTPS by opening the following URL in your browser:
      https://<SERVER_IP_ADDRESS>/
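
    You can also verify access from the command line. This is a sketch only; the -k option skips certificate validation, which is useful if you generated a self-signed certificate in step 2, and <SERVER_IP_ADDRESS> is a placeholder. A working setup returns an HTTP success or redirect code rather than a connection error.

      curl -k -s -o /dev/null -w "%{http_code}\n" https://<SERVER_IP_ADDRESS>/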

    Required Opened Ports for Firewall Configuration

    The following ports are those required for communication between all target servers in the cluster:

    • Protocol: TCP; Port: 2377; Type: Inbound/Outbound; Description: Cluster management communications
    • Protocol: TCP/UDP; Port: 7946; Type: Inbound/Outbound; Description: Communication between target servers
    • Protocol: UDP; Port: 4789; Type: Inbound/Outbound; Description: Overlay network traffic
    • Protocol: TCP; Port: 27017; Type: Inbound/Outbound; Description: Default port for MongoDB
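
    On distributions that use firewalld, the ports above could be opened with commands like the following. This is a sketch under that assumption only; adapt it to whatever firewall management tool your servers actually use, and repeat it on every server in the cluster.

      sudo firewall-cmd --permanent --add-port=2377/tcp
      sudo firewall-cmd --permanent --add-port=7946/tcp
      sudo firewall-cmd --permanent --add-port=7946/udp
      sudo firewall-cmd --permanent --add-port=4789/udp
      sudo firewall-cmd --permanent --add-port=27017/tcp
      # Port 3031 must also be reachable by the load balancer (see Hardware Requirements, above).
      sudo firewall-cmd --permanent --add-port=3031/tcp
      sudo firewall-cmd --reload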

    HA for ASC

    Agent State Connector (ASC) has a standard primary-backup warm-standby high availability configuration. The backup server application remains initialized and ready to take over the operations of the primary server. It maintains connections to Configuration Server and Stat Server, but does not send agent profile updates to AICS.

    To configure a primary-backup pair of ASC instances, create two ASC Application objects. On the Server Info tab of the backup ASC Application, set warm standby as the redundancy mode. When Local Control Agent (LCA) determines that the primary ASC is unavailable, it performs a changeover of the backup to primary mode.

    [Diagram: ASC high availability architecture (ASC-HA-Arch.png)]
