Scale API Portal

API Portal is scalable to meet your business needs.
High Availability for Analytics is not supported in Docker Swarm.
Horizontal scaling is accomplished by deploying API Portal in a Docker Swarm cluster, then adding or removing manager and worker nodes as required to meet demand. The Docker Swarm capability deploys, scales, and manages the API Portal services. Replicated service tasks are automatically distributed across the nodes in the swarm as evenly as possible. Node labels provide service constraints and determine how services are spread across nodes.
Because API Portal is Docker native, the application can be installed on any Linux hosts capable of running Docker.
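The even spreading of replicated tasks can be pictured with a small sketch. This is illustrative shell only, not product code; the node names and replica count are made up for the example:

```shell
# Illustrative sketch only (not product code): Swarm spreads replicated
# service tasks as evenly as possible across eligible nodes. Simulate
# placing 5 replicas on three hypothetical nodes labeled portal=true.
spread_tasks() {
  replicas=$1
  shift
  n=$#
  i=0
  while [ "$i" -lt "$replicas" ]; do
    k=$(( i % n + 1 ))          # round-robin index into the node list
    eval "node=\${$k}"
    echo "task $((i + 1)) -> $node"
    i=$((i + 1))
  done
}
spread_tasks 5 node1 node2 node3
```

In a real swarm the scheduler, not a script, performs this distribution; the sketch only shows the "as evenly as possible" behavior described above.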
We strongly recommend that you use an external MySQL database for a multi-node environment.
The following tasks relate to the horizontal scaling of API Portal:
Create a Swarm Cluster
The scaling instructions on this page assume that a Docker Swarm cluster has already been deployed. See Create a Swarm Cluster.
How to Add Nodes to a Swarm
In a multi-node swarm, manager nodes handle cluster management tasks such as scheduling services. Worker nodes are instances of Docker Engine that execute containers running services.
Tasks include:
Add manager nodes to make the cluster more fault tolerant and available.
Add or remove worker nodes to scale the capacity of your cluster.
Do not run the swarm initialization script when adding a manager or worker node to an existing swarm. Run this script only on the first node where you install API Portal; it initializes the swarm and defines that node as the manager node.
  • A new VM is required, provisioned either from the CA hardened VM or from a VM that you provision yourself.
  • Create a user that is part of the docker group. The user must be able to execute docker commands.
Prepare Nodes for Inclusion
The following instructions for horizontal scaling prepare new systems to be added as nodes to an existing Docker swarm:
  1. Provision systems that meet the API Portal hardware and software requirements. See Hardware and Software Requirements for more information.
Online Package Horizontal Scaling
  1. Access the system where API Portal is installed.
  2. Copy the following scripts from the
    <portal installation>util/
    directory to the new system:
    • The script that installs Docker on your system.
    • The script that assigns a node type and allows the node to communicate in a swarm.
  3. Install the current version of Docker on each new system by running the Docker installation script that you copied in the previous step.
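After the installation script runs, a quick sanity check confirms the Docker CLI is on the PATH. The helper below is hypothetical, not one of the product scripts:

```shell
# Hypothetical post-install check (not one of the product scripts): confirm
# that a command is available on the PATH before continuing.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 not found; run the Docker installation script first"
  fi
}
check_cmd docker
```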
Offline Package Horizontal Scaling
  1. Place the tarball on all nodes using the following command:
    scp apim-portal-VERSION-final-offline.tar.gz <user>@<host>:/tmp/
    /tmp/ is an example directory. Select a directory with enough space to store the extracted files; approximately 15 GB of storage space is required.
  2. Extract the tarball using the following command, where /opt/ is an example directory (see the note above):
    sudo tar zxvf /tmp/apim-portal-VERSION-final-offline.tar.gz -C /opt/
  3. Load the images on all worker nodes using the image-loading script, as shown in the following command:
    sudo ./
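The approximately 15 GB storage requirement noted in step 1 can be verified before extracting. Only the 15 GB figure comes from this documentation; the function name and the df invocation are assumptions in this sketch:

```shell
# Hypothetical pre-check for the ~15 GB note above; only the 15 GB figure
# comes from the documentation, the rest is an illustrative sketch.
need_kb=$((15 * 1024 * 1024))   # ~15 GB expressed in kilobytes
space_ok() {
  if [ "$1" -ge "$2" ]; then
    echo "space check passed"
  else
    echo "need more space"
  fi
}
# Free space (in KB) for the target directory, as reported by df.
avail_kb=$(df -Pk /tmp 2>/dev/null | awk 'NR==2 {print $4}')
[ -n "$avail_kb" ] && space_ok "$avail_kb" "$need_kb"
```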
Add Worker Nodes to the Swarm
Add worker nodes to scale the capacity of your cluster.
To create a worker node and add it to the swarm:
  1. Run the node configuration script to designate the system as a worker node:
    ./ -n worker
    The firewall on the CentOS system is updated to allow the node to communicate within the swarm.
  2. Add the portal label to all worker nodes using the following command:
    docker node update --label-add portal=true <node-id>
    Obtain the node ID using the following command:
    docker node ls
  3. On the manager node (the system where you installed API Portal), retrieve the join token for a worker node:
    docker swarm join-token worker
    Docker outputs the command to execute on the new system:
    To add a worker to this swarm, run the following command: docker swarm join --token <tokenContent>
  4. Copy the output of this command.
  5. Access the worker node that you want to add to the swarm and execute the copied command.
  6. To check how many services have started, run the following command:
    docker service ls
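When many workers join at once, the labeling in step 2 can be scripted by parsing captured docker node ls output. The helper below is hypothetical; it prints the update commands rather than running them, and the sample text (using IDs from this article) stands in for a live swarm:

```shell
# Hypothetical helper: turn captured `docker node ls` output into the
# labeling commands from step 2. It prints the commands instead of running
# them; the sample text stands in for a live swarm.
label_cmds() {
  awk 'NR > 1 {print "docker node update --label-add portal=true " $1}'
}
sample='ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
fe9kgbgntm57io25jq0g0brxi node1 Ready Active Leader
nbnyw79v3qyzknjuw5wcp2jwg node2 Ready Active'
printf '%s\n' "$sample" | label_cmds
```

In a live swarm you would pipe the real command output instead of the sample, and execute the printed lines on the manager node.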
If the services are not starting, check the portal-data service logs using the following command:
docker service logs -f portal_portal-data
An issue exists if you see the following message:
nc: bad address 'portaldb'
portal-data: attempt (11 / 60) to wait for portaldb to become available: portaldb:5432...
Follow these steps if the above message displays:
  1. Identify the node that has the issue. Log in to each worker node and check the Docker logs for the message shown above.
  2. To look at the containers on each node, run the following command:
    docker ps
  3. To obtain logs for the container, run the following command:
    docker logs <container-id>
    If you see the same message in the container logs, then this node is affected. Remove the portal label from this node. You can do this from the manager node using the following command:
    docker node update --label-rm portal <node-id>
  4. Restart the Docker daemon on the affected nodes using the following command:
    systemctl restart docker
  5. Re-add the portal label to the nodes using the following command:
    docker node update --label-add portal=true <node-id>
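Spotting the affected node can be partially automated. The following hypothetical helper scans a captured container log for the bad-address symptom shown above; the log text is a stand-in sample:

```shell
# Hypothetical triage helper: scan a captured container log for the
# "bad address" symptom shown above. The log text is a stand-in sample.
affected() {
  if grep -q "bad address 'portaldb'"; then
    echo "affected: remove the portal label, restart docker, re-add the label"
  else
    echo "log looks clean"
  fi
}
log="nc: bad address 'portaldb'
portal-data: attempt (11 / 60) to wait for portaldb to become available"
printf '%s\n' "$log" | affected
```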
Remove a Node from the Swarm
Scaling down (reducing capacity) is performed by removing a node from the swarm.
To remove a node from the Swarm, complete the following:
  1. Log in to the node you want to remove. If the node is a manager node, it must first be demoted to a worker node before removal.
  2. Run the following command:
    docker swarm leave
  3. On the manager node, list the nodes in the cluster:
    docker node ls
    The node that you removed has the following attributes:
    The status of the node in the swarm is Down and the availability is Active.
  4. Copy the node ID of the node you want to remove from the cluster.
  5. Run the following command to remove the node, replacing <NODE_ID> with the copied ID:
    docker node rm <NODE_ID>
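The removal sequence above can be sketched as a dry run. Commands are echoed rather than executed; the node ID is the example used elsewhere in this article, and the demote step applies only when the node being removed is a manager:

```shell
# Dry-run sketch of the removal sequence; commands are echoed, not executed.
# The node ID is the example from this article, and the demote step applies
# only when the node being removed is a manager.
NODE_ID=nbnyw79v3qyzknjuw5wcp2jwg
IS_MANAGER=true
run() { echo "+ $*"; }            # swap for real execution when ready
if [ "$IS_MANAGER" = true ]; then
  run docker node demote "$NODE_ID"   # run on a manager node
fi
run docker swarm leave                # run on the node being removed
run docker node rm "$NODE_ID"         # run back on a manager node
```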
How to Scale Services Across Nodes
Node labels provide service constraints and determine how the API Portal services are scaled across nodes in a swarm. See Node Labels for a list of labels that are used by API Portal.
The portal label is primarily used for scaling API Portal services.
To scale API Portal services across new nodes:
  1. Add one or more new nodes to an existing swarm by using the join-token command. See How to Add Nodes to a Swarm.
  2. Access your Swarm Manager node using SSH.
  3. List all nodes in the swarm using the following command:
    docker node ls
    A list of the available nodes in the cluster displays.
    ### output of the command:
    ID                          HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
    fe9kgbgntm57io25jq0g0brxi *            Ready    Active         Leader
    nbnyw79v3qyzknjuw5wcp2jwg              Ready    Active
    The * character indicates that you are currently connected to this node.
    The AVAILABILITY column shows whether the scheduler can assign tasks to the node. Only nodes with an Active value can be assigned tasks.
  4. Copy the ID of a new node you have added.
  5. Use the node update command to add the portal label to the node identified by the NODE_ID:
    docker node update --label-add portal=true <NODE_ID>
    For example:
    docker node update --label-add portal=true nbnyw79v3qyzknjuw5wcp2jwg
    All API Portal services that are associated with the portal label are automatically scaled onto the new node.
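Under the hood, a node label acts as a Swarm placement constraint on a service. The following dry-run sketch shows the constraint syntax; commands are echoed, not executed, and the service and image names are invented for the example:

```shell
# Illustration of how the portal label acts as a Swarm placement constraint.
# Echoed as a dry run; the service and image names are invented.
run() { echo "+ $*"; }
run docker service create --name example-svc \
    --constraint node.labels.portal==true --replicas 2 nginx:alpine
```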
Node Labels
Node labels are used to manage service constraints.
The portal label indicates the API Portal services that can be automatically scaled. For information about the services, see the "API Portal Containers" section in API Portal Architecture.
Service Name
Automatic Scale
Swarm Master Nodes
Yes; deployed on all Swarm Master nodes.
No; only one instance runs. This is the search engine.
No; only one instance runs.
No; only one instance runs.
No; only one instance runs.
No; only one instance runs.
Verify the Update
To view the updated node count:
  1. Access your Docker Swarm Manager node.
  2. Issue the following command:
    docker stack services portal
    The following code is an example output:
    ID NAME MODE REPLICAS
    fe9kgbgntm57io25jq0g0brxi portal_pssg global 1/2
    The example output indicates that the PSSG service in the portal namespace has one of two desired instances running. Because a new node was added, the additional instance is brought up on the new node.
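A quick way to spot services that have not yet reached their desired count is to parse the REPLICAS column. This hypothetical helper works on captured output; the sample line mirrors the example above:

```shell
# Hypothetical helper: parse captured `docker stack services portal` output
# and flag services running fewer instances than desired. The sample line
# mirrors the example output above.
under_replicated() {
  awk '{
    split($4, r, "/")
    if (r[1] + 0 < r[2] + 0) print $2 " is under-replicated (" $4 ")"
  }'
}
echo 'fe9kgbgntm57io25jq0g0brxi portal_pssg global 1/2' | under_replicated
```

No output from the helper means every listed service has reached its desired instance count.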