Deployment Topology

The API Portal deployment topology is flexible; the application can be deployed in single-node or multi-node configurations.
The following graphic shows the deployment topology for API Portal:
Portal Deployment Topology
The following table summarizes various topology sizes and their typical use cases:

                      POC/Demo                             Production
  Number of Nodes     1                                    3
  Use Cases           Low traffic or test environments     Production environments
Single-Node Deployment
In a single-node deployment topology, a single Docker Swarm cluster with a single Docker Swarm Manager node runs the API Portal application. This topology does not offer any fault tolerance: if the node fails for any reason, there may be production downtime and data loss.
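The commands below are a minimal, generic sketch of standing up a single-node swarm, assuming only a host with Docker installed; they are not the product-specific installation procedure, and the stack file name portal-stack.yml is an illustrative placeholder rather than a file shipped with API Portal.

    # Initialize a one-node swarm; this host becomes the sole manager (and worker)
    docker swarm init

    # Deploy an application stack from a compose/stack file
    # (portal-stack.yml is a placeholder name)
    docker stack deploy -c portal-stack.yml portal

    # Confirm that the services are running on this node
    docker service ls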
Multi-Node Deployment
A multi-node deployment has the following requirements:
  • All systems in the Docker Swarm must be able to communicate with each other. Systems that are spread across multiple data centers must also be able to communicate with each other. The network latency between the systems should be low.
  • The Docker Swarm cluster resides in a trusted zone in each data center and is not exposed publicly.
  • Docker Swarm manager nodes must have static IP addresses, as shown in the example after this list. See https://docs.docker.com/engine/swarm/admin_guide/#configure-the-manager-to-advertise-on-a-static-ip-address for more information about static IP addresses.
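As a hedged illustration of the static IP requirement, the manager can be told which address to advertise when the swarm is initialized; the address below is a placeholder for the manager's static IP.

    # Advertise the manager on a fixed address so that worker nodes
    # and other managers can always reach it
    docker swarm init --advertise-addr 10.0.0.10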
Three-Node Deployment
In a three-node deployment topology, the single Manager node runs only the dispatcher service. The two worker nodes run the rest of the services for API Portal and CA Jarvis Analytics Engine. This scenario increases processing capabilities because the services are load balanced by your global load balancer or by DNS round-robin at the DNS server. When traffic reaches the Docker Swarm Manager node, further service load balancing is achieved by an internal application load balancer.
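In generic Docker Swarm terms, a three-node cluster of this shape can be assembled as sketched below; the join token and the manager address are placeholders printed by docker swarm init on the manager node.

    # On each of the two worker nodes, join the swarm created on the manager
    docker swarm join --token <worker-join-token> 10.0.0.10:2377

    # On the manager, verify that one manager (Leader) and two workers are listed
    docker node ls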