Production Network Architecture

The unique topology of a production network determines the exact configuration of the API Gateway and other common components or services.
The following diagram represents a reference architecture that your organization can adopt to implement a cloud-based Container Gateway solution. Note that this is one of many potential variations that you may adopt for your enterprise and is largely dependent on how you configure your Kubernetes architecture prior to installing the Container Gateway in your Kubernetes cluster(s).
Diagram: Production network architecture
From the Untrusted Network to the Load Balancer
Beginning from the top of the diagram, the Internet (symbolized by the cloud) represents the public or 'untrusted' network of client systems attempting to access the backend resources or services that the API Gateway protects. Before reaching the Gateway, a client request must pass through the corporate firewall and then a TCP-level load balancer. Load balancers typically reside between the clients and the Gateway, and between the Gateway and the back-end servers. Load balancing helps maintain High Availability and distributes computing tasks across a number of Gateway nodes, reducing the risk of overloading any individual Gateway node.
Ingress
To connect to the Gateway cluster in Kubernetes, the client request must next pass through an Ingress, an API object that manages external access to the Gateway services in the cluster over HTTP(S). Note that any services exposed via other protocols (for example, raw TCP) must instead use a Kubernetes Service type such as 'NodePort' or 'LoadBalancer'. For an Ingress to be operational, you must have an Ingress controller in place.
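For illustration only, the following is a minimal sketch of an Ingress resource that routes external HTTPS traffic to a Gateway Service. The host name, Secret, Service name, and annotation are hypothetical placeholders; the exact annotations and TLS handling depend on which Ingress controller you have installed (an NGINX controller is assumed here).

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: gateway-ingress                              # hypothetical name
      annotations:
        # Re-encrypt to the Gateway's HTTPS listener; annotation applies to the NGINX Ingress controller
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    spec:
      ingressClassName: nginx                            # assumes an NGINX Ingress controller
      tls:
        - hosts:
            - api.example.com                            # placeholder external host
          secretName: gateway-tls                        # placeholder TLS Secret
      rules:
        - host: api.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: gateway                        # hypothetical Gateway Service name
                    port:
                      number: 8443                       # Gateway HTTPS listen port (adjust to your Service)

Services exposed over raw TCP would instead be published with a Service of type NodePort or LoadBalancer, as noted above.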
As required, the Policy Manager may connect to a running Gateway via Ingress for the purposes of viewing logs, audits, and service metrics; however, see the 'Logging, Auditing, and Service Metrics' section for other recommended options that are more scalable and cloud-native.
The API Gateway in Kubernetes Cluster(s)
The number of Kubernetes clusters you design and deploy for your Gateway solution is entirely up to your enterprise's unique requirements. You may want to host all your backend services in the same cluster, or you may want to separate them into their own individual clusters. The final number of clusters you plan will depend on how you prioritize criteria such as cost efficiency, ease of manageability, application security, and application resilience. For the purposes of the reference architecture, we show a single, large shared cluster. High Availability strategies revolving around pods, Gateway node replicas, zones, and regions are discussed here.
The following describes some of the major components of the API Gateway that live in the Kubernetes cluster.
Gateway Traffic Processing
The Gateway traffic processing node, or runtime component, runs in a Docker container within a Kubernetes pod. Recall that when a client request message is received, the Gateway processing node executes a service resolution process that attempts to identify the targeted destination service. When a published service is resolved, the Gateway executes the policy for that service. If the policy assertions succeed, the request is routed. As required, you may scale the number of Gateway pods up or down to handle changes in traffic load.
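As a sketch of one way to scale the Gateway pods automatically, the following HorizontalPodAutoscaler keeps a hypothetical Gateway Deployment between two and six replicas based on average CPU utilization. The Deployment name and thresholds are assumptions and should be aligned with your Helm release and capacity planning; you can also simply set a static replica count in the chart's values or with kubectl scale.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: gateway-hpa                  # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: gateway                    # hypothetical Gateway Deployment name
      minReplicas: 2                     # keep at least two pods for availability
      maxReplicas: 6
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70     # scale out when average CPU exceeds 70%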
Gateway Database
The Gateway stores policies, processing audits, the Internal Identity Provider, keystores, configuration details, and other information in a MySQL database. There are two options for implementing the MySQL database:
  • Cloud-Based MySQL: Adopting a cloud-based MySQL solution will enable your API Gateway solution to be completely 'cloud-native'. While there are a number of cloud-based MySQL solutions available on the market, each with its own unique offerings, the general benefits of hosting Gateway artifacts in the cloud include database management via a cloud-based console, passing off administrative tasks to the provider (e.g., applying MySQL patches and updates or creating backups and replications via automation), and enhanced database availability spread across multiple regions.
  • External MySQL: Alternatively, you may source and install a MySQL database externally on a separate server.
The database depicted in the architecture diagram represents the external MySQL option. If you come across a MySQL container instance in a sample Gateway Helm Chart, note that it is not recommended for a production environment and is intended for testing purposes only.
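To illustrate the external option, the following is a sketch of Helm values that point the Gateway at an externally hosted MySQL instance instead of creating one in the cluster. The key names and endpoint shown are illustrative assumptions only; consult the values.yaml of the Layer7 Gateway Helm Chart version you are using for the exact schema, and store credentials in a Kubernetes Secret rather than in plain values.

    # Illustrative values fragment (key names are assumptions; check the chart's values.yaml)
    database:
      create: false                                            # do not deploy an in-cluster MySQL container
      jdbcUrl: jdbc:mysql://mysql.example.internal:3306/ssg    # placeholder external MySQL endpoint and schema
      username: gateway                                        # placeholder account
      password: changeme                                       # reference a Kubernetes Secret in practice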
Logging, Auditing, and Service Metrics
The Gateway generates service metrics to provide you with real-time insight into message processing rates and response times, and lets you filter that information by cluster node, published service, or resolution. Service metrics can also highlight policy violations and routing failures.
The Gateway also generates console log and audit records to let users monitor the activity and health of the Gateway, and the ongoing success or failure of service policy resolution. Auditing is provided for all system events, and is configurable for individual service policies. Gateway console logging is performed during runtime.
For the appliance form factor, data related to Gateway service metrics and auditing is typically stored in the Gateway SSG database, while Gateway logs are stored in the console log or log file system. This data is then viewed via the Policy Manager tool. While this setup is still available for the cloud-based Container Gateway, it does not accommodate horizontally scaling Gateways well and can create a significant performance bottleneck for a database that should primarily store policies, environment configuration, and other artifacts critical to the operation of the Gateway. For a 'database-less' cloud reference architecture, Layer7 strongly recommends 'off-boxing' logs, audits, and service metrics to cloud-ready external tools dedicated to the collection, indexing, and/or analysis of such data. More importantly, these tools are better suited to horizontal scalability and high availability in a cloud infrastructure. Examples include:
  • InfluxDB for Gateway service metrics
  • Elasticsearch and Logstash/Fluentd for logs and audits
In the reference architecture, logging, auditing, and service metrics stores are configured and discovered as a Kubernetes service in the cluster.
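As an illustration of that in-cluster discovery, the following sketch defines a ClusterIP Service in front of a hypothetical InfluxDB deployment; Gateway pods and dashboard tools can then reach the metrics store by its stable DNS name (for example, influxdb.monitoring.svc.cluster.local). The name, namespace, and labels are placeholders.

    apiVersion: v1
    kind: Service
    metadata:
      name: influxdb                   # hypothetical name, resolvable in-cluster via DNS
      namespace: monitoring            # placeholder namespace
    spec:
      type: ClusterIP
      selector:
        app: influxdb                  # must match the labels on the InfluxDB pods
      ports:
        - name: http
          port: 8086                   # default InfluxDB HTTP API port
          targetPort: 8086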
To learn more about how you may 'off-box' or externalize this data from the Gateway, see the related documentation.
In-Memory Data
In-memory data grid services, such as a Hazelcast grid, can help augment scalability and high availability for your Gateway pods in a Kubernetes cluster. An optional Gateway component in the reference architecture, in-memory data is configured and discoverable as a Kubernetes service in the cluster.
A number of Gateway features and policy assertions rely on in-memory data to operate. You can also read more about Hazelcast and how it can be connected to the API Gateway on a number of cloud platforms (as of Gateway Version 10.0 CR2, any platform besides Kubernetes falls into Layer7's best-effort support category) here. For the latest sample configuration of Hazelcast for the Container Gateway deployed to Kubernetes, see the Layer7 Gateway Helm Chart GitHub repository.
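As a sketch of how an external Hazelcast grid might be exposed for the Gateway to discover, the following Service fronts a hypothetical Hazelcast deployment on the default client port; the Gateway's Hazelcast client configuration would then reference the Service's DNS name. The names and labels are assumptions, and the sample configuration in the Layer7 Gateway Helm Chart repository remains the authoritative reference.

    apiVersion: v1
    kind: Service
    metadata:
      name: hazelcast                  # hypothetical name referenced by the Gateway's Hazelcast client config
    spec:
      type: ClusterIP
      selector:
        app: hazelcast                 # must match the labels on the Hazelcast pods
      ports:
        - name: hazelcast
          port: 5701                   # default Hazelcast client/member port
          targetPort: 5701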
Dashboard Service
After adopting a cloud-based methodology to collect and process Gateway audits, console logs, and service metrics as described here, the next step your enterprise may take is to adopt a cloud-friendly dashboard tool to analyze, query, and visualize this data with a centralized interface. An optional Gateway component in the reference architecture, a dashboard is configured and discoverable as a Kubernetes service in the cluster. Third-party dashboard solutions come in many different forms with different specializations. The Layer7 reference architecture demonstrates the use of some common examples.
Analyzing Service Metrics
The Grafana dashboard can be used to visualize and analyze service metrics for the Gateway and can serve as an overall monitoring system for your Container Gateway in the cloud. For Grafana to ingest service metrics data from your Gateway pods, that data must first be collected by and stored in a time series database such as InfluxDB.
A sample configuration of Grafana and InfluxDB can be found in the Layer7 Gateway Helm Chart GitHub repository.
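For example, a Grafana data source provisioning file along the following lines could point Grafana at an in-cluster InfluxDB service that holds the Gateway's service metrics. The URL and database name are placeholders and must match your InfluxDB deployment; the sample in the Helm Chart repository is the authoritative configuration.

    # Hypothetical Grafana provisioning file (mounted, for example, under /etc/grafana/provisioning/datasources/)
    apiVersion: 1
    datasources:
      - name: Gateway-Metrics                                      # display name in Grafana
        type: influxdb
        access: proxy
        url: http://influxdb.monitoring.svc.cluster.local:8086     # placeholder in-cluster InfluxDB URL
        database: serviceMetrics                                   # placeholder database holding Gateway metrics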
Analyzing Logs and Audits
Kibana, a data visualization dashboard, can be used to visualize and analyze log and audit data for your Container Gateway in the cloud. For Kibana to ingest log and audit data from your Gateway pods, the data must first be processed by either Logstash or Fluentd and then stored and indexed by Elasticsearch; Kibana is then used to query and visualize that data in a meaningful way for your enterprise.
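As one hedged example of the log-shipping side, a Fluentd output stanza (wrapped here in a ConfigMap) could forward Gateway container logs to an in-cluster Elasticsearch service for indexing, where Kibana can then query them. The match tag, Service DNS name, and namespace are placeholders that depend on how your log collector is deployed.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-gateway-output     # hypothetical name
      namespace: logging               # placeholder namespace
    data:
      gateway-output.conf: |
        # Forward records from Gateway containers to Elasticsearch
        <match kubernetes.var.log.containers.gateway-**>
          @type elasticsearch
          host elasticsearch.logging.svc.cluster.local   # placeholder Elasticsearch Service DNS name
          port 9200
          logstash_format true                           # write time-based indices that Kibana can query
        </match>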
Technologies and best practices for enterprise-level logging and monitoring are always evolving. If you decide to integrate a cloud-friendly dashboard service into your Gateway solution architecture, Layer7 strongly recommends that you work with the vendors directly to understand the current options and trends. For example, while not described in Layer7's reference architecture, you may find simpler solutions for viewing logs in the cloud, such as adding Fluent Bit to the InfluxDB and Grafana combination (i.e., 'FIG').
Secure Token Service
An API Gateway deployment can be complemented with a Secure Token Service (STS), such as the Layer7 OAuth Toolkit (OTK), which can be discovered as a service in the Kubernetes cluster. The Layer7 Helm Chart GitHub repository contains a sample configuration that deploys both the Gateway and OTK together (see the ./gateway-sts folder and README for more information).