Pre-Installation Tasks

This section provides information that is required for setting up the environment to install DX Platform:
You must execute all the kubectl commands from the master node.
Deploy the Kubernetes Cluster:
Ensure that a compatible version of Kubernetes is deployed. To deploy Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (Amazon EKS), contact the Broadcom Customer Success Team.
Obtain a Wildcard DNS:
A wildcard DNS entry (for example, *.apm.fs.ag.com) allows access to an on-premises DX APM deployment (through a browser or for agent connectivity) using a meaningful URL. Point the wildcard DNS to the static IPs of all the nodes that are running the Ingress Controller.
Run the following command from the Kubernetes master to get the IPs of all the nodes running the Ingress Controller.
Obtain Ingress Nodes
kubectl get pods -n ingress-nginx -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE
default-backend-76c6b954d9-674sq   1/1     Running   3          2d    10.233.76.235    cho4-04   <none>
ingress-nginx-controller-sj4x4     1/1     Running   4          73d   10.233.83.95     cho4-01   <none>
ingress-nginx-controller-zkmv4     1/1     Running   3          73d   10.233.119.236   cho4-02   <none>
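The node names in the NODE column can also be pulled out programmatically. The following is a sketch only, assuming the default `kubectl get pods -o wide` column layout (NODE is the seventh field); the `ingress_nodes` helper name is illustrative, not part of the product:

```shell
# Illustration: extract the NODE column for ingress-nginx controller pods
# from 'kubectl get pods -o wide' output (default layout:
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED-NODE).
ingress_nodes() {
  awk '/^ingress-nginx-controller/ {print $7}'
}

# Demonstrated here on the sample output above; in practice, pipe live output:
#   kubectl get pods -n ingress-nginx -o wide --no-headers | ingress_nodes
ingress_nodes <<'EOF'
default-backend-76c6b954d9-674sq 1/1 Running 3 2d 10.233.76.235 cho4-04 <none>
ingress-nginx-controller-sj4x4 1/1 Running 4 73d 10.233.83.95 cho4-01 <none>
ingress-nginx-controller-zkmv4 1/1 Running 3 73d 10.233.119.236 cho4-02 <none>
EOF
```

Resolve the printed node names to their static IPs for the wildcard DNS entry.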
Configure the Load Balancer:
For a production deployment, we recommend configuring the load balancer to point to the static IPs of the nodes that are running the Ingress Controller. In this case, the wildcard DNS points to the load balancer. To configure a load balancer, contact your network administrator.
For a DX Platform installation using secure routes, ensure that the load balancer is listening on port 443. For non-secure routes, ensure that the load balancer is listening on port 80.
(Optional) Obtain an SSL Certificate:
For secure communication between DX APM and the outside world, ensure that you have an SSL certificate (.crt) from your certificate authority and the corresponding private key (.key), neither of which is password protected. If your key and certificate are in a different format (for example, .pfx or .cer), contact Broadcom Support for further assistance.
For non-production environments, you can install a self-signed certificate using the DX Platform installer.
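For reference, a self-signed wildcard certificate for a sandbox can also be generated manually with openssl. This is a sketch for non-production use only, not the installer's procedure; the DNS name is the example wildcard from this page and should be replaced with your own:

```shell
# Sketch: create an unencrypted private key and self-signed wildcard
# certificate (*.apm.fs.ag.com is the example wildcard DNS from above).
# -nodes leaves the key without password protection, as required.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout wildcard.key -out wildcard.crt \
  -subj "/CN=*.apm.fs.ag.com"

# Inspect the resulting certificate subject.
openssl x509 -in wildcard.crt -noout -subject
```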
Prepare the Network File System (NFS) Service:
The DX Platform services store the application-related configurations, metrics, topology, and log files in a common NFS storage that is accessible from all the nodes. Preferably, run the NFS server on a node that is neither a master nor an Elasticsearch node. For small deployments, the NFS server can be deployed on the master node.
  • Disable antivirus scanning for the NFS share.
  • Synchronize the clocks on the cluster systems with Network Time Protocol (NTP). Systems with unsynchronized time cause unexpected behavior of the software.
Perform the following steps on all the nodes, including the Elasticsearch nodes, and the master node if it is schedulable.
Follow these steps:
  1. Install the nfs-utils package and its dependencies on all the nodes.
    yum install -y nfs-utils
  2. Enable NFS.
    systemctl enable nfs
  3. Start the NFS service after the installation.
    systemctl start nfs
(Only on the NFS Server) Open the Ports on the NFS Server :
Before you start the installation, ensure that the following ports on the NFS server are open to allow access to the NFS services between the nodes:
Choose any node that is neither a master nor an Elasticsearch node as your NFS server, and perform the following steps on that server.
firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=111/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --permanent --add-port=20048/tcp
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
firewall-cmd --list-services
For more information about ports, see the Ports Reference section.
You must restart the ingress-nginx-controller pod after every firewall port change:
kubectl -n ingress-nginx delete pod $(kubectl -n ingress-nginx get pod | grep ingress-nginx-controller | awk '{print $1}') --force --grace-period=0
(Only on the NFS Server) Configure the NFS Base Directory
Perform the following steps to configure the base directory of the NFS server.
Choose any node that is neither a master nor an Elasticsearch node as your NFS server, and perform the following steps on that server.
Follow these steps:
  1. Create and expose a directory on your NFS server for the DX Platform data. For example: /nfs/ca/dxi.
    mkdir -p /nfs/ca/dxi
  2. Note this NFS base directory name to provide during installation.
  3. Run the following command on the master to get the list of nodes:
    kubectl get nodes
  4. Note the internal IP addresses of all the hostnames that are listed under the NAME column, including the master node. For the internal IP addresses, contact your administrator.
  5. Run the following command:
    vi /etc/exports
  6. Add an entry to /etc/exports in the following format to export the NFS base directory, where <basedir> refers to the base directory of the NFS directories and <node1> refers to a Kubernetes node:
    <basedir> <node1>(rw,sync,no_root_squash,no_all_squash)
    For example:
    # contents of /etc/exports
    /nfs/ca/dxi/ 172.31.25.210(rw,sync,no_root_squash,no_all_squash)
    /nfs/ca/dxi/ 172.31.45.207(rw,sync,no_root_squash,no_all_squash)
    /nfs/ca/dxi/ 172.31.46.84(rw,sync,no_root_squash,no_all_squash)
    /nfs/ca/dxi/ 172.31.13.15(rw,sync,no_root_squash,no_all_squash)
    /nfs/ca/dxi/ 172.31.8.184(rw,sync,no_root_squash,no_all_squash)
  7. Export the base directory:
    Important!
    Export the Base Directory to all the nodes in the cluster. Every server in the cluster must be able to read/write to this base directory.
    exportfs -ra
    Once you export the base directory, the NFS directories that you create after the installation are available to all of the Kubernetes nodes.
  8. Verify that the base directory is exported successfully. Run this command on each of the nodes in the Kubernetes cluster, where <node> is the node where you created the base directory (the NFS server):
    showmount -e <node/IP Address>
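For clusters with many nodes, the /etc/exports entries in step 6 can be generated rather than typed by hand. A minimal sketch, where the `exports_entries` helper name and IP list are illustrative:

```shell
# Sketch: print one /etc/exports entry per node IP for the NFS base directory.
exports_entries() {
  basedir="$1"; shift
  for ip in "$@"; do
    printf '%s %s(rw,sync,no_root_squash,no_all_squash)\n' "$basedir" "$ip"
  done
}

# Example with IPs from the sample above; append the output to /etc/exports
# on the NFS server, then run 'exportfs -ra' as in step 7.
exports_entries /nfs/ca/dxi/ 172.31.25.210 172.31.45.207 172.31.46.84
```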
To use a different network storage type, contact the Broadcom Customer Success Team.
Label Nodes to Deploy Elasticsearch:
The DX Platform installer configures the Elasticsearch, Kafka, and ZooKeeper pods to run on nodes with specific labels. Label every node where Elasticsearch is to be deployed. Run the following command to check the existing labels:
kubectl get nodes --show-labels
Do not label any of the Kubernetes master nodes.
For a Single Elasticsearch Node Deployment (Small Deployment):
kubectl label nodes <node_name_1> dxi-es-node=master-data-1
Replace <node_name_1> with your Kubernetes node name.
For a 3 Elasticsearch Nodes Deployment (Medium Deployment):
kubectl label nodes <node_name_1> dxi-es-node=master-data-1
kubectl label nodes <node_name_2> dxi-es-node=master-data-2
kubectl label nodes <node_name_3> dxi-es-node=master-data-3
Modify the max_map_count Parameter for Elasticsearch:
Elasticsearch has certain kernel parameter requirements and uses local storage (the /dxi directory) for performance reasons. Perform these steps for Elasticsearch to start successfully. As the root user, set the vm.max_map_count parameter to 262144 on every node where Elasticsearch is to be deployed.
Follow these steps:
  1. Update the /etc/sysctl.conf file with the following value:
    vm.max_map_count=262144
  2. Run the following command to apply the changes without restarting the node.
    sysctl -q -w vm.max_map_count=262144
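To confirm that the setting took effect on a node, the live value can be read back from /proc. This is a quick verification sketch, not part of the official steps:

```shell
# Read the current kernel value; after the steps above, every Elasticsearch
# node should report 262144.
# (Equivalently: sysctl -n vm.max_map_count)
cat /proc/sys/vm/max_map_count
```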
Increase the Max Open Files Limit to 65536:
Perform the following steps as the root user on every node where Elasticsearch is to be deployed. Ensure that ulimit is set to 65536 for Elasticsearch to start successfully.
Follow these steps:
  1. Open the /etc/sysctl.conf file and add the following line at the end of the file:
    fs.file-max=65536
  2. Run the following command to apply the sysctl limits:
    $ sysctl -p
  3. Add the following lines to the /etc/security/limits.conf file:
    * soft nproc 65536
    * hard nproc 65536
    * soft nofile 65536
    * hard nofile 65536
  4. Log out and log back in for the changes to take effect.
  5. Run the following command to check the soft limits.
    $ ulimit -a
  6. Run the following command to check the hard limits.
    $ ulimit -Ha
  7. Also, verify that the value of max user processes (-u) is 65536.
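The open-files limit set above can also be checked individually. A quick sketch; run it in a fresh login shell so that the limits.conf changes apply:

```shell
# Soft and hard limits on open files (nofile); both should print 65536
# after the change above takes effect.
# (The nproc limit can be checked in the 'max user processes' row of
# 'ulimit -a'.)
ulimit -Sn
ulimit -Hn
```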
(Optional) Prepare SMTP Server Credentials:
The DX Platform installation configures an SMTP server for sending email notifications. This SMTP server must be accessible from all of the nodes in the Kubernetes cluster. Note the credentials and hostname of your SMTP server to use during the installation.
You can enter the SMTP credentials at any time after installation.
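As a quick pre-check, TCP reachability of the SMTP host from a node can be probed before installation. A sketch using bash's /dev/tcp; the hostname and port below are placeholders for your own server:

```shell
# Sketch: succeed if a TCP connection to host:port opens within 5 seconds.
# smtp.example.com and 587 are placeholders for your SMTP hostname and port.
smtp_reachable() {
  timeout 5 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

smtp_reachable smtp.example.com 587 && echo reachable || echo unreachable
```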
(Optional) Verify that the Environment Dependencies are Running:
Run the following commands to check the status of the services. The status must be Active (Running). If the status is not active, then check the logs for these services to troubleshoot the issue.
On Each Kubernetes Node:
Check Kubernetes. The status must be Active (Running).
systemctl status kubelet
On Master Node:
Run the following command to check the status of the nodes. The status of the nodes must display as Ready.
kubectl get nodes
If the status is Not Ready, run the command to check the status of the service, and then check the logs before you restart the service.
Set Up NTP on All Nodes:
To ensure that there are no clock-skew issues between the APM MOM and Collectors, you must configure NTP on all the nodes.
Follow these steps:
  1. On all the nodes, install NTP and set the timezone to UTC.
  2. Choose one node as the main time server.
  3. Synchronize the clocks of the remaining nodes in your cluster with the node designated as the main time server.
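The steps above can be sketched with chrony, assuming chrony as the NTP implementation (the default on RHEL/CentOS 7 and later); the subnet and hostname are placeholders:

```
# On all nodes: install chrony and set the timezone to UTC.
yum install -y chrony
timedatectl set-timezone UTC

# On the node chosen as the main time server: allow clients from the
# cluster subnet (placeholder subnet shown).
echo 'allow 172.31.0.0/16' >> /etc/chrony.conf

# On the remaining nodes: sync against the designated time server
# (placeholder hostname shown).
echo 'server timeserver.example.com iburst' >> /etc/chrony.conf

# On all nodes: enable and start the service, then verify with 'chronyc sources'.
systemctl enable --now chronyd
```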