Install on a Single Node
You can install Digital Operational Intelligence on a single node for demonstration purposes. To install Digital Operational Intelligence in a production environment, see Install on Multiple Nodes.
Install Digital Operational Intelligence on a Single Node
Note the following points before you begin the installation.
- You can run the installer as a non-root user. However, all prerequisite steps, including installing Docker and OpenShift, must be completed as root.
- To change a response in the installer, type back. To cancel the installation, type quit.
- If the installation stops, kill the process, then restart the installation. Run the following commands (a consolidated sketch follows this list):
  ps -aef | grep installer
  kill -9 <process ID from the previous command>
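A consolidated sketch of the same cleanup, assuming the installer binary is named digital_oi_installer.bin (as in the steps that follow); the bracketed grep pattern and the pkill alternative are conveniences, not product requirements:

  # Find the installer process; the [d] keeps grep from matching itself
  ps -aef | grep '[d]igital_oi_installer'
  kill -9 <process ID from the previous command>
  # Equivalent one-step alternative:
  pkill -9 -f digital_oi_installer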
Follow these steps:
- Download the installer and image TAR from the CA Support site.
- Verify that the DNS service is running by entering the following command on each node in the cluster, including the master node: systemctl status dnsmasq. The DNS service should be Active.
- Unzip the file as follows: tar -xvf <filename>.tar.gz
- Copy the installer to a Linux system.
- To run the installer as a non-root user, run the following commands on the master node in the OpenShift cluster:
- Create a user: useradd <username>. Example: useradd DOIuser. Give this user access to the NFS base directory or Host Path.
- Switch to that user: su - <username>. Example: su - DOIuser
- Run the following commands on the OpenShift node:
  chmod u+x digital_oi_installer.bin
  ./digital_oi_installer.bin
- Provide the following information during installation:
- Project URL: Provide the URL to the OpenShift (OS) instance where the project and user exist. Specify the fully qualified domain name (FQDN) in the URL. The FQDN is case-sensitive.
- Project User Name, Project User Password: Specify the username and password for the account that owns the project. The project name, user name, and user password must match the information that you provided when you created the OpenShift project.
- Specify N when prompted to select the type of registry where product images are stored:
- N - Retrieve product images from one of the following registry types. Do not select the Y option; the option to retrieve product images from the OpenShift Container Registry (Default) is not supported in this release.
- CA Technologies public registry: doi.packages.ca.com/<version>. The product images are large (over 11 GB) and can take a long time to download from the public registry over slow networks. If network access is slow, consider downloading the product images to a local Docker registry.
- Local Docker registry: Specify the location of the registry when prompted.
- Select Single Node as the installation size.
- Specify if you want to enable Self Monitoring. By default, Self Monitoring is disabled. Small, non-production environments can store data in the Elasticsearch and Kibana components that are installed with CA Digital Operational Intelligence. For production environments, install Elasticsearch and Kibana on a separate system for self monitoring.
- If you are using a separate Elasticsearch instance for monitoring data, enter the following information:
- Elasticsearch IP address
- Elasticsearch port. For more information, see the Install Elasticsearch Externally for Self Monitoring section.
- Specify a user with Cluster Admin privileges.
- Provide the hostname or IP address of the node that hosts the OpenShift router. Typically, the master node hosts the OpenShift router.
- Specify the following details for the PostgreSQL database that stores tenant data, and information for the data science platform:
- Postgres Database Password
- Postgres Database Port (Default: 5432)
- The Postgres Pod is not deployed until you run the script to configure Persistent Volumes. See Create NFS Directories and OpenShift Persistent Volumes.
- If you specify a PostgreSQL port that is allocated to another application during installation, the PostgreSQL template fails to deploy.
- Specify the following information for the Agile Operations Analytics - Base Platform (AO Platform).
The Agile Operations Analytics Base Platform provides common services to CA Technologies products. These common services include Data Studio and Jarvis Data Lake/Analytics, which are based on Elasticsearch, Kibana and Apache Spark.
- AO Platform database name (Default: aoplatform)
- AO Platform database user (Default: aopuser)
- AO Platform database user password
- Specify the following tenant and administrator information when prompted:
- Master Administrator Password
- Global Administrator ID, Password, and Email
- Tenant Name: Specify the name of the initial tenant that the deployment process creates. You specify this tenant name when you log in to Digital Operational Intelligence for the first time.
- Tenant Administrator ID, Password, and Email. The password must be 6-25 characters long with at least four letters, one number, and one special character. The following special characters are supported: !, @, #, $, %, ^, ., &, *, (, ), _, + (a sample shell check of this policy follows these steps)
Check the status of the installation in /<installation folder>/digital_oi_installer.log. More log files are written in the following locations:
- /opt/CA/digital_oi/
- /opt/CA/digital_oi/_CA digital_oi <version>_installation/Logs/
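The password policy above can be verified with a short bash script. This is an illustrative sketch only; the script, its variable names, and its messages are not part of the product:

  #!/bin/bash
  # Check a candidate password against the documented policy:
  # 6-25 characters, at least four letters, one number, and one
  # supported special character.
  pw="$1"
  special='[@#$%^.&*()_+!]'    # supported special characters
  letters=$(grep -o '[A-Za-z]' <<<"$pw" | wc -l)
  if (( ${#pw} >= 6 && ${#pw} <= 25 && letters >= 4 )) \
     && [[ $pw == *[0-9]* ]] && [[ $pw =~ $special ]]; then
      echo "password meets the documented policy"
  else
      echo "password violates the documented policy"
  fi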
Create NFS Directories and OpenShift Persistent Volumes
The installer adds a script called createNFSDirsForPV.sh to the <install>/bin directory (Default: /opt/CA/digital_oi/bin). This script creates the required directories that are mapped to Persistent Volumes, and sets the appropriate permissions.
Run this script before you deploy the OpenShift template.
Create NFS Directories
Create NFS directories for use by Persistent Volumes on the master node of the OpenShift cluster.
If you want to create the NFS directories on a different node, copy the scripts to that node before you run them.
- From /opt/CA/digital_oi/bin, copy createNFSDirsForPV.sh to the NFS server that you want to use by running this command: scp createNFSDirsForPV.sh root@nfserver:/root/
- Provide the root password when prompted.
Follow these steps:
- Log in as a user with cluster-admin privileges.
- Navigate to <install>/bin (Default: /opt/CA/digital_oi/bin).
- Run the following command: ./createNFSDirsForPV.sh
- Verify this procedure by completing the following steps:
- Verify that the following directories exist in the NFS Base Folder that you specified during installation:
- dspintegrator-logs (this directory is available with the CA Digital Operational Intelligence 1.3.1 release)
- In the OpenShift web console, go to Storage. Verify that the Persistent Volume Claims are successfully bound to the Persistent Volumes.
- Verify that the correct Persistent Volumes were created by running the following command: oc get pv. The following table lists the Persistent Volumes:

| Component | Persistent Volume Name | Size | Access Mode | Reclaim Policy | Used for... |
|---|---|---|---|---|---|
| | acn-correlation-logs | 1Gi | RWO | Retain | Log files |
| Admin UI (AXA-AdminUI) | adminui-logs | 1Gi | RWO | Retain | Log files |
| | adminui | 1Gi | RWO | Retain | Mappings between tenants and Kibana |
| | adminui-tomee-logs | 1Gi | RWO | Retain | Log files |
| ActiveMQ | amq | 1Gi | RWO | Retain | ActiveMQ database |
| | analyticsjobs-config | 1Gi | RWO | Retain | Configuration data for the analyticsjob pod |
| | axa-dxc-logs | 1Gi | RWO | Retain | Log files |
| | axa-transformer-logs | 1Gi | RWO | Retain | Log files |
| Self Service Dashboards | couch-data | 1Gi | RWO | Retain | Data used by the self service dashboards |
| | ldds-web-logs | 1Gi | RWO | Retain | Log files |
| Capacity Analytics (CPA) | cpa-logs | 1Gi | RWO | Retain | Log files |
| | cpa-security | 1Gi | RWO | Retain | Location of the uim.jks keystore, which is used in Capacity Analytics configuration |
| | doireadserver-logs | 1Gi | RWO | Retain | Log files |
| | doireadserver-tomee-logs | 1Gi | RWO | Retain | Log files |
| Data Science Platform (DSP) | dsp-logs | 100Mi | RWO | Retain | Log files |
| | dspintegrator-logs | 100Mi | RWO | Retain | Log files (this PV is available with the 1.3.1 release) |
| | dsp-maturation-data | 100Mi | RWX | Retain | Log files |
| | dsp-model-data | 100Mi | RWX | Retain | Log files |
| Self Monitoring | elastalert-config | 1Gi | RWO | Retain | Configuration for the ElastAlert component |
| | elastalert-rules | 1Gi | RWO | Retain | ElastAlert rules |
| | filebeat-config | 1Gi | RWO | Retain | Filebeat configuration |
| | filebeat-data | 1Gi | RWO | Retain | Parsed log data |
| | filebeat-logs | 1Gi | RWO | Retain | Log files |
| | metricbeat-modules | 1Gi | RWO | Retain | Metricbeat configuration |
| Elasticsearch(*) | elasticsearch-data-1 | 1Gi | RWO | Retain | Shared volume for all Elasticsearch nodes |
| Generic API Connector | genericapiconnector-data | 1Gi | RWO | Retain | Data source profiles |
| | genericapiconnector-logs | 1Gi | RWO | Retain | Log files |
| Hadoop | hadoop-data-0 | 2Gi | RWX | Retain | Volume for client node manager 0 |
| | hadoop-data-1 | 2Gi | RWX | Retain | Volume for client node manager 1 |
| | incidentmanagement-logs | 1Gi | RWO | Retain | Log files |
| Predictive Insights | integrationgateway-logs | 1Gi | RWO | Retain | Log files |
| CA Jarvis | jarvis-api-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-elasticsearch-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-indexer-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-kafka-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-kron-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-utils-logs | 1Gi | RWO | Retain | Log files |
| | jarvis-verifier-logs | 1Gi | RWO | Retain | Log files |
| Log Collector | log-collector-logs | 1Gi | RWO | Retain | Log files |
| | logcollector | 1Gi | RWO | Retain | Log Collector data |
| Log Parser | log-parser | 1Gi | RWO | Retain | Log and configuration files |
| | log-parser-logs | 1Gi | RWO | Retain | Log files |
| Topology | ngtas-backup | 1Gi | RWO | Retain | Backup topology data files |
| | ngtas-data | 1Gi | RWO | Retain | Topology data files |
| Alarms | normalized-alarm-logs | 1Gi | RWO | Retain | Log files |
| Postgres | pg-data | 1Gi | RWO | Retain | PostgreSQL database server data files |
| | readserver-logs | 1Gi | RWO | Retain | Log files |
| | readserver-tomee-logs | 1Gi | RWO | Retain | Log files |
| | servicealarm-logs | 1Gi | RWO | Retain | Log files |
| | servicemanagement-data | 1Gi | RWO | Retain | |
| | servicemanagement-logs | 1Gi | RWO | Retain | Log files |
| | servicetemplate-logs | 1Gi | RWO | Retain | Log files |
| | soacorrelationengine-data | 1Gi | RWO | Retain | |
| | soacorrelationengine-logs | 1Gi | RWO | Retain | Log files |
- Verify that the Persistent Volume Claims are bound to the NFS Persistent Volumes as follows (a CLI check follows these steps):
- In the OpenShift Web Console, go to <project>, Storage.
- View the Status column. Each Persistent Volume Claim should be bound to a volume.
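You can also confirm the bindings from the command line; a minimal sketch, in which the project name is a placeholder:

  oc get pv                 # each PV should show STATUS Bound
  oc get pvc -n <project>   # each PVC should show STATUS Bound and its volume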
You can access files in a Persistent Volume by using one of these methods:
- Use the secure File Transfer Protocol (SFTP) to access files.
- Open a terminal window from the pod in OpenShift.
- To open a terminal window in OpenShift, go to Applications, Pods, pod_name, Terminal. (A CLI alternative follows this list.)
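A command-line alternative, assuming you are logged in with the oc client; the pod name and path are placeholders:

  oc rsh <pod_name>                          # open a shell inside the pod
  oc rsync <pod_name>:/<path_in_volume> ./   # copy files out of the pod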
After successful installation and after creating the NFS directories, deploy the Digital Operational Intelligence template to the OpenShift project. For more information, see Create OpenShift Objects.
Verify PostgreSQL Deployment
The installer installs and configures a PostgreSQL database to store tenant data, and information for Capacity Analytics and the data science platform. Before you deploy the OpenShift template, verify that the PostgreSQL database is running. If the PostgreSQL database is not running, template deployment may fail.
Follow these steps:
- In the OpenShift Web Console, go to <project>, Applications, Deployments, doi-postgres.
- Verify that the status of the doi-postgres pod is Active.
- Click the link in the Deployment column to view more details.
- Verify that the circle around the number of Pods is blue.
- Go to Builds, Image Streams.
- Verify that the Updated column for the doi-postgres pod includes a time, such as 2 hours ago. If the Updated column is blank, the Postgres pod did not start successfully. (A CLI check follows these steps.)
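The same check can be made from the command line; a minimal sketch, assuming the deployment is named doi-postgres as above:

  oc get pods | grep doi-postgres     # the pod STATUS should be Running
  oc logs dc/doi-postgres --tail=50   # recent startup log lines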
We recommend that you take daily backups of the PostgreSQL databases. For more information, see Backup and Restore PostgreSQL Database.
Save at least the last three days' backups on a physical drive on the node. This practice allows you to restore the databases if the Persistent Volumes are corrupted.
We recommend saving the backups on a physical drive, instead of a pod, so that you do not lose the backed-up data if you restart pods. A hedged backup sketch follows.
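In the following sketch, the pod name, database user, and backup path are placeholders, and pg_dump is assumed to be available inside the Postgres pod:

  # Dump the AO Platform database (default name: aoplatform) to a
  # dated file on a physical drive on the node
  oc exec <doi-postgres-pod> -- pg_dump -U <db_user> aoplatform \
      > /backup/aoplatform_$(date +%F).sql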
Troubleshoot PostgreSQL Deployment
PostgreSQL deployment can fail for the following reasons:
The installer starts to retrieve the product images, but the network connection is slow or fails, or retrieving the product images takes longer than 15 minutes.
Symptom: The status in <project>, Applications, Deployments, doi-postgres shows that the deployment did not complete.
Solution:
- Verify the network status and correct any issues.
- In <project>, Applications, Deployments, doi-postgres, click Deploy in the upper right corner to retrieve the PostgreSQL image manually.
The Postgres Pod has the status Error.
Symptom: In the OpenShift Web Console, check the status of the Postgres Pod in Applications, Pods. If the status is Error, the deployment may have tried to run Postgres scripts and restart the environment too soon. By default, the deployment waits 5 minutes before running scripts and restarting.
Solution: Change the value of the POSTGRES_INIT_WAIT_LOOP_COUNT variable in the Postgres Pod. (A CLI sketch follows these steps.)
Follow these steps:
- Go to Applications, Pods.
- Locate the POSTGRES_INIT_WAIT_LOOP_COUNT variable.
- Increase the value to 80 or 100. The value that you specify is multiplied by five to become the wait time in seconds. For example, if you specify 100, OpenShift waits 500 seconds (100 x 5) before running scripts and starting the environment.
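If you prefer the command line, the same change can be made with oc set env; a sketch, assuming the deployment configuration is named doi-postgres:

  # Redeploys the pod with the new wait loop count (100 x 5 = 500 seconds)
  oc set env dc/doi-postgres POSTGRES_INIT_WAIT_LOOP_COUNT=100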
The Persistent Volumes are not created or mounted correctly.
Symptoms:
- The status in <project>, Applications, Deployments, doi-postgres is Pending.
- The status in <project>, Storage for the doi-postgres Persistent Volume Claim is Pending.
Solution: Run the createNFSDirsForPV.sh script. See Create NFS Directories and OpenShift Persistent Volumes. (A quick CLI check follows.)
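A quick way to spot unbound claims from the command line; the project name is a placeholder:

  oc get pvc -n <project> | grep -i pending   # lists claims that are not bound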
Create OpenShift Objects
The installer adds the CA Digital Operational Intelligence template to the OpenShift project. The template deploys the product pods.
To complete the installation process, configure and deploy the template.
Follow these steps:
- Open the OpenShift project that you created. Click Add to Project, Browse Catalog.
- Configure and deploy the CA Digital Operational Intelligence <version> template to create the pods:
- Click the CA Digital Operational Intelligence <version> template. For example, the CA Digital Operational Intelligence 1.3 template.
- Specify the following parameters:

| Parameter | Description |
|---|---|
| PROJECT_NAME | Specify the name of the OpenShift project. This value is added as a suffix to the routes sub-domain name to make the Route Host Names unique in an OpenShift cluster. |
| OC_ROUTER_HOST | Specify the fully qualified domain name of the node that hosts the OpenShift router. |
| DOI_DOCKER_REGISTRY | Enter the location of the CA Docker registry, or specify a local registry if you copied the images: doi.packages.ca.com/<version>. For example, doi.packages.ca.com/1.3.0 |
| Database Type | Specify PostgreSQL as the database that Digital Operational Intelligence uses to store tenant data, and information for Capacity Analytics and the data science platform. In this release, only a Postgres database is supported. |
| DB_MAX_CONNECTIONS | 64. Do not change the default value. |
| DB_MAX_IDLE_CONNECTIONS | 16. Do not change the default value. |
| NODEPORT_HOST | Specify the host name of the nodePort service. The nodePort service opens a static port on each node in the cluster. This value is used when configuring product integrations. |
| ES_DATA_DIR | Specify the path to the data directory that you created for Elasticsearch in Create Directories. Example: /var/data/elasticsearch |
| ZK_DATA_DIR | Specify the path to the data directory that you created for ZooKeeper in Create Directories. Example: /var/data/zookeeper |
| KAFKA_DATA_DIR | Specify the path to the data directory that you created for Kafka in Create Directories. Example: /var/data/kafka |
| LOG_LEVEL | Specify the amount of detail in the log file. Setting the log level to DEBUG may cause performance issues. |
- Click Create. This step creates multiple pods and can take up to 30 minutes to complete. You can also deploy the template from the CLI, as sketched below.
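If you prefer the CLI, oc process can instantiate the same template; a sketch in which the template name and all parameter values are placeholders for your environment:

  oc process <template_name> \
      -p PROJECT_NAME=<project> \
      -p OC_ROUTER_HOST=<router_fqdn> \
      -p DOI_DOCKER_REGISTRY=doi.packages.ca.com/<version> \
      | oc create -f -
  oc get pods -w    # watch the pods come up; this can take up to 30 minutes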