Install on a Single Node

You can install Digital Operational Intelligence on a single node for demonstration purposes. To install Digital Operational Intelligence in a production environment, see Install on Multiple Nodes.
 
 
 
 
Install Digital Operational Intelligence on a Single Node
Note the following points before you begin the installation.
  • You can run the installer as a non-root user. However, all prerequisite steps, including installing Docker and OpenShift, must be completed as root.
  • To change a response in the installer, type back. To cancel the installation, type quit.
  • If the installation stops, kill the process, then restart the installation. Run the following commands: 
    ps -aef | grep installer
    kill -9 <process ID from the previous command>
 
Follow these steps: 
 
  1. Download the installer and image TAR from the CA Support site. 
  2. Verify that the DNS service is running by entering the following command on each node in the cluster, including the master node: 
    systemctl status dnsmasq
    The DNS service should be Active.
  3. Extract the file as follows: 
    tar -xvf <filename>.tar.gz
  4. Copy the installer to a Linux system.
  5. To run the installer as a non-root user, run the following commands on the master node in the OpenShift cluster: 
    1. Create a user: 
      useradd <username>
      Example: useradd DOIuser
      Give this user access to the NFS base directory or Host Path (see the sketch after this procedure).
    2. Switch to that user: 
      su - <username>
      Example: su - DOIuser
  6. Run the following commands on the OpenShift node:  
    chmod u+x digital_oi_installer.bin
    ./digital_oi_installer.bin
  7. Provide the following information during installation:
    • Project URL: Provide the URL to the OpenShift (OS) instance where the project and user exist. Specify the fully qualified domain name (FQDN) in the URL. The FQDN is case-sensitive.
    • Project Name: Provide the name of the project that you created in Configure OpenShift (Multi Node).
    • Project User Name and Project User Password: Specify the username and password for the account that owns the project.
      The project name, user name, and user password must match the information that you provided when you created the OpenShift project.
  8. Specify N when prompted to select the type of registry where product images are stored:
    • N - Retrieve product images from one of the following registry types: 
      • CA Technologies public registry: doi.packages.ca.com/<version>
        The product images are large (over 11 GB) and can take a long time to download from the public registry over slow networks. If network access is slow, consider downloading the product images to a local Docker registry in advance.
      • Local Docker registry: Specify the location of the registry when prompted.
    Do not select the Y option. The option to retrieve product images from the OpenShift Container Registry (Default) is not supported in this release.
  9. Select Single Node as the installation size. 
  10. Specify whether to enable Self Monitoring. By default, Self Monitoring is disabled. Small, non-production environments can store monitoring data in the Elasticsearch and Kibana components that are installed with CA Digital Operational Intelligence. For production environments, install Elasticsearch and Kibana on a separate system for self monitoring.
    • If you are using a separate Elasticsearch instance for monitoring data, enter its IP address and port when prompted.
    • If you do not specify an Elasticsearch IP address and port, monitoring data is stored in the internal Elasticsearch by default. You can later switch between the internal and external Elasticsearch and Kibana components. For more information, see the Configure Self Monitoring section.
  11. Specify a user with Cluster Admin privileges.
  12. Provide the hostname or IP address of the node that hosts the OpenShift router. Typically, the master node hosts the OpenShift router. 
  13. Specify the following details for the PostgreSQL database that stores tenant data and information for the data science platform:
    • Postgres Database Password
    • Postgres Database Port (Default: 5432)
    - The Postgres Pod is not deployed until you run the script to configure Persistent Volumes. See Create NFS Directories and OpenShift Persistent Volumes.
    - If you specify a PostgreSQL port that is allocated to another application during installation, the PostgreSQL template fails to deploy.
  14. Specify the following information for the Agile Operations Analytics - Base Platform (AO Platform):
    • AO Platform database name (Default: aoplatform)
    • AO Platform database user (Default: aopuser) 
    • AO Platform database user password
    The Agile Operations Analytics Base Platform provides common services to CA Technologies products. These common services include Data Studio and Jarvis Data Lake/Analytics, which are based on Elasticsearch, Kibana, and Apache Spark.
  15. Specify the following tenant and administrator information when prompted:
    • Master Administrator Password
    • Global Administrator ID, Password, and Email
    • Tenant Name: Specify the name of the initial tenant that the deployment process creates. You specify this tenant name when you log into Digital Operational Intelligence for the first time.
    • Tenant Administrator ID, Password, and Email
      Passwords must be 6-25 characters long, with at least four letters, one number, and one special character. The following special characters are supported: !, @, #, $, %, ^, ., &, *, (, ), _, +
    Check the status of the installation in /<installation folder>/digital_oi_installer.log. More log files are written in the following locations:
    /opt/CA/digital_oi/
    /opt/CA/digital_oi/_CA digital_oi <version>_installation/Logs/
     
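Step 5 mentions giving the non-root installer user access to the NFS base directory or Host Path, but does not show the commands. The following is a minimal sketch only; the user name DOIuser and the directory /opt/nfs/doi are placeholders, so substitute the values for your environment:
  # Create the installer user (run these commands as root)
  useradd DOIuser
  # Give the user ownership of the NFS base directory (placeholder path)
  chown -R DOIuser:DOIuser /opt/nfs/doi
  # Switch to the new user before running the installer
  su - DOIuser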
Create NFS Directories and OpenShift Persistent Volumes
The installer adds a script called createNFSDirsForPV.sh in <install>/bin (Default: /opt/CA/digital_oi/bin). This script creates the required directories that are mapped to Persistent Volumes, and sets the appropriate permissions.
Run this script before you deploy the OpenShift template. 
Create NFS Directories
Create NFS directories for use by Persistent Volumes on the master node of the OpenShift cluster.
If you want to create the NFS directories on a different node, copy the scripts to that node before you run them.
  1. From /opt/CA/digital_oi/bin, copy createNFSDirsForPV.sh to the NFS server that you want to use by running this command:
    scp createNFSDirsForPV.sh root@nfserver:/root/
  2. Provide the root password when prompted.
 
Follow these steps:
 
  1. Log in as a user with cluster-admin privileges. 
  2. Navigate to <install>/bin (Default: /opt/CA/digital_oi/bin).
  3. Run the following command: 
    ./createNFSDirsForPV.sh
  4. Verify this procedure by completing the following steps: 
    1. Verify that the following directories exist in the NFS Base Folder that you specified during installation: 
      • acn-correlation-logs
      • adminui-data
      • adminui-logs
      • adminui-tomee-logs
      • amq
      • analyticsjobs-config
      • axa-data
      • caemm-logs
      • couch-data
      • cpa-logs
      • cpa-security
      • doi-readserver-logs
      • doireadserver-tomee-logs
      • dsp-logs
      • dspintegrator-logs
         The dspintegrator-logs directory is available with the CA Digital Operational Intelligence 1.3.1 release.
      • dsp-maturation-data
      • dsp-model-data
      • elastalert-config
      • elastalert-rules
      • elasticsearch-data-1
      • filebeat-config
      • filebeat-data
      • genericapiconnector
      • hadoop-data1
      • hadoop-data2
      • incidentmanager-logs
      • integrationgateway-logs
      • jarvis
      • logcollector-data
      • logcollector-logs
      • logparser-data
      • logparser-logs
      • metricbeat-modules
      • ngtas-backup
      • ngtas-data
      • normalized-alarm-logs
      • pg-data
      • servicealarm-logs
      • servicemanagement-data
      • servicetemplate-logs
      • soacorrelation-data
      • soa-logs
    2. In the OpenShift web console, go to Storage. Verify that the Persistent Volume Claims are successfully bound to the Persistent Volumes.  
  5. Verify that the correct Persistent Volumes were created by running the following command: 
    oc get pv
    The following table lists the Persistent Volumes: 
    Component | Persistent Volume Name | Size | Access Mode | Reclaim Policy | Used for...
     | acn-correlation-logs | 1Gi | RWO | Retain | Log files
    Admin UI (AXA-AdminUI) | adminui-logs | 1Gi | RWO | Retain | Log files
     | adminui | 1Gi | RWO | Retain | Mappings between tenants and Kibana
     | adminui-tomee-logs | 1Gi | RWO | Retain | Log files
    ActiveMQ | amq | 1Gi | RWO | Retain | ActiveMQ database
     | analyticsjobs-config | 1Gi | RWO | Retain | Configuration data for the analyticsjob pod
     | axa-dxc-logs | 1Gi | RWO | Retain | Log files
     | axa-transformer-logs | 1Gi | RWO | Retain | Log files
    Self Service Dashboards | couch-data | 1Gi | RWO | Retain | Data used by the self service dashboards
     | ldds-web-logs | 1Gi | RWO | Retain | Log files
    Capacity Analytics (CPA) | cpa-logs | 1Gi | RWO | Retain | Log files
     | cpa-security | 1Gi | RWO | Retain | Location of the uim.jks keystore, which is used in Capacity Analytics configuration
     | doireadserver-logs | 1Gi | RWO | Retain | Log files
     | doireadserver-tomee-logs | 1Gi | RWO | Retain | Log files
    Data Science Platform (DSP) | dsp-logs | 100Mi | RWO | Retain | Log files
     | dspintegrator-logs | 100Mi | RWO | Retain | Log files (this PV is available with the 1.3.1 release)
     | dsp-maturation-data | 100Mi | RWX | Retain | Log files
     | dsp-model-data | 100Mi | RWX | Retain | Log files
    Self Monitoring | elastalert-config | 1Gi | RWO | Retain | Configuration for the ElastAlert component
     | elastalert-rules | 1Gi | RWO | Retain | ElastAlert rules
     | filebeat-config | 1Gi | RWO | Retain | Filebeat configuration
     | filebeat-data | 1Gi | RWO | Retain | Parsed log data
     | filebeat-logs | 1Gi | RWO | Retain | Log files
     | metricbeat-modules | 1Gi | RWO | Retain | Metricbeat configuration
    Elasticsearch(*) | elasticsearch-data-1 | 1Gi | RWO | Retain | Shared volume for all Elasticsearch nodes
    Generic API Connector | genericapiconnector-data | 1Gi | RWO | Retain | Data source profiles
     | genericapiconnector-logs | 1Gi | RWO | Retain | Log files
    Hadoop | hadoop-data-0 | 2Gi | RWX | Retain | Volume for client node manager 0
     | hadoop-data-1 | 2Gi | RWX | Retain | Volume for client node manager 1
     | incidentmanagement-logs | 1Gi | RWO | Retain | Log files
    Predictive Insights | integrationgateway-logs | 1Gi | RWO | Retain | Log files
    CA Jarvis | jarvis-api-logs | 1Gi | RWO | Retain | Log files
     | jarvis-elasticsearch-logs | 1Gi | RWO | Retain | Log files
     | jarvis-indexer-logs | 1Gi | RWO | Retain | Log files
     | jarvis-kafka-logs | 1Gi | RWO | Retain | Log files
     | jarvis-kron-logs | 1Gi | RWO | Retain | Log files
     | jarvis-utils-logs | 1Gi | RWO | Retain | Log files
     | jarvis-verifier-logs | 1Gi | RWO | Retain | Log files
    Log Collector | log-collector-logs | 1Gi | RWO | Retain | Log files
     | logcollector | 1Gi | RWO | Retain | Log Collector data
    Log Parser | log-parser | 1Gi | RWO | Retain | Log and configuration files
     | log-parser-logs | 1Gi | RWO | Retain | Log files
    Topology | ngtas-backup | 1Gi | RWO | Retain | Backup topology data files
     | ngtas-data | 1Gi | RWO | Retain | Topology data files
    Alarms | normalized-alarm-logs | 1Gi | RWO | Retain | Log files
    Postgres | pg-data | 1Gi | RWO | Retain | PostgreSQL database server data files
     | readserver-logs | 1Gi | RWO | Retain | Log files
     | readserver-tomee-logs | 1Gi | RWO | Retain | Log files
     | servicealarm-logs | 1Gi | RWO | Retain | Log files
     | servicemanagement-data | 1Gi | RWO | Retain | 
     | servicemanagement-logs | 1Gi | RWO | Retain | Log files
     | servicetemplate-logs | 1Gi | RWO | Retain | Log files
     | soacorrelationengine-data | 1Gi | RWO | Retain | 
     | soacorrelationengine-logs | 1Gi | RWO | Retain | Log files
  6. Verify that the Persistent Volume Claims are bound to the NFS Persistent Volumes as follows: 
    1. In the OpenShift Web Console, go to <project>, Storage.
    2. View the Status column. Each Persistent Volume Claim should show a status of Bound.
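As an alternative to the web console, you can verify the bindings from the command line. This is a minimal sketch, assuming the oc client is logged in to the cluster and switched to your project:
  # List the Persistent Volumes and confirm that their STATUS is Bound
  oc get pv
  # List the Persistent Volume Claims in the current project and confirm that each STATUS is Bound
  oc get pvc
  # Inspect a single claim in detail if it stays Pending
  oc describe pvc <claim name>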
You can access files in a Persistent Volume by using one of these methods: 
  • Use the secure File Transfer Protocol (SFTP) to access files.
  • Open a terminal window from the pod in OpenShift. To open a terminal window, go to Applications, Pods, pod_name, Terminal.
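If you prefer the command line to the web console terminal, the following sketch opens a shell in a pod and lists files on a mounted volume. The pod name and the path are placeholders:
  # Find the pod that mounts the volume you want to inspect
  oc get pods
  # Open a remote shell in that pod
  oc rsh <pod name>
  # Inside the pod, list the files on the mounted volume (example path only)
  ls -l /var/log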
After successful installation and NFS directory creation, deploy the Digital Operational Intelligence template to the OpenShift project. For more information, see Create OpenShift Objects.
Verify PostgreSQL Deployment
The installer installs and configures a PostgreSQL database to store tenant data, and information for Capacity Analytics and the data science platform. Before you deploy the OpenShift template, verify that the PostgreSQL database is running. If the PostgreSQL database is not running, template deployment may fail. 
 
Follow these steps:
 
  1. In the OpenShift Web Console, go to <project>, Applications, Deployments, doi-postgres. 
  2. Verify that the status of the doi-postgres pod is Active. 
  3. Click the link in the Deployment column to view more details.
  4. Verify that the circle around the number of Pods is blue.
  5. Go to Builds, Image Streams.
  6. Verify that the Updated column for the doi-postgres pod includes a time, such as 2 hours ago.
    If the Updated column is blank, the Postgres pod did not start successfully. 
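You can also perform an equivalent check from the command line. This is a minimal sketch, assuming the oc client is logged in to your project and that doi-postgres is a DeploymentConfig, as the Deployments page in the web console suggests:
  # Confirm that the doi-postgres pod is running
  oc get pods | grep doi-postgres
  # Review the deployment status and recent events
  oc describe dc doi-postgres
  # Check the image stream that backs the deployment
  oc get is | grep doi-postgres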
 We recommend that you take daily backups of the PostgreSQL databases. For more information, see Backup and Restore PostgreSQL Database.
Save at least the last three days' backups on a physical drive on the node. This allows you to restore the databases if the Persistent Volumes are corrupted.
We recommend saving the backups on a physical drive, instead of a pod, so that you do not lose the backed-up data if you restart pods.
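The supported backup procedure is described in Backup and Restore PostgreSQL Database. As an illustration only, a daily dump taken from outside the pod could look like the following sketch; the pod name, the postgres user, and the /backup directory are placeholders:
  # Find the running Postgres pod name
  oc get pods | grep doi-postgres
  # Dump all databases to a dated file on the node's physical drive (placeholder path)
  oc exec <postgres pod name> -- pg_dumpall -U postgres > /backup/doi-postgres-$(date +%F).sql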
Troubleshoot PostgreSQL Deployment
PostgreSQL deployment can fail for the following reasons: 
 
The installer starts to retrieve the product images but the network connection is slow or fails.
 
Retrieving the product images takes longer than 15 minutes.
 
Symptom: The status in <project>, Applications, Deployments, doi-postgres is Pending. 
Solution:
  1. Verify network status and correct any issues.
  2. In <project>, Applications, Deployments, doi-postgres, click Deploy in the upper right corner to retrieve the PostgreSQL image manually.
 
The Postgres Pod has the status Error
 
 
Symptom: In the OpenShift Web Console, verify the status of the Postgres Pod in Applications, Pods. If the status is Error, the deployment may have tried to run Postgres scripts and restart the environment too soon. By default, the deployment waits 5 minutes before running scripts and restarting.
 
Solution: Change the value of the POSTGRES_INIT_WAIT_LOOP_COUNT variable in the Postgres Pod. 
Follow these steps: 
  1. Go to Applications, Pods, and click the Postgres pod.
  2. Click Environment.
  3. Locate the POSTGRES_INIT_WAIT_LOOP_COUNT variable. 
  4. Increase the value to 80 or 100.
    The value that you specify is multiplied by five to become the wait time in seconds. For example, if you specify 100, OpenShift waits 500 seconds (100 x 5) before running scripts and starting the environment. 
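If you prefer the CLI to the web console, you can make the same change with oc set env. This is a minimal sketch, assuming that doi-postgres is the DeploymentConfig that owns the Postgres pod:
  # Raise the wait loop count on the Postgres deployment configuration
  oc set env dc/doi-postgres POSTGRES_INIT_WAIT_LOOP_COUNT=100
  # Confirm the new value
  oc set env dc/doi-postgres --list | grep POSTGRES_INIT_WAIT_LOOP_COUNT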
 
The Persistent Volumes are not created or mounted correctly.
 
 
Symptoms: 
  • The status in <project>, Applications, Deployments, doi-postgres is Pending.
  • The status in <project>, Storage for the doi-postgres Persistent Volume Claim is Pending.
 
Solution: Run the createNFSDirsForPV.sh and createPVs.sh scripts. See Create NFS Directories and OpenShift Persistent Volumes.
Create OpenShift Objects
The installer adds the CA Digital Operational Intelligence template to the OpenShift project. The template deploys the product pods. 
To complete the installation process, configure and deploy the template.
 
Follow these steps: 
 
  1. Open the OpenShift project that you created. Click Add to Project, Browse Catalog.
  2. Click Integration.
  3. Configure and deploy the CA Digital Operational Intelligence <version> template to create the pods:
    1. Click the CA Digital Operational Intelligence <version> template. For example, the CA Digital Operational Intelligence 1.3 template.
    2. Specify the following parameters: 
      Parameter | Description
      PROJECT_NAME | Specify the name of the OpenShift project. This value is added as a suffix to the routes sub-domain name to make the Route Host Names unique in an OpenShift cluster.
      OC_ROUTER_HOST | Specify the fully qualified domain name for the node that hosts the OpenShift router.
      DOI_DOCKER_REGISTRY | Enter the location of the CA Docker Registry, or specify a local registry if you copied the images: doi.packages.ca.com/<version>. For example, doi.packages.ca.com/1.3.0
      Database Type | Specify PostgreSQL as the database that Digital Operational Intelligence uses to store tenant data, and information for Capacity Analytics and the data science platform. In this release, only a Postgres database is supported.
      DB_MAX_CONNECTIONS | 64. Do not change the default value.
      DB_MAX_IDLE_CONNECTIONS | 16. Do not change the default value.
      NODEPORT_HOST | Specify the host name of the nodePort service. The nodePort service opens a static port on each node in the cluster. This value is used when configuring product integrations.
      ES_DATA_DIR | Specify the path to the data directory that you created for Elasticsearch in Create Directories. Example: /var/data/elasticsearch
      ZK_DATA_DIR | Specify the path to the data directory that you created for ZooKeeper in Create Directories. Example: /var/data/zookeeper
      KAFKA_DATA_DIR | Specify the path to the data directory that you created for Kafka in Create Directories. Example: /var/data/kafka
      LOG_LEVEL | Specify the amount of detail in the log file. Setting the log level to DEBUG may cause performance issues.
    3. Click Create.
      This step creates multiple pods and can take up to 30 minutes to complete.
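Because pod creation can take up to 30 minutes, you may want to watch the progress from the command line. This is a minimal sketch, assuming the oc client is switched to the project:
  # Watch pods come up as the template deploys (press Ctrl+C to stop)
  oc get pods -w
  # List any pods that are not yet running
  oc get pods | grep -v Running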