Rehydrating Data in a Cloud Environment

If you have CA Performance Management set up in a cloud environment, you can patch operating systems from a common image instead of patching each operating system individually. The following procedures outline the necessary steps for rehydrating the CA Performance Management nodes with minimal data loss.
Verify the Prerequisites
Before rehydration, ensure your environment is in a good state.
Follow these steps:
  1. Verify that the Data Aggregator and Data Repository are connected.
    Select Administration, Data Source Settings, and then System Status.
  2. Verify that the Data Aggregator is up and running.
  3. Do one of the following steps to verify there is no backed up or cached poll data:
    • Go to the ActiveMQ web page for the Data Collectors and verify that the PRQ queue is empty:
      http://DC_Host:8161/admin/browse.jsp?JMSDestination=PRQ
    • Use the following command to verify that the PRQ queue is empty:
      DC_Install_Directory/scripts/activemqstat | grep -E "Queue|PRQ"
  4. If you have CA Virtual Network Assurance in your environment, in CA Performance Center, hover over Administration, and click Monitored Items Management: VNA Gateways. Set the Administrative Status to Down.
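The PRQ check in step 3 can be scripted so it is repeatable before each maintenance window. The following is a minimal sketch; it assumes activemqstat-style output with the queue name in the first column and the pending-message count in the second, which you should verify against your Data Collector before relying on it.

```shell
# Sketch: succeed only when the PRQ queue shows zero pending messages.
# Feed it the queue listing, e.g.:
#   "$DC_INSTALL_DIR"/scripts/activemqstat | grep -E "Queue|PRQ" | prq_is_empty
# Assumption: queue name in column 1, pending count in column 2.
prq_is_empty() {
  awk '$1 == "PRQ" { count = $2 } END { exit (count + 0 != 0) }'
}
```

If no PRQ row appears at all, the function treats the queue as empty, so keep the `grep` filter in place to confirm the listing itself worked.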
Rehydrate Each Data Collector
Rehydrate each Data Collector one at a time. Make sure each Data Collector recovers and starts polling before you rehydrate each Vertica node and the Data Aggregator. During this process, some polls are cached on the Data Collectors for a time.
Follow these steps:
  1. Build the new operating system on the Data Collector container or virtual machine.
  2. Copy the DCM_ID:
    grep "manager\-id\=" DC_Install_Directory/apache-karaf-version/etc/com.ca.im.dm.core.collector.cfg
  3. Bring down the old container or virtual machine.
  4. Give the new container or virtual machine a new IP address or name and bring it online.
  5. Reinstall the Data Collector with the DCM_ID of the original Data Collector:
    export DCM_ID="Original_DC_Host:DCM_ID"
    cd /tmp;
    rm -rf install.bin;
    wget http://DA_Host:Port/dcm/InstData/Linux/VM/install.bin;
    chmod a+x install.bin;
    ./install.bin -i silent
    The Data Collector installs, reconnects to the Data Aggregator, and starts polling.
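The grep in step 2 can be wrapped in a small helper so the manager-id value is captured cleanly for the export in step 5. This is an illustrative sketch; the function name is hypothetical, and it assumes the config file stores the value as a plain manager-id=value line.

```shell
# Sketch: pull the manager-id value out of the collector config file so it
# can be reused as DCM_ID during the silent reinstall (step 5).
# Assumption: the file contains a line of the form "manager-id=<value>".
extract_dcm_id() {
  grep '^manager-id=' "$1" | head -n 1 | cut -d= -f2-
}
```

Example use, with DC_INSTALL_DIR pointing at your installation: `export DCM_ID="$(extract_dcm_id "$DC_INSTALL_DIR"/apache-karaf-*/etc/com.ca.im.dm.core.collector.cfg)"`.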
Rehydrate Each Vertica Node
Rehydrate each Vertica node one at a time. Before you start, verify that all nodes are up and running:
/opt/vertica/bin/admintools -t list_allnodes
Follow these steps:
  1. Bring down the Vertica node:
    /opt/vertica/bin/admintools -t stop_node -s IP_Address
  2. Unmount the data directory and the catalog directory.
  3. Create a new node with the same IP address and name.
  4. Mount the data directory and the catalog directory to the new node.
    If Vertica reports that this action is unsupported, the directories are cleaned out and recovered when you start the new node. When this occurs, the RECOVERY state lasts longer.
  5. Run the validation script:
    ./dr_validate.sh -n -p drinstall.properties
    The script validates the system settings. Review and resolve any errors or warnings. You can run this script multiple times to verify that all system configuration options are set properly. The validation script may prompt you to reboot.
  6. Install Vertica from an up-and-running node:
    /opt/vertica/sbin/install_vertica -u dradmin -l /export/dradmin -d /export/data -L ./resources/vlicense.dat -Y -r ./resources/vertica-version.rpm
    Values should match those in the properties files for dr_install.sh, and point to the same resources.
  7. Start the node and verify that it is up and running:
    /opt/vertica/bin/admintools -t restart_node -s Host_Name -d DB_Name
    The state starts as DOWN, changes to REBUILDING, and then changes to UP.
  8. Repeat this procedure for each node.
  9. After all nodes are rehydrated, verify that all nodes are back up and running:
    /opt/vertica/bin/admintools -t list_allnodes
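Because each node must reach UP before you move on, the list_allnodes check lends itself to a small helper. The sketch below assumes the usual pipe-separated admintools table with the node state in the third column; column positions can differ across Vertica versions, so confirm against your own output.

```shell
# Sketch: succeed only when every node row in an admintools listing
# reports UP. Feed it the output of:
#   /opt/vertica/bin/admintools -t list_allnodes
# Assumption: pipe-separated rows, node name (v_...) in column 1,
# state (UP/DOWN/RECOVERING) in column 3.
all_nodes_up() {
  awk -F'|' '$1 ~ /v_/ { s = $3; gsub(/ /, "", s); if (s != "UP") bad = 1 } END { exit bad }'
}
```

You could poll this in a loop with a sleep between iterations while a node works through REBUILDING.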
Rehydrate the Data Aggregator
During the Vertica refresh, the Data Aggregator continues to collect data from the Data Collectors and pushes it to Vertica on the nodes that remain up. Ingestion speed is sometimes cut in half during this time. After you rehydrate the Data Collectors and Vertica, you can rehydrate the Data Aggregator.
Follow these steps:
  1. Prepare a new node for the Data Aggregator.
  2. Do one of the following steps:
    • Stop the Data Aggregator service:
      service dadaemon stop
      For RHEL 7.x, service invokes systemctl. You can use systemctl instead.
    • (Fault-tolerant environment) If the local Data Aggregator is running, run one of the following commands to shut it down and prevent it from restarting until maintenance is complete:
      • RHEL 6.x:
        service dadaemon maintenance
      • RHEL 7.x, SLES, or OL:
        DA_Install_Directory/scripts/dadaemon maintenance
    The Data Aggregator completes processing and the service stops.
  3. Move the configuration files to the new node. For more information, see Back Up Data Aggregator.
    In environments with fault tolerant Data Aggregators, use the shared data directory and reattach it. For more information, see Fault Tolerance.
    The Data Collectors are queuing data.
  4. Do one of the following steps:
    • Start the Data Aggregator service:
      service dadaemon start
    • (Fault-tolerant environment) Run one of the following commands to enable the fault-tolerant Data Aggregator so that it can start when necessary:
      • RHEL 6.x:
        service dadaemon activate
      • RHEL 7.x, SLES, or OL:
        DA_Install_Directory/scripts/dadaemon activate
    The Data Aggregator consumes the queued polls and pushes them to Vertica.
    Depending on the total outage time, all cached data is consumed and ready for reporting in approximately two times the outage time. When ActiveMQ consumption returns to normal, there is no longer a backlog.
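Because the dadaemon invocation differs by platform, a small dispatcher can keep scripts from hard-coding the wrong form. This is a sketch only: the "rhel6" argument value is an illustrative convention, and DA_INSTALL_DIR stands in for your actual Data Aggregator installation directory.

```shell
# Sketch: print the dadaemon command appropriate for the platform.
# $1 is the platform tag ("rhel6", or anything else for RHEL 7.x/SLES/OL);
# $2 is the action (stop, start, maintenance, activate).
# DA_INSTALL_DIR must be set for the non-RHEL-6 branch.
dadaemon_cmd() {
  case "$1" in
    rhel6) printf 'service dadaemon %s\n' "$2" ;;
    *)     printf '%s/scripts/dadaemon %s\n' "$DA_INSTALL_DIR" "$2" ;;
  esac
}
```

Printing the command (rather than executing it) lets an operator review it first, e.g. `eval "$(dadaemon_cmd rhel6 maintenance)"`.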
Rehydrate CA Performance Center
Finally, you can rehydrate CA Performance Center.
Follow these steps:
  1. Prepare a node for CA Performance Center.
  2. Bring down CA Performance Center:
    service caperfcenter_console stop
    service caperfcenter_devicemanager stop
    service caperfcenter_eventmanager stop
    service caperfcenter_sso stop
  3. Move the database to the new node. For more information, see Back Up Performance Center.
  4. Start CA Performance Center:
    1. Start the SSO service:
      service caperfcenter_sso start
    2. Wait one minute, then start the event manager and device manager:
      service caperfcenter_eventmanager start
      service caperfcenter_devicemanager start
    3. Wait one minute, then start the console service:
      service caperfcenter_console start
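The ordered startup in step 4 can be captured in one function so the one-minute pauses are never skipped. A minimal sketch, with two illustrative conventions: a runner prefix (pass "echo" for a dry run, or leave empty to execute) and an optional CAPC_START_PAUSE override for the pause length.

```shell
# Sketch: start the Performance Center services in the documented order.
# $1 is an optional command prefix ("echo" previews the commands instead
# of running them). CAPC_START_PAUSE (seconds) defaults to 60, matching
# the one-minute waits in the procedure.
start_performance_center() {
  run="$1"
  $run service caperfcenter_sso start
  sleep "${CAPC_START_PAUSE:-60}"
  $run service caperfcenter_eventmanager start
  $run service caperfcenter_devicemanager start
  sleep "${CAPC_START_PAUSE:-60}"
  $run service caperfcenter_console start
}
```

Running `start_performance_center echo` first is a cheap way to confirm the order before executing for real.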
Rehydrate CA Virtual Network Assurance
If you have CA Virtual Network Assurance in your environment, rehydrate it.
Follow these steps:
  1. Query the following REST URL to find the engine ID:
    http://VNA_host:8080/vna/rest/v1/admin/engines
  2. Query the following REST URL for your plug-in configuration:
    http://VNA_host:8080/vna/rest/v1/admin/engines/Engine_ID/config
  3. Stop WildFly using one of the following commands:
    service wildfly stop
    systemctl stop wildfly
  4. Back up the existing database:
    VNA_Install_Directory/tools/bin/db_backup.sh Backup_File_Name
  5. Install CA Virtual Network Assurance on the new server and restore the database from the backup:
    VNA_Install_Directory/tools/bin/db_restore.sh Backup_File_Name
  6. Reconfigure the plug-ins.
    Use the same Domain ID from the original configuration.
  7. In CA Performance Center, hover over Administration, and click Monitored Items Management: VNA Gateways. Change the entry to the new CA Virtual Network Assurance server ID. Set the Administrative Status to Up.
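The two REST queries in steps 1 and 2 share a common base path, so helpers that build the URLs keep typos out of the curl calls. A sketch under the assumptions in the procedure (VNA_host, port 8080, and the engine ID come from your environment; the function names are illustrative):

```shell
# Sketch: build the VNA admin REST URLs used in steps 1 and 2.
# $1 is the VNA host; $2 (second function) is the engine ID returned
# by the first query.
vna_engines_url() {
  printf 'http://%s:8080/vna/rest/v1/admin/engines\n' "$1"
}
vna_engine_config_url() {
  printf 'http://%s:8080/vna/rest/v1/admin/engines/%s/config\n' "$1" "$2"
}
```

Example use: `curl -s "$(vna_engines_url vna01.example.com)"`, then feed the returned engine ID into the second helper.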
Rehydrate CA Spectrum
If you rehydrate the CA Performance Management environment and want to reconnect it to an existing CA Spectrum server, complete the following procedure.
Follow these steps:
  1. Remove the CA Performance Center entries:
    cd Spectrum_Install_Directory/vnmsh
    ./connect
    ./show models | grep CAPC
    ./destroy -y model mh=0xXXXXX
    Run the destroy command for every CAPCIPDomain and CAPCTenant model found.
  2. Remove the CA Performance Center integration database from CA Spectrum:
    bash -login
    cd mysql
    cd bin
    ./mysqladmin --defaults-file=../my-spectrum.cnf -unetqos -p password drop netqos_integ
  3. Restart CA Spectrum tomcat:
    cd Spectrum_Install_Directory/tomcat/bin
    ./stopTomcat.sh
    ./startTomcat.sh
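Step 1's destroy command must be repeated for every CAPCIPDomain and CAPCTenant model, which invites a small filter over the "show models" output. This sketch assumes the model handle (the 0x... value) appears in the first column of each row; confirm that against your vnmsh output before destroying anything.

```shell
# Sketch: print the model handles of CAPCIPDomain and CAPCTenant models,
# one per line, from "show models" output fed on stdin.
# Assumption: handle (0x...) in column 1, model type elsewhere on the row.
capc_model_handles() {
  awk '/CAPCIPDomain|CAPCTenant/ && $1 ~ /^0x/ { print $1 }'
}
```

Example use from the vnmsh directory: `./show models | capc_model_handles | while read -r mh; do ./destroy -y model mh="$mh"; done`.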
Rehydrate CA Network Flow Analysis
If you have CA Network Flow Analysis in your environment, rehydrate it.
Follow these steps:
  1. Determine the database files to back up.
  2. Copy each of the target directories or files to a remote location.
  3. Back up the following databases to a remote location, using mysqldump. Back up the reporter database last, regardless of the deployment architecture.
    • Customized data_retention database (stand-alone or Harvester server): data_retention
    • harvester database (stand-alone or Harvester server): harvester
    • reporter database (stand-alone or NFA console): reporter
    mysqldump --routines --events -u root dbname --skip-lock-tables > dbbackupname.sql
  4. (Optional) Verify that the mysqldump was successful by checking that the size of the backup is over 1 KB.
  5. Restore each of the target directories or files from its remote location to its original location.
  6. Restore each of the databases. Restore the reporter database first, regardless of the deployment architecture.
    • reporter database (stand-alone or NFA console): reporter
    • Customized data_retention database (stand-alone or Harvester server): data_retention
    • harvester database (stand-alone or Harvester server): harvester
    For best results, restore to a clean installation.
    mysql -e "drop database DB_Name;"
    mysql -e "create database DB_Name;"
    mysql -u root DB_Name < dbbackupname.sql
    mysql -u root mysql < proc.sql
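Because the backup order matters (reporter last), a helper that emits the three mysqldump commands in the documented sequence lets you review them before running. A sketch only: the function name and the `<db>.sql` file-naming convention are illustrative.

```shell
# Sketch: print the NFA backup commands in the documented order,
# with the reporter database last. Review the output, then pipe it to
# sh to execute, e.g.: backup_nfa_databases | sh
backup_nfa_databases() {
  for db in data_retention harvester reporter; do
    printf 'mysqldump --routines --events -u root %s --skip-lock-tables > %s.sql\n' "$db" "$db"
  done
}
```

For the restore, the same pattern applies in reverse: emit reporter first, then data_retention and harvester, per step 6.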