Rehydrate Data in a Cloud Environment

If you have set up
DX NetOps Performance Management
in a cloud environment, you can patch operating systems from a common image instead of patching each operating system individually.
Use the following process to rehydrate the
DX NetOps Performance Management
nodes with minimal data loss:
Verify the Prerequisites
Before rehydrating the data, ensure that your environment is in a good state.
Follow these steps:
  1. Hover over
    Administration
    ,
    Data Sources
    , and then click
    System Status
    .
    The
    System Status
    page appears.
  2. Verify the following:
    • The data aggregator and data repository are connected.
    • The data aggregator is up and running.
    • Backed up or cached poll data does not exist.
      If the PRQ queue is not empty, the data collectors need to send the rest of the polled data to the data aggregator. The queue fills up when an outage occurs, which causes data to be cached on the data collectors. View the System Status page to verify that all the data collectors have a green status. The Polling Status column shows whether cached values exist on the data collector.
  3. If you have
    DX NetOps Virtual Network Assurance
    in your environment, do the following steps:
    1. Hover over
      Administration
      ,
      Monitored Items Management
      , and then click
      VNA Gateways
      .
      The
      VNA Gateways
      page appears.
    2. Set the
      Administrative Status
      to
      Down
      .
Rehydrate the Data Collectors
Rehydrate each data collector one at a time. Ensure that each data collector recovers and starts polling before you rehydrate the Vertica nodes and the data aggregator. During this process, some polls are cached on the data collectors for a time.
Follow these steps:
  1. Build the new operating system on the data collector container or virtual machine.
  2. Copy the
    DCM_ID
    by issuing the following command:
    grep "manager\-id\="
    DC_Install_Directory
    /apache-karaf-<
    version>
    /etc/com.ca.im.dm.core.collector.cfg
  3. Bring down the old container or virtual machine.
  4. Give the new container or virtual machine a new IP address or name and bring it online.
  5. Reinstall the data collector with the
    DCM_ID
    of the original data collector by issuing the following commands (a consolidated example follows this procedure):
    export DCM_ID="Original_DC_Host:DCM_ID"
    cd /tmp; rm -rf install.bin; wget http://DA_Host:Port/dcm/InstData/Linux/VM/install.bin; chmod a+x install.bin; ./install.bin -i silent
The data collector installs, reconnects to the data aggregator, and then starts polling.
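For reference, the following is a minimal sketch of the complete command sequence, assuming a bash shell and that the DC_Install_Directory, Original_DC_Host, DA_Host, Port, and Karaf version placeholders are replaced with values from your environment:
# On the original data collector, record the DCM_ID before retiring the node
grep "manager\-id\=" DC_Install_Directory/apache-karaf-<version>/etc/com.ca.im.dm.core.collector.cfg

# On the replacement data collector, reinstall with the original DCM_ID
export DCM_ID="Original_DC_Host:DCM_ID"
cd /tmp
rm -rf install.bin
wget http://DA_Host:Port/dcm/InstData/Linux/VM/install.bin
chmod a+x install.bin
./install.bin -i silent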
Rehydrate the Vertica Nodes
Rehydrate each Vertica node one at a time. Before you start, verify that all nodes are up and running by issuing the following command:
/opt/vertica/bin/admintools -t list_allnodes
Follow these steps:
  1. Bring down the Vertica node by issuing the following command:
    /opt/vertica/bin/admintools -t stop_node -s IP_Address
  2. Unmount the
    data
    directory and the
    catalog
    directory.
  3. Create a new node with the same IP address and name.
  4. Mount the
    data
    and
    catalog
    directory to the new node.
  5. Validate the system settings by running the
    dr_validate.sh
    validation script. Issue the following command:
    ./dr_validate.sh -n -p drinstall.properties
    Review and resolve any errors or warnings. You can run this script multiple times to verify that all system configuration options are set properly. The validation script might prompt you to reboot.
  6. Install Vertica from an up-and-running node by issuing the following command:
    /opt/vertica/sbin/install_vertica -u dradmin -l /export/dradmin -d /export/data -L ./resources/vlicense.dat -Y -r ./resources/vertica-<version>.rpm
    The values should match those in the properties file for
    dr_install.sh
    and should point to the same resources.
  7. Start the node and verify that it is up and running by issuing the following command:
    /opt/vertica/bin/admintools -t restart_node -s Host_Name -d DB_Name
    The state starts as DOWN, then changes to REBUILDING until it changes to UP.
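The following condensed sketch shows the per-node sequence from the steps above, assuming a bash shell and that the IP_Address, Host_Name, DB_Name, and Vertica version placeholders are replaced with values from your environment. The node rebuild and the unmount and mount of the data and catalog directories are environment-specific and appear only as comments:
# Stop the Vertica node that you are rehydrating
/opt/vertica/bin/admintools -t stop_node -s IP_Address

# Unmount the data and catalog directories, create the replacement node with
# the same IP address and name, and mount the directories on the new node.

# Validate the system settings on the new node; rerun until no errors or warnings remain
./dr_validate.sh -n -p drinstall.properties

# From an up-and-running node, install Vertica on the new node
/opt/vertica/sbin/install_vertica -u dradmin -l /export/dradmin -d /export/data -L ./resources/vlicense.dat -Y -r ./resources/vertica-<version>.rpm

# Restart the node; its state moves from DOWN through REBUILDING to UP
/opt/vertica/bin/admintools -t restart_node -s Host_Name -d DB_Name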
Verify the Vertica Nodes
Verify that all nodes are back up and running as the
dradmin
user by issuing the following command:
/opt/vertica/bin/admintools -t list_allnodes
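If you prefer to script this check, the following minimal sketch, run as the dradmin user, polls the node list until no node reports a non-UP state. The exact output format of list_allnodes can vary by Vertica release, so treat the grep pattern as an assumption to adjust for your environment:
# Wait until every Vertica node reports the UP state
while /opt/vertica/bin/admintools -t list_allnodes | grep -E "DOWN|RECOVERING|INITIALIZING"; do
    echo "Waiting for all Vertica nodes to reach the UP state..."
    sleep 60
done
echo "All Vertica nodes are UP."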
Rehydrate the Data Aggregator
During this Vertica refresh, the data aggregator continues to collect data from the data collectors and pushes it to the Vertica nodes that remain up. Ingestion speed can drop by roughly half during this time. After you rehydrate the data collectors and Vertica, you can rehydrate the data aggregator.
Follow these steps:
  1. Prepare a new node for the data aggregator.
  2. Do one of the following steps:
    • Stop the Data Aggregator service by issuing the following command:
      • RHEL 6.x:
        service dadaemon stop
      • RHEL 7.x, SUSE Linux Enterprise Server (SLES), and Oracle Linux (OL)
        systemctl stop dadaemon
    • (Fault-tolerant environment) If the local data aggregator is running, issue one of the following commands to shut it down and prevent it from restarting until maintenance is complete:
      • RHEL 6.x:
        service dadaemon maintenance
      • RHEL 7.x, SLES, or OL:
        <da_installation_directory>
        /scripts/dadaemon maintenance
    The data aggregator completes processing and the service stops.
  3. Move the configuration files to the new node.
    For more information, see Back Up the Data Aggregator.
    In environments with fault-tolerant data aggregators, use the shared data directory and reattach it. For more information, see Fault Tolerance.
  4. Do one of the following steps:
    • Start the Data Aggregator service by issuing the following command:
      • RHEL 6.x
        service dadaemon start
      • RHEL 7.x, SLES, and OL
        systemctl start dadaemon
    • (Fault-tolerant environment) Run one of the following commands to enable the fault-tolerant data aggregator so that it can start when necessary:
      • RHEL 6.x:
        service dadaemon activate
      • RHEL 7.x, SLES, or OL:
        <da_installation_directory>
        /scripts/dadaemon activate
    The data aggregator consumes the queued polls and pushes them to Vertica.
    All cached data is typically consumed and ready for reporting in approximately twice the total outage time. When ActiveMQ consumption returns to normal, the backlog is cleared. On the
    System Status
    page, verify that all the data collectors have a green status and that the system is receiving approximately the same number of polls as before the process started.
The data aggregator is rehydrated.
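As a reference for a non-fault-tolerant environment on RHEL 7.x, SLES, or OL, the sequence above reduces to the following sketch. The configuration-file move is environment-specific and appears only as a comment:
# On the old node, stop the Data Aggregator service
systemctl stop dadaemon

# Move the configuration files to the new node (see Back Up the Data Aggregator).
# Fault-tolerant environments reattach the shared data directory instead.

# On the new node, start the Data Aggregator service and confirm that it is running
systemctl start dadaemon
systemctl status dadaemon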
Rehydrate
NetOps Portal
Finally, you can rehydrate
NetOps Portal
.
Follow these steps:
  1. Prepare a node for
    NetOps Portal
    .
  2. Bring down
    NetOps Portal
    by issuing the following commands:
    • RHEL 6.x:
      service caperfcenter_console stop
      service caperfcenter_devicemanager stop
      service caperfcenter_eventmanager stop
      service caperfcenter_sso stop
    • RHEL 7.x, SLES, and OL
      systemctl stop caperfcenter_console
      systemctl stop caperfcenter_devicemanager
      systemctl stop caperfcenter_eventmanager
      systemctl stop caperfcenter_sso
  3. Move the database to the new node.
    For more information, see Back Up
    NetOps Portal
    .
  4. Start
    NetOps Portal
    :
    1. Start the SSO service by issuing the following command:
      • RHEL 6.x:
        service caperfcenter_sso start
      • RHEL 7.x, SLES, and OL
        systemctl start caperfcenter_sso
    2. Wait one minute, then start the event manager and device manager by issuing the following commands:
      • RHEL 6.x:
        service caperfcenter_eventmanager start
        service caperfcenter_devicemanager start
      • RHEL 7.x, SLES, and OL
        systemctl start caperfcenter_eventmanager
        systemctl start caperfcenter_devicemanager
    3. Wait one minute, then start the console service:
      • RHEL 6.x:
        service caperfcenter_console start
      • RHEL 7.x, SLES, and OL
        systemctl start caperfcenter_console
NetOps Portal
is rehydrated.
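On RHEL 7.x, SLES, or OL, the ordered startup in step 4 can be scripted as in the following minimal sketch; the one-minute pauses mirror the waits that the steps describe:
# Start SSO first
systemctl start caperfcenter_sso
sleep 60

# Then start the event manager and the device manager
systemctl start caperfcenter_eventmanager
systemctl start caperfcenter_devicemanager
sleep 60

# Finally, start the console
systemctl start caperfcenter_console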
Rehydrate
DX NetOps Virtual Network Assurance
If you have
DX NetOps Virtual Network Assurance
in your environment, rehydrate it now.
Follow these steps:
  1. Find the engine ID required later by querying the following REST URL:
    http://
    <VNA_host>
    :
    <port>
    /vna/rest/v1/admin/engines
    • VNA_host
      Specifies the VNA host name.
    • port
      Specifies the required VNA port number.
      Default:
      8080
      For more information about the ports that are required for communication between
      DX NetOps Performance Management
      and
      DX NetOps Virtual Network Assurance
      , see Installation Requirements and Considerations.
  2. Find the plug-in configuration required later by querying the following REST URL:
    http://
    <VNA_host>
    :
    <port>
    /vna/rest/v1/admin/engines/
    Engine_ID
    /config
    • VNA_host
      Specifies the
      DX NetOps Virtual Network Assurance
      host name.
    • port
      Specifies the required
      DX NetOps Virtual Network Assurance
      port number.
      Default:
      8080
  3. Stop the application server by issuing
    one
    of the following commands:
    • RHEL 6.x:
      service wildfly stop
    • RHEL 7.x, SLES, or OL:
      systemctl stop wildfly
  4. Back up the existing database to a specified directory by issuing the following command:
    <VNA_home>
    /VNA/tools/bin/db_backup.sh
    <backup_directory/backup_filename>
    • VNA_home
      The installation directory for
      DX NetOps Virtual Network Assurance
      .
      Default:
      /opt/CA
    • backup_directory/backup_filename
      The location of the backup directory and file. Use any secure location with sufficient space for the backup directory.
      Example:
      /tmp/vna_db.sql
  5. Install
    DX NetOps Virtual Network Assurance
    on the new server and restore the database from the backup:
    <VNA_home>
    /tools/bin/db_restore.sh
    <backup_directory/backup_filename>
    • VNA_home
      The installation directory for
      DX NetOps Virtual Network Assurance
      .
      Default:
      /opt/CA
    • backup_directory/backup_filename
      The location of the backup directory and file. Use any secure location with sufficient space for the backup directory.
      Example:
      /tmp/vna_db.sql
  6. Reconfigure the plug-ins using the information from the original query.
    Use the same Domain ID from the original configuration.
  7. In
    NetOps Portal
    , do the following steps:
    1. Hover over
      Administration
      ,
      Monitored Items Management
      , and then click
      VNA Gateways
      .
      The
      VNA Gateways
      page appears.
    2. Update the entry with the new
      DX NetOps Virtual Network Assurance
      server ID.
    3. Set the
      Administrative Status
      to
      Up
      .
DX NetOps Virtual Network Assurance
is rehydrated.
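The following sketch summarizes the query, backup, and restore commands from the steps above for a default installation. It assumes that curl is available, that the default port 8080 is in use, and that the VNA_host, Engine_ID, and VNA_home placeholders are replaced with values from your environment; /tmp/vna_db.sql is an example backup location:
# Record the engine ID and the plug-in configuration before the rebuild
curl -s http://<VNA_host>:8080/vna/rest/v1/admin/engines
curl -s http://<VNA_host>:8080/vna/rest/v1/admin/engines/Engine_ID/config

# Stop the application server and back up the database
systemctl stop wildfly
<VNA_home>/VNA/tools/bin/db_backup.sh /tmp/vna_db.sql

# After installing VNA on the new server, restore the database
<VNA_home>/tools/bin/db_restore.sh /tmp/vna_db.sql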
Reconnect an Existing
DX NetOps Spectrum
Data Source
If you rehydrate the
DX NetOps Performance Management
environment, and want to reconnect it to an existing Spectrum server, complete the following procedure.
Follow these steps:
  1. In
    NetOps Portal
    , hover over
    Administration
    ,
    Data Sources
    , and then click
    Data Sources
    .
    The
    Manage Data Sources
    page appears.
  2. Select the
    DX NetOps Spectrum
    data source, and click
    Edit
    .
    The
    Edit Data Source
    dialog opens.
  3. Change the
    Status
    to "Disabled", and then click
    Save
    .
  4. Remove the
    NetOps Portal
    entries by issuing the following commands:
    cd Spectrum_installation_directory/vnmsh
    ./connect
    ./show models | grep CAPC
  5. Issue the following command for every
    CAPCIPDomain
    and
    CAPCTenant
    model found.
    ./destroy model mh=0xXXXXX
  6. Remove the
    NetOps Portal
    integration database from
    DX NetOps Spectrum
    by issuing the following commands:
    bash -login
    cd mysql
    cd bin
    ./mysqladmin --defaults-file=../my-spectrum.cnf -u netqos -p password drop netqos_integ
  7. Restart
    DX NetOps Spectrum
    Tomcat:
    cd Spectrum_installation_directory/tomcat/bin
    ./stopTomcat.sh
    ./startTomcat.sh
  8. In
    NetOps Portal
    , hover over
    Administration
    ,
    Data Sources
    , and then click
    Data Sources
    .
    The
    Manage Data Sources
    page appears.
  9. Select the
    DX NetOps Spectrum
    data source, and click
    Edit
    .
    The
    Edit Data Source
    dialog opens.
  10. Change the
    Status
    to "Enabled", and then click
    Save
    .
The existing
DX NetOps Spectrum
data source is reconnected.
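For reference, the command-line portion of this procedure (steps 4 through 7, run on the DX NetOps Spectrum server) looks like the following sketch; the model handle and the netqos password are placeholders from your environment:
# Find the NetOps Portal models that are registered in Spectrum
cd Spectrum_installation_directory/vnmsh
./connect
./show models | grep CAPC

# Destroy every CAPCIPDomain and CAPCTenant model that the previous command returned
./destroy model mh=0xXXXXX

# Drop the NetOps Portal integration database
bash -login
cd mysql/bin
./mysqladmin --defaults-file=../my-spectrum.cnf -u netqos -p password drop netqos_integ

# Restart Spectrum Tomcat
cd Spectrum_installation_directory/tomcat/bin
./stopTomcat.sh
./startTomcat.sh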
Rehydrate
DX NetOps Network Flow Analysis
If you have
DX NetOps Network Flow Analysis
in your environment, rehydrate it now.
Follow these steps:
  1. In
    NetOps Portal
    , hover over
    Administration
    ,
    Data Sources
    , and then click
    Data Sources
    .
    The
    Manage Data Sources
    page appears.
  2. Select the
    DX NetOps Network Flow Analysis
    data source, and click
    Edit
    .
    The
    Edit Data Source
    dialog opens.
  3. Change the
    Status
    to "Disabled", and click
    Save
    .
  4. Determine the database files to back up.
    • Customized data_retention database (Stand-alone or Harvester server):
      data_retention
    • harvester database (Stand-alone or Harvester server):
      harvester
    • reporter database (Stand-alone or NFA console):
      reporter
  5. Copy each of the target directories or files to a remote location.
  6. Back up the following databases to a remote location, using
    mysqldump
    . Back up the
    reporter
    database last, regardless of the deployment architecture.
    • Customized data_retention database (Stand-alone or Harvester server):
      data_retention
    • harvester database (Stand-alone or Harvester server):
      harvester
    • reporter database (Stand-alone or NFA console):
      reporter
    mysqldump --routines --events -u root dbname --skip-lock-tables > dbbackupname.sql
  7. (Optional) Verify that the
    mysqldump
    was successful by checking that the size of the backup is over 1 KB.
  8. Restore each of the target directories or files from its remote location to its original location.
  9. Restore each of the databases. Restore the reporter database first, regardless of the deployment architecture.
    • reporter database:
      reporter
      (Stand-alone or NFA console)
    • Customized data_retention database:
      data_retention
      (Stand-alone or Harvester server)
    • harvester database:
      harvester
      (Stand-alone or Harvester server)
      For best results, restore to a clean installation.
      mysql -e "drop database DB_Name;"
      mysql -e "create database DB_Name;"
      mysql -u root DB_Name < dbbackupname.sql
      mysql -u root mysql < proc.sql
  10. In
    NetOps Portal
    , hover over
    Administration
    ,
    Data Sources
    , and then click
    Data Sources
    .
    The
    Manage Data Sources
    page appears.
  11. Select the
    DX NetOps Network Flow Analysis
    data source, and then click
    Edit
    .
    The
    Edit Data Source
    dialog opens.
  12. Change the
    Status
    to "Enabled", and then click
    Save
    .
DX NetOps Network Flow Analysis
is rehydrated.
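A minimal sketch of the database backup and restore from steps 6 and 9, assuming a bash shell, root access to MySQL, and that each database exists on the host where the command runs; the backup file names are illustrative:
# Back up each database with mysqldump; back up the reporter database last
for db in data_retention harvester reporter; do
    mysqldump --routines --events -u root "$db" --skip-lock-tables > "${db}_backup.sql"
done

# On the rebuilt server, restore each database; restore the reporter database first
for db in reporter data_retention harvester; do
    mysql -u root -e "drop database if exists $db;"
    mysql -u root -e "create database $db;"
    mysql -u root "$db" < "${db}_backup.sql"
done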