Pre-Upgrade Tasks

Before you start the upgrade process for DX APM, DX OI, and DX App Experience Analytics from version 20.2 to 20.2.1, perform the following tasks:
You can expect DSP band data loss for the existing alarms and metrics because the metrics are stored in NASS.
Review the Prerequisites
Ensure that the following prerequisites are met:
  • No data is being ingested during the upgrade.
  • No data is being sent to DX App Experience Analytics and DX APM.
  • All the DX APM agent integrations are stopped or all the agent traffic is blocked. To stop the integrations, log in to the DX APM agent systems and stop the Tomcat process (see the sketch after this list).
  • No third-party data is being ingested into DX OI.
  • All the RESTmon Agents are stopped and all the agent traffic is blocked.
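For example, a minimal sketch of stopping the Tomcat process on an agent system, assuming a standard Tomcat installation under /opt/tomcat (the path is an assumption; use the path from your environment):
# On each DX APM agent system, stop the Tomcat process that hosts the agent integration.
sudo /opt/tomcat/bin/shutdown.sh
# Verify that no Tomcat process is still running before you continue.
ps -ef | grep -i tomcat | grep -v grep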
Download the Installer Distribution File
The DX Platform installer is bundled with the DX APM distribution. You can download this zip file from the Support site.
Follow these steps:
  1. SSH to the same system that was used earlier for the installation.
  2. Create a directory named <latest_install_dir>.
  3. From the Support site, download the latest installer distribution file to the directory you created.
    1. Log in to the site using your credentials.
    2. Click Enterprise Software and then click Product Downloads.
    3. In the DOWNLOAD MANAGEMENT page, search for Application Performance Management.
    4. In the Product Download section, select the DX Application Performance Management Multi-Platform version as 20.2.1.
    5. Click the DX Application Performance Management Multi-Platform link to open and view the files.
    6. Download the required file:
      • DX Common Platform Installer r20.2.1-online
      • DX Common Platform Installer r20.2.1-offline
    7. Extract the downloaded zip file.
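For example, a minimal sketch of extracting the download into the directory you created, assuming a hypothetical file name dx-platform-installer-20.2.1.zip (replace it with the name of the file you actually downloaded):
# Extract the installer distribution file in the directory created in step 2.
cd <latest_install_dir>
unzip dx-platform-installer-20.2.1.zip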
Backup the Data
After you have downloaded the latest installer, the next task is to backup the data:
Backup the Elasticsearch Cluster Snapshot
The first step in this process is to take a snapshot of the Elasticsearch cluster.
Follow these steps:
  1. In the web console, navigate to Application > Deployments > jarvis-esutils.
  2. Open the Environment tab and check for the EXCLUDE_INDICES variable.
  3. Remove ao_.* from the EXCLUDE_INDICES variable.
  4. Check for the SNAPSHOT_CRON variable and ensure that you update the expression according to your requirement. For example, to take a snapshot daily at 23:00, update the expression as 0 0 23 * * ?.
  5. Click Save.
  6. Ensure that the jarvis-esutils pod is restarted and the snapshots are created successfully.
  7. Run the following Elasticsearch query to check the snapshots:
    http://ES_ROUTE/_snapshot/Repository/_all
    Sample output: { "snapshots" : [ { "snapshot" : "ao-snapshot_2020-09-29_23:00:00", "uuid" : "YwhqvlnSTgSJuKzBg1J4Sw", "version_id" : 7050199, "version" : "7.5.1", "indices" : [ "jarvis_jmetrics_1.0_1", "jarvis_kron", "jarvis_jmessages_1.0_1", "jarvis_healthcheck_2.0_1", "jarvis_metadata", ".kibana_tadmin-userstore", "jarvis_config", "audit", ".ca_es_acl", ".kibana_d1b9a308-b53b-4e89-ab1a-f490d11d1193" ], "include_global_state" : true, "state" : "SUCCESS", "start_time" : "2020-09-29T22:59:59.489Z", "start_time_in_millis" : 1601420399489, "end_time" : "2020-09-29T23:00:00.490Z", "end_time_in_millis" : 1601420400490, "duration_in_millis" : 1001, "failures" : [ ], "shards" : { "total" : 10, "failed" : 0, "successful" : 10 } }, ..... ..............
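A minimal sketch of running this check from a shell, assuming curl is available, ES_ROUTE is the exposed Elasticsearch route, and Repository is the snapshot repository name in your environment:
# List all snapshots in the repository and pretty-print the response.
curl -s "http://ES_ROUTE/_snapshot/Repository/_all?pretty"
# Optionally, confirm that the latest snapshot reports "state" : "SUCCESS".
curl -s "http://ES_ROUTE/_snapshot/Repository/_all?pretty" | grep '"state"'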
Backup the PostgreSQL Database
You can backup the database using the db-backup.tar package that is available in the tools directory. Perform the following steps on the system where the DX Platform 20.2 is installed.
Before you perform the following steps, ensure that on the NFS server (/var/nfs/dxi/backups/db), you apply the following permissions to the acc folder (see the sketch after this list):
  • chmod 766
  • chown -R 1010:1010
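A minimal sketch of applying these permissions, assuming the acc folder sits under the backups path named above (adjust the path to your NFS layout):
# Run on the NFS server before starting the database backup.
chmod 766 /var/nfs/dxi/backups/db/acc
chown -R 1010:1010 /var/nfs/dxi/backups/db/acc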
Follow these steps:
  1. Navigate to the tools directory of the 20.2.1 downloaded zip file.
    cd <latest_install_dir>/tools
  2. Create a directory named db-backup.
    mkdir db-backup
  3. Copy the db-backup.tar package from the tools directory to the db-backup directory.
    For example, cp /root/<latest_install_dir>/tools/db-backup.tar /root/<latest_install_dir>/tools/db-backup
  4. Extract the db-backup.tar package.
  5. Run the run.sh script from the db-backup directory (a consolidated sketch of the full sequence follows the cleanup note below).
    For Kubernetes: OS=kubernetes ./run.sh
    For OpenShift: ./run.sh
To run the backup again, run the cleanup script from the tools directory:
For Kubernetes: OS=kubernetes ./cleanup.sh
For OpenShift: ./cleanup.sh
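A minimal consolidated sketch of the backup sequence above on Kubernetes, assuming the paths from the examples (on OpenShift, run ./run.sh without the OS variable):
# Prepare the db-backup working directory and extract the backup package.
cd <latest_install_dir>/tools
mkdir db-backup
cp db-backup.tar db-backup/
cd db-backup
tar -xvf db-backup.tar
# Run the database backup (Kubernetes shown).
OS=kubernetes ./run.sh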
Backup the OI Metric Publisher Configurations
Run the following command from the db-backup directory that you created earlier to backup the APM OIMP configurations:
For Kubernetes: kubectl get cm apmservices-oimetricpublisher -n <namespace> -o yaml > oim_configmap.yaml
For OpenShift: oc get cm apmservices-oimetricpublisher -o yaml > oim_configmap.yaml
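An optional quick check that the backup file was written, assuming the command above completed without errors:
# Confirm the exported config map is present in the backup file.
grep "name: apmservices-oimetricpublisher" oim_configmap.yaml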
Backup and Restore the YAML Files
The tools directory in the installation zip file includes the backup and restore scripts to help you backup and restore the YAML files:
  • backup-yamls.sh
  • restore-dxi.sh
backup-yamls.sh
The backup-yamls.sh script that is in the tools directory creates a tarball of all the YAML files for the given namespace.
Follow these steps:
  1. Navigate to the tools directory of the 20.2.1 downloaded zip file.
    cd <latest_install_dir>/tools
  2. Run the following command to backup the YAML files:
    ./backup-yamls.sh [-n <namespace>] [-f <filename>]
    Where:
    -n : proceed in the namespace <namespace> ("dxi" is the default)
    -f : save the backup as <filename> instead of yaml-backup-YYYY-MM-DD-HH-MM-SS.tgz in the current directory. <filename> can be /path/to/mybackup.tgz or backup.tgz in the current directory.
    We recommend that you save the backup process logs as shown in the following example:
    ./backup-yamls.sh -n dx -f backup-2020-Jun-28.tgz 2>&1 | tee backup-2020-Jun-28.log
    Where 2>&1 redirects STDERR to STDOUT, and then tee sends the output to the display and to the specified file. To append to the file instead of overwriting it, use tee -a <filename>.
    You can also run this script without any parameters from the tools directory:
    ./backup-yamls.sh
You can also set up a scheduled job using the backup-yamls.sh script.
Follow these steps:
  1. Run the following command to open the scheduled jobs for editing:
    crontab -e
    The cron jobs editor is displayed.
  2. Define the date and time to run the job in the editor. For example, to run the job at 4:05 am every day, add the following entry:
    5 4 * * * /bin/bash -c "/path/to/tools/backup-yamls.sh -f /path/to/backups/filename-$(date '+\%Y-\%m-\%d-\%H-\%M-\%S').tgz"
    Note: You must escape % in the command because % is a special character for the cron daemon.
restore-dxi.sh
The restore-dxi.sh script that is in the tools directory restores the backed up files in the cluster.
Follow these steps:
  1. Navigate to the tools directory of the 20.2.1 downloaded zip file.
    cd <latest_install_dir>/tools
  2. Run the following command:
    ./restore-dxi.sh [-d|--backup-dir <DIRECTORY> | -f|--backup-file <TARBALL>]
    Where:
    -d|--backup-dir : takes the backups from the DIRECTORY (under the installation directory). Unpack the backed up tgz to view the files and specify this directory.
    -f|--backup-file : takes the backups from the TARBALL instead of the backup directory.
    You can use -d or -f to provide the location of the backed up data.
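For example, a minimal usage sketch, assuming a backup tarball produced earlier by backup-yamls.sh (the file name is illustrative):
# Restore from a backup tarball created by backup-yamls.sh; replace the file name with your own backup.
./restore-dxi.sh -f /path/to/backups/backup-2020-Jun-28.tgz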
Backup the Management Modules
Perform the following steps to backup any customizations or changes made to the tenants.
Follow these steps:
  1. Navigate to the host where the NFS server is running.
  2. Create a folder under the configs directory for every tenant to be backed up. For example, create a folder named backup_production_tenant:
    mkdir <base-dir>/configs/backup_production_tenant
    Where <base-dir> is the NFS folder that you chose during the installation.
  3. For every tenant, copy the folder named customize into the backup_production_tenant folder that you created earlier.
    For example, cp -a <base-dir>/em/<tenantid>-<random_int>/001/customize <base-dir>/configs/backup_production_tenant
    • Folder to backup a Standalone Tenant: <base-dir>/em/<tenantid>-<random_int>/001/customize
    • Folder to backup a Small, Regular, Large, and Max Tenant: <base-dir>/em/<tenantid>-<random_int>/000/customize
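A minimal sketch of backing up the customize folder for every tenant in one pass, assuming the Standalone Tenant layout (use .../000/customize for Small, Regular, Large, and Max Tenants); the per-tenant subfolder names under backup_production_tenant are illustrative:
# Run on the NFS host; copies each tenant's customize folder into the backup folder.
for dir in <base-dir>/em/*/001/customize; do
  tenant=$(basename "$(dirname "$(dirname "$dir")")")
  mkdir -p <base-dir>/configs/backup_production_tenant/"$tenant"
  cp -a "$dir" <base-dir>/configs/backup_production_tenant/"$tenant"/
done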
Backup the JavaScript Calculator Scripts
Perform the following steps to backup the JavaScript Calculator scripts.
Follow these steps:
  1. Navigate to the JavaScript Extensions UI (http://apmservices-gateway.<defaultSubDomain>/<tenantID>/apm/atc/?#/extensions).
  2. From the JavaScript Calculator list, locate the required JavaScript file.
  3. Click Download in the Action column for the required file.
Scale Down Additional APM OI Metric Publisher (OIMP) Pods
The upgrade process requires only one APM OIMP deployment to be running in the cluster. If there are multiple APM OIMP deployments, you must scale down the additional deployments using the steps described in this section. You can scale down the pods using the Command line or the UI.
Perform this task only if there are multiple APM OIMP deployments in the cluster.
Kubernetes - Command Line
Perform the following steps to scale down the additional deployments using the Command line:
Follow these steps:
  1. Run the following command to get the list of all the APM OIMP deployments:
    kubectl get deployment -n <namespace> | grep oimetricpublisher
  2. If the number of deployments is more than one, run the following command to scale down the additional APM OIMP deployments to zero:
    kubectl get deployment -n <namespace> | grep oimetricpublisher | awk '{print $1}' | xargs kubectl -n <namespace> scale deployment --replicas=0
    Ensure that only one APM OIMP pod is running.
  3. Run the following command to delete the scaled down deployments:
    kubectl delete deployment <deployment name> -n <namespace>
  4. Run the following command to delete the config map references that point to the deleted deployments:
    kubectl get configmaps -n <namespace> | grep oimetricpublisher | awk '{print $1}' | xargs kubectl delete configmap -n <namespace>
Kubernetes - UI
Perform the following steps to scale down the additional deployments using the UI:
Follow these steps:
  1. Login to the Kubernetes web console.
  2. Navigate to the project.
  3. Search for the APM OIMP deployment and check if the number of APM OIMP deployments is more than one.
  4. If yes, scale down those deployments to zero and ensure that only one deployment is scaled up.
  5. Delete the scaled down deployments.
  6. Delete the config map references that point to the deleted deployments.
OpenShift - Command Line
Perform the following steps to scale down the additional deployments using the Command Line:
Follow these steps:
  1. Login to OpenShift.
    oc login -u <username> -p <password>
  2. Open the project:
    oc project <dxi project>
  3. Run the following command to get the list of all the APM OIMP deployments:
    oc get deployment | grep oimetricpublisher
  4. If the number of deployments is more than one, run the following command to scale down the additional APM OIMP deployments to zero:
    oc get deployment | grep oimetricpublisher | awk '{print $1}' | xargs oc scale deployment --replicas=0
  5. Run the following command to delete the scaled down deployments:
    oc delete deployment <deployment name>
    Ensure that only one APM OIMP pod is running.
  6. Run the following command to delete the config map references that point to the deleted deployments:
    oc get configmaps | grep oimetricpublisher | awk '{print $1}' | xargs oc delete configmap
OpenShift - UI
Perform the following steps to scale down the additional deployments using the UI:
Follow these steps:
  1. Login to the OpenShift console.
  2. Navigate to the project.
  3. Navigate to the Applications, Deployments page.
  4. Ensure that only one APM OIMP deployment is scaled up. If there are additional deployments, scale down those deployments to zero.
  5. Delete the APM OIMP deployments that you scaled down.
  6. Navigate to the Resources, Config Maps page.
  7. Delete the config map references that point to the deleted deployments.
Additional Tasks for DX App Experience Analytics
Perform the following tasks to upgrade DX App Experience Analytics 20.2 to DX App Experience Analytics 20.2.1:
After the upgrade process is complete, you can expect some data loss.
Patch the Mappings
Before you upgrade DX App Experience Analytics, add the properties txn_start, service_name, urln, and parent_urln to the mappings.
Follow these steps:
  1. Access the Jarvis API UI (http://apis.<defaultSubDomain>). For more information, see the section.
    Ensure that you open the URL in a browser that brings up the Swagger APIs.
  2. Go to the Mapping section and select PATCH.
  3. Patch the following mappings for the axa_crashes index:
    { "product_id": "ao", "doc_type_id": "axa_crashes", "doc_type_version": "1", "mappings": { "data": { "dynamic": "strict", "_all": { "enabled": false }, "properties": { "txn_start": { "type": "text", "analyzer": "keyword_lowercase" }, "service_name": { "type": "text", "analyzer": "keyword_lowercase" } } } } }
  4. Verify that the mappings (txn_start and service_name) are successfully updated using the GET Mapping API. To verify, provide the following information in the API:
    • product_id: ao
    • doc_type_id: axa_crashes
    • doc_type_version: 1
  5. Similarly, patch the mappings for the axa_sessions index.
    { "product_id": "ao", "doc_type_id": "axa_sessions", "doc_type_version": "1", "mappings": { "data": { "dynamic": "strict", "_all": { "enabled": false }, "properties": { "network_events": { "type": "nested", "properties": { "urln": { "type": "keyword" }, "parent_urln": { "type": "keyword" } } } } } } }
  6. Verify that the mappings (urln and parent_urln) are successfully updated using the GET Mapping API. To verify, provide the following information in the API:
    • product_id: ao
    • doc_type_id: axa_sessions
    • doc_type_version: 1
Update the session_event Index Mappings
Update the session event index mappings in Elasticsearch with the following custom properties: session_field1, session_field2, session_field3, session_field4, session_field5, and session_field6.
Follow these steps:
  1. Run the following command to get the template:
    curl -XGET $ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty >ao_axa_session_events_1_template.json
  2. In the ao_axa_session_events_1_template.json file, remove the outer wrapper: the line "ao_axa_session_events_1" : { and the corresponding closing } (a jq sketch of this step follows the procedure).
  3. Update the session_events mapping with the following properties (the properties block for session_field1 through session_field6 on the middle line is the addition):
    "session" : { "properties" : { "duration" : { "type" : "long" }, "custom" : { "type" : "object", "dynamic" : true,
    "properties": { "session_field1": { "type": "keyword" }, "session_field2": { "type": "keyword" }, "session_field3": { "type": "keyword" }, "session_field4": { "type": "keyword" }, "session_field5": { "type": "keyword" }, "session_field6": { "type": "keyword" } }
    }, "start" : { "format" : "epoch_millis", "type" : "date" }, "end" : { "format" : "epoch_millis", "type" : "date" }, "id" : { "type" : "keyword" } } }
  4. Run the following command to update the template:
    curl -XPUT $ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1 -H 'Content-Type: application/json' -d @ao_axa_session_events_1_template.json
  5. Verify that the template was updated using the following URL:
    http(s)://$ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty
    The template should display the session_field1 to session_field6 properties added to the mapping.
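For step 2, a minimal sketch of stripping the outer wrapper with jq, assuming jq is installed (you can also make the edit by hand in a text editor):
# Extract the inner template object, dropping the "ao_axa_session_events_1" wrapper, and write it back.
jq '.ao_axa_session_events_1' ao_axa_session_events_1_template.json > template_body.json && mv template_body.json ao_axa_session_events_1_template.json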
Configure Rollover of an Index
To meet your indexing and search performance requirements and manage the resource usage, you can write to an index until some threshold is met and then create a new index and start writing to it instead.
Follow these steps:
  1. Open the jarvis-esutils pod.
  2. In the Environment tab, set the INDEX_LIMIT environment variable to ao_axa_session_events_1=1mb (a command-line sketch follows these steps). Where:
    • ao is the product ID.
    • axa_session_events is the doc_type_id.
    • 1 is the doc_version.
  3. Restart the jarvis-esutils pod.
  4. After the rollover is completed, delete the value for the INDEX_LIMIT variable.
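A minimal command-line sketch of the same change, assuming jarvis-esutils is backed by a Kubernetes deployment of the same name (an assumption; on OpenShift use oc instead of kubectl):
# Set the rollover threshold; the deployment rolls the pod to pick up the change.
kubectl set env deployment/jarvis-esutils INDEX_LIMIT=ao_axa_session_events_1=1mb -n <namespace>
# After the rollover completes, remove the variable again.
kubectl set env deployment/jarvis-esutils INDEX_LIMIT- -n <namespace>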