Pre-Upgrade Tasks
Before you start the upgrade process for DX APM, DX OI, and DX App Experience Analytics from version 20.2 to 20.2.1, perform the following tasks:
You can expect DSP band data loss for the existing alarms and metrics because the metrics are stored in NASS.
Review the Prerequisites
Ensure that the following prerequisites are met:
- No data is being ingested during the upgrade.
- No data is being sent to DX App Experience Analytics and DX APM.
- All the DX APM agent integrations are stopped or all the agent traffic is blocked. To stop the integrations, log in to the DX APM agent systems and stop the Tomcat process (a minimal example follows this list).
- No third-party data is being ingested into DX OI.
- All the RESTmon Agents are stopped and all the agent traffic is blocked.
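For example, stopping the Tomcat process on an agent system might look like the following. This is a minimal sketch: the Tomcat installation path and the shutdown script location are assumptions and depend on how the agent host was set up.
```
# Assumed Tomcat home; adjust to the actual installation path on the agent system.
TOMCAT_HOME=/opt/tomcat

# Stop the Tomcat process that runs the DX APM agent integration.
"$TOMCAT_HOME/bin/shutdown.sh"

# Confirm that no Tomcat process is still running before you start the upgrade.
ps -ef | grep -i tomcat | grep -v grep
```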
Download the Installer Distribution File
The DX Platform installer is bundled with the DX APM distribution, and you can download this zip file from the Support site.
Follow these steps:
- SSH to the same system that was used earlier for the installation.
- Create a directory named <latest_install_dir>.
- From the Support site, download the latest installer distribution file to the directory you created.
- Log in to the site using your credentials.
- Click Enterprise Software, and then click Product Downloads.
- In the DOWNLOAD MANAGEMENT page, search for Application Performance Management.
- In the Product Download section, select the DX Application Performance Management Multi-Platform version as 20.2.1.
- Click the DX Application Performance Management Multi-Platform link to open and view the files.
- Download the required file:
- DX Common Platform Installer r20.2.1-online
- DX Common Platform Installer r20.2.1-offline
- Extract the downloaded zip file.
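As a concrete illustration of the directory creation and the final extraction, the commands might look like this; the directory and zip file names are placeholders and depend on the bundle (online or offline) that you downloaded.
```
# Create the working directory for the 20.2.1 installer (name is illustrative).
mkdir -p /root/<latest_install_dir>
cd /root/<latest_install_dir>

# After downloading the installer zip from the Support site into this directory,
# extract it (the exact file name depends on the bundle you selected).
unzip <downloaded-installer>.zip
```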
Backup the Data
After you have downloaded the latest installer, the next task is to back up the data:
Backup the Elasticsearch Cluster Snapshot
The first step in this process is to take a snapshot of the Elasticsearch cluster.
Follow these steps:
- In the web console, navigate to Application > Deployments > jarvis-esutils.
- Open the Environment tab and check for the EXCLUDE_INDICES variable.
- Remove ao_.*.
- Check for the SNAPSHOT_CRON variable and ensure that you update the expression according to your requirement. For example, to take a snapshot daily at 23:00 hours, update the expression as 0 0 23 * * ?.
- Click Save.
- Ensure that the jarvis-esutils pod is restarted and the snapshots are created successfully.
- Run the following Elasticsearch query to check the snapshots: http://ES_ROUTE/_snapshot/Repository/_all
  Sample output:
  {
    "snapshots" : [
      {
        "snapshot" : "ao-snapshot_2020-09-29_23:00:00",
        "uuid" : "YwhqvlnSTgSJuKzBg1J4Sw",
        "version_id" : 7050199,
        "version" : "7.5.1",
        "indices" : [ "jarvis_jmetrics_1.0_1", "jarvis_kron", "jarvis_jmessages_1.0_1", "jarvis_healthcheck_2.0_1", "jarvis_metadata", ".kibana_tadmin-userstore", "jarvis_config", "audit", ".ca_es_acl", ".kibana_d1b9a308-b53b-4e89-ab1a-f490d11d1193" ],
        "include_global_state" : true,
        "state" : "SUCCESS",
        "start_time" : "2020-09-29T22:59:59.489Z",
        "start_time_in_millis" : 1601420399489,
        "end_time" : "2020-09-29T23:00:00.490Z",
        "end_time_in_millis" : 1601420400490,
        "duration_in_millis" : 1001,
        "failures" : [ ],
        "shards" : { "total" : 10, "failed" : 0, "successful" : 10 }
      },
      .....
    ]
  }
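If you prefer the command line, the same snapshot check can be run with curl; the route host and repository name below are placeholders that you substitute from your environment.
```
# List all snapshots in the repository through the Elasticsearch route.
curl -s "http://<ES_ROUTE>/_snapshot/<Repository>/_all?pretty"

# Optionally, filter the output for the snapshot state only.
curl -s "http://<ES_ROUTE>/_snapshot/<Repository>/_all?pretty" | grep '"state"'
```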
Backup the PostgreSQL Database
You can back up the database using the db-backup.tar package that is available in the tools directory. Perform the following steps on the system where the DX Platform 20.2 is installed.
Before you perform the following steps, ensure that on the NFS server (/var/nfs/dxi/backups/db), you apply the following permissions to the acc folder (a sketch follows the list):
- chmod 766
- chown -R 1010:1010
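A minimal sketch of applying these permissions, assuming the acc folder sits directly under the NFS export shown above (adjust the path to your layout):
```
# Path is illustrative; use the actual location of the acc folder on your NFS server.
cd /var/nfs/dxi/backups/db
chmod 766 acc            # owner: rwx, group and others: rw
chown -R 1010:1010 acc   # recursively assign ownership to UID/GID 1010
```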
Follow these steps:
- Navigate to the tools directory of the 20.2.1 downloaded zip file: cd <latest_install_dir>/tools
- Create a directory named db-backup: mkdir db-backup
- Copy the db-backup.tar package from the tools directory to the db-backup directory. For example: cp /root/<latest_install_dir>/tools/db-backup.tar /root/<latest_install_dir>/tools/db-backup
- Extract the db-backup.tar package.
- Run the run.sh script from the db-backup directory.
  For Kubernetes: OS=kubernetes ./run.sh
  For OpenShift: ./run.sh
To run the backup again, run the cleanup script from the tools directory first:
For Kubernetes: OS=kubernetes ./cleanup.sh
For OpenShift: ./cleanup.sh
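Putting the steps together, an end-to-end run on Kubernetes might look like the following sketch; the installation path and the tar extraction flags are assumptions, so adapt them to your environment.
```
# Assumed location of the extracted 20.2.1 installer.
cd /root/<latest_install_dir>/tools

# Prepare the backup working directory and unpack the backup scripts.
mkdir db-backup
cp db-backup.tar db-backup/
cd db-backup
tar -xf db-backup.tar

# Run the PostgreSQL backup (Kubernetes; omit OS=kubernetes on OpenShift).
OS=kubernetes ./run.sh

# To repeat the backup later, run the cleanup script from the tools directory first.
cd ..
OS=kubernetes ./cleanup.sh
```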
Backup the OI Metric Publisher Configurations
Run the following command from the db-backup directory that you created earlier to back up the APM OIMP configurations:
For Kubernetes: kubectl get cm apmservices-oimetricpublisher -n <namespace> -o yaml > oim_configmap.yaml
For OpenShift: oc get cm apmservices-oimetricpublisher -o yaml > oim_configmap.yaml
Backup and Restore the YAML Files
The tools directory in the installation zip file includes the backup and restore scripts to help you back up and restore the YAML files:
- backup-yamls.sh
- restore-dxi.sh
backup-yamls.sh
The backup-yamls.sh script that is in the tools directory creates a tarball of all the YAML files for the given namespace. Follow these steps:
- Navigate to the tools directory of the 20.2.1 downloaded zip file: cd <latest_install_dir>/tools
- Run the following command to back up the YAML files:
  ./backup-yamls.sh [-n <namespace>] [-f <filename>]
  Where:
  -n : proceed in the namespace <namespace> ("dxi" is the default)
  -f : save the backup as <filename> instead of yaml-backup-YYYY-MM-DD-HH-MM-SS.tgz in the current directory. <filename> can be /path/to/mybackup.tgz or backup.tgz in the current directory.
  We recommend that you save the backup process logs as shown in the following example:
./backup-yamls.sh -n dx -f backup-2020-Jun-28.tgz 2>&1 | tee backup-2020-Jun-28.log
Where 2>&1 redirects STDERR to STDOUT, and then tee sends the output both to the display and to the specified file. To append to the file instead of overwriting it, use tee -a <filename>.
You can also run this script without any parameters from the tools directory: ./backup-yamls.sh
You can also set up a scheduled job using the backup-yamls.sh script. Follow these steps:
- Run the following command to open the cron jobs editor: crontab -e
- Define the date and time to run the job in the editor. For example, to run the job at 4:05 am every day, add the following entry:
  5 4 * * * /bin/bash -c "/path/to/tools/backup-yamls.sh -f /path/to/backups/filename-$(date '+\%Y-\%m-\%d-\%H-\%M-\%S').tgz"
  Note: You must escape % in the command because % is a special character for the cron daemon.
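To confirm that the entry was saved, you can list the current user's scheduled cron jobs:
```
crontab -l
```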
restore-dxi.sh
The restore-dxi.sh script that is in the tools directory restores the backed up files in the cluster. Follow these steps:
- Navigate to the tools directory of the 20.2.1 downloaded zip file: cd <latest_install_dir>/tools
- Run the following command:
  ./restore-dxi.sh [-d|--backup-dir <DIRECTORY> | -f|--backup-file <TARBALL>]
  Where:
  --backup-dir: Unpack the backed up tgz to view the files and specify this directory. Takes the backups from the DIRECTORY (under the installation directory).
  --backup-file: Takes the backups from the TARBALL instead of the backup directory.
  You can use -d or -f to provide the location of the backed up data.
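For example, either invocation below restores the backed up YAML files; the tarball name is illustrative and should match the file that backup-yamls.sh produced.
```
# Restore from an unpacked backup directory:
./restore-dxi.sh -d <DIRECTORY>

# Or restore directly from the backup tarball:
./restore-dxi.sh -f yaml-backup-2020-06-28.tgz
```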
Backup the Management Modules
Perform the following steps to back up any customizations or changes made to the tenants.
Follow these steps:
- Navigate to the host where the NFS server is running.
- Create a folder under the configs directory for every tenant to be backed up. For example, create a folder named backup_production_tenant: mkdir <base-dir>/configs/backup_production_tenant
  Where <base-dir> is the NFS folder that you chose during the installation.
- For every tenant, copy the folder named customize into the backup_production_tenant folder that you created earlier. For example: cp -a <base-dir>/em/<tenantid>-<random_int>/001/customize <base-dir>/configs/backup_production_tenant
- Folder to back up a Standalone Tenant: <base-dir>/em/<tenantid>-<random_int>/001/customize
- Folder to back up a Small, Regular, Large, or Max Tenant: <base-dir>/em/<tenantid>-<random_int>/000/customize
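For example, the copy commands for the two tenant layouts differ only in the numbered subdirectory; substitute your actual <base-dir>, <tenantid>, and <random_int> values.
```
# Standalone tenant: the customize folder is under .../001/.
cp -a <base-dir>/em/<tenantid>-<random_int>/001/customize <base-dir>/configs/backup_production_tenant

# Small, Regular, Large, or Max tenant: the customize folder is under .../000/.
cp -a <base-dir>/em/<tenantid>-<random_int>/000/customize <base-dir>/configs/backup_production_tenant
```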
Backup the JavaScript Calculator Scripts
Perform the following steps to backup the JavaScript Calculator scripts.
Follow these steps:
- Navigate to the JavaScript Extensions UI (http://apmservices-gateway.<defaultSubDomain>/<tenantID>/apm/atc/?#/extensions).
- From the JavaScript Calculator list, locate the required JavaScript file.
- Click Download in the Action column for the required file.
Scale Down Additional APM OI Metric Publisher (OIMP) Pods
The upgrade process requires only one APM OIMP deployment to be running in the cluster. If there are multiple APM OIMP deployments, you must scale down the additional deployments using the steps described in this section. You can scale down the pods using the Command line or the UI.
Perform this task only if there are multiple APM OIMP deployments in the cluster.
Kubernetes - Command Line
Perform the following steps to scale down the additional deployments using the Command line:
Follow these steps:
- Run the following command to get the list of all the APM OIMP deployments: kubectl get deployment -n <namespace> | grep oimetricpublisher
- If the number of deployments is more than one, run the following command to scale down the additional APM OIMP deployments to zero: kubectl get deployment -n <namespace> | grep oimetricpublisher | awk '{print $1}' | xargs kubectl -n <namespace> scale deployment --replicas=0
  Ensure that only one APM OIMP pod is running.
- Run the following command to delete the scaled down deployments: kubectl delete deployment <deployment name> -n <namespace>
- Run the following command to delete the config map references that point to the deleted deployments: kubectl get configmaps -n <namespace> | grep oimetricpublisher | awk '{print $1}' | xargs kubectl delete configmap -n <namespace>
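After the scale-down and cleanup, a quick sanity check confirms that exactly one APM OIMP deployment and pod remain:
```
# Both commands should list exactly one oimetricpublisher entry.
kubectl get deployment -n <namespace> | grep oimetricpublisher
kubectl get pods -n <namespace> | grep oimetricpublisher
```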
Kubernetes - UI
Perform the following steps to scale down the additional deployments using the UI:
Follow these steps:
- Log in to the Kubernetes web console.
- Navigate to the project.
- Search for the APM OIMP deployment and check if the number of APM OIMP deployments is more than one.
- If yes, scale down those deployments to zero and ensure that only one deployment is scaled up.
- Delete the scaled down deployments.
- Delete the config map references that point to the deleted deployments.
OpenShift - Command Line
Perform the following steps to scale down the additional deployments using the Command Line:
Follow these steps:
- Log in to OpenShift: oc login -u <username> -p <password>
- Open the project: oc project <dxi project>
- Run the following command to get the list of all the APM OIMP deployments: oc get deployment | grep oimetricpublisher
- If the number of deployments is more than one, run the following command to scale down the additional APM OIMP deployments to zero: oc get deployment | grep oimetricpublisher | awk '{print $1}' | xargs oc scale deployment --replicas=0
- Run the following command to delete the scaled down deployments: oc delete deployment <deployment name>
  Ensure that only one APM OIMP pod is running.
- Run the following command to delete the config map references that point to the deleted deployments: oc get configmaps | grep oimetricpublisher | awk '{print $1}' | xargs oc delete configmap
OpenShift - UI
Perform the following steps to scale down the additional deployments using the UI:
Follow these steps:
- Log in to the OpenShift console.
- Navigate to the project.
- Navigate to the Applications > Deployments page.
- Ensure that only one APM OIMP deployment is scaled up. If there are additional deployments, scale down those deployments to zero.
- Delete the APM OIMP deployments that you scaled down.
- Navigate to the Resources > Config Maps page.
- Delete the config map references that point to the deleted deployments.
Additional Tasks for DX App Experience Analytics
Perform the following tasks to upgrade DX App Experience Analytics 20.2 to DX App Experience Analytics 20.2.1:
After the upgrade process is complete, you can expect some data loss.
Patch the Mappings
Before you upgrade DX App Experience Analytics, add the txn_start, service_name, urln, and parent_urln properties to the mappings.
Follow these steps:
- Access the Jarvis API UI (http://apis.<defaultSubDomain>). For more information, see the section. Ensure that you open the URL in a browser that brings up the Swagger APIs.
- Go to the Mapping section and select PATCH.
- Patch the following mappings for the axa_crashes index:
  {
    "product_id": "ao",
    "doc_type_id": "axa_crashes",
    "doc_type_version": "1",
    "mappings": {
      "data": {
        "dynamic": "strict",
        "_all": { "enabled": false },
        "properties": {
          "txn_start": { "type": "text", "analyzer": "keyword_lowercase" },
          "service_name": { "type": "text", "analyzer": "keyword_lowercase" }
        }
      }
    }
  }
- Verify that the mappings (txn_start and service_name) are successfully updated using the GET Mapping API. To verify, provide the following information in the API:
- product_id: ao
- doc_type_id: axa_crashes
- doc_type_version: 1
- Similarly, patch the mappings for the axa_sessions index:
  {
    "product_id": "ao",
    "doc_type_id": "axa_sessions",
    "doc_type_version": "1",
    "mappings": {
      "data": {
        "dynamic": "strict",
        "_all": { "enabled": false },
        "properties": {
          "network_events": {
            "type": "nested",
            "properties": {
              "urln": { "type": "keyword" },
              "parent_urln": { "type": "keyword" }
            }
          }
        }
      }
    }
  }
- Verify that the mappings (urln and parent_urln) are successfully updated using the GET Mapping API. To verify, provide the following information in the API:
- product_id: ao
- doc_type_id: axa_sessions
- doc_type_version: 1
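As an additional cross-check outside the Swagger UI, you can inspect the live mappings directly in Elasticsearch. The index name patterns below are assumptions based on the <product>_<doc_type>_<version> naming used elsewhere on this page and may differ in your deployment.
```
# Assumed index naming: ao_axa_crashes_1* and ao_axa_sessions_1*.
curl -s "$ES_HOST:$ES_REST_PORT/ao_axa_crashes_1*/_mapping?pretty"  | grep -A 2 '"txn_start"'
curl -s "$ES_HOST:$ES_REST_PORT/ao_axa_sessions_1*/_mapping?pretty" | grep -A 2 '"urln"'
```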
Update the session_event Index Mappings
Update the session event index mappings in Elasticsearch with the following custom properties: session_field1, session_field2, session_field3, session_field4, session_field5, and session_field6.
Follow these steps:
- Run the following command to get the template: curl -XGET $ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty > ao_axa_session_events_1_template.json
- Remove the following lines in the ao_axa_session_events_1_template.json file: "ao_axa_session_events_1" : { and the corresponding closing }
- Update the session_events mapping with the session_field1 through session_field6 properties as shown:
  "session" : {
    "properties" : {
      "duration" : { "type" : "long" },
      "custom" : {
        "type" : "object",
        "dynamic" : true,
        "properties": {
          "session_field1": { "type": "keyword" },
          "session_field2": { "type": "keyword" },
          "session_field3": { "type": "keyword" },
          "session_field4": { "type": "keyword" },
          "session_field5": { "type": "keyword" },
          "session_field6": { "type": "keyword" }
        }
      },
      "start" : { "format" : "epoch_millis", "type" : "date" },
      "end" : { "format" : "epoch_millis", "type" : "date" },
      "id" : { "type" : "keyword" }
    }
  }
- Run the following command to update the template: curl -XPUT $ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1 -H 'Content-Type: application/json' -d @ao_axa_session_events_1_template.json
- Verify that the template was updated using the following URL: http(s)://$ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty
  The template should display the session_field1 to session_field6 properties added to the mapping.
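Putting these steps together, a minimal sketch of the template update looks like this; the ES_HOST and ES_REST_PORT values are assumptions that you set for your environment.
```
export ES_HOST=<elasticsearch-host>   # assumption: your Elasticsearch host or route
export ES_REST_PORT=9200              # assumption: default Elasticsearch REST port

# 1. Fetch the current template into a local file.
curl -XGET "$ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty" > ao_axa_session_events_1_template.json

# 2. Edit the file: remove the wrapping "ao_axa_session_events_1" : { ... } object
#    and add session_field1 through session_field6 under session > custom > properties.

# 3. Push the updated template back to Elasticsearch.
curl -XPUT "$ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1" \
     -H 'Content-Type: application/json' \
     -d @ao_axa_session_events_1_template.json

# 4. Verify that the new properties are present.
curl -XGET "$ES_HOST:$ES_REST_PORT/_template/ao_axa_session_events_1?pretty" | grep session_field
```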
Configure Rollover of an Index
To meet your indexing and search performance requirements and manage the resource usage, you can write to an index until some threshold is met and then create a new index and start writing to it instead.
Follow these steps:
- Open the jarvis-esutils pod.
- In the Environment tab, set the INDEX_LIMIT environment variable as ao_axa_session_events_1=1mb. Where:
- ao is the product ID.
- axa_session_events is the doc_type_id.
- 1 is the doc_version.
- Restart the jarvis-esutils pod.
- After the rollover is completed, delete the value for the INDEX_LIMIT variable.
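To confirm that the rollover produced a new backing index before you clear INDEX_LIMIT, you can list the matching indices; the index name prefix is an assumption based on the ao_axa_session_events_1 value used above.
```
# Each rollover should add another index matching this prefix.
curl -s "http://<ES_ROUTE>/_cat/indices/ao_axa_session_events_1*?v"
```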