Post Installation Tasks

This section describes the configuration tasks that are required after installation:
Tasks for DX Operational Intelligence
Perform the following task for the DX OI installation:
Integrate Syslog Based Alerts with DX OI
Syslog messages can be ingested into Elasticsearch as alerts. Create a profile in DX RESTmon to ingest Syslog messages into DX OI as alerts based on severity.
Follow these steps:
  1. Create a custom profile and schema using DX RESTmon. Sample files are attached for your reference.
  2. Log in to the system where the Syslog messages are available.
  3. Open the /etc/rsyslog.conf file and edit the tenant_id value and <log collector machine IP> as shown:
    template(name="ls_json" type="list" option.json="on") {
      constant(value="{")
      constant(value="\"syslog_timestamp\":\"")
      property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"syslog_pri\":\"")
      property(name="pri")
      constant(value="\",\"syslog_ver\":\"1")
      constant(value="\",\"tenant_id\":\"<FB9F9490-74E8-4DB3-9E0A-03C2966AC92F>")
      constant(value="\",\"syslog_message\":\"")
      property(name="msg")
      constant(value="\",\"host\":\"")
      property(name="hostname")
      constant(value="\",\"syslog_severity\":\"")
      property(name="syslogseverity-text")
      constant(value="\",\"syslog_facility\":\"")
      property(name="syslogfacility-text")
      constant(value="\",\"syslog_severity_code\":\"")
      property(name="syslogseverity")
      constant(value="\",\"syslog_facility_code\":\"")
      property(name="syslogfacility")
      constant(value="\",\"syslog_program\":\"")
      property(name="programname")
      constant(value="\",\"syslog_pid\":\"")
      property(name="procid")
      constant(value="\",\"syslog_hostname\":\"")
      property(name="$myhostname")
      constant(value="\",\"syslog_priority\":\"")
      property(name="syslogpriority")
      constant(value="\"}\n")
    }
    *.* @@<log collector machine IP>:6514;ls_json
  4. Restart the rsyslog service:
    systemctl restart rsyslog.service
  5. Update the DX OI Log Collector pod.
    1. Log in to your Kubernetes or OpenShift environment.
    2. Search for the
      doi-logcollector
      pod.
    3. Open the pod
      Terminal
      .
    4. Open the
      /logcollector_config/conf/logcollector.conf
      file.
    5. Add the following content at the end of the file:
      http {
        http_method => "post"
        url => "http://<restmon machine>:<port>/restmon/api/v1/logs?profileName=<profilename>&schemaName=<schemaname>"
        format => "message"
        headers => { "Content-Type" => "application/json" }
        content_type => "application/json"
        message => '%{message}'
      }
    6. Restart the Log Collector pod.
  6. Log in to the Syslog system and run the following command:
    systemctl restart chronyd.service
    Verify that the alarms are displayed on the DX OI Alarms page.
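Each Syslog message that reaches the Log Collector is serialized by the ls_json template into one JSON record per line. The following Python sketch shows the shape of that record, using the field names from the template above; the values are illustrative placeholders, not real syslog output.

```python
import json

# Sketch of one JSON record emitted by the ls_json rsyslog template.
# Field names come from the template; values here are made up for illustration.
record = {
    "syslog_timestamp": "2024-01-15T10:12:33.000000+00:00",  # timereported, RFC 3339
    "syslog_pri": "11",                                      # pri
    "syslog_ver": "1",
    "tenant_id": "<FB9F9490-74E8-4DB3-9E0A-03C2966AC92F>",   # replace with your tenant_id
    "syslog_message": "disk usage above threshold",
    "host": "app-server-01",
    "syslog_severity": "err",        # syslogseverity-text; drives the alert severity
    "syslog_facility": "user",
    "syslog_severity_code": "3",
    "syslog_facility_code": "1",
    "syslog_program": "myapp",
    "syslog_pid": "4242",
    "syslog_hostname": "app-server-01",
    "syslog_priority": "11",
}
print(json.dumps(record))
```

DX RESTmon maps the syslog_severity field to the alert severity, which is why the profile can filter which messages become alerts.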
Tasks for DX App Experience Analytics
Perform the following task for DX App Experience Analytics:
Run the Database Queries
Perform the following steps only for a DX App Experience Analytics installation. Run the following queries from the
dxi-postgresql
pod.
Ensure that you are connected to the database.
Insert Queries
INSERT INTO MDO_EXT_PRODUCT_CONFIG (PRODNAME, PRODVERSION, CONFIGKEY, CONFIGVALUE, DESCRIPTION)
VALUES ('APM', '20.2', 'APMIsolationWithWebappName', '/map?fa=[{"n":"apf","l":"ATC","o":"AND","v":["{apmWebappName}"],"b":1}]&ep=0&g=[{"attributeName":"applicationName","layer":"ATC"},{"attributeName":"hostname","layer":"ATC"},{"attributeName":"agent","layer":"ATC"}]&cha=0&cht=0&chs=0&m=H&l=ATC&u=&range=0&dvn=applicationName&dvv={apmWebappName}', 'APM Isolation url template for webappname');

INSERT INTO MDO_EXT_PRODUCT_CONFIG (PRODNAME, PRODVERSION, CONFIGKEY, CONFIGVALUE, DESCRIPTION)
VALUES ('APM', '20.2', 'APMIsolationWithHostName', '/map?fa=[{"n":"hostname","l":"ATC","o":"AND","v":["{agentHostName}"],"b":1}]&ep=0&g=[{"attributeName":"hostname","layer":"ATC"},{"attributeName":"agent","layer":"ATC"}]&cha=0&cht=0&chs=0&m=H&l=ATC&u=&range=0&dvn=hostname&dvv={agentHostName}', 'APM Isolation url template for hostname');

INSERT INTO MDO_EXT_PRODUCT_CONFIG (PRODNAME, PRODVERSION, CONFIGKEY, CONFIGVALUE, DESCRIPTION)
VALUES ('APM', '20.2', 'APMUrl', '/apm/appmap/ApmServer/#/map?ep=0&m=H&cht=0&chs=1&cha=0&u=&ts1={starttime}&ts2={endtime}&fl=ATC&g=PE.DEFAULT.GroupingService.Group.CompoundOverview&vertexIdsLayer=ATC&fa=[{"n":"bsf","o":"AND","v":["{bsname}"]},{"n":"trf","o":"AND","v":["{btname}+via+{platform}+{platform_major_version}"]}]', 'APM url template for transaction click through');

INSERT INTO MDO_EXT_PRODUCT_CONFIG (PRODNAME, PRODVERSION, CONFIGKEY, CONFIGVALUE, DESCRIPTION)
VALUES ('APM', '20.2', 'APMCorrUrl', '/apm/appmap/ApmServer/#/map?m=H&corIds=CorCrossProcessData:{corrId}&fullscreen&displayCake', 'APM url template for network click through');
Update Queries
UPDATE MDO_EXT_PRODUCT_CONFIG SET CONFIGVALUE='/apm/atc/#/map?ep=0&m=H&cht=0&chs=1&cha=0&u=&ts1={starttime}&ts2={endtime}&fl=ATC&g=PE.DEFAULT.GroupingService.Group.CompoundOverview&vertexIdsLayer=ATC&fa=[{"n":"bsf","o":"AND","v":["{bsname}"]},{"n":"trf","o":"AND","v":["{btname}+via+{platform}+{platform_major_version}"]}]' WHERE PRODVERSION='20.2' and CONFIGKEY='APMUrl';

UPDATE MDO_EXT_PRODUCT_CONFIG SET CONFIGVALUE='/apm/atc/#/map?m=H&corIds=CorCrossProcessData:{corrId}&fullscreen&displayCake' WHERE PRODVERSION='20.2' and CONFIGKEY='APMCorrUrl';

UPDATE mdo_ext_product_config SET configvalue='/link/apm_isolation_view?t_id_cohort={tenant_id}&fa=[{"n":"apf","l":"ATC","o":"AND","v":["{apmWebappName}"],"b":1}]&ep=0&g=[{"attributeName":"applicationName","layer":"ATC"},{"attributeName":"hostname","layer":"ATC"},{"attributeName":"agent","layer":"ATC"}]&cha=0&cht=0&chs=0&m=H&l=ATC&u=&range=0&dvn=applicationName&dvv={apmWebappName}' WHERE prodversion='20.2' and configkey='APMIsolationWithWebappName';

UPDATE mdo_ext_product_config SET configvalue='/link/apm_isolation_view?t_id_cohort={tenant_id}&fa=[{"n":"hostname","l":"ATC","o":"AND","v":["{agentHostName}"],"b":1}]&ep=0&g=[{"attributeName":"hostname","layer":"ATC"},{"attributeName":"agent","layer":"ATC"}]&cha=0&cht=0&chs=0&m=H&l=ATC&u=&range=0&dvn=hostname&dvv={agentHostName}' WHERE prodversion='20.2' and configkey='APMIsolationWithHostName';
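The CONFIGVALUE columns above store URL templates with {placeholder} tokens (for example {corrId} or {apmWebappName}) that are filled in with actual values at click-through time. The following Python sketch illustrates the substitution using the APMCorrUrl template from the queries above; it is an illustration only, not the product's own code, and uses simple string replacement because the templates also contain literal braces in their JSON fragments.

```python
# Illustration: substitute a placeholder in the APMCorrUrl template above.
# str.replace is used (rather than str.format) because other templates
# contain literal {...} JSON that must not be interpreted.
template = '/apm/atc/#/map?m=H&corIds=CorCrossProcessData:{corrId}&fullscreen&displayCake'
url = template.replace("{corrId}", "abc123")  # "abc123" is a made-up correlation id
print(url)
# → /apm/atc/#/map?m=H&corIds=CorCrossProcessData:abc123&fullscreen&displayCake
```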
Configure the Data Purge
As an Administrator, purge the data periodically for smooth operation. Run the data purge script to ensure that old data is purged. The Jarvis data purge is a cron job that runs based on the
jarvis-es-utils
service. After the default retention period is set in the
utils.properties
file, the cron job purges the data that is older than the specified value. This helps maintain healthy disk space when large amounts of data flow into Jarvis.
Perform these steps only if you want to change the default retention period. By default, the retention period is 45 days.
The following procedure describes the steps for OpenShift. You can perform similar steps in the Kubernetes console.
Follow these steps:
  1. Navigate to the
    Deployments
    page in the
    OpenShift
    web console.
  2. Open the
    jarvis-esutils
    pod.
  3. In the
    Environment
    tab, change the value of
    DEFAULT_RETENTION_PERIOD
    as required.
  4. Scale down and scale up the pod again.
For more information about the notations in the utils.properties file, see the documentation.
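To illustrate what the retention setting controls, the following Python sketch computes the purge cutoff from a retention period of 45 days (the documented default). This is an illustration of the retention arithmetic only; the exact purge mechanics inside jarvis-es-utils may differ.

```python
from datetime import datetime, timedelta, timezone

# Illustration only: with DEFAULT_RETENTION_PERIOD set to 45 (days),
# the purge job removes data older than this cutoff.
retention_days = 45
now = datetime(2024, 3, 1, tzinfo=timezone.utc)  # fixed "now" for a reproducible example
cutoff = now - timedelta(days=retention_days)
print(cutoff.date())  # → 2024-01-16; data older than this date is purged
```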