Monitor Technologies Using the RESTMon Probe
You can now monitor any technology or device data using REST APIs. Using the templates that UIM provides, you can upload a schema that details the QoS and aggregation logic for the HTTP/HTTPS REST endpoints. You can define metrics and alarms, and you can populate CABI dashboards for the monitored devices. From the Settings page, you can download the default template to build your own schema, or you can use one of the available out-of-the-box schemas.
Revision History
This section describes the history of revisions for this probe.
Note: Support cases may not be viewable to all customers.
| Version | Description | State | Date |
| --- | --- | --- | --- |
| 1.41 | (Included in UIM 20.3.0) What's New | GA | September 2020 |
| 1.38 | What's New | GA | April 2019 |
| 1.20 | Initial release of the probe. | GA | October 2018 |
Prerequisites
- Download and install the RESTMon 1.38 probe from the CA Support site.
- Ensure that CA UIM 9.0.2 or later is installed in your environment.
- Ensure that the Operator Console is available in your environment.
- Ensure that MCS templates are available.
- Ensure that Java 8 is installed on the robots.
Configure the REST Clients
Configure and upload the JSON file for each REST client that you want to monitor.
Step 1: Log in to UMP and download the schema
- Log in to UMP and navigate to Actions, Operator Console.
- Click Settings, RESTMon.
- Download the schema template in JSON format.
Step 2: Configure the Monitoring Technology Details
Define the following sections to customize the schema.
name
Each schema that you define must have a name associated with it. After you download the JSON schema, replace the name localservice with the name of the technology, such as elasticsearch:
"elasticsearch": {
calculated_methods
In the calculated_methods section, define a method that converts the QoS values returned by the JSON into the correct units. For example, to convert values from kilobytes (KB) to gigabytes (GB):
"calculated_methods": {"convertKBtoGB": "/ 1048576;"},
Using calculated_methods in metrics definition
For example, the JSON in the sample below returns a value of 27992641617920 bytes, and the calculated method $convertBytestoGB is defined as "/ 1073741824". When the calculation is combined with the parsed value, the expression (27992641617920 / 1073741824) is sent to the JavaScript engine, which returns a final value of approximately 26070.18. This result is then submitted as the QoS value in GB.
"calculated_methods": [{"convertBytestoGB": "/ 1073741824"}],"metrics": [{"calculation": "$value $convertBytestoGB","xml_ns": "","attributes": {"uim": {"defaultpublishing": "true","qos_name": "QOS_ES_NS_IND_STORE_SIZE","qos_desc": "Store Size","qos_abbr": "GB","metric_type": "4.13.4.2.1.1:80","qos_unit": "GB","qos_value": "$.[*].input_per_sec","source": "$['nodes'][*]['host']","target": "$['nodes'][*]['host']"},"value": "$['nodes'][*]['indices']['store']['size_in_bytes']","url": "nodestats","group": "Node Stats Indices"},]
urls
In the urls section, define the list of REST endpoints that are used to gather metrics and node information.
Define the following parameters in the section:
| Attributes | Description |
| --- | --- |
| xml_ns | (Optional) XML namespace to use when parsing the node information retrieved from the referenced url. |
| src | (Optional) ID of a sibling url that contains instance information that is needed for this url. Used with the var field. |
| var | (Optional) JPath or XPath directive that is used with the src value to parse the information returned by the src endpoint and substitute it for the $var tokens in the url field. Use this, for example, when a src url returns a list of hosts, nodes, or volumes and each instance value is targeted separately to get more detailed information. |
| id | Unique name for the url information. |
| url | REST endpoint that provides metric and node information. |
For each URL that you want to monitor, create an entry and define the value in the url parameter. Verify that all the URLs are valid and return a result when you access them from a browser.
Sample URL section
"urls": [{"xml_ns": "","src": "","var": "","id": "clusterhealthindices","url": "/_cluster/health?level=indices"},{"src": "","xml_ns": "","var": "","id": "nodestats","url": "/_nodes/stats"},{"src": "","xml_ns": "","var": "","id": "clusterstats","url": "/_cluster/stats"},{"src": "","xml_ns": "","var": "","id": "indexstats","url": "/_all/_stats"}],
definition
In the definition section, define the authentication and connection-related details.
Define the following parameters in the section:
| Section | Attributes | Default Value | Description |
| --- | --- | --- | --- |
| | resource_category | QOS_APPLICATION | (Optional) User-defined category that is used in publishing metrics. |
| defaults | port | 80/443 | Default port to use for endpoints, depending on the type that is specified (http/https). |
| defaults | interval | 60 | How often the REST endpoints are polled, in seconds. |
| defaults | httptimeout | 30000 | How long to wait for a response from a REST endpoint, in milliseconds. |
| | auth | none | Default authentication mechanism to use. Supported mechanisms: basic, digest, ntlm, token, bearer, urltoken, OAuth2. |
| | xml_ns | | (Optional) XML namespace to use when parsing the node information retrieved from the referenced url. |
| | name | <schema name> | Default name to use for the profile instance, which is typically the schema name. The schema root node name is the same as the name attribute in the definition section of the schema ($.name = $.{name}.definition.name). |
| | type | http | Type of REST connection. Valid options are http and https. |
| | addedProfileFields | | (Optional) Define the attributes in this section if you want custom fields (with or without default values) to appear when you create a profile. For example, to define the fields Authority (with a default value), Scope (with a default value), Client ID, Client Secret, Client Secret ID, and Project ID, you can define the attributes as shown in the sketch after this table. These fields appear in the UI when you create the profile from the Monitoring tab. |
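The original example for addedProfileFields is not reproduced here. As a rough, hypothetical sketch only (the exact syntax in your template may differ), a section that defines the fields listed in the table could look like the following, assuming each entry maps a field label to an optional default value; the placeholder values are illustrative:
"addedProfileFields": {
  "Authority": "<default authority URL>",
  "Scope": "<default scope>",
  "Client ID": "",
  "Client Secret": "",
  "Client Secret ID": "",
  "Project ID": ""
}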
Sample definition section
"definition": {"node": "","resource_category": null,"defaults": {"port": 9200,"interval": 60,"httptimeout": 30000},"auth": "basic","xml_ns": "","name": "elasticsearch","type": "http"},
metrics
In this section, define the metrics that you want to collect.
Define the following parameters in the section:
| Section | Attributes | Default Value | Description |
| --- | --- | --- | --- |
| | xml_ns | | (Optional) XML namespace to use when parsing the node information retrieved from the referenced url. |
| | calculation | | (Optional) Calculation to apply to the raw value to produce the metric. This parameter can reference a calculated_method. |
| attributes > uim | | | UIM-specific fields that are used when publishing the metric value. |
| | qos_name | | QoS name, such as QOS_HTTP_STATUS. You can define your own QoS names that you want to publish. |
| | qos_desc | | A description of the QoS defined in the qos_name attribute, such as HTTP response status. |
| | qos_abbr | | The abbreviated QoS name that is used in the GUI, such as State. For more details on the abbreviations, refer to the spreadsheet available in the GUI. |
| | metric_type | | Metric type, such as 2. Used with the CI type when generating a UIM metric instance to produce a type value such as 2.2.2:2. By default, the available CI type is 9.1.1. For more information about CI types and metrics, see Declaring Inventory Metrics and Bulk Configuration and refer to the SUPPORTED_CI_METRIC_TYPES.XLSX file available in the GUI. To add a custom CI type, contact CA Support. |
| | qos_unit | | The measurement unit of the QoS, such as State. For more information about the supported QoS units, refer to the spreadsheet available in the GUI. |
| | source | | Device from which the metric value was collected. |
| | target | | Device against which the metric value was collected. |
| | defaultpublishing | true | (Optional) Used in building UIM MCS templates. |
| | conversion | | Converts a string value that the schema returns into a numeric value before it is saved to the database. For example, you can define the conversion values as Partial-Fault:0, Healthy:1, Degraded:2, Failed:3, Default:-999. In this case, if the schema returns the string Healthy, it is converted into the numeric value 1 and saved to the database, which indicates the system health when the metrics were collected. See the illustrative sketch after the sample metrics section below. |
| | value | | The JPath or XPath used to parse the REST endpoint response data for the metric. |
| | url | | The id that references an entry in the urls section for the REST endpoint. |
| | group | | (Optional) User-defined group tag. |
Sample metrics Section
"metrics": [{"calculation": "","xml_ns": "","attributes": {"uim": {"qos_name": "QOS_HTTP_STATUS","qos_desc": "HTTP response status","qos_abbr": "State","metric_type": "2.2.2.2:2","qos_unit": "State","source": "%hostname","target": "%urlid","defaultpublishing": "true"},"value": "%httpstatus","url": "%urlid","group": "Connections"},
calculated_metrics
In this section, define the KPIs that are derived from the raw data to produce metrics to collect.
| Attribute | Description |
| --- | --- |
| values | Array of name/value entries that define the derived KPIs. |
| name | Unique name for the metric that is used when publishing the value. |
| value | The JPath or XPath used to parse the REST endpoint response data for the metric. This parameter is used in the calculation field expression to derive the KPI value. |
Sample calculated_metrics section
"calculated_metrics": [{"calculation": "$fetch_total / ($fetch_time_in_millis/1000)","xml_ns": "","values": [{"name": "$fetch_total","value": "$['nodes'][*]['indices']['search']['fetch_total']"},{"name": "$fetch_time_in_millis","value": "$['nodes'][*]['indices']['search']['fetch_time_in_millis']"}],"attributes": {"uim": {"defaultpublishing": "true","qos_name": "QOS_ES_NS_IND_SEARCH_AVERAGE_FETCH_TIME","qos_desc": "Search Average Fetch Time","qos_abbr": "s","metric_type": "4.13.4.2.1.1:53","qos_unit": "s","source": "%hostname","target": "$['nodes'][*]['host']"},}},"url": "nodestats","group": "Node Stats Indices"}
Step 3: Browse and upload the customized schema
You can upload multiple schemas of the same technology type. The schema file names must follow this format:
<schema-name>_schema.json (for example, elasticsearch_schema.json)
Step 4: Validate the schema
Before you validate and deploy the schema, define the Friendly Name, which the probe also uses to define the template name. Click Proceed.
The schema is validated for the following conditions:
- It is a valid JSON file with valid syntax.
- If the file contains an APM schema, it also includes UIM metric definitions.
- The schema root node, which is also the probe name, contains only lowercase characters, numbers, hyphens, or underscores.
- The schema root node name is the same as the name attribute in the definition section of the schema ($.name = $.{name}.definition.name).
The upload fails if any of the conditions are not met. You can view the logs to debug the errors.
Step 5: Verify the configuration in Monitoring tab
Verify that the corresponding technology appears in the Monitoring tab in UMP.
- Log in to UMP and navigate to Groups, Operating System, Robot where you deployed the custom probe.
- Click the Monitoring tab and view the custom probe. For example, using the default schema, you can create a custom probe to monitor Elasticsearch servers.
Create Profiles and Monitor the Technologies Using the RESTMon Probe
To start monitoring the technology using the REST API, create a profile from the Monitoring tab in UMP and then enable or disable the required metrics to collect the required data.
Follow these steps:
- As a Tenant Administrator, log in to UMP, navigate to Groups, Operating System, Robot, and then click the Monitoring tab.
- Select and expand the node for the custom probe and create the profile.
- Navigate to a sub-profile and activate or deactivate the required metrics. To enable threshold alarms, configure the policy_mode_enabled parameter in the MCS configuration file and set the value to false. For more information about configuring alarm thresholds, see Configuring Alarm Thresholds in MCS. Alternatively, you can create alarm policies and configure thresholds from the Operator Console.
- Navigate to the Inventory, search for the hostname that you defined while creating the profile, and then click the Metrics tab or the Alarms tab to view the collected metrics and alarms.
Troubleshooting
Symptom:
When you define a friendly name for a schema, you may encounter an error.

Solution:
If you encounter such an error, define another friendly name and deploy the schema.
Symptom:
When you upload a JSON schema, the validation fails with the following error: [<schema_name>.json] is/are missing UIM metric definitions syntax.
Solution:
This error typically occurs when the UIM metrics definition section is missing from the JSON file or contains incorrect values in the operator and severity attributes. To resolve the error, edit the schema file and perform either of the following actions:
- In the metrics section of the schema, define only one value for each of the operator and severity attributes. These attributes do not support an array of values.
Or
- Remove the following attributes from the metrics section in the schema: thresholdenabled, operator, severity, custom_message, custom_clear_message
Symptom:
Errors occur when configuring the RESTMon probe.
Solution:
Analyze the following log files that are located at $UIM_installation_dir/Nimsoft/probes/services/wasp:
- operatorconsole.log
- wasp.log
Further, you can also capture the HTTP packet requests for specific details on the errors by creating a .har file from the browser for the following endpoints:
- http://$HOSTNAME/operatorconsole_portlet/api/v1/restmon/validateSchema?fileNames=${commaSeparatedFileName}&checkIfExist=true/false&friendlyName=${friendlyName}
- http://$HOSTNAME/operatorconsole_portlet/api/v1/restmon/uploadSchema
- http://$HOSTNAME/operatorconsole_portlet/api/v1/restmon/downloadSchema/{resourceName}
Symptom:
Errors occur when creating a monitoring profile using an MCS template.
Solution:
Analyze the mon_config_service.log file that is located at $UIM_installation_dir/Nimsoft/probes/services/mon_config_service/
Symptom:
The probe does not collect QoS data for the metrics that are defined in the schema file.
Solution:
Verify the following:
- The status that QOS_HTTP_STATUS returns is 200. If not, check the logs to troubleshoot.
- The endpoint is reachable and returns the expected response when you access it with a REST client.
- A calculated_method is defined where one is required with the QoS definition to publish a derived metric value from the raw value.