Prometheus and Grafana Integration

As a database administrator (DBA), you want to be able to query, visualize, alert on, and explore the metrics that are most important to you. SYSVIEW for Db2 provides integration with Prometheus through a Database Management (DBM) Data Service REST API endpoint, prometheus/generic. You can configure Prometheus to scrape the SYSVIEW for Db2 data through this endpoint. When Prometheus is successfully configured to scrape and store the data, you can use Grafana with Prometheus to visualize the data.
For the list of metrics that are available with the SYSVIEW for Db2 REST API, see REST API Requests.
Architecture Diagram
The following figure shows an overview of the SYSVIEW for Db2 integration with Prometheus and Grafana.
Prometheus Grafana Integration Architecture
The integration includes the following components:
  • The following components run on z/OS:
    • SYSVIEW for Db2 Data Collectors
      Provide the direct connection to Db2. The data collectors run continuously as started tasks that collect, keep, and process Db2 performance data.
    • Database Management Data Service (REST API Service)
      Provides a RESTful API that enables access to Db2 data and metrics that are collected by the DBM.
    • (Optional) API ML
      Consolidates mainframe RESTful API services at a single secure point of access, including the SYSVIEW for Db2 REST API.
  • The following components run on the local system (non-z/OS):
    • Prometheus
      The monitoring system and time series database.
    • Grafana
      The open-source platform for monitoring and observability.
Configure Prometheus
Prometheus records real-time metrics in a time series database and collects them using an HTTP pull model. You must configure Prometheus to scrape metrics from the SYSVIEW for Db2 REST API endpoint at 1-minute intervals.
To simplify the setup, we provide a sample Prometheus configuration file. Download, customize, and use this file to start the Prometheus data collection.
Follow these steps:
  1. Download and extract the latest release of Prometheus for your platform.
    The minimum supported version for integration with SYSVIEW for Db2 is Prometheus 2.5.0.
  2. Download the sample Prometheus configuration file from the USS-mounted file system.
    • The sample file is encoded in ASCII/UTF-8. Download the file in binary mode so that the encoding is preserved.
    • If you cannot locate the mountpoint directory, contact a z/OS programmer who performs DBM installations and maintenance on your z/OS systems. The USS mountpoint for the zFS file system is defined during the SMP/E installation.
  3. Customize the file to connect it to your DBM Data Service. For more information, see Customize the Prometheus Configuration File.
  4. Start Prometheus with the sample configuration file.
    prometheus --config.file=prometheus.yml
    If you start Prometheus by launching the executable directly, prometheus.yml must reside in the same folder as prometheus.exe; Prometheus parses prometheus.yml as its default configuration file.
  5. Verify that http://localhost:9090 opens the Prometheus user interface.
    This URL also serves as a data source for Grafana.
  6. Open the Targets page (http://localhost:9090/targets) to verify that Prometheus collects the SYSVIEW for Db2 data.
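The verification in steps 5 and 6 can also be scripted. The following sketch queries the Prometheus HTTP API for the health of the configured scrape targets; the default local Prometheus base URL is assumed, and the function name is illustrative.

```python
import json
import urllib.error
import urllib.request

def scrape_target_health(base_url="http://localhost:9090"):
    """Return (scrapeUrl, health) pairs for every active Prometheus scrape
    target, or None if Prometheus is not reachable at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/v1/targets", timeout=5) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError):
        return None  # Prometheus is not running at base_url
    return [(t["scrapeUrl"], t["health"]) for t in data["data"]["activeTargets"]]
```

A healthy setup reports health "up" for the DBM Data Service target.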
Customize the Prometheus Configuration File
Customize the Prometheus configuration according to your DBM Data Service configuration.
The following example shows the parameters that enable you to integrate Prometheus with the DBM Data Service:
global:
  scrape_interval: 1m
  scrape_timeout: 30s
  external_labels:
    monitor: 'sysview-for-db2'
scrape_configs:
  - job_name: 'SSID'
    scheme: https
    basic_auth:
      username: MFUSERID
      password_file: pwd_file.txt
    metrics_path: "/dbm/api/v1/idb2/prometheus/generic"
    params:
      function: ['DSAISTD,DSAISTDX,DSAISTDA,DSAISTDB,DSAISTDD,DSAISTDG,DSAIED,DSAIEDA,DSAISACD']
      delta: ['true']
      ssid: ['SSID']
    static_configs:
      - targets: ['dbmds.lpar.hostname:port']
The global section specifies the global configuration parameters:
  • scrape_interval
    Specifies the global scrape interval.
    Set this value to 1 minute so that it matches the interval at which SYSVIEW for Db2 produces data.
  • scrape_timeout
    Specifies the global scrape timeout. The recommended value is 30 seconds.
  • external_labels
    Specifies the labels for communication with external systems.
The scrape_configs section specifies a set of targets and parameters to scrape the data. The scrape configuration can contain multiple scraping instances. Typically, there is one instance for every SYSVIEW for Db2 data collector (SSID).
  • job_name
    Specifies a unique scraping instance name.
  • scheme
    Specifies the TCP/IP protocol scheme to access the REST API. Typically, configure the scheme as https.
  • basic_auth
    Specifies the basic authentication details to access the DBM REST API:
    • username
      A mainframe user ID to connect to SYSVIEW for Db2. This user ID requires access only to SYSVIEW for Db2.
    • password_file
      The name of the text file with the mainframe user ID password.
      Alternatively, you can specify the password string directly with the password parameter.
  • metrics_path
    Specifies the scraping endpoint path, which is /dbm/api/v1/idb2/prometheus/generic. Customize this path only if you use the API Mediation Layer for z/OS or another reverse-proxy server that changes the URL.
  • params
    Specifies the scraping endpoint parameters. For more information, see Swagger JSON File in Using the REST API.
    • function
      Specifies a list of SYSVIEW for Db2 Data Service IQL request names that feed the data. Multiple requests are delimited by commas. Change this setting only if needed.
    • delta
      Specifies that the IQL requests that feed the data produce metric values as 1-minute interval deltas. Set this value to true.
    • ssid|agent|environment
      Specifies the target SYSVIEW for Db2 data collector.
      • agent
        —Specifies the Xnet agent ID. Required unless ssid is specified.
      • ssid
        —Specifies the Db2 subsystem ID. Required unless agent is specified.
      • environment
        —Specifies the Xnet environment.
      Typically, you specify only the ssid parameter. This specification is sufficient for sites with one SYSVIEW for Db2 data collector per Db2 subsystem and one Xmanager per z/OS LPAR.
  • static_configs: targets
    Specifies the host name and port of the DBM Data Service instance.
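To see what Prometheus actually requests with these settings, you can assemble the scrape URL and Authorization header yourself. The sketch below mirrors the sample configuration; the port number and the password are placeholder values for illustration, not real credentials.

```python
import base64
from urllib.parse import urlencode

# Placeholder values taken from the sample prometheus.yml (port 8080 is illustrative).
target = "dbmds.lpar.hostname:8080"
params = {
    "function": "DSAISTD,DSAISTDX,DSAISTDA,DSAISTDB,DSAISTDD,DSAISTDG,DSAIED,DSAIEDA,DSAISACD",
    "delta": "true",
    "ssid": "SSID",
}

# scheme + static_configs target + metrics_path + params form the scrape URL.
scrape_url = f"https://{target}/dbm/api/v1/idb2/prometheus/generic?{urlencode(params)}"

# basic_auth becomes a standard HTTP Basic Authorization header
# ("secret" stands in for the contents of pwd_file.txt).
credentials = base64.b64encode(b"MFUSERID:secret").decode("ascii")
auth_header = f"Basic {credentials}"
```

Prometheus sends an equivalent GET request with this header once per scrape_interval.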
Configure Grafana
Grafana lets you visualize and explore the state of your system. Use Grafana with Prometheus as a data source to display the SYSVIEW for Db2 data.
Follow these steps:
  1. Download and install the latest release of Grafana for your platform.
  2. Verify that http://localhost:3000 opens the Grafana user interface.
    • By default, Grafana is available locally at the URL http://localhost:3000/.
    • On a Windows platform, you might need to assign special permissions to use the default Grafana port 3000. You can change the default port in the custom.ini file. For more information, see the Grafana documentation.
  3. Add Prometheus as a data source in Grafana:
    1. Select Configuration.
    2. Select Data Sources.
    3. Select Add Data Source.
    4. Select Prometheus.
    5. Configure the Prometheus data source name.
      • The sample dashboard files that we provide reference a specific data source name. If you choose another name, you must edit the data source name in the sample dashboard files.
      • By default, Prometheus runs on port 9090 and is available locally on the URL http://localhost:9090.
    6. Select
      Save & Test
    For more information about the data source configuration parameters, see the Prometheus documentation.
  4. Download the following SYSVIEW for Db2 dashboard files and import them to Grafana:
    • The dashboard files are encoded in ASCII/UTF-8. Download the files in binary mode so that the encoding is preserved.
    • These dashboards are compatible with Grafana 7.0. You might have to install missing plugins.
    • uss-mount-../PXM/ds/config/idb2_grafana_gm_v1.0.json—Group by Member View
    • uss-mount-../PXM/ds/config/idb2_grafana_gw_v1.0.json—Group-Wide View
    • uss-mount-../PXM/ds/config/idb2_grafana_mv_v1.0.json—Member View
    • uss-mount-../PXM/ds/config/idb2_grafana_bp_v1.0.json—Buffer Pool Statistics
  5. (Optional) Customize the dashboards as needed.
  6. Verify that the dashboards display the correct data.
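If you name the Prometheus data source differently from what the sample dashboards expect, the dashboard JSON can be adjusted programmatically instead of by hand. This sketch uses a minimal, made-up dashboard structure; the real sample files contain many more panels, but the same walk-and-replace approach applies.

```python
import json

def rename_datasource(node, old: str, new: str):
    """Recursively replace data source references in a Grafana dashboard structure."""
    if isinstance(node, dict):
        return {
            k: (new if k == "datasource" and v == old else rename_datasource(v, old, new))
            for k, v in node.items()
        }
    if isinstance(node, list):
        return [rename_datasource(item, old, new) for item in node]
    return node

# Minimal stand-in for a sample dashboard file (structure is hypothetical).
dashboard = json.loads("""
{
  "title": "Member View",
  "panels": [
    {"title": "Getpages", "datasource": "Prometheus"},
    {"title": "Buffer Pools", "datasource": "Prometheus"}
  ]
}
""")

renamed = rename_datasource(dashboard, "Prometheus", "SYSVIEW-Prometheus")
```

Write the result back out with json.dump before importing the dashboard into Grafana.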
Prometheus Metric Format
Prometheus stores data as time series: streams of timestamped values that belong to the same metric and the same set of labels. Every time series is uniquely identified by a metric name and an optional set of key-value labels. A specific combination of label values identifies any variation of a particular metric.
The following example shows two instances of the BP_GETPAGE metric. The two instances are collected on the same subsystem, within the same request.
# HELP BP_GETPAGE N/A
# TYPE BP_GETPAGE gauge
BP_GETPAGE{index="BP0",ssid="DT31",group="DTGP",function="DSAISTDB"} 23781.0
# HELP BP_GETPAGE N/A
# TYPE BP_GETPAGE gauge
BP_GETPAGE{index="BP1",ssid="DT31",group="DTGP",function="DSAISTDB"} 14794.0
Prometheus supports the following labels:
  • function
    Specifies a data feeding request name.
  • group
    Specifies a data sharing group name.
    For a standalone subsystem, the group label value is the same as ssid.
  • ssid
    Specifies a Db2 subsystem.
  • index
    Specifies an identifier for the repeating field values. Assignment of this label depends on the IQL request: the REPEATING_INDEX_FIELD parameter in the associated IQL request specifies an IQL field whose value populates the index label value.
    • For DSAISTDA, the index values contain Db2 accelerator names.
    • For DSAISTDB, the index values contain buffer pool names, such as BP0 or BP1.
    • For DSAISTDG, the index values contain group buffer pool names, such as GBP0 or GBP1.
    • For DSAISTDD, the index values contain remote location names, such as DRDA REMOTE LOCS for the summary or ::FFFF: for a specific location.
    • For DSAIEDA, the index values contain Db2 address space names, such as DBM1 or DIST.
    • For DSAISACD, the index values contain Db2 connection types, such as ALL for the summary or REST API or BATCH for a specific type.
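A small parser makes the label structure concrete. The following sketch splits sample lines like the BP_GETPAGE examples above into metric name, labels, and value; it handles only the simple label syntax shown here, not the full Prometheus exposition format.

```python
import re

SAMPLE = """\
# HELP BP_GETPAGE N/A
# TYPE BP_GETPAGE gauge
BP_GETPAGE{index="BP0",ssid="DT31",group="DTGP",function="DSAISTDB"} 23781.0
BP_GETPAGE{index="BP1",ssid="DT31",group="DTGP",function="DSAISTDB"} 14794.0
"""

LINE = re.compile(r'^(\w+)\{(.*)\}\s+(\S+)$')      # name{labels} value
LABEL = re.compile(r'(\w+)="([^"]*)"')             # key="value" pairs

def parse(text):
    """Yield (metric, labels, value) for each sample line, skipping # comments."""
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        m = LINE.match(line)
        if m:
            name, raw_labels, value = m.groups()
            yield name, dict(LABEL.findall(raw_labels)), float(value)

samples = list(parse(SAMPLE))
# Each combination of label values identifies a distinct time series:
# here the two series differ only in the index label (BP0 versus BP1).
```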