MQS Incremental Update Statements

This article discusses the role of Incremental Update statements in the MQSeries analyzer and lists the procedure for enabling incremental update.
INCRUPDATE
(Optional) Specify the following value to enable incremental update for this product:
INCRUPDATE YES
If you do not specify or enable the INCRUPDATE parameter, then it defaults to the following value and incremental update is disabled:
INCRUPDATE NO
Note:
Changing the INCRUPDATE parameter (either from NO to YES or from YES to NO) requires regeneration of the DAILY operational job, either by executing prefix.MICS.CNTL(JCLGEND), or by specifying DAILY in prefix.MICS.PARMS(JCLGENU) and executing prefix.MICS.CNTL(JCLGENU).
If you specify INCRUPDATE YES, you must also generate the INCRccc, cccIUALC, and cccIUGDG jobs (where ccc is the 3 character product ID). Depending on the options that you select, you may also need to execute the cccIUALC and/or cccIUGDG jobs.
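For illustration, assume the 3-character product ID for this analyzer is MQS (check your installation for the actual value). Activation then requires only the single statement:

```
INCRUPDATE YES
```

With that assumption, the jobs to generate would be named INCRMQS, MQSIUALC, and MQSIUGDG.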
Incremental update can significantly reduce time and resource usage in the DAILY job. It allows you to split out a major portion of daily database update processing into multiple, smaller, incremental updates executed throughout the day.
  • Standard MICS database update processing involves the following processes:
    1. Reading and processing raw input data to generate DETAIL and DAYS level MICS database files.
    2. Summarization of DETAIL/DAYS level data to update week-to-date and month-to-date database files.
  • When you activate incremental update:
    1. You can execute the first-stage processing (raw data input to create DETAIL/DAYS files) multiple times throughout the day, each time processing a subset of the total day input data.
    2. Then, during the final update of the day (in the DAILY job), the incremental DETAIL/DAYS files are "rolled up" to the database DETAIL and DAYS timespans and then summarized to update the week-to-date and month-to-date files.
  • Incremental update is independent of your internal step restart or DBSPLIT specifications. You can perform incremental updates with or without internal step restart support.
  • Incremental update is activated and operates independently by product. The incremental update job for this product, INCRccc (where ccc is the product ID), can execute concurrently with the incremental update job for another product in the same unit database.
  • The MICS database remains available for reporting and analysis during INCRccc job execution.
MICS is a highly configurable system that supports up to 36 unit databases, each of which can be configured and updated independently. Incremental update is just one of the options you can use to configure your MICS Complex. Employ MICS configuration capabilities to minimize issues before activating incremental update. For example:
  • Split work across multiple units to enable parallel database update processing.
  • Adjust account code definitions to ensure adequate data granularity while minimizing total database space and processing time.
  • Tailor the database to drop measurements and metrics of lesser value to your data center, thereby reducing database update processing and resource consumption.
While incremental update is intended to reduce DAILY job elapsed time, total resource usage of the combined INCRccc and DAILY job steps can increase due to the additional processing required to maintain the incremental update "to-date" files and to roll them up to the unit database. The increased total resource usage is more noticeable with small data volumes, where processing code compile time is a greater percentage of total processing cost.
Note:
When you activate incremental update (INCRUPDATE YES), the following optional incremental update parameters are enabled. If incremental update is disabled (INCRUPDATE NO), these parameters have no effect. For more details, see the individual parameter descriptions later in this section.
INCRDB
PERM/TAPE/DYNAM
INCRDETAIL
data_set_allocation_parameters
INCRDAYS
data_set_allocation_parameters
INCRCKPT
data_set_allocation_parameters
INCRSPLIT
USE/IGNORE data_set_allocation_parameters
Incremental update processing reads and processes raw measurement data to create and maintain DETAIL and DAYS level "to-date" files for the current day.
  • These incremental update database files are maintained on unique z/OS data sets. The files are independent of the standard MICS database files, and independent of any other product's incremental update database files. There is one data set each for DETAIL and DAYS level "to-date" data and a single incremental update checkpoint data set for this product in this unit.
  • The incremental update DETAIL and DAYS files can be permanent DASD data sets, or they can be allocated dynamically as needed and deleted after DAILY job processing completes. Optionally, you can keep the incremental update DETAIL and DAYS files on tape, with the data being loaded onto temporary DASD space as needed for incremental update or DAILY job processing. For more information, see the INCRDB PERM/TAPE/DYNAM option section.
After you activate incremental update, use the following three incremental update facility jobs that are found in prefix.MICS.CNTL:
Note:
ccc is the product ID in the following list.
cccIUALC
Execute this job to allocate and initialize the incremental update checkpoint file, and optionally the incremental update DETAIL and DAYS database files. cccIUALC is generally executed once.
cccIUGDG
Execute this job to add generation data group (GDG) index definitions to your system catalog in support of the INCRDB TAPE option. cccIUGDG is generally executed once.
INCRccc
Execute this job for each incremental update. Integrate this job into your database update procedures for execution one or more times per day to process portions of the total day's measurement data.
Note:
The DAILY job is run once at the end of the day. It performs the final incremental update for the day's data, then rolls up the incremental DETAIL/DAYS files to the database DETAIL and DAYS timespans and updates the week-to-date and month-to-date files.
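To make the division of work concrete, the following sketch shows one possible daily cycle, assuming a product ID of MQS and two incremental updates per day; the times are illustrative, not requirements:

```
07:00  INCRMQS  First incremental update: processes logs dumped overnight
18:00  INCRMQS  Second incremental update: processes prime-shift logs
00:30  DAILY    Final incremental update, roll-up of the incremental
                DETAIL/DAYS files to the database, and week-to-date and
                month-to-date updates
```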
INCRUPDATE Considerations
Overhead
Incremental update reduces DAILY job resource consumption and elapsed time by offloading a major portion of database update processing to one or more executions of the INCRccc job. In meeting this objective, incremental update adds processing in the INCRccc and DAILY jobs to accumulate data from each incremental update execution into the composite to-date DETAIL and DAYS incremental update files. Incremental update also adds processing in the DAILY job to copy the incremental update files to the unit database DETAIL and DAYS timespans. The amount of this overhead and the savings in the DAILY job are site-dependent, and vary based on input data volume and on the number of times INCRccc is executed each day.
Activating incremental update causes additional compile-based CPU time to be consumed in the DAYnnn DAILY job step. The increase in compile time is due to extra code included for each file structure in support of the feature. This increase should be static, based on the number of files in the scope of the MICS data integration product. This compile-time increase does not imply an increase in elapsed or execution time. Incremental update allows I/O-bound, intensive processing (raw data input, initial MICS transformation, and so on) to be distributed outside of the DAILY job. I/O processing is the largest contributor to elapsed time in large-volume applications. Thus, the expected overall impact is a decrease in the actual runtime of the DAYnnn job step.
Increased "Prime Time" Workload
When you offload work from the DAILY job to one or more INCRccc executions throughout the day, you are moving system workload and DASD work space usage from the "off-hours" (when the DAILY job is normally executed) to periods of the day when your system resources are in highest demand. Schedule INCRccc executions carefully to avoid adverse impact to batch or online workloads. For example, if your site's prime shift is 8:00 AM to 5:00 PM, you can schedule incremental updates for 7:00 AM (just before the prime shift) and 6:00 PM (just after the prime shift), with the DAILY job executing just after midnight.
Increased DASD Usage
The DASD space that is required for the incremental update DETAIL and DAYS database files is in addition to the DASD space that is already reserved for the MICS database. By default, the incremental update database files are permanently allocated, making this DASD space unavailable for other applications. In general, you can assume that the incremental update database files require space equivalent to two cycles of this product's DETAIL and DAYS timespan files.
Alternatively, the incremental update database files can be allocated in the first incremental update of the day and deleted by the DAILY job (see the INCRDB DYNAM option description later in this section). This approach reduces the amount of time that the DASD space is dedicated to incremental update, and lets the amount of DASD space consumed increase through the day as you execute each incremental update.
A third option is to store the incremental update database files on tape (see the INCRDB TAPE option). With this approach, the DASD space is required only for the time that each incremental update or DAILY job step is executing. Note that while this alternative reduces the "permanent" DASD space requirement, the total amount of DASD space needed while the incremental update or DAILY jobs are executing is unchanged. In addition, the TAPE option adds processing to copy the incremental update files to tape and to reload the files from tape to disk.
Note:
The incremental update checkpoint file is always a permanently allocated disk data set. This is a small data set and should not be an issue.
Operational Complexity
Incremental update adds to your measurement data management and job scheduling considerations. Ensure that each incremental update and the DAILY job process your measurement data chronologically: each job must see data that are newer than the data processed by the prior job. By updating the database incrementally, you increase the risk of missing a log file or of processing a log out of order.
Interval End Effects
Each incremental update processes a subset of the daily measurement data, taking advantage of early availability of some of the day's data, for example, when a measurement log fills and switches to a new volume. This can cause a problem if the measurement log split occurs while the data source is logging records for the end of a measurement interval, thus splitting the data for a single measurement interval across two log files. When an incremental update processes the first log file, the checkpoint high-end timestamp is set to indicate that this split measurement interval has been processed. Then, when the rest of the measurement interval data is encountered in a later update, it can be dropped as duplicate data (because data for this measurement interval end timestamp has already been processed).
Appropriate scheduling of log dumps and incremental updates can avoid this problem. For example, if you plan to run incremental updates at 7:00 AM and 6:00 PM, you could force a log dump in the middle of the measurement interval before the scheduled incremental update executions. This is an extension of the procedure you can already use for end-of-day measurement log processing. The objective is to ensure that all records for each monitor interval are processed in the same incremental update.
Dynamic Allocation
When you activate incremental update and specify TAPE or DYNAM for the INCRDB parameter, dynamic allocation is employed for the incremental update database files. If your site restricts dynamic allocation of large, cataloged data sets, use the INCRDETAIL and INCRDAYS parameters to direct incremental update data set allocation to a generic unit or storage class where dynamic allocation is allowed.
Data Set Names
The incremental update database files are allocated and cataloged according to standard MICS unit database data set name conventions. The DDNAME and default data set names are (where ccc is the product ID):
  • Incremental update checkpoint file
    //IUCKPT DD DSN=prefix.MICS.ccc.IUCKPT,.....
  • Incremental update DETAIL
    //IUDETAIL DD DSN=prefix.MICS.ccc.IUDETAIL,.....
  • Incremental update DAYS
    //IUDAYS DD DSN=prefix.MICS.ccc.IUDAYS,.....
These data sets conform to the same data set name conventions as your existing MICS data sets. This minimizes data-set-name-related allocation issues. However, you can override the data set names if required. Contact Technical Support for assistance if you must change data set names.
INCRDB
(Optional) The default is the following value:
INCRDB PERM
Note:
INCRDB is ignored when you specify INCRUPDATE NO.
Specify this statement, or use the default, to keep the incremental update database DETAIL and DAYS files on permanently allocated DASD data sets:
INCRDB PERM
Execute the prefix.MICS.CNTL(cccIUALC) job to allocate the incremental update database files.
Note:
The incremental update checkpoint file is always a permanently allocated DASD data set.
Specify the following value to offload the incremental update DETAIL and DAYS files to tape between incremental update executions:
INCRDB TAPE #gdgs UNIT=name
With the TAPE option, the incremental update DETAIL and DAYS DASD data sets are dynamically allocated at the beginning of the incremental update job or DAILY job step, and then are deleted after the job step completes.
  • The first incremental update job of the day allocates and initializes the incremental update database files. At the end of the job, the DETAIL and DAYS files are copied to a new (+1) generation of the incremental update tape data sets. Then the DASD files are deleted.
  • Subsequent incremental update jobs restore the DASD incremental update database files from the current, (0) generation, incremental update tape data sets before processing the input measurement data. At the end of the job, the DETAIL and DAYS files are copied to a new (+1) generation of the incremental update tape data sets. Then the DASD files are deleted.
  • The DAILY job step also restores the DASD incremental update database files from the (0) generation tape files before processing the input data, but does NOT copy the incremental update database files to tape. Thus, the DAILY job actually creates a new, null (+1) generation.
  • Use the #gdgs parameter to specify the maximum number of incremental update tape generations. The minimum is 2 and the maximum is 99, with a default of 5.
    Set the number of generations equal to or greater than the number of incremental updates (including the DAILY job) that you plan to execute each day. This facilitates restart and recovery if you encounter problems requiring you to reprocess portions of the daily measurement data.
  • Use the optional UNIT=name parameter to specify a tape unit name for the incremental update database output tapes. The default is to use the same tape unit as the input tapes.
  • A special index must be created in your system catalog for each of the incremental update tape data set generation data groups. The prefix.MICS.CNTL(cccIUGDG) job generates the statements to create the incremental update GDG index definitions.
    • Before each index is built, it is deleted. The DLTX (or DELETE) statements cause an error message if no entry exists. This approach lets you change the number of entries without having to delete each of the index entries manually.
    • DLTX and BLDG (or DELETE and DEFINE) fail if a cataloged data set with the same index exists. IDCAMS (or IEHPROGM) issues a message and gives a return code of 8. This is not a problem for non-GDG entries or if the GDG already has the desired number of entries.
    • If you want to change the number of entries that are kept in a GDG with cataloged data sets, follow these steps:
      1. Uncatalog any existing entries in the GDG.
      2. Delete the index with a DLTX (or DELETE).
      3. Create the index with a BLDG (or DEFINE).
      4. Catalog any entries that you uncataloged in step 1.
  • The incremental update tape data set names are as follows, where ccc is the product ID:
    • Incremental update tape DETAIL file
      tapeprefix.MICS.ccc.IUXTAPE.GnnnnV00
    • Incremental update tape DAYS file
      tapeprefix.MICS.ccc.IUDTAPE.GnnnnV00
Note:
The INCRDETAIL and INCRDAYS parameters are required when you specify INCRDB TAPE.
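As a hypothetical example, the following statement keeps seven tape generations and writes them to a tape unit named CART (both values are assumptions; substitute your site's values):

```
INCRDB TAPE 7 UNIT=CART
```

With, say, two incremental updates plus the DAILY job each day, seven generations would retain roughly two days of history for restart and recovery.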
Specify the following parameter to allocate dynamically the incremental update DETAIL and DAYS DASD data sets in the first incremental update of the day, and then delete these data sets at the end of the DAILY job step:
INCRDB DYNAM
  • With this option, no space is used for the incremental update database files during the time between the end of the DAILY job step and the beginning of the next day's first incremental update.
  • With this approach, you can set the data set allocation parameters so that the incremental update DETAIL and DAYS data sets start out with a minimum allocation (for example, enough space for one incremental update) and then grow through secondary allocations as more space is required for subsequent incremental updates.
Note:
The INCRDETAIL and INCRDAYS parameters are required when you specify INCRDB DYNAM.
INCRDETAIL
This statement is required if you specify either of the following values:
INCRDB TAPE
INCRDB DYNAM
Otherwise, this statement is optional. There is no default.
Specify the following to define data set allocation parameters for the incremental update DETAIL data set (IUDETAIL):
INCRDETAIL data_set_allocation_parameters
Note:
INCRDETAIL is ignored when you specify INCRUPDATE NO.
The incremental update DETAIL data set (IUDETAIL) contains the current incremental update detail-level database files, and the DETAIL "to-date" data for the current daily update cycle. Allocate DASD space equivalent to two cycles of this product's DETAIL timespan data.
If you specify INCRDB PERM (the default), your INCRDETAIL parameter specifications are used in generating the cccIUALC job (where ccc is the product ID).
  • Execute the cccIUALC job to allocate and initialize the incremental update database and checkpoint files.
  • Omit the INCRDETAIL parameter if you prefer to specify data set allocation parameters directly in the generated prefix.MICS.CNTL(cccIUALC) job.
If you specify INCRDB TAPE or INCRDB DYNAM, your INCRDETAIL parameter specifications are used in incremental update DETAIL data set dynamic allocation during incremental update or DAILY job step execution.
  • The INCRDETAIL parameter is required for the TAPE or DYNAM option.
  • Specify data set allocation parameters, separated by blanks, according to SAS LIBNAME statement syntax. If you need multiple lines, repeat the INCRDETAIL keyword on the continuation line.
  • INCRDETAIL accepts the engine/host options documented in the SAS Companion for the z/OS Environment, including STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
    Do not specify the DISP parameter.
  • You can override the INCRDETAIL data set allocation parameters at execution-time using the //PARMOVRD facility. For more information about execution-time override of dynamic data set allocation parameters, see Dynamic Allocation Parameter Overrides (//PARMOVRD).
Example 1:
INCRDETAIL STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
xxxx
TRK, CYL, or blklen
pp
the primary allocation
ss
the secondary allocation
ROUND
Specifies that the allocated space must be "rounded" to a cylinder boundary when the unit specified was a block length. ROUND is ignored with the TRK or CYL options.
Example 2 (multiple lines):
INCRDETAIL STORCLAS=MICSTEMP UNIT=SYSDA
INCRDETAIL SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
UNIT
Specifies the generic unit for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
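Substituting illustrative values into Example 1 (the storage class and space figures are assumptions; size them from your site's DETAIL timespan volume):

```
INCRDETAIL STORCLAS=MICSTEMP SPACE=(CYL,(50,10))
```

This requests 50 cylinders of primary space and 10 cylinders of secondary space. ROUND is omitted because it applies only when the allocation unit is a block length rather than TRK or CYL.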
INCRDAYS
This statement is required if you specify either of the following values:
INCRDB TAPE
INCRDB DYNAM
Otherwise, this statement is optional. There is no default.
Specify this to define data set allocation parameters for the incremental update DAYS data set (IUDAYS):
INCRDAYS data_set_allocation_parameters
Note:
INCRDAYS is ignored when you specify INCRUPDATE NO.
The incremental update DAYS data set (IUDAYS) contains the current incremental update days-level database files, and the DAYS "to-date" data for the current daily update cycle. You should allocate DASD space equivalent to two cycles of this product's DAYS timespan data.
If you specify INCRDB PERM (the default), your INCRDAYS parameter specifications are used in generating the cccIUALC job (where ccc is the product ID).
  • Execute the cccIUALC job to allocate and initialize the incremental update database and checkpoint files.
  • Omit the INCRDAYS parameter if you prefer to specify data set allocation parameters directly in the generated prefix.MICS.CNTL(cccIUALC) job.
If you specify INCRDB TAPE or INCRDB DYNAM, your INCRDAYS parameter specifications are used in incremental update DAYS data set dynamic allocation during incremental update or DAILY job step execution.
  • The INCRDAYS parameter is required for the TAPE or DYNAM option.
  • Specify data set allocation parameters, separated by blanks, according to SAS LIBNAME statement syntax. If you need multiple lines, repeat the INCRDAYS keyword on the continuation line.
  • INCRDAYS accepts the engine/host options documented in the SAS Companion for the z/OS Environment, including STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
    Do not specify the DISP parameter.
  • You can override the INCRDAYS data set allocation parameters at execution-time using the //PARMOVRD facility. For more information about execution-time override of dynamic data set allocation parameters, see Dynamic Allocation Parameter Overrides (//PARMOVRD).
Example 1:
INCRDAYS STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
xxxx
TRK, CYL, or blklen
pp
the primary allocation
ss
the secondary allocation
ROUND
Specifies that the allocated space must be "rounded" to a cylinder boundary when the unit specified was a block length. ROUND is ignored with the TRK or CYL options.
Example 2 (multiple lines):
INCRDAYS STORCLAS=MICSTEMP UNIT=SYSDA
INCRDAYS SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
UNIT
Specifies the generic unit for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
INCRCKPT
(Optional) Specify this to override default data set allocation parameters for the incremental update checkpoint data set:
INCRCKPT data_set_allocation_parameters
Note:
INCRCKPT is ignored when you specify INCRUPDATE NO.
The incremental update checkpoint data set tracks incremental update job status and the data that were processed during the current daily update cycle. The incremental update checkpoint is used to detect and block the input of duplicate data during incremental update processing. This data set is exactly the same size as prefix.MICS.CHECKPT.DATA (the unit checkpoint data set), usually 20K to 200K depending on the prefix.MICS.PARMS(SITE) CKPTCNT parameter (100-9999).
Your INCRCKPT parameter specifications are used in generating the cccIUALC job (where ccc is the product ID).
  • Execute the cccIUALC job to allocate and initialize the incremental update checkpoint file. If you specify INCRDB PERM, then the cccIUALC job will also allocate the incremental update DETAIL and DAYS database files.
  • By default, the incremental update checkpoint data set is allocated as SPACE=(TRK,(5,2)) using the value you specified for the prefix.MICS.PARMS(JCLDEF) DASDUNIT parameter.
  • Omit the INCRCKPT parameter if you prefer to override data set allocation parameters directly in the generated prefix.MICS.CNTL(cccIUALC) job.
Specify data set allocation parameters, separated by blanks, according to SAS LIBNAME statement syntax. If you need multiple lines, repeat the INCRCKPT keyword on the continuation line.
INCRCKPT accepts the engine/host options documented in the SAS Companion for the z/OS Environment, including STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Do not specify the DISP Parameter.
Example 1:
INCRCKPT STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
xxxx
TRK, CYL, or blklen
pp
the primary allocation
ss
the secondary allocation
ROUND
Specifies that the allocated space must be "rounded" to a cylinder boundary when the unit specified was a block length. ROUND is ignored with the TRK or CYL options.
Example 2 (multiple lines):
INCRCKPT STORCLAS=MICSTEMP UNIT=SYSDA
INCRCKPT SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters
UNIT
Specifies the generic unit for a new data set.
Limits:
8 characters
SPACE
Specifies how much disk space to provide for a new data set being allocated.
INCRSPLIT
(Optional) This statement defaults to the following value:
INCRSPLIT IGNORE
Specify the following if you want the incremental update job for this product to get input measurement data from the output of the SPLITSMF job. The optional data_set_allocation_parameters are used by the SPLITSMF job when creating the measurement data file for this product.
INCRSPLIT USE data_set_allocation_parameters
Note:
INCRSPLIT is ignored when you specify INCRUPDATE NO.
This option is used when multiple products in a single unit database are enabled for incremental update. The SPLITSMF job performs the same function for incremental update jobs as the DAILY job DAYSMF step performs for the DAYnnn database update steps.
  • The SPLITSMF job dynamically allocates, catalogs, and populates prefix.MICS.ccc.IUSPLTDS data sets for each product in the unit database for which you specified both the INCRUPDATE YES and INCRSPLIT USE parameters. These data sets are then deleted after processing by the appropriate INCRccc job.
  • Specify data set allocation parameters, separated by blanks, according to SAS LIBNAME statement syntax. If you need multiple lines, repeat the INCRSPLIT keyword on each continuation line.
  • INCRSPLIT accepts the engine/host options documented in the SAS Companion for the z/OS Environment, including STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Do not specify the DISP parameter.
Specify the following, or accept the default, if you want the incremental update jobs for this product to get their input measurement data from the data sets specified in the INPUTccc (or INPUTSMF) member of prefix.MICS.PARMS:
INCRSPLIT IGNORE
When you specify INCRSPLIT IGNORE, this product will NOT participate in SPLITSMF job processing.
Example 1:
INCRSPLIT USE STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters.
SPACE
Specifies how much disk space to provide for a new data set being allocated, where:
xxxx
TRK, CYL, or blklen
pp
the primary allocation
ss
the secondary allocation
ROUND
Specifies that the allocated space must be "rounded" to a cylinder boundary when the unit specified was a block length. ROUND is ignored with the TRK or CYL options.
Example 2 (multiple lines):
INCRSPLIT USE STORCLAS=MICSTEMP UNIT=SYSDA
INCRSPLIT SPACE=(xxxx,(pp,ss),,,ROUND)
STORCLAS
Specifies a storage class for a new data set.
Limits:
8 characters.
UNIT
Specifies the generic unit for a new data set.
Limits:
8 characters.
SPACE
Specifies how much disk space to provide for a new data set being allocated.
DYNAMWAIT
(Optional) Specify the following to override the default amount of time, in minutes, that the DAILY and/or INCRccc job waits for an unavailable data set:
DYNAMWAIT minutes
Note:
This optional parameter is not normally specified. The system default is adequate for most data centers.
The Internal Step Restart and Incremental Update facilities use z/OS dynamic allocation services to create new data sets and to access existing data sets. Data set naming conventions and internal program structure are designed to minimize data set contention. If data set allocation fails because another batch job or online user is already using a data set, DAILY and/or INCRccc processing waits 15 seconds and then tries the allocation again. By default, the allocation is attempted every 15 seconds for up to 15 minutes. After 15 minutes, the DAILY or INCRccc job aborts.
If data set contention in your data center causes frequent DAILY or INCRccc job failures, and you are unable to resolve the contention through scheduling changes, use the DYNAMWAIT parameter to increase the maximum number of minutes the DAILY and/or INCRccc jobs will wait for the data set to become available.
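For example, to double the maximum wait from the default 15 minutes to 30 minutes (allocation would still be retried every 15 seconds), you could specify:

```
DYNAMWAIT 30
```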
On the other hand, if your data center standards require that the DAILY and/or INCRccc jobs fail immediately if required data sets are unavailable, specify the following:
DYNAMWAIT 0
You can override the DYNAMWAIT parameter at execution-time using the //PARMOVRD facility. For more information about execution-time override of dynamic data set allocation parameters, see Dynamic Allocation Parameter Overrides (//PARMOVRD).
Implement Incremental Update
To implement incremental update in the MICS MQS Analyzer, follow these steps:
  1. Edit prefix.MICS.PARMS(cccOPS), where ccc is the component identifier:
    • Specify the following:
      INCRUPDATE YES
    • If you want to store the incremental update database files on tape between incremental updates, specify the following:
      INCRDB TAPE #gdgs
    • If you want to allocate the incremental update database files during the first incremental update of the day and delete the incremental update data sets at the end of the DAILY job step, specify the following:
      INCRDB DYNAM
    • If you specify INCRDB TAPE or INCRDB DYNAM, then you must also specify the following:
      INCRDETAIL data_set_allocation_parameters
      INCRDAYS data_set_allocation_parameters
    • If you want the incremental update job for this product to get input measurement data from the output of the SPLITSMF job, specify the following:
      INCRSPLIT USE data_set_allocation_parameters
    • To override default data set allocation parameters, review the descriptions of the INCRCKPT, INCRDETAIL, INCRDAYS, and INCRSPLIT parameters earlier in this article.
  2. Submit the job in prefix.MICS.CNTL(cccPGEN).
  3. Edit prefix.MICS.PARMS(JCLGENU) so that it contains the following lines:
    DAILY
    INCRccc
    cccIUALC
    cccIUGDG
  4. Submit the job in prefix.MICS.CNTL(JCLGENU). Ensure that there are no error messages in MICSLOG or SYSTSPRT, that MICSLOG contains the normal termination message, BAS10999I, and that the job completes with a condition code of zero.
  5. Edit the job in prefix.MICS.CNTL(cccIUALC).
    • Inspect or specify data set allocation parameters for the incremental update database and checkpoint files. If you specified INCRDB TAPE or INCRDB DYNAM, the cccIUALC job allocates only the incremental update checkpoint data set.
    • Submit the job. Ensure that there are no error messages in MICSLOG or SASLOG, and that the job completes with a condition code of zero.
  6. If you specified INCRDB TAPE, submit the job in prefix.MICS.CNTL(cccIUGDG) to define generation data group indexes for the incremental update DETAIL and DAYS tape data sets. To verify that the generation data group indexes were correctly defined, examine SASLOG, MICSLOG, and SYSPRINT.
    Note:
    Error messages for the DLTX (or DELETE) statements are not a cause for concern. cccIUGDG deletes each index before defining it, and an error message is issued if the index does not yet exist, for example if you run the cccIUGDG job for the first time.
  7. The following operational jobs have changed:
    DAILY
    INCRccc
    If your site implemented the operational MICS processes in a scheduling product, refresh the JCL in that product. Consult the scheduling product administrator for the exact processes involved in updating that product's representation of the MICS jobs.
  8. Implement operational procedures for gathering input measurement data and executing incremental updates (INCRccc) during the day.
    Modify operational procedures for the DAILY job to ensure that processing is limited to input measurement data that were not already processed by one of the day's incremental update executions.
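Pulling the steps together, a complete set of incremental update entries in prefix.MICS.PARMS(cccOPS) might look like the following sketch. It assumes a product ID of MQS, the DYNAM option, and illustrative storage class and space values; adjust every value to your site:

```
INCRUPDATE YES
INCRDB DYNAM
INCRDETAIL STORCLAS=MICSTEMP SPACE=(CYL,(50,10))
INCRDAYS STORCLAS=MICSTEMP SPACE=(CYL,(20,5))
INCRSPLIT IGNORE
```

After editing the member, continue with steps 2 through 5 to regenerate and submit the affected jobs.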