Swapping and Swap Data Sets

Swapping is the means by which MVS frees real storage frames occupied by inactive or low priority address spaces and makes them available for use by higher priority active address spaces.
This section describes the swapping algorithms and explains swap data set configuration changes. A brief introduction to the evolution of the swap process provides a framework for this discussion. Some recommendations on swap data set sizing are given.
Before the availability of logical swapping and the extended swap IUP, only LSQA pages were written to swap data sets, and a physical swap-out was initiated when SRM analysis showed that an address space would be, or should be, made idle for an extended period. An address space was swapped:
  • Between transactions (for TSO address spaces).
  • For all explicit long wait conditions.
  • In other wait conditions that were "too long" (i.e., detected waits or ENQ contention).
  • Unilaterally by the SRM for multiprogramming level contention and resource under- or over-utilization.
A swap-out is performed in three steps:
  1. Execution is halted (if the address space is not already in a wait state) and the address space is quiesced (i.e., put into a wait state).
  2. The pages held by the address space are trimmed. Non-LSQA pages unchanged since their last page-in have their frames placed on the available frame queue, where they are immediately available for stealing.
  3. Non-LSQA working set pages that have been changed since their last page-in are paged-out to the local data sets, and their slot locations recorded in a control block in the LSQA. The LSQA is then paged-out to the swap data sets, if available; otherwise, the LSQA is paged-out to the local data sets.
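The three-way page classification above can be sketched as follows. The page representation (simple dicts with `lsqa` and `changed` flags) is illustrative only, not an actual MVS control-block layout:

```python
def classify_pages_at_swap_out(pages):
    """Partition an address space's page frames at physical swap-out.

    Each page is a dict with two boolean flags:
      'lsqa'    - the page belongs to the LSQA
      'changed' - the page was modified since its last page-in
    Returns three lists:
      available - unchanged non-LSQA frames, placed on the available
                  frame queue for immediate stealing
      local     - changed non-LSQA pages, paged out to local data sets
      swap      - LSQA pages, paged out to the swap data sets
    """
    available, local, swap = [], [], []
    for page in pages:
        if page["lsqa"]:
            swap.append(page)
        elif page["changed"]:
            local.append(page)
        else:
            available.append(page)
    return available, local, swap
```

The swap-in sequence then reverses only the last two lists: the swap-data-set pages come in first, and page-ins are scheduled for the local-data-set pages that were marked as referenced.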
The swap-in process is the reverse of a swap-out:
  1. LSQA is paged-in.
  2. Page-ins are scheduled for non-LSQA pages marked as referenced at swap-out.
  3. The address space is restored and marked dispatchable, possibly before the page-ins from step two have completed.
Starting with MVS/SE2, logical swapping was introduced for TSO address spaces. The effect of this change was to avoid physically swapping a TSO address space if: (1) the user is actively entering commands, and (2) there is enough real storage available.
A second major change was in the form of an installed user program (IUP) to modify the swap process. With the extended swap IUP, those pages marked as referenced at swap-out are treated as LSQA and paged-out to the swap data sets. Swap-in step one brings in all the pages necessary to continue execution of the address space, so swap-in step two is effectively skipped. Swap-in step three is also significantly shortened, because it is unnecessary to wait for page-ins scheduled by step two.
The swap process is discussed in the following sections.

Physical Swap Algorithm

Ignoring the quiesce/restore and trimming steps, the swap process of the extended swap IUP and MVS/SP 1.3 and beyond is really a two-stage swap-out and a single-stage swap-in.
Changed but unreferenced pages are paged-out to local data sets, saving the ASM slot locations in LSQA control blocks. MVS/XA and MVS/ESA use the sequential slot allocation algorithm to select slots on the local page data sets.
Then LSQA and non-LSQA referenced pages are paged-out to swap data sets. All the pages needed for a swap-in receive the advantage of the sequential slots of swap sets.
If multiple swap sets are required for the larger working sets, they will be split among multiple swap data sets and the I/O will run concurrently (assuming a sufficient number of allocated data sets). When an address space is swapped back into real memory, all of its pages are brought in from the swap data sets in a single stage operation.
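The splitting of a larger working set across swap data sets can be sketched as a simple round-robin assignment. The function name and the default of 12 pages per swap set are illustrative assumptions, not values taken from the text:

```python
import math

def assign_swap_sets(working_set_pages, swap_data_sets, pages_per_swap_set=12):
    """Round-robin the swap sets needed for a working set across the
    available swap data sets so their I/O can run concurrently.

    Returns a mapping of data-set name -> number of swap sets assigned.
    pages_per_swap_set=12 is an illustrative assumption.
    """
    sets_needed = math.ceil(working_set_pages / pages_per_swap_set)
    assignment = {name: 0 for name in swap_data_sets}
    for i in range(sets_needed):
        name = swap_data_sets[i % len(swap_data_sets)]
        assignment[name] += 1
    return assignment
```

With three allocated swap data sets, a 100-page working set (nine swap sets under this assumption) spreads three swap sets to each data set, so the three page-out operations can overlap.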

Working Set Trim

In this section the term working set is used to be compatible with the IBM references on ASM operation. You should be aware that this is a third definition of working set. This version of the working set is a pragmatic attempt to keep enough pages at swap-out to ensure minimal paging at swap-in. As such, the swap working set determination includes more pages than might be in the Denning working set.
Working set trim makes page frames available for immediate stealing and indirectly determines the number of non-LSQA pages that will be swapped. An address space can be trimmed down to its minimum for storage isolation (see the PWSS parameter in the Initialization and Tuning Guide) if such a minimum was specified.
Working set trim attempts to selectively protect unreferenced pages from being stolen. Referenced pages are never stolen, because they must be paged-out to the swap data sets by the extended swap algorithm. The method by which working set trim discriminates between pages that should and should not be made available for stealing is different for physical and logical swaps.
Physical Swap Working Set Trim
For address spaces that are being physically swapped-out, working set trim is based on elapsed time since the last unreferenced interval count (UIC) update. When a physical swap is initiated within one-half SRM second of the last UIC update, unreferenced pages with a UIC of zero are not trimmed. This protects pages that might be frequently referenced, but were unlucky enough to have their reference bit turned off by the very recent UIC update. It is assumed there has not been sufficient time for the page to be referenced again.
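The UIC-zero protection rule can be expressed as a small predicate. The default of 1.0 for the SRM second is a placeholder only; the real SRM second is processor-model dependent:

```python
def protected_from_physical_trim(uic, seconds_since_uic_update, srm_second=1.0):
    """True when an unreferenced page is kept at physical swap-out.

    A page whose reference bit was cleared by the most recent UIC
    update (UIC == 0) is protected when the swap-out begins within
    half an SRM second of that update, on the assumption that the
    page has not had time to be referenced again.  srm_second=1.0 is
    a placeholder; the actual value is processor-model dependent.
    """
    return uic == 0 and seconds_since_uic_update < srm_second / 2
```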
Logical Swap Working Set Trim
For address spaces that are being logically swapped-out, working set trim is based on current utilization of real storage. Unreferenced pages with a UIC of up to 23 are not trimmed if their UIC is less than the integer part of the following calculation:
1 + (System Average Max UIC - 30) / 10
Thus, logically swapped address spaces are always trimmed less severely than physically swapped address spaces because this calculation has a minimum value of one. The purpose of this algorithm is to protect more pages from being trimmed as more real storage is made available.
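The trim limit can be computed directly. The clamp to the range [1, 23] follows the text's statements that the calculation has a minimum value of one and that only pages with a UIC of up to 23 are protected:

```python
def logical_trim_limit(system_avg_max_uic):
    """Pages with a UIC below this limit are not trimmed at logical
    swap-out.  The limit is the integer part of
        1 + (system average max UIC - 30) / 10
    clamped to the range [1, 23] per the surrounding text.
    """
    limit = int(1 + (system_avg_max_uic - 30) / 10)
    return max(1, min(23, limit))
```

For example, a system average max UIC of 50 yields a limit of 3, so unreferenced pages with a UIC of 0, 1, or 2 are protected; as available storage grows and the average max UIC rises, progressively more pages escape trimming.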

Logical Swapping

MVS/SE1 introduced the concept of logical swapping. The consumption of CPU and I/O resources by swapping had limited the throughput of large TSO systems, primarily because completion of each TSO transaction initiated a physical swap-out. The purpose of logical swapping is, where possible, to avoid actual swap-out page transfer to the ASM data sets. To describe logical swapping, we first consider the steps in the life of a transaction in a system prior to MVS/SE1:
  1. Swap-in of the LSQA pages (stage 1).
  2. Swap-in of the private pages (stage 2).
  3. Restore processing readies the address space for execution.
  4. Execution phase.
  5. Quiesce processing stops all tasks, SRBs, and I/O.
  6. Working set trim.
  7. Swap-out of changed non-LSQA pages and LSQA pages.
The decision to do a physical or logical swap is made at the completion of quiesce processing and is based on two variables: (1) the user think time and (2) the system think time. Prior to MVS/SP 1.3, only TSO address spaces that were quiesced due to a terminal wait condition (input or output) were considered for logical swapping.
Starting in MVS/SP 1.3, any swappable task that enters a detected or long wait condition is considered for logical swapping. User think time is calculated as the time from the end of quiesce processing to the end of restore processing, and hence accounts for user idle time, time keying in data, swap-in time, and any communications delays.
System think time is calculated as a function of the available storage. Consult the Initialization and Tuning Guide for the appropriate level of your system for the exact calculation used. The value of system think time may be plotted over time. This value is controlled by parameters in the SYS1.PARMLIB member IEAOPTnn that define the range of values, the length of the available frame queue, and the system unreferenced interval count (UIC) value.
With MVS/SP 1.3 and beyond, the amount of fixed storage and the amount of storage fixed or allocated from page-in or page-out operations is also considered. An address space is logically swapped if the previous user think time is less than the current value of system think time. When a logically swapped address space remains idle for longer than the system think time, it is physically swapped.
TSO users receive an extra 2-second grace period, but the physical swap is not actually performed until real storage frames are needed. Hence, an address space may remain logically swapped far longer than might be expected.
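The two decisions described above, whether to swap logically at quiesce and whether to later convert to a physical swap, can be sketched as follows (times in seconds; the 2-second TSO grace period is from the text, while the function and parameter names are naming assumptions):

```python
def should_logically_swap(user_think_time, system_think_time):
    """At the end of quiesce processing: logically swap when the
    previous user think time is less than the current system think
    time."""
    return user_think_time < system_think_time

def should_convert_to_physical(idle_time, system_think_time, is_tso):
    """A logically swapped address space idle longer than the system
    think time (plus a 2-second grace period for TSO users) becomes a
    candidate for physical swap-out -- though the physical swap is
    deferred until real storage frames are actually needed."""
    grace = 2.0 if is_tso else 0.0
    return idle_time > system_think_time + grace
```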
Non-TSO address spaces are also candidates for logical swapping. The criteria for non-TSO logical swapping are:
  • System think time must be greater than five seconds.
  • The current high UIC must be greater than logical swap high threshold (default is 30).
  • The average number of fixed frames below 16 megabytes must be below the logical swap threshold for fixed frames.
When all the above conditions are met, a swappable non-TSO address space entering a detected or long wait will be considered for logical swapping.
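The three non-TSO eligibility criteria combine into a single predicate. The fixed-frame threshold is left as a parameter because the text does not give its value; the high-UIC default of 30 is from the text:

```python
def non_tso_logical_swap_eligible(system_think_time,
                                  current_high_uic,
                                  avg_fixed_frames_below_16mb,
                                  fixed_frame_threshold,
                                  high_uic_threshold=30):
    """Eligibility of a swappable non-TSO address space (entering a
    detected or long wait) for logical swapping, per the three
    criteria above.  fixed_frame_threshold must be supplied by the
    caller; its value is not given in the text."""
    return (system_think_time > 5.0
            and current_high_uic > high_uic_threshold
            and avg_fixed_frames_below_16mb < fixed_frame_threshold)
```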
Given sufficient real storage, logical swapping greatly improves TSO response. There are so many interactions between an ASM configuration and the amount of real memory, the type of swapping (extended swap or not), and page stealing that it is difficult to give firm guidelines on how to use these facilities. You should, however, perform all basic system tuning before making any changes to the system-provided defaults. Some general strategies are discussed in Section 2.3.2 of this guide.
In an expanded storage environment, logical swapping is still utilized in an attempt to save the processor cycles and expanded storage or I/O overhead of physical swapping. As noted above, a TSO address space in a terminal wait condition or any class of address space in a long or detected wait condition is eligible for logical swapping. When this happens, the LSQA, fixed, and most recently referenced pages are kept in central storage. The only real impact of expanded storage on the logical swap process is to mitigate the degradation that might occur when a logical swap must be converted to a physical swap.
Just as a logical swap is effective only if the address space is swapped back in before being converted to a physical swap, an expanded storage physical swap is effective only if the address space is swapped back in before being migrated through central to auxiliary storage.