MVS Virtual Storage Overview

This section describes the individual parts of MVS virtual storage, identifying those parts that are permanently page-fixed (that is, that cannot be paged out).
The size of these areas affects your overall real storage management strategy, because the total size of these areas is subtracted from the page frames available to the workload. It also affects the size of the private area (the amount of virtual memory available to programs).
MVS/370 provided 16 MB of virtual storage to each address space. This storage is divided into three areas: the common area, the private area, and the nucleus area.
In MVS/XA and later systems, 2 GB of virtual storage are available to each address space. This storage is divided into four areas: the common area below the 16 MB line, the private area below the 16 MB line, the extended common area above the 16 MB line, and the extended private area above the 16 MB line. In current systems the nucleus, which spans the 16 MB line, is conceptually part of the common/extended common area.
Both real storage and virtual storage issues arise in considering the virtual address space. This section defines each of the parts of the virtual address space and details the real and virtual storage issues pertinent to each part.
Real storage issues arise because portions of the virtual address space remain fixed in real storage at all times. A page fixed in real storage is one that remains in real storage and will not be paged out. Using Figure 2-27 as a reference, the fixed portions of real storage in MVS/370 are:
  • The entire nucleus area.
  • The system queue area (SQA) and parts of the common storage area (CSA) from the common area.
  • The local system queue area (LSQA) from the private areas of resident address spaces.
Using Figure 2-28 as a reference, the fixed portions of real storage in later systems are:
  • The nucleus and extended nucleus area.
  • The LSQA and extended LSQA from the private areas of resident address spaces.
  • The fixed link pack area (FLPA) and extended FLPA (in MVS/370 the FLPA was an extension of the nucleus).
  • The SQA and extended SQA.
Note:
In both MVS/370 and current systems, LSQA is swappable. Thus, the real storage occupied by LSQA and by private area fixed pages is also determined by the number of non-swappable tasks and the number of active tasks (the system MPL).
The sizes of the nucleus and common areas must also be considered, because the total virtual space is limited. Starting with the Systems Product releases of MVS, the size of the available private area is the remainder after the nucleus and the common area are subtracted.
In earlier systems, the size of the common area was fixed at 8 MB. This could be changed by modifying a constant in one of the IPL modules. In the larger MVS/370 systems today, private areas are as small as 5 MB. In these systems, reduction of the virtual sizes is an important issue.
MVS itself requires parts of both real and virtual storage. In determining how much storage is actually available to users, the basic requirements of MVS must be subtracted from the total configuration. The real storage manager (RSM) controls how real storage is allocated to system and user address spaces that are swapped in for execution.
The primary MVS requirements for real storage are:
  • Fixed storage for resident supervisor code and basic system control blocks. This includes, for example, the code for the System Resource Manager (SRM); the real, virtual, and auxiliary storage management routines (RSM, VSM, ASM); the dispatcher; basic systems services; and control blocks for defining the real/virtual storage mappings and for address space control.
  • Fixed storage requirements to support user address spaces. This includes the page and segment tables for the address spaces, address space control blocks, and address space-related SRM/RSM/VSM/ASM control blocks.
Figures 2-27 and 2-28 show the virtual storage layout for all MVS-based systems. The System Storage Usage Report described in RSM Standard Output provides the capability to track most of the different parts of virtual storage. In storage-constrained systems, a great deal of real storage can be made available by tailoring the system's use of real storage.
As shown in Figures 2-27 and 2-28, there is a private area (and an extended private area in later systems) for each address space, that is, one for the Master Scheduler, JES2/3, VTAM or TCAM, any started system task, each batch job, and each TSO user. Each address space in memory has fixed storage requirements: its LSQA and address space related control blocks in SQA. When planning for the amount of pageable storage, the fixed storage required to support swapped-in address spaces must be considered.
Part of real storage is partitioned into a system preferred area to support storage reconfiguration for multiprocessors. Allocations for SQA, LSQA, and fixed page assignments (including frames for V=R jobs) are made from the system preferred area if possible, in order to facilitate storage reconfiguration.
Figure 2-27. MVS/370 Virtual Storage Layout
           |==================|
           |       SQA        |
           |------------------|
           |       PLPA       |
  Common   |------------------|
  Area     |       MLPA       |
           |------------------|
           |       BLDL       |
           |------------------|
           |    SYSGEN PSA    |
           |------------------|
           |       CSA        |
           |==================|============|============|
           |       LSQA       |    LSQA    |    LSQA    |
           |------------------|------------|------------|
           |       SWA        |    SWA     |    SWA     |
           |------------------|------------|------------|
  Private  |     229/230      |  229/230   |  229/230   |
  Area     |==================|============|============|
           | MASTER SCHEDULER |  address   |  address   |
           |     PRIVATE      |  space 1   |  space 2   |  ...
           |      AREA        | user region| user region|
           |   USER REGION    |------------|------------|
           |                  | Sys. region| Sys. region|
           |==================|============|============|
           |       RMS        |
           |------------------|
           |    ASM tables    |
  Nucleus  |------------------|
  Area     |    fixed BLDL    |
           |------------------|
           |    fixed LPA     |
           |------------------|
           |     NUCLEUS      |
           |==================|
Figure 2-28. Current systems Virtual Storage Layout
  2 GB     |==================|============|============|
           |  EXTENDED LSQA   |  EXT LSQA  |  EXT LSQA  |
           |------------------|------------|------------|
  Extended |   EXTENDED SWA   |  EXT SWA   |  EXT SWA   |
  Private  |------------------|------------|------------|
  Area     | EXTENDED 229/230 | EXT 229/230| EXT 229/230|
           |==================|============|============|
           | MASTER SCHEDULER |addr. space |addr. space |
           |   PRIVATE AREA   |     1      |     2      |
           |     EXTENDED     |  extended  |  extended  |
           |   USER REGION    | user region| user region|
           |==================|============|============|
           |   EXTENDED CSA   |
  Extended |------------------|
  Common   |  EXTENDED PLPA/  |
  Area     |    FLPA/MLPA     |
           |------------------|
           |   EXTENDED SQA   |
           |------------------|
           | EXTENDED NUCLEUS |
  16 MB    |==================|
           |     NUCLEUS      |
           |------------------|
  Common   |       SQA        |
  Area     |------------------|
           |  PLPA/FLPA/MLPA  |
           |------------------|
           |       CSA        |
           |==================|============|============|
           |       LSQA       |    LSQA    |    LSQA    |
           |------------------|------------|------------|
           |       SWA        |    SWA     |    SWA     |
           |------------------|------------|------------|
  Private  |     229/230      |  229/230   |  229/230   |
  Area     |==================|============|============|
           | MASTER SCHEDULER |  address   |  address   |
           |     PRIVATE      |  space 1   |  space 2   |
           |      AREA        | user region| user region|
  20 KB    |   USER REGION    |------------|------------|
   4 KB    |                  | Sys. region| Sys. region|
           |==================|============|============|
  Common   |       PSA        |
   0 KB    |==================|
The following sections explain the portions of the virtual storage defined in Figures 2-27 and 2-28, describe the MVS real storage algorithms, and describe a method of determining the working set of a program.

MVS Virtual Storage Layout

This section defines the different portions of MVS virtual storage. (The basic reference for this material is the Initialization and Tuning documentation.) Real storage management considerations specific to each area are included following the description of each section.
System Queue Area (SQA and Extended SQA)
Generally speaking, the system queue area (SQA) is an area containing global system control blocks. At system initialization, a minimum of 64 pages is allocated to SQA. A number of additional segments may be added by PARMLIB or SYSGEN options. These parameters specify the upper limit on the number of frames that will be allocated to SQA. Real frames are allocated as required. SQA is fixed storage acquired and freed by various MVS components (for example, SRM, RSM, VSM, ASM, ENQ/DEQ). If the system requirement for SQA exceeds the number of pages specified by the SQA parameter, the system attempts to allocate virtual SQA from the common service area (CSA).
In MVS/370, if the amount remaining for allocation purposes falls below 6 pages, the creation of new address spaces is suspended. In later systems the threshold is 8 pages below 16 MB.
You should monitor the size of SQA on a long-term basis for memory planning considerations. System upgrades, maintenance, and new program products may all increase the size of SQA. You should also review the SQA on a daily basis to ensure there are no SQA growth problems.
SQA pages are allocated from the system preferred area if possible. If preferred area pages are not available, and if all attempts to make a page frame available (both by page stealing and page relocation) have failed, then the reconfigurable storage (storage available to be assigned to a processor that is varied offline in a multiprocessor) is reduced by one storage unit (a processor-dependent value, 4 MB on a 3081) and the storage unit is marked as being part of the preferred area. As a last resort, a page is allocated from the V=R area.
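The order of preference just described can be summarized as a fallback chain. The sketch below is a simplified model of that order; the names and the dictionary of storage state are invented for illustration and are not actual MVS internals:

```python
# Simplified model of the SQA frame-allocation fallback order described
# above. All names and structures are illustrative, not MVS internals.

def allocate_sqa_frame(storage):
    """Return the source an SQA frame is taken from, trying each
    alternative in the documented order of preference."""
    # 1. Take a free frame from the system preferred area if one exists.
    if storage["preferred_free"] > 0:
        storage["preferred_free"] -= 1
        return "preferred"
    # 2. Try to make a preferred frame available by page stealing or
    #    page relocation.
    if storage["stealable"] > 0:
        storage["stealable"] -= 1
        return "preferred (stolen/relocated)"
    # 3. Convert one reconfigurable storage unit (4 MB on a 3081) into
    #    preferred storage, reducing what can be varied offline.
    if storage["reconfigurable_units"] > 0:
        storage["reconfigurable_units"] -= 1
        storage["preferred_free"] += 1024 - 1   # 4 MB = 1024 frames; one used now
        return "preferred (converted storage unit)"
    # 4. Last resort: allocate from the V=R area.
    return "V=R"
```

In this model, once the free preferred frames and the stealable pages are exhausted, the next request converts a 4 MB reconfigurable unit and leaves its remaining 1023 frames in the preferred pool.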
Pageable Link Pack Area (PLPA and Extended PLPA)
The Pageable Link Pack Area contains the code for SVC (supervisor call interruption) routines, access methods, and other system routines. A packing algorithm is used during PLPA initialization to minimize fragmentation of this area. In addition, a packing list should be provided to minimize the working set for PLPA by placing high activity modules, or modules that are used together, in the same page.
Expected sizes of PLPA range from 2 to 4 MB. In systems with virtual space sizing problems, the removal of unused modules from PLPA offers the potential for large gains. A field-developed program (FDP) that provides LPA analysis, MVS LPA Optimizer, SB21-3011, is available to help with this process.
Modified Link Pack Area (MLPA and Extended MLPA)
The modified link pack area can be used to temporarily include reentrant modules from the linklist libraries in the PLPA. The intent is to provide a testing capability. The MLPA is serially searched after the FLPA list, but prior to the LPA directory search. The LPA directory search locates PLPA modules with a hashing algorithm. Thus, a long and "permanent" MLPA list negates the benefits of the more efficient PLPA lookup.
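A minimal model of this search order (with invented module names and a Python dict standing in for the hashed LPA directory) shows why every module added to the MLPA lengthens the serial portion of every lookup that falls through to the directory:

```python
# Illustrative model of the module search order: FLPA list, then MLPA
# list (both searched serially), then the hashed LPA directory. Names
# and data structures are invented for illustration.

def find_module(name, flpa_list, mlpa_list, lpa_directory):
    # Serial scan of the fixed LPA list.
    for entry in flpa_list:
        if entry == name:
            return "FLPA"
    # Serial scan of the modified LPA list; a long MLPA list makes this
    # step expensive for every lookup that reaches it.
    for entry in mlpa_list:
        if entry == name:
            return "MLPA"
    # Hashed lookup in the LPA directory (modeled here as a dict).
    if name in lpa_directory:
        return "PLPA"
    return None
```

Every lookup that resolves in the PLPA still pays for a full scan of the FLPA and MLPA lists first, which is the cost the text warns about.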
A large MLPA list also causes excessive dispatching overhead because each MLPA search/update requires the CMS lock. A long MLPA list also increases SQA size, because a Contents Directory Entry (CDE) and Extent List (XTLST) are required for each entry. The list of modules to be included in MLPA is in a member of SYS1.PARMLIB, IEALPAnn. Modules may be included from any linklist library. When modules are included from libraries other than SYS1.LPALIB, SYS1.SVCLIB, or SYS1.LINKLIB, use SYS1.LINKLIB as the data set name for the module.
Fixed BLDL List (BLDLF)/Pageable BLDL List (BLDL)
A BLDL list is an in-storage copy of all, or a portion of, the directory for a linklist data set. The fixed BLDL list should point to modules that are frequently used but cannot be placed in PLPA because they are not reentrant or because their size and/or frequency of use do not justify placement in PLPA or MLPA. Either a BLDLF or BLDL list may be defined in MVS/370. In later systems, the BLDL/BLDLF lists have been eliminated.
The BLDL list should also include modules that are referenced by few address spaces and modules that are infrequently used by many address spaces. The primary purpose of the BLDL list is to reduce the I/O overhead from linklist directory lookups. There is no advantage in specifying a pageable BLDL list. The BLDL list is searched after the resident lists (FLPA, MLPA, and PLPA), but before the linklist library directories are searched.
Establishing the content of a BLDL list is a time-consuming task that does not address shifts in the character or volume of the system's workload. For example, a BLDL list designed to optimize TSO work is not of much use at night when the production batch is run. A better solution to this problem may be to use one of the dynamic BLDL packages that change the content of the list based on frequency of module use.
The size of a BLDL list is not an issue as it requires very little storage.
Sysgen Prefix Storage Area (PSA)
The prefix storage area (PSA) represents the first 4K of storage starting at address 0. There is one PSA for each processor on a multiprocessor (MP) or dyadic processor (3081). The PSA generally contains CPU-dependent, hardware-related information. For systems with multiple processors, an uninitialized copy of the PSA is allocated to initialize the PSA for each CPU brought online after system initialization (IPL). There is no extended PSA allocated above the 16 MB line.
Common Service Area (CSA and Extended CSA)
The Common Service Area (CSA) contains pageable and fixed data areas that must be addressed by more than one address space. The determination of whether a CSA frame is pageable or fixed is based on which virtual storage subpool the area is to be allocated from. Subsystems such as TCAM, VTAM, JES3, HSM, and IMS use large amounts of CSA to pass data from one address space to another. The virtual size of CSA is a consideration if there are problems with the size of the private area.
Local System Queue Area (LSQA and Extended LSQA)
The Local System Queue Area (LSQA) is allocated from the private area and is fixed in storage while the address space is resident. It contains local system control blocks. LSQA is allocated from the system-preferred area following the same scheme described for SQA allocation. The LSQA for each address space is unique. The sum of LSQA allocations for address spaces that are resident represents a block of fixed pages that is not available for demand paging. There are two ways to minimize the amount of storage used by the LSQAs in a storage-constrained system. First, reduce the system MPL. Second, limit the number of non-swappable address spaces.
Scheduler Work Area (SWA and Extended SWA)
The scheduler work area (SWA) is a pageable area allocated at the top of the private area that contains control blocks that exist over the life of the address space. The SWA eliminates the job queue that existed as a direct access data set in earlier IBM operating systems.
There are no real storage considerations associated with the SWA. The only virtual storage consideration is that this area is in the user private areas.
Subpools 229/230 (and Extended 229/230)
The subpools 229/230 area contains pageable space for system control blocks within a virtual address space. These subpools are used to provide data areas that may be referenced by MVS components with an appropriate storage protect key.
The MVS/SP Tuning Cookbook recommends that LSQA allocations that do not need to be fixed be made from these areas instead. For systems that are severely memory-constrained, this may provide some temporary relief. Because system code modification is required, and this change may impact installation exits that may reference the control blocks, this should be evaluated against the other ways to make more real storage available.
System Region
The System Region is reserved for use by the address space management functions of MVS. It comprises the bottom 16 KB of each private address space other than the Master Scheduler address space. In the Master Scheduler address space, the system region is a maximum of 200 KB. In later systems there is no extended version of this area.
Private User Region (Private and Extended Private)
The Private User Region contains user code and data. In MVS/370 systems, its virtual size ranges from 5 (or less) to 12 MB, depending on the size of the other areas comprising the virtual space. In current systems, with a virtual address space of 2 GB, its size is considerably larger than that found in MVS/370.
The minimum acceptable size for the private area is determined by installation needs. For systems that are having problems with the size of the private area, scaling down the ASM configuration, reducing the virtual size of PLPA, and minimizing the number of concurrent address spaces may be required.
Fixed Link Pack Area (FLPA and Extended FLPA)
The Fixed Link Pack Area (FLPA) is intended for modules that execute more effectively when page-fixed. In MVS/370 systems, the fixed link pack area is an extension of the nucleus. Its size is included in the nucleus figure reported by the Real Storage Usage Report for MVS/SP 1.3 systems. In current systems, it is allocated separately and is no longer an extension of the nucleus.
For storage-constrained systems, only essential modules should be kept in this area. Because the paging algorithms normally keep frequently used modules in real storage, this area should be used only to hold modules that are not frequently used but are needed for fast response to some terminal-oriented action.
Modules are loaded into the FLPA area in the order specified by the IEAFIXnn member of SYS1.PARMLIB. Thus it is important to minimize the amount of space lost to fragmentation. One method of doing this is to order the fix list by descending module size.
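The effect of ordering can be illustrated with a simplified first-fit packing model. The model assumes, purely for illustration, that modules are packed into 4 KB frames and cannot span a frame boundary; actual FLPA loading differs in detail:

```python
# Simplified illustration of fix-list ordering and fragmentation:
# modules are packed first-fit into 4 KB frames, and a module is
# assumed (for simplicity) not to span a frame boundary.

FRAME = 4096

def frames_needed(module_sizes):
    free = []                              # remaining bytes per frame
    for size in module_sizes:
        for i, remaining in enumerate(free):
            if size <= remaining:          # first frame the module fits in
                free[i] -= size
                break
        else:
            free.append(FRAME - size)      # start a new frame
    return len(free)

sizes = [1900, 1900, 1900, 2100, 2100, 2100]       # bytes, as listed
print(frames_needed(sizes))                        # 4 frames
print(frames_needed(sorted(sizes, reverse=True)))  # 3 frames
```

In this example the list as written wastes a frame to fragmentation, while the same modules ordered by descending size pack into three frames.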
NUCLEUS and Extended Nucleus
The nucleus contains the nucleus load module and some key control blocks. The IPL extensions are discussed separately, so this discussion is limited to the contents of the base nucleus.
From the view of recovering real storage from the nucleus, the I/O generation should be reviewed periodically to remove unit control blocks (UCBs) for devices that are no longer used. Because the nucleus high address is rounded to a segment boundary (64K) in MVS/370 systems, a larger than expected amount of storage may be recovered. On the other hand, in MVS/370 systems, you may find that additional modules may be added to the FLPA list, or the BLDL list may be extended "for free" due to the already existing dead space that results from this rounding.
In later systems, because the BLDL list does not exist and because the FLPA is allocated separately, this possibility for using the wasted space does not exist.
After the installation of upgrades or system maintenance, you should review the impact on the size of the nucleus and all fixed areas. For example, after the addition of more real storage, the size of the nucleus increases, because the page frame tables are in the nucleus area. Addition of 8 MB, for example, would increase the nucleus by 32 KB (2048 frames and 16 bytes per entry in the frame table).
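That arithmetic can be checked directly: 8 MB of additional real storage means 2048 new 4 KB frames, each requiring a 16-byte page frame table entry:

```python
# Growth of the page frame table when 8 MB of real storage is added:
# one 16-byte entry per 4 KB page frame.

added_storage = 8 * 1024 * 1024        # 8 MB, in bytes
frame_size = 4096                      # 4 KB page frames
entry_size = 16                        # bytes per frame table entry

frames = added_storage // frame_size   # 2048 new frames
table_growth = frames * entry_size     # 32768 bytes = 32 KB
print(frames, table_growth // 1024)    # 2048 32
```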

MVS Working Set Determination

MVS/SE2 introduced a facility, storage isolation (fencing), that guarantees a performance group a fixed amount of real storage. To use this facility effectively, it is desirable to be able to characterize the working set of a program that is to be fenced.
MVS maintains a measure of a program's working set, but to be able to use fencing fully, you would like to be able to estimate the paging that would be expected with various amounts of real storage available to the program.
With this information, you can use fencing to minimize the page faulting of a program, or you can allocate the amount of real storage expected to yield a desired paging rate. The Real Storage Analysis portion of the MICS Performance Manager Option provides reports and plots that support the working set characterization method described in this section.
You could characterize the working set requirements of a program by repeatedly executing the program, limiting it to different amounts of real storage in fixed increments. Starting at 50 pages and adding 50 pages with each execution, you should expect (in theory) to see a curve something like that shown in Figure 2-29.
Note that this procedure gives an estimate of the theoretical working set of the program. In applying storage isolation, an estimate of the theoretical working set is what is desired. When real storage is in plentiful supply, programs that are storage isolated are allowed to accumulate more pages, just as the other address spaces are. In times of increased storage demand, however, it serves no purpose to allow the storage-isolated tasks to retain more storage than they require, as they will, in most cases, reach the same paging rates after some period of time.
Figure 2-29. A Working Set Characterization
[Plot: average page faults per CPU second (y axis, roughly 5 to 30) versus working set size in frames (x axis, 50 to 300). The curve falls steeply at small working set sizes and flattens as the working set grows.]
Arthur Petrella and Harold Farrey proposed a functional definition of the working set in MVS that has worked well in practice. The working set determination method used here is based on their ideas.
The working set is, at any point in time, the number of pages referenced during the last interval (t). Since the real memory available to the program differs over the course of execution, as well as from run to run, a function describing the working set must represent the dynamics of the available real storage as well as the characteristics of the program. Thus, a probabilistic representation is required. Petrella and Farrey proposed a binomial probability function.
A set of events may be represented by a binomial distribution if three conditions are met:
  1. There are only two things that can happen (success or failure).
  2. The probability of success (p) is constant from trial to trial.
  3. There are n independent trials. That is, the outcome of one trial does not depend on the outcome of any of the others.
If you define a trial as the occurrence or non-occurrence of a page fault, then conditions 1 and 3 are certainly met. And if a measurement interval of length t is chosen such that there can be at most one page fault in each interval, then condition 2 is also met.
Now, given that the probability of success (a page fault) is small, if you consider the probability of N page faults over a time interval T, with T much larger than t, the binomial distribution can now be approximated by the Poisson distribution shown below. For a more complete discussion of the Poisson process, refer to page 117 of Probability, Statistics, and Queueing Theory by Arnold O. Allen.
    P (T) = e^(-k) * (k^N / N!)
     N

where k is the expected number of page faults in the interval T.
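As a sketch of this approximation, the Poisson probability above can be computed and compared with the exact binomial probability it approximates. The trial count and fault probability below are chosen purely for illustration:

```python
import math

# Poisson probability of N page faults with expected count k,
# P_N(T) = e**(-k) * k**N / N!, compared against the binomial
# probability it approximates (n trials, small success probability p,
# with k = n * p).

def poisson(n_faults, k):
    return math.exp(-k) * k**n_faults / math.factorial(n_faults)

def binomial(n_faults, n_trials, p):
    return (math.comb(n_trials, n_faults)
            * p**n_faults * (1 - p)**(n_trials - n_faults))

n_trials, p = 10_000, 0.0005           # many trials, small fault probability
k = n_trials * p                       # expected faults: 5.0
for n_faults in (0, 5, 10):
    print(n_faults,
          round(binomial(n_faults, n_trials, p), 6),
          round(poisson(n_faults, k), 6))
```

With many intervals and a small per-interval fault probability, the two columns printed agree to several decimal places, which is the approximation the text relies on.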
So, we expect that Figure 2-29 shows an exponential relation. With this relation, we can estimate the results of the fencing experiment from the standard SMF data collected during regular production runs of the program. In the fencing experiment, what we are really measuring over any single run is the expected value of this function.
MICS provides three measures that can be used to plot the relation in Figure 2-29:
  • PGMPGSEC - in the batch program file, the product of the number of pages used by the program and its CPU time.
  • PGMPGIN - the number of page-ins during the program's execution.
  • PGMPRCLM - the number of page reclaims during the program's execution.
From these variables, MICS computes the average working set size in pages (PGMAVWSS), and the Real Storage Analysis component calculates the paging rate as PGMPGIN + PGMPRCLM. Thus, by retrieving and analyzing several executions of a program, the relationship suggested by Figure 2-29 can be reproduced.
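As an illustration of how these measures combine, the sketch below uses hypothetical execution records. Deriving the average working set as PGMPGSEC divided by CPU seconds is an assumption based on PGMPGSEC being a pages-times-CPU-time product, not a documented MICS formula:

```python
# Hypothetical per-execution records carrying the measures named above.
# PGMAVWSS is derived here as PGMPGSEC / CPU seconds (an assumption,
# since PGMPGSEC is a page-seconds product); the paging rate is
# (PGMPGIN + PGMPRCLM) per CPU second.

runs = [
    # (cpu_seconds, PGMPGSEC, PGMPGIN, PGMPRCLM) -- fabricated values
    (10.0, 1200.0, 150, 50),
    (12.0, 1800.0,  90, 30),
    ( 8.0, 1600.0,  24,  8),
]

for cpu, pgsec, pgin, prclm in runs:
    avg_wss = pgsec / cpu              # average working set, in pages
    fault_rate = (pgin + prclm) / cpu  # page faults per CPU second
    print(round(avg_wss, 1), round(fault_rate, 1))
```

Each execution yields one (working set, fault rate) point, and several executions together trace out a curve like Figure 2-29.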
Because we are dealing with a relatively small number of samples, it is important to be able to test the hypothesis, "Does the data have an exponential relation?" and also to be able to project an optimal amount of memory to assign to the program. An exponential function may be transformed into a linear one by:
    ln e^(mx+b) = mx + b
We add one to the CPU page fault rate to allow for the case where there is no paging. Now, the relation shown below is linear.
    ln (1 + page faults/CPU second) vs. working set size

If these values are plotted for a set of executions of a program, the linear relation shown above can be estimated by regression techniques. The X intercept of this plot approximates the virtual storage used by the program. An even more interesting result is that the slope of the line indicates whether a program that has already been fenced has been given, through its storage isolation definitions, more real storage than its working set requires.
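The transformation and fit can be sketched with an ordinary least-squares regression. The sample points below are fabricated for illustration and merely follow the exponential shape of Figure 2-29:

```python
import math

# Least-squares fit of ln(1 + page faults per CPU second) against
# average working set size. The sample points are illustrative only.

points = [(120, 20.0), (150, 10.0), (200, 4.0), (250, 1.5), (300, 0.5)]

xs = [wss for wss, _ in points]
ys = [math.log(1.0 + rate) for _, rate in points]   # ln(1 + fault rate)

n = len(points)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The x intercept (where the fitted fault rate reaches zero)
# approximates the program's virtual storage use, in pages.
x_intercept = -intercept / slope
print(round(slope, 4), round(x_intercept, 1))
```

A negative slope confirms the expected decay of fault rate with working set size; a near-zero slope for an already-fenced program suggests it holds more real storage than its working set requires.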
The final step in the process is to investigate the quality of the linear model. Does the data indeed have a linear relation? How well does the regression model describe the variation in the data? To do this, the SAS procedure REG can be used.