RMF zSeries Processors' Pools
The following sections describe the easiest way to summarize resource pool consumption data for zSeries and earlier models. They also focus on an important performance issue related to weight management for shared special purpose processors.
Summarizing Processor Pool Data
You should be able to perform the most common analysis at the summarized processor pool level by taking advantage of the PR/SM LPAR Config/Activity (HARLPC) file's organization. This straightforward process is described in the following example.
Assume that the HARLPC file's granularity at the DAYS timespan is:
SYSID PRSMLPNM PRSMLPTP YEAR MONTH DAY HOUR
This organization means that the HARLPC file contains one observation per logical partition for each hour of a day within a given recording system. Besides its specific data elements pertaining to a single LPAR, each observation also carries various PR/SM common data elements that report total consumption by processor pool.
You can report on the totals in the HARLPC file's specific data elements using the MICS standard data summarization technique, by selecting the following new key/sequence:

SYSID YEAR MONTH DAY HOUR
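As a rough illustration of this summarization step (not the actual MICS facility), the following Python sketch rolls hypothetical per-LPAR hourly observations up to the new key. The record layout, the LPARDTM element, and all values are invented for the example; only the key structure follows the text:

```python
# Hypothetical HARLPC-like observations: one row per LPAR per hour.
# LPARDTM is a made-up LPAR-specific dispatch-time element; LPCTODTM is a
# PR/SM common element repeated identically on every row of the interval.
rows = [
    # (SYSID, PRSMLPNM, YEAR, MONTH, DAY, HOUR, LPARDTM, LPCTODTM)
    ("SYSA", "ZOS1", 2024, 6, 1, 9, 2400.0, 2700.0),
    ("SYSA", "ZOS2", 2024, 6, 1, 9,  300.0, 2700.0),
]

summary = {}
for sysid, lpar, year, month, day, hour, lpardtm, lpctodtm in rows:
    key = (sysid, year, month, day, hour)        # new key: LPAR name dropped
    spec_sum, common = summary.get(key, (0.0, lpctodtm))
    # Specific elements are summed across LPARs; common elements are
    # carried once per interval, since they already hold the CEC totals.
    summary[key] = (spec_sum + lpardtm, common)

print(summary)
# {('SYSA', 2024, 6, 1, 9): (2700.0, 2700.0)}
```

Note how the summed specific element matches the common element's total: both views of the interval agree once the LPAR-level detail is rolled up.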
The following data elements directly provide processor pool measurements, at the CEC level:
LPCTODTM
This element now represents the total dispatch time for all the processors in the standard CP pool.
LPCTOMSU
This is the total MSU consumption for the whole CEC. Note that only standard CPs contribute to this total.
LPCTIDTM
This element now represents the total dispatch time for all the processors in the special purpose (ICF) pool.
Additionally, you can compute a number of new measurements based on the PR/SM common data elements, such as:
CP pool total utilization:
To know how much of the total physical standard CP processors' capacity was used, apply the following formula:
                PRSMTCDT
TOTCP_BS = --------------------- x 100
                        PRSMTCP
            DURATION x ---------
                       INTERVLS
Number of standard CPs used:
The above percentage can also be expressed as the number of physical standard CP processors used relative to the pool's size:
            TOTCP_BS x PRSMACP
NBRCP_BS = --------------------
                   100
For example, a total utilization of 24% of a 7-processor physical CP pool yields 1.68 processors used, meaning the system load would actually require only 2 physical processors.
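The two CP-pool formulas above can be checked with a small Python sketch. The element names come from the text, but all values are invented for the example, DURATION is assumed to be in the same time unit as PRSMTCDT, and PRSMACP is assumed here to equal the pool size:

```python
import math

# Illustrative values only; element names follow the text.
PRSMTCDT = 6048.0   # total CP-pool dispatch time (seconds)
PRSMTCP  = 7        # physical standard CPs in the pool
PRSMACP  = 7        # standard CPs (assumed equal to pool size here)
DURATION = 3600.0   # timespan duration (seconds)
INTERVLS = 1        # number of RMF intervals summarized

# CP pool total utilization (%)
TOTCP_BS = PRSMTCDT / (DURATION * PRSMTCP / INTERVLS) * 100
# Equivalent number of physical standard CPs used
NBRCP_BS = TOTCP_BS * PRSMACP / 100

print(round(TOTCP_BS, 2), round(NBRCP_BS, 2), math.ceil(NBRCP_BS))
# 24.0 1.68 2
```

This reproduces the worked example in the text: 24% of a 7-CP pool is 1.68 engines, so 2 physical processors would cover the load.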
Shared CP pool utilization:
To know how much of the shared physical standard CP processors' capacity was used, apply the following formula:
                PRSMTCST
SHRCP_BS = -------------------------------- x 100
                        PRSMTCP - PRSMTCDP
            DURATION x --------------------
                            INTERVLS
ICF pool total utilization:
To know how much of the total physical special purpose processors' capacity was used, apply the following formula:
                PRSMTIDT
TOTICFBS = --------------------- x 100
                        PRSMTIP
            DURATION x ---------
                       INTERVLS
Number of special processors used:
The above percentage can also be expressed as the number of physical special purpose processors used relative to the pool's size:
            TOTICFBS x PRSMAIP
NBRICFBS = --------------------
                   100
Shared ICF pool utilization:
To know how much of the shared physical special purpose processors' capacity was used, apply the following formula:
                PRSMTIST
SHRICFBS = -------------------------------- x 100
                        PRSMTIP - PRSMTIDP
            DURATION x --------------------
                            INTERVLS
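The special purpose pool measurements follow the same pattern, as this Python sketch shows. Again the element names come from the text while all values are invented, and PRSMAIP is assumed to equal the pool size:

```python
# Illustrative values only; element names follow the text.
DURATION = 3600.0    # timespan duration (seconds)
INTERVLS = 1         # number of RMF intervals summarized

PRSMTIDT = 5400.0    # total dispatch time in the special purpose pool
PRSMTIST = 5040.0    # dispatch time on shared special purpose processors
PRSMTIP  = 2         # physical special purpose processors in the pool
PRSMTIDP = 0         # dedicated special purpose processors
PRSMAIP  = 2         # special purpose processors (assumed = pool size here)

# ICF pool total utilization (%), engines used, and shared-pool utilization
TOTICFBS = PRSMTIDT / (DURATION * PRSMTIP / INTERVLS) * 100
NBRICFBS = TOTICFBS * PRSMAIP / 100
SHRICFBS = PRSMTIST / (DURATION * (PRSMTIP - PRSMTIDP) / INTERVLS) * 100

print(round(TOTICFBS, 2), round(NBRICFBS, 2), round(SHRICFBS, 2))
# 75.0 1.5 70.0
```

With no dedicated special purpose processors, the shared-pool denominator equals the total pool, so SHRICFBS simply excludes any dispatch time not accumulated on shared engines.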
Special Purpose Processors' Weight Management
The distribution of the shared CPU resources is determined by the weight assigned to each LPAR configured with shared processors. PR/SM manages shared standard CP and shared special purpose processors as separate pools of physical resources. As such, the processing weights assigned to logical partitions using shared special purpose processors are totaled and managed independently from the total weights derived from all of the logical partitions using shared standard CP processors. As a result, any shared IFLs or zAAPs in the configuration are combined with the shared ICFs (Integrated Coupling Facilities).
When a logical partition uses shared processors, the zAAPs defined to the partition use shared special purpose processors and the weight given to the zAAPs is equal to the partition's weight. Since logical special purpose processors (zAAP, IFL, ICF) are dispatched on the same type of physical processing unit, their relative weights need to be carefully adjusted to avoid a negative impact on performance, especially when the configuration includes shared coupling facility LPARs. These logical partitions tend to be continuously dispatched due to their "active wait" polling algorithm; and, if their weights are too high compared to other LPARs sharing the special purpose processors (for example, a z/OS partition with shared zAAPs), they dominate the usage of the processors' time slices, thus possibly interfering with work that wants to be dispatched on zAAPs.
The following example demonstrates the requirement to update weights of Coupling Facility or Linux logical partitions when a shared zAAP is added to the configuration:
Base configuration
NUMBER OF PHYSICAL PROCESSORS: 5
    CP  : 4
    ICF : 1

                      PROCESSOR    ALLOWED
 LPAR NAME   WEIGHT   NUM  TYPE    RESOURCE
 ----------  -------  ---  ----    --------
 ZOS1            900    4  CP           90%
 ZOS2            100    4  CP           10%
             =======               ========
                1000                   100%

 ICF1            550    1  ICF          55%
 ICF2            450    1  ICF          45%
             =======               ========
                1000                   100%
In this configuration, the ZOS1 LPAR may use the capacity of 3.6 standard CP processors, and ZOS2 may use 0.4 processor.
On the other hand, the two coupling facility LPARs sharing a single special purpose processor (ICF) each use about half of the ICF.
Adding a zAAP
Adding a zAAP to the ZOS2 LPAR without adjusting the weights of the special purpose processors' pool results in the following:
NUMBER OF PHYSICAL PROCESSORS: 5
    CP  : 4
    ICF : 2 (1 ICF + 1 zAAP)

                           PROCESSOR    ALLOWED
 LPAR NAME        WEIGHT   NUM  TYPE    RESOURCE
 ----------      -------   ---  ----    --------
 ZOS1                900     4  CP           90%
 ZOS2      +-->      100     4  CP           10%
           !     =======                ========
           !        1000                    100%
           !
 ICF1      !         550     1  ICF          50%
 ICF2      !         450     1  ICF          41%
 ZOS2      +-->      100     1  ICF           9%
                 =======                ========
                    1100                    100%
Looking at the above numbers, nothing changed for the z/OS LPARs with regard to their relative share of the standard CP processors; but instead of adding a full zAAP to the ZOS2 LPAR, we actually added only a small portion of one. Considering how PR/SM manages weights, the ICF1 coupling facility LPAR now has 50% of all shared special purpose processors' capacity available. The pool now comprising 2 processors (the ICF plus the zAAP we just added), ICF1 can actually use up to one full physical engine. Similarly, ICF2 now has 41% of these total resources, meaning it can use 0.82 physical engine. This leaves only 0.18 for the zAAP work on the ZOS2 LPAR.
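The shares quoted above can be reproduced with a short Python sketch using the weights from the example table: each LPAR's share is its weight over the pool's total weight, and the engine equivalent is that share times the pool size.

```python
# Weights from the "Adding a zAAP" example: the special purpose pool now
# holds ICF1, ICF2 and ZOS2 (for its shared zAAP), over 2 physical engines.
sp_pool = {"ICF1": 550, "ICF2": 450, "ZOS2": 100}
engines = 2

total = sum(sp_pool.values())                 # 1100
shares = {lpar: w / total for lpar, w in sp_pool.items()}

for lpar, share in shares.items():
    print(f"{lpar}: {share:.0%} of the pool = {share * engines:.2f} engines")
# ICF1: 50% of the pool = 1.00 engines
# ICF2: 41% of the pool = 0.82 engines
# ZOS2: 9% of the pool = 0.18 engines
```

The 0.18 engine left for ZOS2 is exactly the shortfall the weight adjustment below corrects.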
To allow the full zAAP capacity to be used by the ZOS2 LPAR, adjust the weights of the shared special purpose processors' pool as follows:
- Always start with the zAAP weights, over which we have no control: sum the weights of the partitions that have shared zAAPs, and count the total number of shared zAAPs:

  TOT_ZAAP_WEIGHT = sum of LPARs' weights with shared zAAPs
  TOT_ZAAP_PROCS  = total number of shared zAAPs

  In our example we have only one LPAR configured with zAAPs and only one shared zAAP; therefore this total is equal to 100.
- Divide the above total by the number of shared zAAPs to obtain the weight of one processor in the special purpose processors' pool:

  ONE_SPPROC_WEIGHT = TOT_ZAAP_WEIGHT / TOT_ZAAP_PROCS
- Use the above value to reset the weights of the other special purpose LPARs (in our example ICF1 and ICF2) to the same ratio they had before the zAAP, using the following formula:

  LPAR_NEW_WEIGHT = ONE_SPPROC_WEIGHT x (previous share)

  In our example the adjusted weights for the ICF1 and ICF2 LPARs are:

  ICF1_NEW_WEIGHT = 100 x 55% = 55
  ICF2_NEW_WEIGHT = 100 x 45% = 45
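The adjustment steps above can be sketched as a small Python function. The function name and argument shapes are illustrative only, not a MICS or PR/SM interface; the numbers are those of the example:

```python
def adjust_sp_weights(zaap_lpar_weights, n_shared_zaaps, prior_shares_pct):
    """Recompute special purpose LPAR weights after adding shared zAAPs.

    zaap_lpar_weights: weights of the LPARs owning shared zAAPs (fixed by
                       the CP pool; we have no control over them)
    n_shared_zaaps:    total number of shared zAAPs
    prior_shares_pct:  {lpar: share in % held before the zAAP was added}
    """
    tot_zaap_weight = sum(zaap_lpar_weights)                 # step 1
    one_spproc_weight = tot_zaap_weight / n_shared_zaaps     # step 2
    # Step 3: scale each prior share by the per-processor weight.
    return {lpar: one_spproc_weight * pct / 100
            for lpar, pct in prior_shares_pct.items()}

# ZOS2 (weight 100) owns the single shared zAAP; ICF1/ICF2 previously
# held 55% and 45% of the special purpose pool.
new = adjust_sp_weights([100], 1, {"ICF1": 55, "ICF2": 45})
print(new)
# {'ICF1': 55.0, 'ICF2': 45.0}
```

These are the weights shown in the adjusted configuration below: ICF1 and ICF2 keep their 55/45 ratio while the zAAP's full engine is reserved for ZOS2.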
Once the weights are adjusted, the configuration should look like this:
NUMBER OF PHYSICAL PROCESSORS: 5
    CP  : 4
    ICF : 2 (1 ICF + 1 zAAP)

                      PROCESSOR    ALLOWED
 LPAR NAME   WEIGHT   NUM  TYPE    RESOURCE
 ----------  -------  ---  ----    --------
 ZOS1            900    4  CP           90%
 ZOS2            100    4  CP           10%
             =======               ========
                1000                   100%

 ICF1             55    1  ICF        27.5%
 ICF2             45    1  ICF        22.5%
 ZOS2            100    1  ICF          50%
             =======               ========
                 200                   100%