Configure Agent to Policy Server Communication Using a Hardware Load Balancer

Hardware Load Balancing
CA Single Sign-On supports the use of hardware load balancers that are configured to expose multiple Policy Servers through one or more virtual IP addresses (VIPs). The hardware load balancer then dynamically distributes the request load among all Policy Servers that are associated with that VIP. The following hardware load balancing configurations are supported:
  • A single VIP with multiple Policy Servers exposed by that VIP
  • Multiple VIPs with multiple Policy Servers exposed by each VIP
Single VIP, Multiple Policy Servers Per VIP
[Figure: Load balancer with a single VIP for multiple Policy Servers]
In the configuration that is shown in the previous diagram, the load balancer exposes multiple Policy Servers using a single VIP. This scenario presents a single point of failure if the load balancer handling the VIP fails.
Multiple VIPs, Multiple Policy Servers Per VIP
[Figure: Load balancer with multiple VIPs and multiple Policy Servers per VIP]
In the configuration that is shown in the previous diagram, groups of Policy Servers are exposed as separate VIPs by one or more load balancers. If multiple load balancers are used, this amounts to failover between load balancers, thus eliminating a single point of failure. However, all major hardware load balancer vendors handle failover between multiple similar load balancers internally such that only a single VIP is required. If you are using redundant load balancers from the same vendor, you can therefore configure Agent to Policy Server communication with a single VIP and you can still have robust load balancing and failover.
If you are using a hardware load balancer to expose Policy Servers as multiple virtual IP addresses (VIPs), we recommend that you configure those VIPs in a failover configuration. Round robin load balancing is redundant as the hardware load balancer performs the same function more efficiently.
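For illustration, this failover arrangement is expressed in the Host Configuration Object that governs the Agent's Policy Server list. The following fragment is a hedged sketch, not a complete configuration: the addresses are placeholders, the ports shown are the product defaults, and these values are normally set through the Administrative UI rather than in a file.
    # Illustrative Host Configuration Object settings, rendered as name=value pairs.
    # PolicyServer format: "address,accounting_port,authentication_port,authorization_port"
    EnableFailover="YES"
    PolicyServer="192.168.10.10,44441,44442,44443"   # VIP 1 (primary)
    PolicyServer="192.168.10.20,44441,44442,44443"   # VIP 2 (used only on failover)
With EnableFailover set to YES, the Agent sends all traffic to the first VIP and moves to the second only when the first stops responding, which matches the recommendation above.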
Configure CA Single Sign-On Agent to Policy Server Connection Lifetime
Once established, the connection between an Agent and a Policy Server is maintained for the life of the session. A hardware load balancer therefore handles only the initial connection request; all further traffic on that connection goes to the same Policy Server until the connection is terminated and a new Agent connection is established.
By default, the Policy Server connection lifetime is 360 minutes, which is typically too long for effective hardware load balancing. To ensure that all Agent connections are renewed frequently enough for effective load balancing, configure the maximum Agent connection lifetime on the Policy Server. For example, with a maximum lifetime of 15 minutes, every Agent connection is re-established, and therefore re-balanced, at least four times per hour.
To configure the maximum connection lifetime for a Policy Server, set the following parameter:
AgentConnectionMaxLifetime
    Specifies the maximum Agent connection lifetime in minutes.
    Default: 0. Sets no specific value; only the CA Single Sign-On default connection lifetime (360 minutes) limit is enforced.
    Limits: 0 - 360
    Example: 15
If you do not have write access to the CA Single Sign-On installation folder, an Administrator must grant you permission to use the related XPS command-line tools using the Administrative UI or the XPSSecurity tool. Check the NETE_PS_ROOT environment variable if you do not know the installation folder path.
To configure the maximum Agent connection lifetime for hardware load balancers
  1. Open a command line on the Policy Server, and enter the following command:
    xpsconfig
    The tool starts, displays the name of the log file for this session, and opens a menu of choices.
  2. Enter the following command:
    sm
    A list of options appears.
  3. Enter the numeric value that corresponds to the AgentConnectionMaxLifetime parameter, for example: 4.
    The AgentConnectionMaxLifetime parameter menu appears.
  4. Type c to change the parameter value.
    The tool prompts you to specify whether to apply the change locally or globally.
  5. Enter one of the following values:
    • l—The parameter value is changed for the local Policy Server only, overriding the global value.
    • g—The parameter value is changed globally for all Policy Servers that use the same policy store and do not have a local override set.
  6. Enter the new maximum Agent connection lifetime, in minutes, for example:
    30
    The AgentConnectionMaxLifetime parameter menu reappears, showing the new value. If a local override value is set, both the global and local values are shown.
  7. Enter Q three times to end your XPSConfig session.
    Your changes are saved and the command prompt appears.
  8. Restart the Policy Server.
Monitoring the Health of Hardware Load Balancing Configurations
Different hardware load balancers provide various methods of determining the health of the hardware and applications that they are serving. This section describes general recommendations rather than vendor-specific cases.
Determining server health is complicated by the fact that application health and load might not be the only considerations for the load balancer. For example, a relatively unburdened Policy Server can be running on a system that is otherwise burdened by another process. The load balancer should therefore also consider the state of the server itself (CPU, memory usage, and disk activity).
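As an illustration of that principle, a custom monitor can poll a small HTTP health endpoint on the Policy Server host that reports machine-level metrics alongside a pass/fail status. The following is a minimal sketch, not a CA Single Sign-On component: it assumes the third-party psutil package, and the port and thresholds are arbitrary placeholders.
    # health_probe.py: hypothetical host-health endpoint for a load balancer monitor.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import psutil  # third-party package: pip install psutil

    CPU_LIMIT = 90.0  # percent busy; placeholder threshold
    MEM_LIMIT = 90.0  # percent in use; placeholder threshold

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            cpu = psutil.cpu_percent(interval=1)   # CPU utilization sampled over 1 second
            mem = psutil.virtual_memory().percent  # physical memory in use
            healthy = cpu < CPU_LIMIT and mem < MEM_LIMIT
            body = json.dumps({"cpu": cpu, "mem": mem, "healthy": healthy}).encode()
            # HTTP 200 tells the monitor the host is usable; 503 triggers failover.
            self.send_response(200 if healthy else 503)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
Disk activity could be folded into the same check with psutil.disk_io_counters(). Most load balancers can mark a member down when an HTTP monitor receives a non-200 response, which is why the sketch signals health through the status code.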
Active Monitors
Hardware load balancers can use active monitors to poll the hardware or application for status information. Each major vendor supports various active monitors. This topic describes several of the most common monitors and their suitability for monitoring the Policy Server.
  • TCP Half Open
    The TCP Half Open monitor performs a partial TCP/IP handshake with the Policy Server. The monitor sends a SYN packet to the Policy Server. If the Policy Server is up, it sends a SYN-ACK back to the monitor to indicate that it is healthy. (A sketch of this kind of probe appears after this list.)
  • Simple Network Management Protocol (SNMP)
    An SNMP monitor can query the Policy Server MIB to determine the health of the Policy Server. A sophisticated implementation can query values in the MIB such as queue depth, socket count, threads in use, and threads available. SNMP monitoring is therefore the most suitable method for getting an in-depth sense of Policy Server health. (A second sketch after this list shows the basic query.)
    To enable SNMP monitoring, configure the OneView Monitor and SNMP Agent on each Policy Server.
    Note: Not all hardware load balancers provide out-of-the-box SNMP monitoring.
  • Internet Control Message Protocol (ICMP)
    The ICMP health monitor sends an ICMP echo request (ping) to almost any networked device to determine whether it is online. Because the ICMP monitor does little to prove that the Policy Server itself is healthy, it is not recommended for monitoring Policy Server health.
  • TCP Open
    The TCP Open Monitor performs a full TCP/IP handshake with a networked application. The monitor sends well-known text to a networked application; the application must then respond to indicate that it is up. Because the Policy Server uses end-to-end encryption of TCP/IP connections and a proprietary messaging protocol, TCP Open Monitoring is unsuitable for monitoring Policy Server health.
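To make the TCP Half Open behavior concrete, the probe below reproduces it with the third-party scapy package: it sends a SYN, treats a SYN-ACK as a sign of health, and resets the connection instead of completing the handshake. This is a sketch of the general technique, not any vendor's monitor; the address and port are placeholders, and raw sockets require administrative privileges.
    # half_open_probe.py: illustrative TCP Half Open check (run with root privileges).
    from scapy.all import IP, TCP, send, sr1  # third-party package: pip install scapy

    def half_open_probe(host: str, port: int, timeout: float = 2.0) -> bool:
        syn = IP(dst=host) / TCP(dport=port, flags="S")
        reply = sr1(syn, timeout=timeout, verbose=0)  # wait for a single response
        if reply is None or not reply.haslayer(TCP):
            return False                              # no answer: treat as down
        if reply[TCP].flags & 0x12 != 0x12:           # SYN and ACK bits both set?
            return False                              # RST or unexpected reply
        # Tear down without completing the handshake, as a half-open monitor does.
        send(IP(dst=host) / TCP(dport=port, flags="R", seq=syn[TCP].seq + 1), verbose=0)
        return True

    if __name__ == "__main__":
        print(half_open_probe("192.168.10.10", 44442))  # placeholder VIP and port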
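Similarly, the SNMP approach reduces to a GET against the SNMP Agent on the Policy Server host. The sketch below uses the synchronous high-level API of the third-party pysnmp package (as in pysnmp 4.x) and queries the standard sysUpTime object purely as a stand-in; a real monitor would query Policy Server values from the product MIB instead. The community string and address are placeholders.
    # snmp_probe.py: illustrative SNMP GET (pysnmp 4.x-style synchronous API).
    from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                              ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

    def snmp_get(host: str, oid: str, community: str = "public"):
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # SNMP v2c
            UdpTransportTarget((host, 161)),       # default SNMP port
            ContextData(),
            ObjectType(ObjectIdentity(oid)),
        ))
        if error_indication or error_status:
            return None                            # treat any SNMP error as unhealthy
        return var_binds[0][1]                     # the fetched MIB value

    if __name__ == "__main__":
        # Standard sysUpTime.0 as a placeholder; substitute OIDs from the product MIB.
        print(snmp_get("192.168.10.10", "1.3.6.1.2.1.1.3.0"))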
Passive Monitors
In-band health monitors run on the hardware load balancer and analyze the traffic that flows through it. They have a lower impact than active monitors and impose little overhead on the load balancer.
In-band monitors can be configured to detect a particular failure rate before failing over. On some load balancers, an in-band monitor that detects an application issue can also invoke a designated active monitor to determine when the issue is resolved and the server is available again.