CSA: Application Servers, Clusters, Multicast Messaging, and Load Balancers

Set up multicast messaging, load balancers, and session persistence (sticky sessions). CSA also helps you scale, share disks, distribute files to servers, and manage multiple application or background service instances. You can also configure dedicated reporting databases, perform Oracle database clustering, and tune Sun HotSpot JVMs.
 
Scale CA PPM
Scaling describes the complex activity of deciding which services to run and which computers to run them on. When scaling up or down, you want to balance performance with reliability. Even the smallest CA PPM installations involve more than one computer. For example, an installation typically has one of the following configurations:
  • One server for the database and another for everything else, or
  • One computer for CA PPM, which connects to a data center owned by a group that externally manages the database.
Medium-to-large CA PPM installations, depending on performance and reliability requirements, usually have redundant services running on several dedicated computers.
Multicast Messaging
CA PPM uses multicast messaging extensively in a cluster. The Beacon is a bootstrapping service running on all managed machines in a cluster. The Beacon is used to manage and monitor the CA PPM services on each box. The Beacon is also used to apply the patches and upgrades that are delivered from the CA PPM application server.
The Beacon services employ a dynamic discovery mechanism using multicast. Each Beacon sends a discovery message every 5 seconds telling any server listening in the cluster that it exists. CA PPM System Administration listens for these Beacon discovery messages, using them to register the active cluster nodes. Upon receiving a Beacon discovery message, CA PPM System Administration verifies the Beacon password against its own. If the verification is successful, CA PPM System Administration adds the server to its list of servers.
CA PPM System Administration also pings each Beacon directly every ten (10) seconds to determine whether the Beacon is alive. The ping is a TCP (unicast) message, so one message is sent over the network for each registered Beacon. Here is the advantage of multicast: a multicast message is sent once over the network and received multiple times by interested parties. Because it is UDP (as opposed to TCP), it is a lighter-weight message. A unicast message must be sent over the network once for each interested party. Therefore, multicasting is perfect for dynamic discovery and monitoring applications like the Beacon.
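To make the mechanism concrete, here is a minimal sketch of multicast discovery in Java. The group address, port, message format, and class name are illustrative assumptions, not the actual Beacon protocol or CA PPM code.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

// Minimal sketch of multicast discovery in the spirit of the Beacon
// heartbeat described above. Group, port, and message format are
// illustrative assumptions, not the actual CA PPM protocol.
public class DiscoveryBeacon {
    static final String GROUP = "230.0.0.1"; // hypothetical multicast group
    static final int PORT = 9090;            // hypothetical port

    // Sender: announce this node to the group every 5 seconds.
    public static void announce(String nodeName) throws Exception {
        try (MulticastSocket socket = new MulticastSocket()) {
            InetAddress group = InetAddress.getByName(GROUP);
            byte[] payload = ("HELLO " + nodeName).getBytes(StandardCharsets.UTF_8);
            while (true) {
                // One UDP datagram reaches every listener that joined the group.
                socket.send(new DatagramPacket(payload, payload.length, group, PORT));
                Thread.sleep(5000);
            }
        }
    }

    // Listener: register each node whose discovery message arrives.
    public static void listen() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(InetAddress.getByName(GROUP));
            byte[] buf = new byte[256];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String msg = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.UTF_8);
                System.out.println("Discovered: " + msg);
            }
        }
    }
}

Note how the sender never addresses an individual listener: one send reaches every joined node, which is the property that makes multicast cheaper than per-node unicast pings.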
The Beacon is not the only service to use multicasting. In addition to the Beacons, the cache management services within the application and background servers broadcast their own messages to maintain cache consistency. These messages contain no actual data. They only inform remote servers when resident data is stale and must be reloaded from the database. We refer to this process as flushing the cache. Whenever a cache is flushed on a given server in a cluster, a message is sent over the network. All other app and bg services receive the message, which informs them to flush their own caches of the data.
CA PPM uses a session monitor thread to keep sessions on disparate servers from timing out prematurely. This thread broadcasts every 5 minutes with a longer message containing active session IDs. When a session is no longer active on one server, it is flushed from all servers. When a session remains active, it is marked as such on all other servers to keep them from logging out the user.
The servers in a CA PPM cluster must be able to send and receive multicast messages. In a normal subnet, this activity is allowed by default.
Best Practice: Keep all servers in the same subnet. If you are forced to use servers in different locations with different subnets, create a multicast bridge between them.
This practice could seem like extra UDP traffic. However, when you compare it with the amount of data traveling between the database, reporting server, application servers, and clients, the cluster messaging is inconsequential. The extra traffic is a small percentage of overall network traffic. People often hear broadcast and assume their networks are overloaded. In fact, on a shared subnet, TCP (unicast) messages touch every node exactly like UDP (multicast) messages do. The difference is that TCP messages are two to three times larger than UDP messages: because TCP messages are guaranteed to arrive, they carry handshake and acknowledgment overhead for each packet, which makes them larger. Furthermore, these multicast messages in CA PPM are tiny compared to the average database request. With multiple application, background, and reporting servers on a high-performance system, hundreds of such database requests are made per second. The tiny UDP messages firing per server every 5 seconds are negligible in comparison.
Load Balancers and Session Persistence (Sticky Sessions)
CA PPM supports any hardware or software load balancer. CA PPM is truly stateless and designed to function with a round-robin or other distribution model. However, it is most efficient when a user session remains on one server; the reason is memory. You gain performance by adding more application servers, but session persistence is required in a load-balanced environment, regardless of the algorithm that is used or the number of resources on the server.
To illustrate, consider five application servers where the load balancing algorithm spreads the individual requests for a single user session across all servers. In this case, each server loads and caches that user session data. You use five times more memory than you would use with Session Persistence enabled so that the user session remains on one box.
Best Practice: Enable the Session Persistence option on the load balancer.
Configure the load balancer to use soft session persistence. Soft session persistence sends requests from the same user session to the same box; if that box is overloaded or another server is idle, the load balancer moves the stickiness from the overloaded box to the idle box. Because CA PPM is stateless, it supports this process. Furthermore, if an overloaded box goes down, those sessions are not lost. Provided the load balancer correctly detects the downed server and redirects requests to another, those user sessions are fully available on the new server.
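To illustrate the routing decision behind soft persistence, here is a minimal Java sketch. The class, record, and threshold are hypothetical illustrations, not a load balancer API or CA PPM code.

import java.util.Comparator;
import java.util.List;

// Illustrative sketch of "soft" session persistence: requests for the same
// session ID normally hash to the same backend, but an overloaded backend
// is bypassed in favor of the least-loaded one. All names and the load
// threshold are assumptions for illustration only.
public class SoftStickyRouter {
    public record Backend(String host, int activeRequests) {}

    private static final int OVERLOAD_THRESHOLD = 100; // assumed limit

    public static Backend route(String sessionId, List<Backend> backends) {
        // Stable choice: hash the session ID so a session stays on one box.
        int idx = Math.floorMod(sessionId.hashCode(), backends.size());
        Backend sticky = backends.get(idx);
        if (sticky.activeRequests() < OVERLOAD_THRESHOLD) {
            return sticky;
        }
        // Soft persistence: move stickiness off the overloaded box.
        return backends.stream()
                .min(Comparator.comparingInt(Backend::activeRequests))
                .orElse(sticky);
    }
}

In practice you enable this behavior on the load balancer itself; the sketch only shows why a session normally lands on one box (one cached copy of its data) yet can safely move when that box is overloaded or down.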
Share Disks
In a CA PPM cluster, multiple app and bg services must use the same disk for search indexing. Unless the files are stored in the database, the services must also use the same disk for document storage. In CA PPM System Administration, ensure that each server with application or background services points the Search Index Directory property to the same shared disk. Unless you store files in the database, the File Store Directory property must also point to the same shared disk.
You can most effectively share disks using a Storage Area Network (SAN) or Network Attached Storage (NAS) solution. UNIX NFS or Windows file sharing is also acceptable. A quick way to confirm that the nodes see the same share is sketched below.
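The following minimal Java sketch, with a placeholder directory path, writes a marker file from one node and reads it from another; run it on each server against your shared directory.

import java.nio.file.Files;
import java.nio.file.Path;

// Quick sanity check that a mount is truly shared: write a marker file
// from one node, then read it from another. The directory argument is a
// placeholder for your own Search Index or File Store directory.
public class SharedDiskCheck {
    public static void main(String[] args) throws Exception {
        Path marker = Path.of(args[0], "shared-disk-check.txt");
        String host = java.net.InetAddress.getLocalHost().getHostName();
        if (Files.exists(marker)) {
            // Second node: the marker written elsewhere is visible here.
            System.out.println("Visible from " + host + ": " + Files.readString(marker));
        } else {
            // First node: leave a marker for the other nodes to find.
            Files.writeString(marker, "written by " + host);
            System.out.println("Marker written by " + host);
        }
    }
}

Run it first on one server and then on the others, for example: java SharedDiskCheck /mnt/ppmshare (the mount path is hypothetical).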
Distribute Files to Servers in a Cluster
Distribute updated files to all servers in the cluster. The updated files include files on the application server that are updated when you customize UI themes or install a hot fix, patch, or upgrade.
You can view the status of the distribution by clicking NSA Logs and choosing nsa-ca.log. When the distribution is complete, the status window closes and the parent page appears. The distribution page shows the latest distributed date and version.
Follow these steps:
  1. Log in to CA PPM System Administration (CSA).
  2. Open Distribution, and click Distribute All.
    This option distributes all updated files under the CA PPM home directory.
  3. Select one or more servers, and click Distribute.
Multiple Application or Background Service Instances
If you use big-iron machines with large amounts of available physical memory, run multiple app and bg service instances on those machines. From the CA PPM perspective, it is no different from running services on two different computers. You can use the full power of a computer, with the benefits of increased performance and reliability that come from multiple services.
CSA makes multiple instances easy by providing a Clone action. This action creates a copy of the desired app or bg service with incremented, available ports and service names to avoid collisions.
After you clone a service, you can start, stop, and otherwise manage the new service instance as you would the original.
Follow these steps:
  1. Log in to CSA.
  2. Open Home, and click All Services.
  3. Select the check box for the service type you want to clone, and click Clone.
  4. If necessary, navigate to the server on which you created a service and modify the cloned settings.
Configure Dedicated Reporting Databases
You can configure CA PPM to use a secondary database against which you want to execute reports. Ensure that the secondary database is reasonably synchronized with your production CA PPM database. When the reporting database is too far behind the production database, you can encounter problems: for example, users or instance data to be included in a report may not exist yet in the reporting database. When a report is configured as shown in the following procedure, the report runs solely against the reporting database. All tables required by the report, including user and security tables, must be synchronized. If you synchronize only a subset of the production database tables, select the correct tables to support your reports.
Follow these steps:
  1. Log in to CSA, and from Home, click Servers.
  2. Click the Properties icon of the server you want to configure.
  3. Click the Database sub tab.
  4. In the Internal Connection: Niku section, click New External Connection.
  5. Complete the appropriate properties for your reporting database:
    • ID
      Defines the ID that is used to identify this connection later.
    • Service Name
      Refers to a valid TNS entry (Oracle) or ODBC entry (MS SQL Server) on the reporting server.
  6. Save the changes.
  7. Click the Reporting sub tab.
  8. Complete the following field:
    • Database ID
      The CA PPM database ID used to retrieve database information when executing reports. This ID corresponds to the IDs of database connections that are defined on the database Server: Properties page.
      Values: Niku and System
      Default: Niku
      Required: No
  9. Save the changes.
  10. Repeat the preceding steps for all servers in your CA PPM cluster.
  11. Restart all CA PPM Application (app) and CA PPM Background (bg) services in your cluster.
  12. On each reporting server in your cluster:
    1. Create a TNS entry (Oracle) or ODBC entry (SQL Server) with the appropriate connection properties that points to your dedicated reporting database server.
    2. Ensure that the name you select matches the service name for your external connection in CA PPM System Administration.
  13. Install reports.
    This step installs the reports, the CA PPM universe, and other reporting content on the BusinessObjects Enterprise report server.
Oracle Database Clustering
CA PPM supports using an Oracle cluster to provide higher scalability, redundancy, and failover than is possible with a single Oracle server.
Follow these steps:
  1. If necessary, export your existing single-server Oracle database from the single node instance and import it into the cluster.
  2. Log in to CSA.
  3. Open Home, and click Servers.
  4. Click the Properties icon for the server for which you want to edit properties.
  5. Select the Database sub tab.
  6. Edit the following properties for the database connection:
    • Specify URL
      Selected.
    • JDBC Url
      Fully qualified Oracle cluster JDBC URL. This URL is a jdbc prefix followed by the full TNS specification.
      The JDBC URL must contain the ServiceName parameter referencing a TNS entry on the specified Oracle host with the desired RAC configuration.
      For example:
      jdbc:clarity:oracle://server:1521;ServiceName=serviceTNS;BatchPerformanceWorkaround=true;InsensitiveResultSetBufferSize=0;ServerType=dedicated;supportLinks=true
      Alternative examples:
      Embed the RAC servers in the URL itself with the following DataDirect syntax:
      jdbc:clarity:oracle://server1:1521;ServiceName=serviceTNS;BatchPerformanceWorkaround=true;InsensitiveResultSetBufferSize=0;ServerType=dedicated;supportLinks=true;AlternateServers=(server2:1521;server3:1521);LoadBalancing=true
      Oracle RAC servers with SCAN listener:
      jdbc:clarity:oracle://oracscan:1521;ServiceName=serviceTNS;BatchPerformanceWorkaround=true;InsensitiveResultSetBufferSize=0;ServerType=dedicated;supportLinks=true;AlternateServers=(oracscan:1521);FailoverMode=Select;ConnectionRetryCount=20;ConnectionRetryDelay=15;LoadBalancing=true
      Oracle DataGuard:
      jdbc:clarity:oracle://PRIMARY_SERVER:1521;ServiceName=CLARITY_RW;AlternateServers=(PHYSICAL_STANDBY_SERVER:1521);ConnectionRetryCount=20;ConnectionRetryDelay=15;BatchPerformanceWorkaround=true;InsensitiveResultSetBufferSize=0;ServerType=dedicated;supportLinks=true
      For more information, see these resources:
      Oracle documentation for RAC and DataGuard setup, SCAN, and services setup.
      DataDirect Web site. Search for information about using DataDirect Connect for JDBC with Oracle Real Application Clusters (RAC).
      (A minimal sketch for testing such a URL outside of CA PPM appears after these steps.)
  7. Save the changes.
  8. To validate the database settings, run a system health report for each server. See CA PPM System Health Report, Customization Discovery, Statistics, and Log Analysis.
  9. For the Apache Tomcat application servers, restart all services in CA PPM System Administration.
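To sanity-check a cluster JDBC URL outside of CA PPM, a minimal Java sketch like the following can help. The URL shown is a placeholder modeled on the examples above; pass your database user and password as arguments.

import java.sql.Connection;
import java.sql.DriverManager;

// Minimal sketch for testing a cluster JDBC URL outside of CA PPM.
// The URL is a placeholder modeled on the examples above. The
// jdbc:clarity driver jar ships with CA PPM and must be on the
// classpath; if it does not auto-register, load its driver class with
// Class.forName first (consult the driver jar for the class name).
public class JdbcUrlCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:clarity:oracle://server:1521;ServiceName=serviceTNS"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, args[0], args[1])) {
            System.out.println("Connected to: "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}

A successful connection confirms the TNS service name and listener ports before you restart the CA PPM services against the new URL.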
 
Tune Sun HotSpot JVMs
This information applies only to environments with Sun HotSpot JVMs.
Proper tuning of the Sun HotSpot JVM is an important task when configuring and maintaining CA PPM. While proper tuning is important for the background service, it is more important for any application services running in the cluster. This article focuses on the application.
Many options are available for tuning a HotSpot JVM. See the documentation for these settings on the Oracle website.
Best Practice: At a minimum, use the following values:
  • Maximum Heap
    -Xmx<size>m
    The maximum heap setting determines the most memory that the local operating system gives to the Java VM. The local operating system does not allocate this much memory immediately on startup, but it can do so as the process runs. As a best practice, set this value to at least 2048m (2 GB), even for small installations.
  • Minimum Heap
    -Xms<size>m
    The minimum heap setting is important to avoid wasted effort by the VM when expanding the heap as the application ramps up. Specify the minimum heap as close to actual usage as possible. If the application typically uses 1.2 GB of RAM, set the minimum heap to 1200m. You can set the minimum and maximum heap sizes to be equal, which simplifies the work of the VM garbage collector. Equal settings also make the JVM process allocate the full maximum heap from the operating system at startup, which yields more consistent behavior. This approach requires you to measure the true memory allocation requirements on your server.
  • Parallel Garbage Collector
    -XX:+UseParallelGC
    The Parallel Garbage collector is recommended for any servers with two or more CPUs. The parallel garbage collector is safe to set on all servers. Any servers with fewer than two CPUs ignore this setting.
  • New Ratio
    -XX:NewRatio=<size>
    The HotSpot VM segregates objects into New and Old Spaces based on the ages of objects in memory. Short-lived objects tend to stay in the New (or Eden) Space and are collected before going elsewhere. Longer-lived objects are migrated to the Old (or Tenured) Space. The New Ratio setting does not define the explicit size of the New Space, but rather a ratio between the old and the new. A setting of -XX:NewRatio=3 translates to a ratio of 1 to 3, where the New generation is 1/3 the size of the Old generation. Applications that create and destroy many short-lived temporary objects quickly, as in a server-side application like CA PPM, require a larger-than-average New Space. Otherwise, the New Space overflows while the Old Space is underpopulated. The default for New Ratio varies by platform. To avoid problems in CA PPM, regardless of the platform, set the New Ratio to 1 to 2, which means -XX:NewRatio=2.
  • Maximum Permanent Size
    -XX:MaxPermSize=<size>m
    Besides the New and Old Spaces, there is a third space named the Permanent Space, where permanent objects, primarily Java class definitions, reside. This space grows not with the usage of the application, but with the size of the application. The more classes that are loaded in the application, the greater the permanent size. The default setting of 64m has proven too small. In Apache Tomcat, the default CA PPM setting for this space is 256m.
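Putting these values together, a starting point for an application service JVM might look like the following. The heap sizes are illustrative assumptions; measure your own usage and adjust:

-Xms2048m -Xmx2048m -XX:+UseParallelGC -XX:NewRatio=2 -XX:MaxPermSize=256m

Here the minimum and maximum heaps are set equal, as suggested above, so the full heap is allocated from the operating system at startup.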