CSA: Application Servers, Clusters, Multicast Messaging, and Load Balancers (On-Premise Only)

Set up multicast messaging, load balancers, and session persistence (sticky sessions). CSA also helps you scale, share disks, distribute files to servers, and manage multiple application or background service instances. You can also configure dedicated reporting databases, perform Oracle database clustering, and tune Sun HotSpot JVMs.
Scaling Classic PPM is the complex activity of deciding which services to run and on which computers to run them. When scaling up or down, you want to balance performance with reliability. Even the smallest Classic PPM installations have more than one computer involved. For example, an installation typically has one of the following configurations:
  • One server for the database and another for everything else, or
  • One computer for Classic PPM, which connects to a data center owned by a group that externally manages the database.
Depending on performance and reliability requirements, Classic PPM installations usually have redundant services running on several dedicated computers.
Multicast Messaging
Classic PPM uses multicast messaging extensively in a cluster. The Beacon is a bootstrapping service running on all managed machines in a cluster. The Beacon is used to manage and monitor the Classic PPM services on each machine, and to apply the patches and upgrades that are delivered from the Classic PPM application server.
The Beacon services employ a dynamic discovery mechanism using multicast. Each Beacon sends a discovery message every 5 seconds, telling any listening server in the cluster that it exists. Classic PPM System Administration listens for these Beacon discovery messages and uses them to register the active cluster nodes. Upon receiving a Beacon discovery message, Classic PPM System Administration verifies the Beacon password against its own. If the verification succeeds, Classic PPM System Administration adds the server to its list of servers.
Classic PPM System Administration also pings each Beacon directly every 10 seconds to determine whether the Beacon is alive. The ping is a TCP (unicast) message, so one message is sent over the network for each registered Beacon. Here is the advantage of multicast: a multicast message is sent once over the network and received by every interested party, whereas a unicast message must be sent once for each interested party. Because it uses UDP rather than TCP, a multicast message is also lighter weight. Multicasting is therefore well suited to dynamic discovery and monitoring applications like the Beacon.
The Beacon is not the only service to use multicasting. In addition to the Beacons, the cache management services within the application and background servers broadcast their own messages to maintain cache consistency. These messages contain no actual data; they only inform remote servers when resident data is stale and must be reloaded from the database. We refer to this process as flushing the cache. Whenever a cache is flushed on a given server in a cluster, a message is sent over the network. All other app and bg services receive the message, which informs them to flush their own caches of that data.
Classic PPM uses a session monitor thread to keep sessions on disparate servers from timing out prematurely. Every 5 minutes, this thread broadcasts a longer message containing the active session IDs. When a session is no longer active on one server, it is flushed from all servers. When a session remains active, it is marked as such on all other servers to keep them from logging out the user.
The servers in a Classic PPM cluster must be able to send and receive multicast messages. In a normal subnet, this activity is allowed by default. As a best practice, keep all servers in the same subnet. If you are forced to use servers in different locations with different subnets, create a multicast bridge between them.
This practice could seem like extra UDP traffic. However, when you compare it to the amount of data traveling between the database, reporting server, application servers, and clients, the cluster messaging is inconsequential; the extra traffic is a small percentage of overall network traffic. People often hear broadcast and assume their networks are overloaded. On a shared network segment, however, unicast (TCP) traffic traverses the subnet much like multicast (UDP) traffic does. The difference is that TCP messages carry more overhead: because TCP guarantees delivery, it requires connection setup and acknowledgments, so TCP messages are typically two to three times larger than the equivalent UDP messages. Furthermore, these multicast messages in Classic PPM are tiny compared to the average database request. With multiple application, background, and reporting servers on a high-performance system, hundreds of such database requests are made per second. The tiny UDP messages firing per server every 5 seconds are nothing in comparison.
Newer releases introduced JGroups into the architecture to control multicast messaging within the application tier. Previously, you could run the application tier without multicasting, but the tier is now much more involved with the background and process engines, and without multicast these two services would likely not perform as expected. Releases 14.x and newer typically require multicast to be active at the router layer so that the cluster services communicate correctly.
Load Balancers and Session Persistence (Sticky Sessions)
Classic PPM supports hardware or software load balancers. Classic PPM is truly stateless and designed to function with round-robin and other distribution models. However, it is most efficient in terms of memory and performance when a user session remains on one server. By adding more application servers, you gain performance.
Session persistence is required in a load-balanced environment, regardless of the load-balancing algorithm used or the number of resources on each server.
To illustrate, consider a load-balancing algorithm that spreads the individual requests of a single user session across five application servers. In this case, each server loads and caches that user session data, so you use five times more memory than with session persistence enabled, where the user session remains on one server. We recommend that you enable the Session Persistence option on the load balancer.
Configure the load balancer to use soft session persistence. Soft session persistence sends requests from the same user session to the same server; if that server is overloaded or another server is idle, the load balancer moves the stickiness from the overloaded server to the idle one. Because Classic PPM is stateless, it supports this process. Furthermore, if an overloaded server goes down, those sessions are not lost. Provided the load balancer correctly detects the downed server and redirects requests to another one, those user sessions are fully available on the new server.
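For illustration, cookie-based session persistence on a software load balancer such as HAProxy might look like the following sketch. The backend name, server addresses, and cookie values are hypothetical; consult your load balancer's documentation for the equivalent settings:

```
backend ppm_app
    balance roundrobin
    # Requests carrying the SERVERID cookie return to the same app server;
    # new or failed-over sessions are re-balanced to a healthy server.
    cookie SERVERID insert indirect nocache
    server app1 10.0.0.11:8080 check cookie app1
    server app2 10.0.0.12:8080 check cookie app2
```

The `check` keyword enables health checks, which is what allows the load balancer to detect a downed server and redirect its sessions as described above.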
Share Disks
In a Classic PPM cluster, multiple app and bg services must use the same disk for search indexing. Unless the files are stored in the database, the services must also use the same disk for document storage. In Classic PPM System Administration, ensure that each server with application or background services points the Search Index Directory property to the same shared disk. Unless you store files in the database, the File Store Directory property must also point to the same shared disk.
You can most effectively share disks using a Storage Area Network or Network Attached Storage solution. Unix NFS or Windows file sharing is also acceptable.
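For example, sharing the index and file-store directories over Unix NFS might look like the following sketch; the hostnames and paths are hypothetical:

```
# /etc/exports on the NFS server
/export/ppm/searchindex  app1(rw,sync) app2(rw,sync)
/export/ppm/filestore    app1(rw,sync) app2(rw,sync)

# On each server running app or bg services
mount -t nfs nfsserver:/export/ppm/searchindex /opt/ppm/searchindex
mount -t nfs nfsserver:/export/ppm/filestore   /opt/ppm/filestore
```

The mounted paths would then be entered as the Search Index Directory and File Store Directory properties in CSA.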
Distribute Files to Servers in a Cluster
Distribute updated files to all servers in the cluster. The updated files include files on the application server that change when you customize UI themes or install a hot fix, patch, or upgrade.
You can also view the status of the distribution by clicking NSA Logs and choosing nsa-ca.log. When complete, the status window closes and the parent page appears. The distribution page shows the latest distributed date and version.
Follow these steps:
  1. Log in to Classic PPM System Administration (CSA).
  2. Open Distribution, and click Distribute All.
    This option distributes all updated files under the Classic PPM home directory.
  3. Select one or more servers, and click Distribute.
Multiple Application or Background Service Instances
If you use big-iron machines with large amounts of available physical memory, run multiple app and bg service instances on those machines. From the Classic PPM perspective, it is no different from running services on two different computers. You can use the full power of a computer, with the benefits of increased performance and reliability that come from multiple services.
CSA makes multiple instances easy by providing a Clone action. This action creates a copy of the desired app or bg service with incremented, available ports and service names to avoid collisions. After you clone a service, you can start, stop, and otherwise manage the new service instance as you would the original.
Follow these steps:
  1. Log in to CSA.
  2. Open Home, and click All Services.
  3. Select the check box for the service type you want to clone, and click Clone.
  4. If necessary, navigate to the server on which you created a service and modify the cloned settings.
Configure a Dedicated External Data Source
You can configure Classic PPM to use a secondary database to execute reports. Ensure that the secondary database is reasonably synchronized with your production Classic PPM database. When the reporting database is too far behind the production database, you can encounter problems. For example, users or instance data to be included in a report might not yet exist in the reporting database.
When a report is configured as shown in the following procedure, the report runs solely against the reporting database. All tables required by the report, including user and security tables, must be synchronized. If you synchronize a subset of the production database tables, select the correct tables to support your reports.
Follow these steps:
  1. Log in to CSA, and from Home, click Servers.
  2. Click the Properties icon of the server you want to configure.
  3. Click the Database sub tab.
  4. In the Internal Connection: Niku section, click New External Connection.
  5. Complete the appropriate properties for your reporting database:
      ID
      Defines the ID that is used to identify this connection later.
      Service Name
      Refers to a valid TNS entry (Oracle) or ODBC entry (MS SQL Server) on the reporting server.
  6. Save the changes.
  7. Click the Reporting sub tab.
  8. Complete the following field:
      Database ID
      The Classic PPM database ID that is used to retrieve database information when executing reports. This ID corresponds to the IDs of the database connections (Niku and System) that are defined on the Server: Properties: Database subtab.
  10. Repeat the preceding steps for all servers in your Classic PPM cluster.
  11. Restart all Classic PPM Application (app) and Classic PPM Background (bg) services in your cluster.
  12. On each reporting server in your cluster:
    1. Create a TNS entry (Oracle) or ODBC entry (SQL Server) with the appropriate connection properties that points to your dedicated reporting database server.
    2. Ensure that the name you select matches the service name for your external connection in Classic PPM System Administration.
  13. Install reports.
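As an illustration, the TNS entry in step 12 might look like the following tnsnames.ora fragment. The alias, host, and service name are hypothetical, and the alias must match the Service Name you entered for the external connection in CSA:

```
PPMREPORT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = reportdb.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ppmrep))
  )
```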
In Release 14.4 and older, you could use these steps to install the reports, the Classic PPM universe, and other reporting content on the BusinessObjects Enterprise report server. In 15.1 and newer releases, you might use these steps to set up a parallel transactional database to run reports instead of, or in addition to, using the data warehouse schema. The steps apply for adding any additional data source to on-premise editions of Classic PPM. For example, add a replicated transaction schema, external data warehouse, or any third-party application schema that resides in an Oracle or MS-SQL database.
Oracle Database Clustering
Classic PPM supports using an Oracle cluster to provide higher scalability, redundancy, and failover than is possible with a single Oracle server.
Follow these steps:
  1. If necessary, export your existing single-server Oracle database from the single node instance and import it into the cluster.
  2. Log in to CSA.
  3. Open Home, and click Servers.
  4. Click the Properties icon for the server for which you want to edit properties.
  5. Select the Database sub tab.
  6. Edit the following properties for the database connection:
      Specify URL
      JDBC URL
      The fully qualified Oracle cluster JDBC URL. This URL is a jdbc prefix followed by the full TNS specification. The JDBC URL must contain the ServiceName parameter referencing a TNS entry on the specified Oracle host with the desired RAC configuration.
      Alternatively, you can embed the RAC servers in the URL itself with the DataDirect syntax, point the URL at Oracle RAC servers with a SCAN listener, or use Oracle DataGuard.
      For more information, see these resources:
      • Oracle documentation for RAC and DataGuard setup, SCAN, and services setup.
      • The DataDirect website. Search for information about using DataDirect Connect for JDBC with Oracle Real Application Clusters (RAC).
  7. Save the changes.
  8. To validate the database settings, run a system health report for each server. See Run a Health Report.
  9. For the Apache Tomcat application servers, restart all services in Classic PPM System Administration.
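The JDBC URL formats referenced in step 6 follow standard Oracle and DataDirect conventions. The sketches below use hypothetical hosts and service names; verify the exact syntax against your driver documentation:

```
# Thin driver with the full TNS specification embedded in the URL
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=racnode2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ppmdb)))

# DataDirect syntax with the RAC servers embedded in the URL
jdbc:datadirect:oracle://racnode1:1521;ServiceName=ppmdb;AlternateServers=(racnode2:1521);LoadBalancing=true

# Oracle RAC behind a SCAN listener (EZConnect form)
jdbc:oracle:thin:@//rac-scan.example.com:1521/ppmdb
```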
Tune Sun HotSpot JVMs
This information applies only to environments with Sun HotSpot JVMs.
Proper tuning of the Sun HotSpot JVM is an important task when configuring and maintaining Classic PPM. While proper tuning is important for the background service, it is more important for any application services running in the cluster. This article focuses on the application. See the documentation for these settings on the Oracle website. You can also consult Broadcom Service partners for help in sizing your JVM heap based on your implementation.
Many options are available for tuning a HotSpot JVM.
Best Practice: At a minimum, use the following values:
    Maximum Heap
    The maximum heap setting determines the most memory that the local operating system gives to the Java VM. The local operating system does not allocate this much memory immediately on startup, but it can do so as the process runs. As a best practice, set this value to at least 2048m (2 GB), even for small installations. For improved performance and fewer out-of-memory errors, set this value to 4 GB or 8 GB for larger datasets. For example, -Xms1024m -Xmx4096m.
    Minimum Heap
    The minimum heap setting is important to avoid wasted effort by the VM when expanding the heap as the application is ramped up. Specify the minimum heap as close to reality as possible. If the application typically uses 1.2 GB of RAM, set the minimum heap setting to 1200m. You can set the minimum and maximum heap sizes to be equal. This results in a simpler task for the VM garbage collector. These settings also make the JVM process allocate the full maximum heap from the operating system at startup, which is more consistent. This process requires you to measure true memory allocation requirements on your server.
    Parallel Garbage Collector
    The Parallel Garbage collector is recommended for any servers with two or more CPUs. The parallel garbage collector is safe to set on all servers. Any servers with fewer than two CPUs ignore this setting.
    New Ratio
    The HotSpot VM segregates objects into New and Old Spaces based on the ages of objects in memory. Short-lived objects tend to stay in the New (or Eden) Space and are collected before going elsewhere. Longer-lived objects are migrated to the Old (or Tenured) Space. The New Ratio setting does not define the explicit size of the New Space, but rather a ratio between the Old and the New. A setting of -XX:NewRatio=3 translates to a ratio of 1 to 3, where the New generation is 1/3 the size of the Old generation. Applications that quickly create and destroy many short-lived temporary objects, as in a server-side application like Classic PPM, require a larger-than-average New Space. Otherwise, the New Space overflows while the Old Space is underpopulated. The default for New Ratio varies by platform. To avoid problems in Classic PPM, regardless of the platform, set the New Ratio to 1 to 2, which means -XX:NewRatio=2.
    Maximum Permanent Size
    Besides the New and Old Spaces, there is a third space named the Permanent Space. Permanent objects, primarily Java class definitions, reside in this space. This space grows not with the usage of the application, but with the size of the application: the more classes the application loads, the greater the permanent size. The default setting of 64m has proven too small. In Apache Tomcat, the default Classic PPM setting for this space is 256m.
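Taken together, the settings above might appear on an app service JVM command line as in the following sketch. The heap sizes are illustrative; derive your own values from measured memory usage:

```
-Xms2048m -Xmx4096m -XX:+UseParallelGC -XX:NewRatio=2 -XX:MaxPermSize=256m
```

Note that -XX:MaxPermSize applies only to the older HotSpot JVMs this section describes; the permanent generation was removed in Java 8.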