CSA: Configure JDBC Ping As An Alternative to Multicast

As an alternative to network multicast, Clarity PPM allows the optional implementation of JDBC-based messaging at the database level using JGROUPS JDBC ping discovery protocol. JDBC ping uses a common shared database to store information about cluster nodes used for discovery.
Disclaimer: Limited support is available for this configuration option until further notice. After reading the documentation on this page, if you still have questions or experience issues, you can continue to contact CA Support; however, expect delays for this relatively new capability, which is growing in popularity as customers and partners evaluate their deployment strategies. The JDBC ping option is only available and supported on CA Clarity PPM Release 15.3 or higher.
Ideally, all cluster nodes can access the same database. When a node starts, it queries the database for information about existing members, determines the coordinator, and requests to join the cluster. Each node also inserts information about itself into the table so that other members can find it. Messages are delivered just as they are with multicast, except that node discovery is now handled through JDBC.
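For background, a generic JGroups JDBC_PING protocol entry looks roughly like the following sketch. It is shown for illustration only: Clarity PPM builds and manages its own JGroups stack when you set useJDBCPing="true" in properties.xml (described below), and the connection values here are placeholders, not PPM settings.
    <JDBC_PING connection_url="jdbc:oracle:thin:@dbhost:1521/ppmdb"
               connection_username="ppm_user"
               connection_password="example_password"
               connection_driver="oracle.jdbc.OracleDriver" />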
Note: Jaspersoft cannot use JDBC ping. Clustered Jaspersoft environments must continue to use multicast. If you deploy Jaspersoft on a single server, you must continue to use multicast. Single-server Jaspersoft instances use a direct JDBC database connection to PPM.
Note: CA Clarity PPM System Administration (CSA) sometimes still appears under its legacy abbreviation, NSA.
JDBC Ping: Prerequisites 
  • Verify all settings in a non-production environment, such as dev or test, before going live in a production system.
  • Verify that your configuration of CA PPM uses either multicast or JDBC PING. You cannot mix these options.
  • If you use JDBC PING, verify that all CA PPM services are configured to use JDBC PING.
  • Verify that all NSA passwords on all servers are set to the same value. One common password is shared across all NSA services by all members of the cluster. This password is used to validate multicast packets sent by the various cluster nodes. If a reset is necessary, run admin password.
  • If a server has multiple IP addresses (multiple NICs), configure the Beacon to bind to a single specific IP address. Redeploy the Beacon service after making any changes.
  • Verify that all services are in the same subnet. To confirm on Windows, use ipconfig; to confirm on Linux, use ifconfig (see the example that follows).
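For example, to confirm the interface and subnet that a Linux server is using (on Windows, run ipconfig /all from a command prompt):
    # Show each interface with its IPv4 address and netmask
    ifconfig
    # On distributions that do not ship ifconfig, use the ip tool instead
    ip addr show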
JDBC Ping: Configuration
Important: Make your entries in CSA and the properties.xml file carefully. A single net mask configuration mistake while provisioning the server can result in services not communicating with each other.
Follow these steps:
  1. Open CSA by navigating to
    http://<ca_ppm_server>:<port>
    . For example, 
    http://ppm_server.my.org:8090
  2. Click
    Home
    ,
    All Services
    . Stop all CA PPM Services before making any changes.
  3. Click
    Home
    ,
    Servers
    .
  4. Repeat these steps for each server in the application cluster:
    1. Click the server. The server name appears as a link. For example, my_ca_ppm:<port> (local).
      The Properties page appears.
    2. In the
      Multicast Address
      field, leave the default IP address to satisfy the required field.
    3. In the 
      Multicast Port
      field, leave the previous value or enter a standard port value such as 8090 or 9090 to satisfy the required field.
    4. In the
      Bind Address
      field, enter the IP address for the application server. A bind address is a local network interface IP that ensures all machines are using the same IP interface on the same subnet. If you have multiple servers, each server should have the correct bind/IP address for that server. If a server has multiple NICs and, therefore, multiple IPs, specify the IP that you want CA PPM to use for network communication.
      Note: Unlike multicast, JDBC PING does not require that all servers belong to the same subnet.
    5. Click
      Save
      .
  5. Open the
    properties.xml
    file on each server.
    1. In the
      NSA
      section, add the
      useJDBCPing="true"
      parameter as shown in the following example:
      <nsa bindAddress="###.###.###.###" multicastAddress="###.###.###.###" multicastPort="9191" clientPort="9288" serviceName="CA PPM Beacon 123" useJDBCPing="true" />
      Replace the # placeholders with the IP addresses for your nodes.
    2. If you did not already set the
      Bind Address
      field values in CSA, you can manually set them in the
      properties.xml
      file. The
      bindAddress
      property should use the IP address associated with the given application server.
    3. Repeat these steps for each server.
      Note: If the server has multiple IP addresses (NICs), configure the Beacon to bind to a single specific IP address. This step is not required; however, if the Beacons are not configured correctly, the servers are not visible in CSA. After you make any changes, stop, remove, add, and deploy the Beacon service.
  6. In CSA, click
    Home
    ,
    All Services
    . Start all CA PPM services.
  7. Verify that the beacon services started successfully on all servers in the cluster. On each application server, run the following command:
    niku start beacon
  8. Confirm that the beacon service remains active. On each application server, run the following command (to check every node from a single host, see the optional sketch after these steps):
    niku status beacon
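Optionally, to check every node from one host, a minimal convenience sketch; it assumes Linux application servers, passwordless ssh between them, hypothetical hostnames ppm_app1 and ppm_app2, and that niku is on each node's PATH:
    # Report beacon status on each application server in turn
    for host in ppm_app1 ppm_app2; do
      echo "== $host =="
      ssh "$host" "niku status beacon"
    done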
JDBC Ping: Verify JDBC Ping Messaging is Enabled
After JDBC ping is enabled, the PPM database creates a table named CMN_JGROUPS_PING:
  • JDBC ping address information is stored in this table.
  • Nodes in a cluster are also registered in this table.
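To spot-check the discovery data, you can query the table from any SQL client with read access to the PPM schema. A minimal sketch (the columns are described in the verification steps below):
    -- List the registered cluster members, most recently updated first
    SELECT cluster_name, uuid, created
    FROM   cmn_jgroups_ping
    ORDER  BY created DESC;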
Follow these steps
:
  1. Access the database with permissions to view this table.
  2. Verify that information about all PPM services is populated in the
    CMN_JGROUPS_PING
    database table.
    The table contains the following columns:
    • CREATED: The date and time of the most recent posted message.
    • CLUSTER_NAME: The topic name of the parent cluster for each member server.
    • PING_DATA: The message content.
    • UUID: The unique identifier for each server, consisting of an application instance ID and topic name. For example, the topic name might be set to CLRTY in all cases.
  3. When JGroups uses JDBC PING, discovery of the nodes in the cluster is done through the database. Once discovered, the protocol between the nodes is TCP and should be peer-to-peer. To confirm the peer-to-peer connectivity:
    1. Update the /etc/hosts file so that 127.0.0.1 maps to $HOSTNAME.
    2. Use a telnet or nc command to test the port connectivity (see the example after these steps).
    3. Verify that communication on the beacon port is open.
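For example, a quick connectivity check from one application server to a peer node; the hostname is a placeholder, and the port should be the beacon/client port configured in your properties.xml (9288 in the earlier example):
    # Zero-I/O port probe with netcat
    nc -zv ppm_app2 9288
    # Or with telnet
    telnet ppm_app2 9288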
Note: Switching to JDBC ping requires no additional steps for PPM integrations with SSO.
CA PPM Hybrid Cloud Migrations
CA provides CA PPM SaaS and other CA cloud solutions. We also recognize that some customers might deploy and manage a customized AWS or Microsoft Azure configuration. As a CA PPM administrator, you can take the following high-level steps:
  1. After you upgrade, switch to JDBC ping. Multiple PPM app services can communicate with each other using JDBC ping.
  2. Single-server Jaspersoft environments can communicate with the PPM database server using a direct JDBC connection.
Note: At this time, we do not support clustered Jaspersoft configurations in Azure, AWS, or any other public, private, or hybrid cloud environments. However, we recognize the demand for this functionality. See the following section. Limited support is available for replicated caching using JMS.
Configure Replicated Caching for JasperReports Server with JMS
On Azure, AWS, Google Cloud, and other public clouds where multicast is disabled and JDBC ping is enabled, clustered Jaspersoft configurations were previously not supported. Limited support is now available for using Java Message Service (JMS) for Jaspersoft server Ehcache replication.
JMS is a mechanism for interacting with message queues. Open source options for Linux and Windows include Apache ActiveMQ and Oracle Open MQ. With JasperReports 6.1.0 or higher, Ehcache replication over JMS is now available and supported. (In our tests and in this documentation, we used ActiveMQ 5.12.1.)
With the Ehcache JMS replication module, your organization can leverage your message queue investment for caching.
  • Each cache node subscribes to a predefined topic, configured as the <topicBindingName> in
    ehcache.xml
    .
  • Each replicated cache publishes cache Elements to that topic. Replication is configured per cache in accordance with the Ehcache standard replication mechanism.
  • Data is pushed directly to cache nodes from external topic publishers in any language. After the data is sent to the replication topic, it is automatically picked up by the cache subscribers.
  • JMSCacheLoader sends cache load requests to a queue. Either of the following can receive the request and send a response:
    • An Ehcache cluster node
    • An external queue receiver
The following diagram illustrates the entity relationships*:
(Diagram: Apache ActiveMQ JMS replication entity relationships)
* Image courtesy of the Apache Software Foundation. Apache ActiveMQ, Apache, the Apache feather logo, and the Apache ActiveMQ project logo are trademarks of the Apache Software Foundation.
Follow these steps:
  1. Verify that you have already installed a supported release of Clarity PPM with JasperReports server in a cluster.
  2. Verify that all JasperReports server nodes point to a single database node.
  3. Install and configure the ActiveMQ server instance. To download and install ActiveMQ, see http://activemq.apache.org/version-5-getting-started.html.
  4. Verify that the default ActiveMQ broker port (61616) is accessible. (A port check sketch follows these steps.)
  5. Start the ActiveMQ server.
  6. Download ehcache_jms.zip. Configuration is done in the
    ehcache.xml
    file.
  7. Each cluster needs to use a fixed topic name for replication. By default, ActiveMQ supports auto creation of destinations.  
  8. Configure a JMSCacheManagerPeerProviderFactory globally for a CacheManager. Perform this step once per CacheManager (once per ehcache.xml file).
  9. For each cache configuration that you want to replicate, add the JMSCacheReplicatorFactory to the cache.
    Note: All of the configurations mentioned above are already available in the ehcache.xml distributed with JasperReports Server 6.1.0. Ehcache replication is supported over several mechanisms, including RMI and JMS; the instructions below configure cache distribution using JMS.
  10. Stop your Tomcat server.
  11. Go to
    <tomcat>/webapps/reportservice/WEB-INF
    and open the
    ehcache_hibernate.xml
    file in an editor.
  12. Comment out or remove the following section of the file in between the ***NO CLUSTERING*** and ***END of NO CLUSTERING*** comments:
    <!-- *********************** NO CLUSTERING ************************* -->
    <!--
    <cache name="defaultRepoCache"
           maxElementsInMemory="10000"
           eternal="false"
           overflowToDisk="false"
           timeToIdleSeconds="36000"
           timeToLiveSeconds="180000"
           diskPersistent="false"
           diskExpiryThreadIntervalSeconds="120"
           statistics="true">
    </cache>
    <cache name="aclCache"
           maxElementsInMemory="10000"
           eternal="false"
           overflowToDisk="false"
           timeToIdleSeconds="360000"
           timeToLiveSeconds="720000"
           diskPersistent="false">
    </cache>
    -->
    <!-- ******************** END of NO CLUSTERING ****************** -->
  13. Uncomment the section between the ***JMS*** and ***END of JMS*** markers by removing the surrounding <!-- and --> comment lines. As a result, you enable Ehcache replication over JMS.
    <!-- ****************** JMS ****************** -->
    <cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProviderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://EXAMPLE_SERVER-I154330:61616, replicationTopicConnectionFactoryBindingName=topicConnectionFactory, replicationTopicBindingName=ehcache, getQueueConnectionFactoryBindingName=queueConnectionFactory, getQueueBindingName=ehcacheQueue, topicConnectionFactoryBindingName=topicConnectionFactory, topicBindingName=ehcache" propertySeparator=","/>
    <cache name="org.hibernate.cache.StandardQueryCache" maxEntriesLocalHeap="5000" maxElementsInMemory="5000" eternal="false" timeToLiveSeconds="120">
      <cacheEventListenerFactory class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory" properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=false, replicateRemovals=true, asynchronousReplicationIntervalMillis=1000" propertySeparator=","/>
      <cacheLoaderFactory class="net.sf.ehcache.distribution.jms.JMSCacheLoaderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://EXAMPLE_SERVER-I154330:61616, replicationTopicConnectionFactoryBindingName=topicConnectionFactory, replicationTopicBindingName=ehcache, getQueueConnectionFactoryBindingName=queueConnectionFactory, getQueueBindingName=ehcacheQueue, topicConnectionFactoryBindingName=topicConnectionFactory, topicBindingName=ehcache" propertySeparator=","/>
    </cache>
    <cache name="org.hibernate.cache.UpdateTimestampsCache" maxEntriesLocalHeap="5000" eternal="true">
      <cacheEventListenerFactory class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory" properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=true, replicateRemovals=true, asynchronousReplicationIntervalMillis=1000" propertySeparator=","/>
      <cacheLoaderFactory class="net.sf.ehcache.distribution.jms.JMSCacheLoaderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://EXAMPLE_SERVER-I154330:61616, replicationTopicConnectionFactoryBindingName=topicConnectionFactory, replicationTopicBindingName=ehcache, getQueueConnectionFactoryBindingName=queueConnectionFactory, getQueueBindingName=ehcacheQueue, topicConnectionFactoryBindingName=topicConnectionFactory, topicBindingName=ehcache" propertySeparator=","/>
    </cache>
    <cache name="defaultRepoCache" maxElementsInMemory="10000" eternal="false" overflowToDisk="false" timeToIdleSeconds="36000" timeToLiveSeconds="180000" diskPersistent="false" diskExpiryThreadIntervalSeconds="120" statistics="true">
      <cacheEventListenerFactory class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory" properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=true, replicateRemovals=true, asynchronousReplicationIntervalMillis=1000" propertySeparator=","/>
      <cacheLoaderFactory class="net.sf.ehcache.distribution.jms.JMSCacheLoaderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://EXAMPLE_SERVER-I154330:61616, replicationTopicConnectionFactoryBindingName=topicConnectionFactory, replicationTopicBindingName=ehcache, getQueueConnectionFactoryBindingName=queueConnectionFactory, getQueueBindingName=ehcacheQueue, topicConnectionFactoryBindingName=topicConnectionFactory, topicBindingName=ehcache" propertySeparator=","/>
    </cache>
    <cache name="aclCache" maxElementsInMemory="10000" eternal="false" overflowToDisk="false" timeToIdleSeconds="360000" timeToLiveSeconds="720000" diskPersistent="false">
      <cacheEventListenerFactory class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory" properties="replicateAsynchronously=true, replicatePuts=false, replicateUpdates=true, replicateUpdatesViaCopy=false, replicateRemovals=true, asynchronousReplicationIntervalMillis=1000" propertySeparator=","/>
      <cacheLoaderFactory class="net.sf.ehcache.distribution.jms.JMSCacheLoaderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://EXAMPLE_SERVER-I154330:61616, replicationTopicConnectionFactoryBindingName=topicMConnectionFactory, replicationTopicBindingName=ehcacheM, getQueueConnectionFactoryBindingName=queueMConnectionFactory, getQueueBindingName=ehcacheMQueue, topicConnectionFactoryBindingName=topicMConnectionFactory, topicBindingName=ehcacheM" propertySeparator=","/>
    </cache>
    <!-- ******************** END of JMS ************************* -->
  14. Specify the broker URL of the ActiveMQ instance. Search for the
    providerURL
    attribute in the XML file and provide the value of your ActiveMQ broker URL. Update the
    providerURL
    attribute in five (5) different places in the ehcache_hibernate.xml file. For example,
    providerURL=tcp://EXAMPLE_SERVER-I154330:61616
    <cacheLoaderFactory class="net.sf.ehcache.distribution.jms.JMSCacheLoaderFactory" properties="initialContextFactoryName=com.jaspersoft.jasperserver.api.engine.replication.JRSActiveMQInitialContextFactory, providerURL=tcp://localhost:61616, replicationTopicConnectionFactoryBindingName=topicMConnectionFactory, replicationTopicBindingName=ehcacheM, getQueueConnectionFactoryBindingName=queueMConnectionFactory, getQueueBindingName=ehcacheMQueue, topicConnectionFactoryBindingName=topicMConnectionFactory, topicBindingName=ehcacheM" propertySeparator=","/>
  15. Save the file.
  16. Repeat the same configuration changes in the following locations:
    • <tomcat>/webapps/reportservice/WEB-INF/ehcache_hibernate.xml 
    • <tomcat>/webapps/reportservice/WEB-INF/classes/ehcache_hibernate.xml
  17. Repeat the same configuration (all steps) on all JasperReports server instances available in the cluster.
  18. Delete or otherwise clean the following Tomcat directories:
    • <tomcat>/temp
    • <tomcat>/work/Catalina/localhost
  19. Start your Tomcat server.
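For step 4, a minimal port check from a JasperReports server node to the ActiveMQ host; the hostname is a placeholder, and 61616 is the default ActiveMQ broker port used in the providerURL examples in this article:
    # Confirm the ActiveMQ broker port is reachable from this node
    nc -zv activemq01.example.com 61616
    # Or with telnet
    telnet activemq01.example.com 61616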
Configure ActiveMQ for JasperReports on Windows Server
  1. Download the ActiveMQ server from Apache at http://activemq.apache.org/activemq-5158-release.html.
  2. Extract the ZIP file for Windows.
  3. Install Java JDK 1.7 or higher. In this document, we use JDK 11.
  4. Set the JAVA_HOME environment variable on the server.
  5. Navigate to the directory where ActiveMQ is extracted. In this example, it is
    E:\activemq\bin
    .
  6. Run the following command to start the ActiveMQ server:
    E:\activemq\bin>activemq start
  7. After the server starts, you can access the admin console on port 8161. Open a browser and enter
    http://hostname:8161/admin
    . The default username and password are both admin.
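Putting steps 4 through 6 together, a minimal Windows command-prompt sketch; the JDK and ActiveMQ paths are examples from this document, so adjust them to your installation:
    REM Point JAVA_HOME at the installed JDK (example path)
    set "JAVA_HOME=C:\Program Files\Java\jdk-11"
    REM Change to the extracted ActiveMQ bin directory and start the broker
    cd /d E:\activemq\bin
    activemq start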
Configure JasperReports Server with ActiveMQ
  1. Stop the Jasper service.
  2. Navigate to 
    %TOMCAT_HOME%/webapps/reportservice/WEB-INF
    .
  3. Edit the file named "
    ehcache_hibernate.xml
    ".
  4. Comment out the lines between the "RMI" and "END OF RMI" markers.
  5. Uncomment the lines between the "JMS" and "END OF JMS" markers.
  6. Replace the providerURL value so that it points to the FQDN of the ActiveMQ server, for example tcp://<hostname>:61616. Between these sections, there are five places where this value must be changed. (See the example after these steps.)
  7. Copy the ehcache_hibernate.xml file to the classes folder under %TOMCAT_HOME%/webapps/reportservice/WEB-INF.
  8. Start the Jasper service.
  9. Log in to the ActiveMQ admin page. You should see two queues, ehcacheQueue and ehcacheMQueue.
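For step 6, each of the five providerURL entries would read as follows, with a hypothetical FQDN in place of the default host value:
    providerURL=tcp://activemq01.example.com:61616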
Monitor the ActiveMQ Console
  1. Download, install, and configure the ActiveMQ server instance. See http://activemq.apache.org/version-5-getting-started.html.
  2. Access the web console from http://localhost:8161/admin. Replace localhost with the server IP to access the ActiveMQ web console remotely.
  3. To verify the total number of Jaspersoft instances connected to this ActiveMQ instance, navigate to the
    Queues
    tab.
    For example, http://localhost:8161/admin/queues.jsp. The table on this page shows how many Jaspersoft instances are connected to the ActiveMQ server instance.
  4. To verify the list of Jaspersoft instances connected to ActiveMQ, navigate to the
    Connections
    tab.
    For example, http://localhost:8161/admin/connections.jsp.
Change the Default ActiveMQ 'admin' Password
  1. Open the file
    <activemq-installer>/conf/jetty-realm.properties
    .
  2. Notice the format is: 
      username:password,rolename
    Default values are:
    admin:admin,admin
  3. Change the second value to update the password to:
      admin:<new-password>,admin