Log Analytics

Business Challenge
As we move to the next paradigm of infrastructure monitoring, it is important to provide context to infrastructure performance issues as quickly as possible. Log data is an important source of information to troubleshoot problems in your applications or IT infrastructure. However, it is cumbersome to log in to individual servers and to read the log files manually to find the relevant information.
Solution
Log Analytics streamlines the log analysis process and helps you troubleshoot faster and more effectively by:
  • Collecting and aggregating logs from multiple sources (individual servers, devices, and applications). You can gain insights from data using analytics dashboards.
  • Providing out-of-the-box dashboards (blueprints) based on the collected data for supported log types and patterns.
  • Providing full text search on all the stored log files.
  • Performing near real-time and historical search on all the log data from one centralized location.
  • Performing a periodic query of the log data and sending notifications (alarm, email, and SNMP) when matches are found. You can also save and schedule a log query or pattern to receive notifications when a match is found.
Benefits
The following table includes some Log Analytics benefits.

Benefit: Use routine data to expose larger issues.
Explanation: You can use syslog data to answer the following questions:
  • What kinds of events are occurring?
  • When did the event happen?
  • Are the events happening in clusters?
  • Are there any deviations in the events that are occurring?
  • Which sources are generating the most events?
  • Which key events are happening most often?
  • Are there any security issues occurring?
  • What severity trends are occurring?

Benefit: Monitor first-time messages from logs.
Explanation: You can monitor initial messages that could potentially predict larger issues (for example, low-memory messages).

Benefit: Monitor drops and spikes.
Explanation: You can detect deviations in the rate of events across technologies, applications, or tools, and monitor unusual rates of outbound requests and users attempting unusual URL access.

Benefit: Monitor syslog events, Windows events, and log information over a configurable time frame.
Explanation: You can use this data to see all the information across a time frame.

Benefit: Use logs and performance data for your capacity planning.
Explanation: You can log baseline average, peak users, and performance metrics to help define capacity utilization.
Log Analytics Example: Monitor a Retail Website
In the following diagram, Log Analytics monitors a retail website. Each service in the diagram is a separate system/server:
Log Analytics - Business Flow Part 1
In the following diagram, the product search becomes slow during the course of normal operations.
Business Workflow Part 2
The following diagram lists the steps that you can take to use Log Analytics to detect and solve the issue with the product search.
Business Flow Part 3
Required Components
Log Analytics requires the Agile Operations Analytics Base Platform, CA UIM, and the following probes:
  • Log Forwarder (log_forwarder)
  • AXA Log Gateway (axa_log_gateway)
  • Log Monitoring Service (log_monitoring_service)
The following Agile Operations Analytics components are mandatory for Log Analytics:
  • Data Studio (Kibana dashboards)
  • Kafka and Zookeeper
  • Jarvis (includes Elasticsearch and the Jarvis Ingestion, Verifier, and Indexer components)
  • Read Server, UI Server, and RDBMS
 
Data Studio
 
Primary user interface for Log Analytics. Data Studio provides out-of-the-box dashboards for the supported log types, full-text search, and ad-hoc data exploration.
 
Log Collector
 
The AXA Log Collector receives syslog and eventlog data from remote devices over TCP (default port: 6514) and writes that data to a Kafka topic for further processing by Log Parser. After receiving the log events, the Log Collector validates the Tenant ID in the log message against a tenant whitelist and publishes the valid log data to the Kafka topic. The TCP channel receives syslog and eventlog data without installing any log agent.
Windows Event logs are also received through the syslog channel. You can use the open-source tool nxlog to send the event logs through the syslog channel. For more information about configuration, see the Agile Operations Analytics Base Platform documentation.
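To make the TCP channel concrete, the following sketch formats a log event as an RFC 5424-style syslog line and sends it to the Log Collector port. The collector hostname is hypothetical, and the exact message framing and Tenant-ID placement the Log Collector expects are assumptions; check the Agile Operations Analytics Base Platform documentation for the required format.

```python
import socket
from datetime import datetime, timezone

COLLECTOR_HOST = "axa-collector.example.com"  # hypothetical collector host
COLLECTOR_PORT = 6514                         # Log Collector default TCP port

def build_syslog_line(hostname, app, message, facility=1, severity=6):
    """Build an RFC 5424-style syslog line. The framing and the tenant-ID
    placement expected by the Log Collector are assumptions."""
    pri = facility * 8 + severity  # syslog priority value
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<{pri}>1 {ts} {hostname} {app} - - - {message}\n"

def send_log(line):
    # Plain TCP; your deployment may require TLS on port 6514 instead.
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))
```

A call such as `send_log(build_syslog_line("web01", "myapp", "disk usage at 95%"))` would then be validated against the tenant whitelist and published to the Kafka topic.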
 
 
Log Parser 
 
Log Parser receives log data from Kafka, parses the log data, extracts relevant fields, transforms the log data into JSON format, and sends it to Jarvis/Elasticsearch. For each supported log type, specific patterns are defined to parse and transform the data. This configuration is stored in the config files.
Data sent in any unsupported log file format is stored under the generic log type. You can search this data in Data Studio, but specific fields are not extracted for the generic log type, and the out-of-the-box dashboards are not available.
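As an illustration of the parse-and-transform step, the following sketch extracts fields from one Apache access-log line and emits a JSON document, falling back to the generic type when the line does not match. The regex and field names are illustrative; the actual patterns Log Parser uses live in its config files.

```python
import json
import re

# A common Apache access-log pattern (illustrative, not the shipped config).
ACCESS_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_access_line(line):
    """Extract fields from one access-log line; return a JSON document
    roughly like what Log Parser hands to Jarvis/Elasticsearch."""
    m = ACCESS_RE.match(line)
    if m is None:
        # Unsupported format: store the raw line under the generic type.
        return json.dumps({"log_type": "generic", "raw": line})
    doc = m.groupdict()
    doc["log_type"] = "apache_access"
    return json.dumps(doc)
```

Because fields such as `status` and `url` are extracted for supported types, they become searchable and dashboard-ready; a generic document carries only the raw line.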
 
CA Analytics Platform (Jarvis)
 
Jarvis is used as the data store and the analytics platform to store the log data. Log ingestion to Jarvis is done by Log Parser. Each type of log data is stored as a separate document_type in Jarvis.
 
Log Forwarder Probe (log_forwarder)
 
A lightweight log data collection agent. This component reads log data from log files on the monitored servers or devices and publishes the data to a CA UIM queue (Default Subject: LOG_ANALYTICS_LOGS) through the CA UIM Message Bus. You can deploy and configure this probe using Monitoring Configuration Service (MCS). For more information about configuration, see the Log Forwarder probe documentation on the Probes Documentation Space.
 
AXA Log Gateway Probe (axa_log_gateway)
 
The axa_log_gateway probe receives log data from CA UIM through a specific queue (Default Subject: LOG_ANALYTICS_LOGS) and writes the data to the Kafka topic (Default: logAnalyticsLogs) for further processing by the Log Parser. For more information, see the AXA Log Gateway probe documentation on the Probes Documentation Space.
 
Log Monitoring Service Probe (log_monitoring_service)
 
This component is implemented as a CA UIM probe and can be configured using MCS or Admin Console (AC). This probe periodically queries the log data that is stored in Jarvis and raises notifications based on the predefined queries. You can create one or more profiles. Each profile includes a query to be executed for a particular log type and the query interval.
For example, "response_time:[10 TO *] AND url:*ServiceDesk*" for apache access logs scheduled every 5 minutes. The Monitoring Service queries the Elasticsearch component in Jarvis at the predefined schedule and provides the following output:
    • Match_Count metric for the count of matches found
    • Alarm if the match count exceeds a predefined threshold
    • Alarms containing sample matched log lines (number of sample lines configurable)
The Log Monitoring Service alarms can be forwarded as email or SNMP trap using the emailgtw or snmpgtw probe, respectively. For more information, see the Log Monitoring Service probe documentation on the Probes Documentation Space.
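The query-and-alarm cycle described above can be sketched as two steps: build an Elasticsearch request that combines the profile's Lucene query with a sliding time window, then compare the match count to a threshold. The index layout, field names, and request shape are assumptions for illustration; the probe's actual internals are not documented here.

```python
def build_search_body(lucene_query, minutes=5):
    """Elasticsearch request body: the profile's Lucene query AND a
    sliding time window (field name 'timestamp' is an assumption)."""
    return {
        "query": {
            "bool": {
                "must": {"query_string": {"query": lucene_query}},
                "filter": {"range": {"timestamp": {"gte": f"now-{minutes}m"}}},
            }
        },
        "size": 3,  # sample matched log lines to attach to the alarm
    }

def evaluate(match_count, threshold):
    """Derive the Match_Count metric and the alarm decision."""
    return {"match_count": match_count, "alarm": match_count > threshold}
```

For the apache access-log example above, `build_search_body("response_time:[10 TO *] AND url:*ServiceDesk*")` run every 5 minutes would produce a Match_Count each interval and an alarm whenever the threshold is exceeded.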
Port Requirements
Open the following ports to allow communication between CA UIM and Log Analytics:
  • AXA Elasticsearch port (default 9200) - Open this port between the Agile Operations Analytics Base Platform and the location of the log_monitoring_service probe
  • AXA Kafka port (default 9092) - Open this port between the Agile Operations Analytics Base Platform and the location of the axa_log_gateway probe
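A quick way to verify these ports are reachable from the probe hosts before deploying is a simple TCP connect check, sketched below; the hostnames you pass in would be your own platform and probe hosts.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, e.g.
    port_open("axa-platform.example.com", 9200) from the
    log_monitoring_service host, or port 9092 from the axa_log_gateway host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```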
Deploy Log Analytics
You can deploy Log Analytics using the associated templates in MCS.
 
Follow these steps: 
 
  1. Verify that all of the required probes are downloaded to your archive. For more information about downloading probes, see the topic Download, Update, or Import Packages.
  2. If necessary, create groups for the devices that you want to collect log data from. For more information about setting up groups, see the topic Create and Manage Groups in USM.
  3. Configure the axa_log_gateway probe using the Setup axa_log_gateway MCS template.
  4. Deploy the log_forwarder probe to your target devices using the Setup log_forwarder MCS template.
  5. Configure log forwarding for your target devices or services using one or more of the following MCS templates:
    • Log Forwarding - Configure log forwarding for any type of log file.
    • Apache Log Forwarding - Configure log forwarding for Apache access logs.
    • Log4j Log Forwarding - Configure log forwarding for Java log4j logs.
    • Catalina Log Forwarding - Configure log forwarding for Tomcat Catalina logs.
    • Oracle Alert Log Forwarding - Configure log forwarding for Oracle Alert logs.
  6. Configure the log_monitoring_service on a robot by using the Setup log_monitoring_service template. We recommend using the primary hub robot.
  7. Create your desired profiles using the Log Monitoring Service template. You can use this template to query the log data that is stored in Jarvis and send alarms based on your defined criteria.
  8. (Optional) Configure the Email Gateway (emailgtw) MCS template to receive email notifications when alarms occur.
  9. (Optional) Configure the SNMP Gateway (snmpgtw) MCS template to receive SNMP notifications when alarms occur.
Configure Cross-Launch
Before you can launch the Log Analytics dashboard from a CA UIM alarm, you must create a URL action to enable cross-launch.
To launch a custom URL action, you must have the Launch URL Actions ACL permission set. With this permission, you can select an alarm and then launch an alarm action from the Actions menu.
 
Follow these steps:
 
  1. In USM, select the Alarms tab.
  2. Click the Actions menu above the list or table of alarms, then select Edit URL Actions. The Edit URL Actions dialog opens.
  3. Click New URL action. Specify the name Log Analytics and enter the following URL:
     http://<server_host>:<server_port>/mdo/v2/dashboard/loganalytics?query="host:${host}"&timestamp=${TIME_LAST}&probe=${PROBE}&customAttributes='${CUSTOM_1}'
  4. Change the <server_host> and <server_port> parameters in the URL to match your Agile Operations Analytics Base Platform server.
After configuring cross-launch, the Log Analytics Launch icon appears for each UIM alarm. Clicking this icon opens the Agile Operations Analytics Base Platform login page. After logging in, you are redirected to the Log Analytics Dashboard in Data Studio. The following in-context parameters can be passed in the URL:
  • The Log Analytics Dashboard is launched from an alarm generated by the log_monitoring_service probe - The query parameter uses the value provided in the log_monitoring_service profile configuration.
  • The Log Analytics Dashboard is launched from an alarm generated by any other probe - The query parameter uses the host value from the UIM alarm.
If you have not registered your app, clicking the Log Analytics Launch icon redirects you to the app registration page in CA App Experience Analytics.
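The alarm variables in the URL (${host}, ${TIME_LAST}, ${PROBE}, ${CUSTOM_1}) are substituted by UIM when the action is launched. A minimal sketch of that substitution, with URL-encoding of the values, follows; the alarm dictionary keys and sample values are hypothetical stand-ins, not UIM API names.

```python
from urllib.parse import quote

# Mirrors the cross-launch URL entered in the URL action above.
URL_TEMPLATE = (
    'http://{server_host}:{server_port}/mdo/v2/dashboard/loganalytics'
    '?query="host:{host}"&timestamp={time_last}&probe={probe}'
    "&customAttributes='{custom_1}'"
)

def build_launch_url(server_host, server_port, alarm):
    """Fill the cross-launch URL template from an alarm's variables.
    The dict keys stand in for ${host}, ${TIME_LAST}, ${PROBE}, ${CUSTOM_1}."""
    return URL_TEMPLATE.format(
        server_host=server_host,
        server_port=server_port,
        host=quote(alarm["host"]),
        time_last=quote(str(alarm["time_last"])),
        probe=quote(alarm["probe"]),
        custom_1=quote(alarm["custom_1"]),
    )
```

For example, an alarm from log_monitoring_service on host web01 would produce a URL whose query parameter targets "host:web01", landing the operator on the Log Analytics Dashboard already filtered to that host.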
More Information
For more information about deploying Log Analytics, see the following topics: