Performance

Automation has value only insofar as there are no compromises in architecture (to integrate with existing systems), extensibility (to address elements that are not automated), or performance. CA Live API Creator delivers on best-practice patterns and revisits the relevant optimizations on each logic change, so its performance remains high over maintenance iterations, just as database management system (DBMS) optimizers maintain high performance by revising retrieval plans.
This article details how CA Live API Creator delivers enterprise-class performance.
Minimize Client Latency
Modern applications must often support clients that are connected through high-latency, cloud-based connections. The following capabilities minimize client-connection latency:
Rich Resource Objects
When retrieving objects for presentation, you can define resources that include multiple types. For example, a Customer with their payments, orders, and items. These resources are delivered in a single response message, so that only a single trip is required.
Views do not fully satisfy this requirement: views are not updatable, and joining multiple child tables for the same parent produces Cartesian products. In our example, a Customer with five Payments and 10 Orders returns 50 rows, a result set that is unreasonable for the client to decode and present.
For more information about defining resources, see Customize your API.
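To see why a single nested response beats a join view, consider the row counts involved. The following sketch (illustrative names and counts, not a real API call) contrasts the two result shapes:

```javascript
// Sketch: contrast a join-style result set with a nested resource document.
// The counts are illustrative, not from a real CA Live API Creator call.

// A join of Customer x Payments x Orders repeats rows multiplicatively:
function joinRowCount(paymentCount, orderCount) {
  return paymentCount * orderCount; // Cartesian product per customer
}

// A nested resource returns each child row exactly once:
function nestedRowCount(paymentCount, orderCount) {
  return paymentCount + orderCount;
}

console.log(joinRowCount(5, 10));   // 50 rows for the client to decode
console.log(nestedRowCount(5, 10)); // 15 child rows, in one response
```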
Leverage Relational Database Query Power
Each resource/subresource can be a full relational query, which you can send in a single trip to the REST (and then database) server. Contrast this with less powerful retrieval engines, where the client must compute common requirements such as sums and counts; that approach drives the number of queries up n-fold, which can affect performance.
Pagination
Large result sets can burden the client, network, server, and database. You can truncate large results, with provisions to retrieve the remaining results (for example, as the end user scrolls), using pagination.
Pagination can be a complex problem. Consider a resource of Customer, Orders, and Items. If there are many orders, pagination must occur at this level, with provision for including the line items on subsequent pagination requests.
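A minimal sketch of offset-style pagination follows, assuming an in-memory row array. Real APIs typically return an opaque next-page URL rather than a raw offset; the function and names here are illustrative:

```javascript
// Sketch of offset/limit pagination over an in-memory array.
// Returns one page plus the offset of the next page (or null at the end).
function paginate(rows, offset, limit) {
  const page = rows.slice(offset, offset + limit);
  const nextOffset = offset + limit < rows.length ? offset + limit : null;
  return { data: page, nextOffset };
}

const rows = Array.from({ length: 95 }, (_, i) => ({ id: i }));
const first = paginate(rows, 0, 20);
console.log(first.data.length, first.nextOffset); // 20 20
const last = paginate(rows, 80, 20);
console.log(last.data.length, last.nextOffset);   // 15 null
```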
Batched Updates
Network considerations apply to updates as well as retrieval. Consider many rows retrieved into a client, followed by an update. Clients can send only the changes, instead of the entire set of objects, using APIs. Clients can also send multiple row types (for example, an Order and its Items) in a single message. This results in a single, small update message.
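A hypothetical helper (not part of the CA Live API Creator client API) illustrates the idea of sending only the changed attributes rather than the entire object:

```javascript
// Sketch: compute the delta between the retrieved row and the edited row,
// so the update message carries only the changed attributes.
function changedAttributes(original, modified) {
  const delta = {};
  for (const key of Object.keys(modified)) {
    if (modified[key] !== original[key]) delta[key] = modified[key];
  }
  return delta;
}

const before = { orderId: 7, notes: "rush", amount: 120 };
const after  = { orderId: 7, notes: "rush", amount: 150 };
console.log(changedAttributes(before, after)); // { amount: 150 }
```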
Single Message Update/Refresh
Business logic consists not only of validations, but derivations. These derivations can often involve rows visible but not directly updated by the client. For example, saving an order might update the customer's balance. The updated balance must be reflected on the screen.
Clients typically solve this problem by re-retrieving the data. This is unfortunate in a number of ways. First, it is an extra client/server trip over a high-latency network. And sometimes it is difficult to program; for example, when the order's key is system-assigned, the client might not know the computed key and might need to re-retrieve the entire rich result set.
 
CA Live API Creator solves this by returning the refresh information in the update response. The client can communicate a set of updates in a single message, and can use the response to show the computations on related data.
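The following sketch shows a client merging rows from an update response into a local cache so that derived values (such as the customer's balance) refresh without a second GET. The response shape and href keys are assumptions for illustration:

```javascript
// Sketch: apply refreshed rows from an update response to a client cache,
// keyed by each row's URI. The response shape here is illustrative.
function applyRefresh(cache, refreshedRows) {
  for (const row of refreshedRows) {
    cache.set(row["@metadata"].href, row);
  }
  return cache;
}

const cache = new Map();
const response = [
  { "@metadata": { href: "/customer/1" }, balance: 970 },
  { "@metadata": { href: "/order/42" }, amountTotal: 150 }
];
applyRefresh(cache, response);
console.log(cache.get("/customer/1").balance); // 970
```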
Server-Enforced Integrity Minimizes Client Traffic
An infamous anti-pattern is to place business logic in the client. Placing the business logic in the client does not ensure integrity (particularly when the clients are partners), and causes multiple client/server trips. For example, inserting a new Line Item might require business logic that updates the Order, the Customer, and the Product. If these are issued from the client, the result is four client/server trips when only one should be required.
Minimize DBMS Load
The logic engine minimizes the cost and number of SQL operations as described in the following sections.
Minimize Server/DB Latency
You can define the desired region for API Creator. This minimizes latency for SQL operations issued by the API Server.
Update Logic Pruning Eliminates SQLs
The logic engine prunes (eliminates) SQL operations where possible. For example:
  • Parent Reference Pruning. SQLs to access parent rows are averted if the other (local) expression values are unchanged. For example, if attribute-X is derived as attribute-Y * parent.attribute-1, the retrieval of the parent is eliminated if attribute-Y is not altered.
  • Cascade Pruning. If you alter parent attributes that child logic references, CA Live API Creator cascades the change to each child row. If the parent attribute is not altered, the cascade overhead is pruned. In the example above, the value of parent.attribute-1 is cascaded only if it is altered.
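The pruning decision itself can be sketched as a simple check. The attribute names mirror the example above; this is an illustration, not the logic engine's implementation:

```javascript
// Sketch of parent reference pruning: fetch the parent row only when a
// local attribute referenced by the derivation has actually changed.
function parentFetchNeeded(changedAttributes, localRefs) {
  return localRefs.some(attr => changedAttributes.includes(attr));
}

// attribute-X = attribute-Y * parent.attribute-1
const localRefs = ["attributeY"];
console.log(parentFetchNeeded(["notes"], localRefs));      // false: pruned
console.log(parentFetchNeeded(["attributeY"], localRefs)); // true: fetch parent
```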
Update Adjustment Logic Eliminates Multi-level Aggregate SQLs
The logic engine minimizes the cost of SQL operations. For example:
  • Adjustment. For persisted sum/count aggregates, CA Live API Creator adjusts the parent based on the old/new values in the child by making a single-row update. Aggregate queries can be particularly costly when they cascade. For example, the Customer's balance is the sum of the Order amounts, each of which is the sum of that Order's Lineitem amounts.
  • Adjustment Pruning. Adjustment occurs only when the summed attribute changes, the foreign key changes, or the qualification condition changes. If none of these occur, CA Live API Creator averts the parent access and chaining.
For more information about the best practices for persisting derived data, see Non-Persistent Attributes.
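The adjustment arithmetic can be illustrated with a small sketch: rather than re-running an aggregate query over all child rows, the old/new delta from the changed child is applied to the persisted parent value. The function below is illustrative, not the engine's code:

```javascript
// Sketch of aggregate adjustment: instead of "SELECT SUM(amount) ..."
// over all children, apply the child's old/new delta to the parent sum
// with one single-row UPDATE.
function adjustParentSum(parentSum, oldChildValue, newChildValue) {
  return parentSum + (newChildValue - oldChildValue);
}

// Customer balance 500; a line item amount changes from 20 to 50:
console.log(adjustParentSum(500, 20, 50)); // 530
// Inserting a new line item of 25 uses an old value of 0:
console.log(adjustParentSum(500, 0, 25));  // 525
```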
Transaction Caching
Consider inserting an Order with multiple line items. Per the transaction's logic, CA Live API Creator must update ("adjust") the Order total and the Customer balance for each line item.
 
CA Live API Creator must not retrieve these objects multiple times; repeated retrieval can incur substantial overhead and can make it difficult to ensure consistent results. Instead, CA Live API Creator maintains a cache for each transaction. All reads and writes go through the cache, which is flushed at the end of the transaction. This eliminates many SQLs and ensures a consistent view of updated data.
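A minimal sketch of a per-transaction cache follows; the db stand-in and the class itself are illustrative, not the actual engine:

```javascript
// Sketch of a per-transaction row cache: reads go through the cache so a
// row (e.g. the parent Order) is fetched from the database only once, and
// all writes are flushed together at commit.
class TransactionCache {
  constructor(db) { this.db = db; this.rows = new Map(); this.reads = 0; }
  read(key) {
    if (!this.rows.has(key)) {
      this.reads++;                          // one SQL read per row, total
      this.rows.set(key, this.db.get(key));
    }
    return this.rows.get(key);
  }
  write(key, row) { this.rows.set(key, row); }
  flush() { for (const [k, v] of this.rows) this.db.set(k, v); this.rows.clear(); }
}

const db = new Map([["order/1", { total: 0 }]]);
const tx = new TransactionCache(db);
for (let i = 0; i < 3; i++) {                // three line items adjust one order
  const order = tx.read("order/1");
  tx.write("order/1", { total: order.total + 10 });
}
tx.flush();
console.log(db.get("order/1").total, tx.reads); // 30 1
```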
Locking
Good performance dictates that data not be locked on retrieval. Optimistic locking typically addresses concurrency. API Server automates optimistic locking for all transactions. The automation can be based on a configured time-stamp column, or, if there is none, a hash of all resource attributes.
Transaction bracketing is automatic. API Server automatically bundles PUT/POST/DELETE requests (which might comprise multiple rows) into a transaction, including all logic-triggered updates.
GET: Optimistic Locking
A well-known pattern is optimistic locking. Acquiring locks while viewing data can reduce concurrency. Locks are not acquired while processing GET requests. API Server ensures that updated data has not been altered since it initially retrieved the data.
For more information about optimistic locking, see optimistic concurrency control on Wikipedia.
PUT, POST and DELETE: Leverage DBMS Locking and Transactions
Update requests are locked using DBMS Locking services. Consider the following cases:
  • Client updates. In accordance with optimistic locking, CA Live API Creator ensures that client-submitted rows have not been altered since they were retrieved. This is done by write-locking the row and checking a time stamp or, if one is not defined, a hash code of all retrieved data, so a dedicated time-stamp column is not required. This check is done as the first part of the transaction, so optimistic locking issues are detected before SQL overhead is incurred.
  • Rule chaining. All rows that are processed in a transaction as a consequence of logic execution, such as adjusting parent sums or counts, are read locked. Write locks are acquired at the end of the transaction, during the "flush" phase. Many other transactions' read locks could have been acquired and released between the initial read lock and the flush.
  • Referential integrity. Such data is read in accordance with DBMS policy.
Server Optimizations
The logic server promotes good performance.
Load Balanced Dynamic Clustering
Cloud-based CA Live API Creator implementations meet the load and provide for failover. These implementations use standard load-balancer services, scaling to as many server instances as required. Each server is stateless, and incoming requests are load balanced over the set of running servers.
Meta Data Caching (Logic and Security)
API Creator processes each request by reading the logic and security information that you specify into a cache. This cache persists across transactions until you alter your logic.
Direct Execution (No Code Generation)
Reactive logic is more expressive than procedural code, so compiling it into JavaScript would represent a significant performance issue. Reactive logic is therefore executed directly, not compiled into JavaScript.
Measurements
Transparent information about system performance is an important requirement.
Logging
You can view the logs of SQL and rule execution.
For more information about viewing the logs, see View Logging Information.
Statistics
You can obtain aggregate performance information from the Metrics page.
For more information about using the Metrics page, see Analyze Metrics.