Lazy finish

…some operations perform side effects, are handled by another party and return a status summary.

✣  ✣  ✣

Some operations take too long to complete. Alternatively, or in addition, overall system performance is degraded because excessive system resources are allocated but sit inactive in a blocked or stalled state.

Forces:

  • Response and cycle times must be reduced or kept within system tolerance limits.
  • More callers must be accommodated, which can further increase contention for shared resources.
  • Some operations require a high degree of certainty that the operation will eventually complete, perhaps within a certain time period or when certain conditions are met.
  • The steps required to satisfy a completion guarantee may themselves be expensive.

This approach exploits a situation in which

  1. the time to perform the handoff or update the transitional queue is less than that required to finalize the operation directly;
  2. the finalization can be done either in parallel with other operations or at a time when fewer requests are being made.

The response to the caller can be quicker, and contention on the shared resource can be minimized by optimizing access to it across several calls.
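
To make the handoff concrete, here is a minimal Java sketch (the pattern text prescribes no language); LazyFinishService, submit and the use of a Runnable for the deferred work are illustrative assumptions, not part of the pattern. The caller's thread only stows the expensive part in the transitional queue and returns a status summary; a background thread finalizes it later.

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  final class LazyFinishService {
      // Transitional queue: operations accepted but not yet finalized.
      private final BlockingQueue<Runnable> transitional = new LinkedBlockingQueue<>();

      LazyFinishService() {
          // A background thread finalizes operations off the caller's path.
          Thread finisher = new Thread(() -> {
              try {
                  while (true) {
                      transitional.take().run();   // finalize when convenient
                  }
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
          }, "lazy-finisher");
          finisher.setDaemon(true);
          finisher.start();
      }

      // Accept the request, stow the expensive part, and answer at once.
      String submit(Runnable expensiveFinalization) {
          transitional.add(expensiveFinalization);   // cheap handoff
          return "ACCEPTED";                         // completion guarantee: it will finish later
      }
  }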

From a system perspective, resource usage overall can be minimized with this approach. The operation state stored in a transitional queue can require significantly fewer resources than a blocked thread (or process), especially when considering that blocked threads deep in the system often leave behind a trail of blocked threads on the client and intermediate tiers.

This solution provides a variety of options regarding when to complete the transaction, which is usually a reflection of various lower-level implementation considerations. It could be scheduled to occur at fixed intervals, or by operator intervention, or when the transitional queue becomes full or nearly full. When I/O is involved, bulk data transfers can be made in increments optimized for the target medium. This can also amortize I/O overhead across several calls and increase overall throughput.
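
As one possible realization of these options, the following sketch drains the transitional queue in increments; the batch size, the polling interval and the writeBulk method are hypothetical placeholders for a bulk transfer tuned to the target medium.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.TimeUnit;

  final class BatchFinalizer implements Runnable {
      private static final int BATCH_SIZE = 64;     // bulk increment sized for the target medium
      private static final long POLL_MS = 500;      // how long to wait for new work before looping

      private final BlockingQueue<String> transitional;

      BatchFinalizer(BlockingQueue<String> transitional) {
          this.transitional = transitional;
      }

      @Override
      public void run() {
          List<String> batch = new ArrayList<>(BATCH_SIZE);
          try {
              while (true) {
                  // Wait for at least one stowed operation, then grab whatever else is queued.
                  String first = transitional.poll(POLL_MS, TimeUnit.MILLISECONDS);
                  if (first == null) continue;
                  batch.add(first);
                  transitional.drainTo(batch, BATCH_SIZE - 1);
                  writeBulk(batch);   // one bulk transfer amortizes I/O across many calls
                  batch.clear();
              }
          } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
          }
      }

      private void writeBulk(List<String> operations) {
          // Hypothetical bulk write to the target medium.
      }
  }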

Therefore:

Hand off or stow all or part of the operation request to another entity or to a place where it can later be retrieved and completed.

✣  ✣  ✣

If the caller can be satisfied with a sufficient completion guarantee (a service level agreement), the operation can be finished at a later time. The operation can be handed off directly between the initiating and finalizing threads, or it can be placed in a transitional queue like the blocker waiting room for later retrieval and finalization. If that queue is persistent, more robustness can be achieved, since the request remains in the queue even if the process owning the finalizing thread must be restarted.
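
A persistent transitional queue can be sketched, for instance, as a simple journal file, assuming each request can be serialized to one line of text; the file name and method names are illustrative. Because the request is written to durable storage before the caller is answered, it survives a restart of the finalizing process.

  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.StandardOpenOption;
  import java.util.List;

  final class DurableTransitionalQueue {
      private final Path journal = Path.of("transitional-queue.log");

      // Stow the operation durably before acknowledging the caller.
      void stow(String serializedOperation) throws IOException {
          Files.writeString(journal, serializedOperation + System.lineSeparator(),
                  StandardCharsets.UTF_8,
                  StandardOpenOption.CREATE, StandardOpenOption.APPEND);
      }

      // After a restart, the finalizing process re-reads the pending operations.
      List<String> recover() throws IOException {
          return Files.exists(journal) ? Files.readAllLines(journal) : List.of();
      }
  }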

Each operation ‘delegated’ in this manner must be completed in accordance with its completion guarantee. If the completion guarantee includes serializability, the operation would have to be finalized before the initiation of any other operation that might be affected by its results. This might occur, for example, when a query is made against data that the original operation updates. A design might still delay finalization by incorporating the transitional queue in its operation; in the query example, a check against the transitional queue would have to be made when computing the query results.
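
The query case might look like the following sketch, which assumes a simple key-value model with illustrative names: a read consults the transitional queue as well as the committed data, so a delayed finalization cannot make the original operation's update invisible.

  import java.util.Map;
  import java.util.Optional;
  import java.util.Queue;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ConcurrentLinkedQueue;

  final class ConsistentReader {
      record PendingUpdate(String key, String value) {}

      private final Map<String, String> committed = new ConcurrentHashMap<>();
      private final Queue<PendingUpdate> transitional = new ConcurrentLinkedQueue<>();

      // Stow an update for lazy finalization.
      void stow(PendingUpdate update) {
          transitional.add(update);
      }

      // The query checks the transitional queue so delayed finalization stays invisible.
      String read(String key) {
          Optional<PendingUpdate> pending = transitional.stream()
                  .filter(u -> u.key().equals(key))
                  .reduce((first, second) -> second);   // latest pending update wins
          return pending.map(PendingUpdate::value)
                        .orElse(committed.get(key));
      }
  }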


✣  ✣  ✣