Using Pumps to Create Processing Pipelines

By default, each stream query in SQLstream runs independently of all other stream queries. However, it is often desirable, for performance or consistency reasons, to share the output of one query as input to one or more other queries.

A potential complexity arises from three facts:

  1. Queries can be added or closed dynamically while SQLstream is processing streams.
  2. Data can arrive at varying rates.
  3. Data can be emitted at varying rates after being processed.

In this context, ensuring that all such queries receive identical input from the time each of them becomes active requires some forethought.

In s-Server, you can accomplish this by defining a stream that all such queries listen to, and then creating a pump, based on the source views or queries, to feed that stream. The pump compensates for variations in the timing of the data sources, while the shared stream ensures that every query listening to it sees the same set of results.
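For example, the following sketch shows the pattern in SQLstream SQL. The stream, pump, and column names ("RawTrades", "FilteredTrades", "FilteredTradesPump", and so on) are hypothetical and stand in for your own source and target objects.

```sql
-- Shared stream that all downstream queries will listen to.
-- (Assumes a source stream "RawTrades" with "ticker" and "price"
-- columns already exists in the current schema.)
CREATE OR REPLACE STREAM "FilteredTrades" (
    "ticker" VARCHAR(8),
    "price"  DOUBLE
);

-- Pump that feeds the shared stream from the source query.
-- The pump is created STOPPED so it can be started deliberately,
-- once every consumer that needs identical input is listening.
CREATE OR REPLACE PUMP "FilteredTradesPump" STOPPED AS
INSERT INTO "FilteredTrades" ("ticker", "price")
SELECT STREAM "ticker", "price"
FROM "RawTrades"
WHERE "price" > 0;

-- Start the pump. From this point on, every query listening to
-- "FilteredTrades" sees exactly the same rows.
ALTER PUMP "FilteredTradesPump" START;
```

Creating the pump STOPPED and starting it explicitly gives you control over the moment at which the shared results begin to flow.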

This procedure enables processing pipelines: modular sequences of processing steps, where each step performs filtering, aggregation, or transformation and provides its results to downstream consumers (a sketch of such a step follows the list below). Each such step also provides a public junction where its results may be

  • inspected for debugging purposes,
  • analyzed for SLAs or regulatory compliance,
  • selected and repurposed by streams in other processing pipelines,
  • pumped into sink adapters or other streams, or
  • subscribed to by JDBC client applications.
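
Continuing the hypothetical sketch above, a second step can read the shared stream, aggregate it, and publish its own junction, which in turn can be inspected, repurposed, or subscribed to, for example over JDBC:

```sql
-- Second pipeline step: per-ticker moving average over the shared stream.
CREATE OR REPLACE STREAM "TradeStats" (
    "ticker"   VARCHAR(8),
    "avgPrice" DOUBLE
);

CREATE OR REPLACE PUMP "TradeStatsPump" STOPPED AS
INSERT INTO "TradeStats" ("ticker", "avgPrice")
SELECT STREAM "ticker",
       AVG("price") OVER (PARTITION BY "ticker"
                          RANGE INTERVAL '1' MINUTE PRECEDING) AS "avgPrice"
FROM "FilteredTrades";

ALTER PUMP "TradeStatsPump" START;

-- Any client (for example, a JDBC application) can subscribe to either
-- junction of the pipeline:
SELECT STREAM * FROM "TradeStats";
```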