
Creating a Custom Multithreaded Origin for StreamSets Data Collector

Posted in Data Integration, April 25, 2017

Multithreaded pipelines, introduced a couple of releases back in StreamSets Data Collector (SDC), enable a single pipeline instance to process high volumes of data, taking full advantage of all available CPUs on the machine. In this blog entry I’ll explain a little about how multithreaded pipelines work, and how you can implement your own multithreaded pipeline origin, thanks to a new tutorial by Guglielmo Iozzia, Big Data Analytics Manager at Optum, part of UnitedHealth Group.

Multithreaded Origins

To take advantage of multithreading, a pipeline’s origin must itself be capable of multithreaded operation. At present, in SDC, there are six such origins:

  • Elasticsearch – Reads data from an Elasticsearch cluster.
  • HTTP Server – Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST requests.
  • JDBC Multitable Consumer – Reads database data from multiple tables through a JDBC connection.
  • Kinesis Consumer – Reads data from an Amazon Kinesis stream.
  • WebSocket Server – Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket requests.
  • Dev Data Generator – Generates random data for development and testing.

Each origin can spawn a configurable number of threads to read incoming data in parallel, the details varying across the origins. For example, the HTTP Server origin spawns multiple threads to enable parallel processing of data from multiple HTTP clients: as data from one client is being processed, the origin can accept a connection and process data from another client. Each thread spawned by the Kinesis Consumer origin, on the other hand, maintains a connection to a Kinesis shard, so incoming data from different shards is processed in parallel.

SDC creates a pipeline runner for each thread in the origin; each pipeline runner is responsible for managing its own copy of the processors and destinations that comprise the remainder of the pipeline. The origin threads each create batches of records, which can then be processed in parallel in the downstream pipeline stages. Conceptually, the multithreaded pipeline looks like this:
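The runner-per-thread model described above can be sketched in plain Java. To be clear, this is an illustrative, self-contained sketch rather than SDC's actual API: `PipelineRunner` here is a hypothetical stand-in for a thread's private copies of the downstream processors and destinations, and `readBatch` stands in for whatever the origin reads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class MultithreadedPipelineSketch {

    // Stand-in for one thread's private copy of the downstream
    // processors and destinations.
    static class PipelineRunner {
        final AtomicInteger recordsOut = new AtomicInteger();
        void processBatch(List<String> batch) {
            // transform and write records would happen here
            recordsOut.addAndGet(batch.size());
        }
    }

    // Each origin thread creates its own batches and hands them to its
    // own runner, so batches flow through the pipeline in parallel.
    static int runPipeline(int numThreads, int batchesPerThread, int batchSize)
            throws Exception {
        ExecutorService origin = Executors.newFixedThreadPool(numThreads);
        AtomicInteger totalRecords = new AtomicInteger();
        List<Future<?>> inFlight = new ArrayList<>();
        for (int t = 0; t < numThreads; t++) {
            final int threadId = t;
            inFlight.add(origin.submit(() -> {
                PipelineRunner runner = new PipelineRunner(); // one per thread
                for (int b = 0; b < batchesPerThread; b++) {
                    List<String> batch = new ArrayList<>();
                    for (int i = 0; i < batchSize; i++) {
                        batch.add("record-" + threadId + "-" + b + "-" + i);
                    }
                    runner.processBatch(batch);
                }
                totalRecords.addAndGet(runner.recordsOut.get());
            }));
        }
        for (Future<?> f : inFlight) f.get(); // wait for all origin threads
        origin.shutdown();
        return totalRecords.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("records processed: " + runPipeline(4, 3, 5));
    }
}
```

The key point the sketch captures is that state is never shared between runners: each thread owns its runner, so no locking is needed in the downstream stages.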

Multithreaded Pipeline

Multithreaded Origin Tutorial

As you might expect, creating a multithreaded origin is somewhat more complex than creating a ‘traditional’ origin for SDC, but Guglielmo does a great job explaining the process in his tutorial, Creating a Custom Multithreaded StreamSets Origin. Guglielmo has kindly contributed his tutorial to the project, so, if you’re looking to ingest data as efficiently as possible, take a look, and let us know what you come up with in the comments!
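To give a feel for what the tutorial covers: a multithreaded SDC origin implements the `PushSource` interface, declaring its thread count via `getNumberOfThreads()` and spawning threads inside `produce()`, where each thread starts batches and hands them back to the framework. The sketch below mirrors that shape with simplified stand-in types; `OriginContext`, `BatchContext`, and the origin class itself are hypothetical simplifications, not the real `com.streamsets.pipeline.api` types, so that the sketch compiles on its own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-ins for the framework types a real origin receives.
interface BatchContext { void addRecord(String record); }
interface OriginContext {
    BatchContext startBatch();             // begin a new batch
    void processBatch(BatchContext batch); // hand the batch to a pipeline runner
}

public class CustomOriginSketch {

    // Shape of a multithreaded origin: declare a thread count, then produce
    // batches from several threads. (A real origin's produce() loops until
    // the pipeline is stopped; this sketch does one round of batches.)
    static class SampleMultithreadedOrigin {
        private final int numThreads;
        SampleMultithreadedOrigin(int numThreads) { this.numThreads = numThreads; }

        int getNumberOfThreads() { return numThreads; }

        void produce(Map<String, String> lastOffsets, int maxBatchSize,
                     OriginContext context) throws Exception {
            ExecutorService executor = Executors.newFixedThreadPool(numThreads);
            List<Future<?>> inFlight = new ArrayList<>();
            for (int t = 0; t < numThreads; t++) {
                final int threadId = t;
                inFlight.add(executor.submit(() -> {
                    BatchContext batch = context.startBatch();
                    for (int i = 0; i < maxBatchSize; i++) {
                        batch.addRecord("record-" + threadId + "-" + i);
                    }
                    context.processBatch(batch); // framework assigns a runner
                    // track how far this thread has read
                    lastOffsets.put("thread-" + threadId, String.valueOf(maxBatchSize));
                }));
            }
            for (Future<?> f : inFlight) f.get(); // propagate any thread failure
            executor.shutdown();
        }
    }

    // A counting context stands in for the downstream pipeline runners.
    static class CountingContext implements OriginContext {
        final AtomicInteger records = new AtomicInteger();
        public BatchContext startBatch() {
            return record -> records.incrementAndGet();
        }
        public void processBatch(BatchContext batch) { /* downstream stages run */ }
    }

    public static void main(String[] args) throws Exception {
        CountingContext ctx = new CountingContext();
        SampleMultithreadedOrigin origin = new SampleMultithreadedOrigin(4);
        origin.produce(new ConcurrentHashMap<>(), 10, ctx);
        System.out.println("records produced: " + ctx.records.get());
    }
}
```

Note the use of a concurrent map for offsets: because several threads record their progress at once, any shared offset bookkeeping in a multithreaded origin must be thread-safe.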
