Origins

An origin stage represents the source for the pipeline. You can use a single origin stage in a pipeline.

You can use different origins based on the execution mode of the pipeline.

In standalone pipelines, you can use the following origins:
  • Amazon S3 - Reads objects from Amazon S3.
  • CoAP Server - Listens on a CoAP endpoint and processes the contents of all authorized CoAP requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • Directory - Reads fully-written files from a directory.
  • Elasticsearch - Reads data from an Elasticsearch cluster. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • File Tail - Reads lines of data from an active file after reading related archived files in the directory.
  • HTTP Client - Reads data from a streaming HTTP resource URL.
  • HTTP Server - Listens on an HTTP endpoint and processes the contents of all authorized HTTP POST requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • HTTP to Kafka - Listens on an HTTP endpoint and writes the contents of all authorized HTTP POST requests directly to Kafka.
  • JDBC Multitable Consumer - Reads database data from multiple tables through a JDBC connection. Creates multiple threads to enable parallel processing in a multithreaded pipeline.
  • JDBC Query Consumer - Reads database data using a user-defined SQL query through a JDBC connection.
  • JMS Consumer - Reads messages from JMS.
  • Kafka Consumer - Reads messages from Kafka.
  • Kinesis Consumer - Reads data from Kinesis Streams.
  • MapR DB JSON - Reads JSON documents from MapR DB JSON tables.
  • MapR FS - Reads files from MapR FS.
  • MapR Streams Consumer - Reads messages from MapR Streams.
  • MongoDB - Reads documents from MongoDB.
  • MongoDB Oplog - Reads entries from a MongoDB Oplog.
  • MQTT Subscriber - Subscribes to a topic on an MQTT broker to read messages from the broker.
  • MySQL Binary Log - Reads MySQL binary logs to generate change data capture records.
  • Omniture - Reads web usage reports from the Omniture reporting API.
  • Oracle CDC Client - Reads LogMiner redo logs to generate change data capture records.
  • RabbitMQ Consumer - Reads messages from RabbitMQ.
  • Redis Consumer - Reads messages from Redis.
  • Salesforce - Reads data from Salesforce.
  • SDC RPC - Reads data from an SDC RPC destination in an SDC RPC pipeline.
  • SDC RPC to Kafka - Reads data from an SDC RPC destination in an SDC RPC pipeline and writes it to Kafka.
  • SFTP/FTP Client - Reads files from an SFTP or FTP server.
  • TCP Server - Listens at the specified ports and processes incoming data over TCP/IP connections.
  • UDP Source - Reads messages from one or more UDP ports.
  • UDP to Kafka - Reads messages from one or more UDP ports and writes the data to Kafka.
  • WebSocket Server - Listens on a WebSocket endpoint and processes the contents of all authorized WebSocket requests. Creates multiple threads to enable parallel processing in a multithreaded pipeline.

In cluster pipelines, you can use the following origins:
  • Hadoop FS - Reads data from HDFS.
  • Kafka Consumer - Reads messages from Kafka.
  • MapR FS - Reads files from MapR FS.
  • MapR Streams Consumer - Reads messages from MapR Streams.

To help create or test pipelines, you can use the following development origins:
  • Dev Data Generator
  • Dev Random Source
  • Dev Raw Data Source
  • Dev SDC RPC with Buffering

For more information, see Development Stages.

Comparing HTTP Origins

We have several HTTP origins, so make sure to use the best one for your needs. Here's a quick breakdown of some key differences:
HTTP Client
  • Initiates HTTP requests to an external system.
  • Processes data synchronously.
  • Processes JSON, text, and XML data.
  • Can process a range of HTTP requests.
  • Can be used in a pipeline with processors.

HTTP Server
  • Listens for incoming HTTP requests and processes them while the sender waits for confirmation.
  • Processes data synchronously.
  • Creates multithreaded pipelines, so it is suitable for high throughput of incoming data.
  • Processes virtually all data formats.
  • Processes HTTP POST requests only.
  • Can be used in a pipeline with processors.

HTTP to Kafka
  • Listens for incoming HTTP requests and writes them immediately to Kafka with no additional processing.
  • Processes data asynchronously. Suitable for very high throughput of incoming data.
  • Writes all data to Kafka, regardless of the data format.
  • Processes HTTP POST requests only.
  • Cannot be used in a pipeline with processors. For more flexibility, use the HTTP Server origin.
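
To make the difference concrete, here is a minimal sketch of a client sending one record to an HTTP Server origin. The host, port, application ID, and the X-SDC-APPLICATION-ID header name are placeholder assumptions to check against your own origin configuration, and the third-party requests library must be installed.

  import json
  import requests  # third-party library; assumed to be installed

  # Placeholder values - adjust to match the HTTP Server origin configuration.
  ORIGIN_URL = "http://sdc.example.com:8000"  # host and listening port of the origin
  APP_ID = "myAppId"                          # application ID configured on the origin

  record = {"user": "jsmith", "action": "login"}

  # The sender blocks here until the origin confirms receipt, which is why the
  # HTTP Server origin processes data synchronously from the client's point of view.
  response = requests.post(
      ORIGIN_URL,
      headers={
          "X-SDC-APPLICATION-ID": APP_ID,
          "Content-Type": "application/json",
      },
      data=json.dumps(record),
  )
  response.raise_for_status()

An HTTP to Kafka origin accepts the same kind of POST request, but the payload is written straight to Kafka without being parsed in the pipeline.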

Batch Size and Wait Time

For origin stages, the batch size determines the maximum number of records sent through the pipeline at one time. The batch wait time determines the time that the origin waits for data before sending a batch. At the end of the wait time, it sends the batch regardless of how many records the batch contains.

For example, say a File Tail origin is configured with a batch size of 20 records and a batch wait time of 240 seconds. When data arrives quickly, File Tail fills a batch with 20 records and sends it through the pipeline immediately, creating a new batch and sending it as soon as it is full. As incoming data slows, the batch in progress holds only a few records, gaining an extra record periodically. 240 seconds after creating the batch, File Tail sends the partially-full batch through the pipeline. It then immediately creates a new batch and starts a new countdown.
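
This fill-or-timeout behavior can be pictured with a short sketch. This is not Data Collector code, only an illustration of the rule: a batch is sent as soon as it is full or as soon as the wait time expires, whichever comes first.

  import time

  BATCH_SIZE = 20        # maximum records per batch
  WAIT_TIME_SECS = 240   # maximum time to hold a partial batch

  def read_batches(source):
      """Yield batches that are either full or have waited long enough."""
      batch = []
      deadline = time.monotonic() + WAIT_TIME_SECS
      for record in source:
          batch.append(record)
          # Send the batch when it is full, or when the wait time has expired
          # even though the batch is only partially full.
          if len(batch) >= BATCH_SIZE or time.monotonic() >= deadline:
              yield batch
              batch = []
              deadline = time.monotonic() + WAIT_TIME_SECS  # start a new countdown
      if batch:
          yield batch  # flush whatever remains when the source ends

  # Note: a real origin also sends a partial batch when the wait time expires with
  # no new records arriving; for simplicity, this sketch only checks the deadline
  # as each record arrives.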

Configure the batch wait time based on your processing needs. You might reduce the batch wait time to ensure all data is processed within a specified time frame or to make regular contact with pipeline destinations. Use the default or increase the wait time if you prefer not to process partial or empty batches.

Maximum Record Size

Most data formats have a property that limits the maximum size of the record that an origin can parse. For example, the delimited data format has a Max Record Length property, the JSON data format has Max Object Length, and the text data format has Max Line Length.

When the origin processes data that is larger than the specified length, the behavior differs based on the origin and the data format. With some data formats, the origin handles oversized records based on the record error handling configured for the origin; with others, it might truncate the data. For details on how an origin handles size overruns for each data format, see the "Data Formats" section of the origin documentation.

When available, the maximum record size properties are limited by the Data Collector parser buffer size, which is 1048576 bytes by default. So, if raising the maximum record size property in the origin does not change the origin's behavior, you might need to increase the Data Collector parser buffer size by configuring the parser.limit property in the Data Collector configuration file.

Note that most of the maximum record size properties are specified in characters, while the Data Collector limit is defined in bytes.
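
As a quick illustration of that distinction, a record that looks safe when counted in characters can still exceed a byte-based limit once it is encoded. The sample string below is hypothetical; the 1048576-byte figure is the default parser buffer size mentioned above.

  # Default Data Collector parser buffer size, in bytes (the parser.limit default).
  PARSER_LIMIT_BYTES = 1048576

  # A hypothetical record of 800,000 characters that each take 2 bytes in UTF-8.
  record = "é" * 800_000

  print(len(record))                   # 800000 characters - under a 1,048,576-character limit
  print(len(record.encode("utf-8")))   # 1600000 bytes - over the 1,048,576-byte parser buffer

In a case like this, a character-based maximum record size property would accept the record, but the parser buffer would still reject it until parser.limit is raised in the Data Collector configuration file.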

File Compression Formats

Origins that read files can read uncompressed files, compressed files, archives, and compressed archives.

Hadoop FS reads compressed files automatically. For all other file-based origins, you indicate the compression format in the origin.

The following table describes the supported file types:
Uncompressed
  Processes uncompressed files of the configured data format.

Compressed
  Processes files compressed by the following compression formats:
  • gzip
  • bzip2
  • xz
  • lzma
  • Pack200
  • DEFLATE
  • Z

Archive
  Processes files archived by the following archive formats:
  • 7z
  • ar
  • arj
  • cpio
  • dump
  • tar
  • zip

Compressed Archive
  Processes files in compressed archives created by supported compression and archive formats.
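
For example, a compressed archive for a file-based origin to read can be produced by combining a supported archive format with a supported compression format. The sketch below bundles two hypothetical JSON files into a gzip-compressed tar archive; the file names and contents are placeholders.

  import json
  import tarfile

  # Create two small JSON input files (hypothetical sample data).
  input_files = ["orders-001.json", "orders-002.json"]
  for i, name in enumerate(input_files):
      with open(name, "w") as f:
          json.dump({"order_id": i, "status": "new"}, f)

  # "w:gz" writes a tar archive (a supported archive format) compressed with
  # gzip (a supported compression format) - that is, a compressed archive.
  with tarfile.open("orders.tar.gz", "w:gz") as archive:
      for name in input_files:
          archive.add(name)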

Previewing Raw Source Data

You can preview raw source data for Directory, File Tail, and Kafka Consumer origins. Preview raw source data when a review of the data might help you configure the origin.

When you preview file data, you can use the actual directory and source file or, when appropriate, a different file that is similar to the source.

When you preview Kafka data, you enter the connection information for the Kafka cluster.

The data used for the raw source preview in an origin stage is not used when previewing data for the pipeline.

  1. In the Properties panel for the origin stage, click the Raw Preview tab.
  2. For a Directory or File Tail origin, enter a directory and file name.
  3. For a Kafka Consumer, enter the following information:
    • Topic - Kafka topic to read.
    • Partition - Partition to read.
    • Broker Host - Broker host name. Use any broker associated with the partition.
    • Broker Port - Broker port number.
    • Max Wait Time (secs) - Maximum amount of time the preview waits to receive data from Kafka.
    If the topic does not contain data yet, you can publish a few test messages first, as shown in the sketch after these steps.
  4. Click Preview.
The Raw Source Preview area displays the preview.
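
If the topic you want to preview does not contain data yet, you can publish a few test messages before clicking Preview. This sketch uses the third-party kafka-python package; the broker address and topic name are placeholders that should match the values entered on the Raw Preview tab.

  from kafka import KafkaProducer  # third-party kafka-python package; assumed to be installed

  # Placeholder connection details - match the Broker Host, Broker Port, and Topic
  # entered on the Raw Preview tab.
  producer = KafkaProducer(bootstrap_servers="broker.example.com:9092")

  for i in range(3):
      # Send a few small JSON messages so the preview has something to display.
      producer.send("web_logs", value=b'{"event": "test", "seq": %d}' % i)

  producer.flush()   # make sure the messages reach the broker before previewing
  producer.close()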