
Streaming & Batch Ingest Data Collector

Build reusable data ingestion pipelines in one interface. From any source, to any destination.

Data Ingestion Pipelines, Simplified

Spend more time building smart data pipelines, enabling self-service, and innovating without the noise. StreamSets Data Collector Engine is an easy-to-use data pipeline engine for streaming, CDC, and batch ingestion from any source to any destination.

Build pipelines for streaming, batch and change data capture (CDC) in minutes

Eliminate 90% of break-fix and maintenance time

Port data pipelines to new data platforms without rewrites

[Screenshot: StreamSets Data Collector building fast data ingestion pipelines]

Connectors

100+ connectors get your pipelines up and running fast without special skills.

Fast Data Ingestion for Amazon Web Services
Fast Data Ingestion for Cloudera
Fast Data Ingestion for Salesforce
Fast Data Ingestion for Oracle
Fast Data Ingestion for Redis
Fast Data Ingestion for Microsoft Azure

Awards and Recognition

Top 50 IT Infrastructure Products G2 Badge

Operationalize Your Data Collection

Data Collector: pipelines designed for change

Single Experience for All Design Patterns

Build schema-agnostic smart data pipelines with pre-built sources and destinations in minutes for streaming, batch, and change data capture (CDC), using a single, visual tool. StreamSets Data Collector Engine makes it easy to run data pipelines from Kafka, Oracle, Salesforce, JDBC, Hive, and more to Snowflake, Databricks, S3, ADLS, Kafka and more. Data Collector Engine runs on-premises or any cloud, wherever your data lives.
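
As a rough illustration of how such a pipeline can also be defined programmatically, the sketch below assumes the StreamSets Platform SDK for Python is available and that a Data Collector engine is already registered. Credentials, the engine ID, and the stage names used here are placeholders, and constructor arguments can differ between SDK versions; most teams build the same pipeline in the visual canvas instead.

```python
# Minimal sketch using the StreamSets Platform SDK for Python (assumed installed
# as the 'streamsets' package). All IDs, tokens, and stage names are placeholders.
from streamsets.sdk import ControlHub

# Authenticate against the StreamSets platform (placeholder credentials).
sch = ControlHub(credential_id='MY_CREDENTIAL_ID', token='MY_TOKEN')

# Start a pipeline builder bound to a registered Data Collector engine.
builder = sch.get_pipeline_builder(engine_type='data_collector',
                                   engine_id='MY_ENGINE_ID')

origin = builder.add_stage('Dev Raw Data Source')   # stand-in for Kafka, JDBC, Oracle, etc.
destination = builder.add_stage('Trash')            # stand-in for Snowflake, S3, ADLS, etc.
origin >> destination                               # connect origin to destination

# Build and publish the pipeline so it can be run as a job.
pipeline = builder.build('Sample Ingestion Pipeline')
sch.publish_pipeline(pipeline)
```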

Ingest Data Across Multiple Platforms

Run your data pipelines on multiple platforms without rework. Data Collector pipelines are platform agnostic by design, so you can reuse them across data platforms in hybrid and multi-cloud environments. With a few configuration settings, any data professional can start ingesting data from any source to multiple platforms, giving your organization the flexibility to adapt more quickly to new business needs.

Handle Data Drift

Smart Data Pipelines Built for Change

Worst case scenario: an upstream change doesn’t break your pipeline outright, but instead flows unreliable, incorrect, or unusable data into your analytics platform undetected. Data Collector’s intent-driven pipelines are built for data drift, reducing the risk of bad data downstream and of outages. When data drift happens, Data Collector pipelines alert you so you can remediate issues or embrace the emergent design.

The StreamSets Data Integration Platform

Build smart data pipelines in minutes and deploy them across hybrid and multi-cloud platforms from a single login.

Data Engineering for DataOps on AWS
Data Engineering for DataOps on Azure
Data Engineering for DataOps on Google Cloud
Data Engineering for DataOps on Snowflake
Data Engineering for DataOps on Databricks

Data Engineers Gain Efficiencies With StreamSets

  8/1/23

"The best feature of StreamSets is its intuitive visual interface, allowing us to effortlessly design, monitor, and manage data pipelines without the need for complex coding. This has significantly reduced our development time and made the process highly accessible to both technical and non-technical team members."

See full review on G2

Mili M., Senior System Analyst
Mid-Market (51-1000 emp.)

  8/3/23

"StreamSets has lot of out of box features to use for data pipelines and connect AWS Kinesis, DB or Kafka and send to HDFS & Hive."

Read full review on G2

Sanath V.
Enterprise (> 1000 emp.)

Frequently Asked Questions

What is StreamSets Data Collector?

StreamSets Data Collector is a data pipeline engine for building reliable, smart data pipelines for streaming, batch, and change data capture (CDC) from a wide variety of sources and destinations.
What is the difference between StreamSets Data Collector and Transformer?

StreamSets Data Collector runs data ingestion for cloud data pipelines in streaming, CDC, or batch modes, whereas StreamSets Transformer performs ETL, ELT, and data transformations such as joins, aggregates, and unions directly on Apache Spark and Snowflake platforms. They are both part of the StreamSets platform.

How is processed data tracked in StreamSets Data Collector?

Processed data is tracked in Data Collector through the orchestration record, which contains details about the tasks it performed, such as the IDs of the jobs or pipelines that it started and the status of those jobs or pipelines.
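
For illustration only, the sketch below shows the kind of information such a record carries; the field names here are hypothetical and not taken from the Data Collector documentation.

```python
# Hypothetical sketch of the information carried in an orchestration record.
# Field names are illustrative only; consult the Data Collector documentation
# for the actual record structure.
orchestration_record = {
    "task": "start_jobs",                        # the orchestration task that ran
    "success": True,                             # whether the task completed
    "jobs": {
        "job-1234-abcd": {"status": "ACTIVE"},   # ID and status of a started job
        "job-5678-efgh": {"status": "FINISHED"}, # ID and status of another job
    },
}

# Inspect the status of each job the task started.
for job_id, details in orchestration_record["jobs"].items():
    print(job_id, details["status"])
```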

Can the StreamSets Data Collector engine be deployed in the cloud?

Yes. StreamSets Data Collector can be deployed to Amazon EC2, Azure Virtual Machines, or Google Compute Engine. Review the documentation for more information.

Helpful Resources

Whitepapers & Ebooks

Data Engineers Handbook for Snowflake

Whitepapers & Ebooks

Five Data Principles for Ensuring Effective Operational Analytics

Operational analytics can drive continual improvement with its real-time insights and prescriptive recommendations.