
StreamSets Data Integration Blog

Where change is welcome.

Ingest Data into Azure Data Lake Store with StreamSets Data Collector

February 20, 2017

Azure Data Lake Store (ADLS) is Microsoft’s cloud repository for big data analytic workloads, designed to capture data for operational and exploratory analytics. StreamSets Data Collector (SDC) version 2.3.0.0 includes an Azure Data Lake Store destination, so you can create pipelines to read data from any supported data source and write it to ADLS.

Configuring the ADLS destination is a multi-step process, so our new tutorial, Ingesting Local Data into Azure Data Lake Store, walks you through adding SDC as an application in Azure Active Directory, creating a Data Lake Store, building a simple data ingest pipeline, and then configuring the ADLS destination with credentials to write to an ADLS directory.
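As background for the credentials step, the ADLS destination authenticates as the Azure Active Directory application you register. The following is a minimal, hypothetical Java sketch of the OAuth 2.0 client-credentials exchange such an application performs against Azure AD; the tenant ID, client ID, and client key are placeholders, and the resource URI shown is the one Azure Data Lake Store (Gen1) used at the time.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AdlsTokenSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- substitute your own Azure AD tenant, client ID and key.
        String tenantId = "YOUR_TENANT_ID";
        String clientId = "YOUR_CLIENT_ID";
        String clientSecret = "YOUR_CLIENT_SECRET";

        // OAuth 2.0 client-credentials request against the Azure AD token endpoint.
        URL tokenUrl = new URL("https://login.microsoftonline.com/" + tenantId + "/oauth2/token");
        String body = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                // Resource URI for ADLS Gen1 (assumption based on the service at the time).
                + "&resource=" + URLEncoder.encode("https://datalake.azure.net/", "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) tokenUrl.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response contains the access_token used when writing to ADLS.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println);
        }
    }
}
```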

Replicating Relational Databases with StreamSets Data Collector

February 3, 2017

StreamSets Data Collector Engine has long supported both reading data from and writing data to relational databases via Java Database Connectivity (JDBC). While it was straightforward to configure pipelines to read data from individual tables, ingesting records from an entire database was cumbersome, requiring a pipeline per table. StreamSets Data Collector Engine now introduces the JDBC Multitable Consumer, a new pipeline origin that can read data from multiple tables through a single database connection. In this blog entry, I’ll explain how the JDBC Multitable Consumer can implement a typical use case: replicating an entire relational database into Hadoop.
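To make the single-connection idea concrete, here is a minimal plain-JDBC sketch (not Data Collector code) that discovers the tables visible on one connection and reads each in turn; the JDBC URL and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class MultiTableReadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password")) {

            // Discover the tables visible on this single connection.
            List<String> tables = new ArrayList<>();
            DatabaseMetaData meta = conn.getMetaData();
            try (ResultSet rs = meta.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    tables.add(rs.getString("TABLE_NAME"));
                }
            }

            // Read every table in turn -- the Multitable Consumer does the equivalent
            // (with batching and offset tracking) without needing a pipeline per table.
            for (String table : tables) {
                try (Statement stmt = conn.createStatement();
                     ResultSet rows = stmt.executeQuery("SELECT * FROM " + table)) {
                    ResultSetMetaData rsmd = rows.getMetaData();
                    int count = 0;
                    while (rows.next()) {
                        count++;
                    }
                    System.out.println(table + ": " + count + " rows, "
                            + rsmd.getColumnCount() + " columns");
                }
            }
        }
    }
}
```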

Ingest Data into Splunk with StreamSets Data Collector

January 18, 2017


UPDATE – Data Collector’s HTTP Client destination can send a single request per batch of records, providing an easier way to send data to Splunk than the Jython script evaluator. See the blog post Efficient Splunk Ingest for Cybersecurity for an example.

Splunk indexes and correlates log and machine data, providing a rich set of search, analysis and visualization capabilities. In this blog post, I’ll explain how to efficiently send high volumes of data to Splunk’s HTTP Event Collector via the StreamSets Data Collector Jython Evaluator. I’ll present a Jython script with which you’ll be able to build pipelines to read records from just about anywhere and send them to Splunk for indexing, analysis and visualization.
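For reference, the Event Collector call the script makes boils down to an authenticated HTTP POST. Below is a bare-bones Java sketch of that call, with the Splunk host and HEC token as placeholders; the Jython version in the post follows the same pattern.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SplunkHecSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder Splunk host and HEC token.
        String collectorUrl = "https://splunk.example.com:8088/services/collector/event";
        String hecToken = "YOUR_HEC_TOKEN";

        // HEC accepts one or more JSON event objects in the request body,
        // which is what makes batching multiple records per request possible.
        String payload = "{\"event\": {\"user\": \"alice\", \"action\": \"login\"}}";

        HttpURLConnection conn = (HttpURLConnection) new URL(collectorUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Splunk " + hecToken);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HEC response code: " + conn.getResponseCode());
    }
}
```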

Calling External Java Code from Script Evaluators

December 21, 2016

When you’re building a pipeline with StreamSets Data Collector (SDC), you can often implement the data transformations you require using a combination of ‘off-the-shelf’ processors. Sometimes, though, you need to write some code. The script evaluators included with SDC allow you to manipulate records in Groovy, JavaScript and Jython (an implementation of Python integrated with the Java platform). You can usually achieve your goal using built-in scripting functions, as in the credit card issuing network computation shown in the SDC tutorial, but, again, sometimes you need to go a little further. For example, a member of the StreamSets community Slack channel recently asked about computing SHA-3 digests in JavaScript. In this blog entry I’ll show you how to do just this from Groovy, JavaScript and Jython.
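For a flavor of what those scripts end up invoking, here is a plain Java sketch of the SHA-3 computation itself. It assumes a JDK whose default provider offers SHA3-256 (JDK 9 and later); on older JDKs the digest would come from an external library such as Bouncy Castle, which is the kind of external Java code the post shows the script evaluators calling.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha3Sketch {
    public static void main(String[] args) throws Exception {
        // "SHA3-256" is available from the default provider in JDK 9+;
        // older JDKs need an external provider on the classpath.
        MessageDigest digest = MessageDigest.getInstance("SHA3-256");
        byte[] hash = digest.digest("hello, world".getBytes(StandardCharsets.UTF_8));

        // Render the digest as a hex string, as you would store it in a record field.
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);
    }
}
```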

Continuous Data Integration with StreamSets Data Collector and Spark Streaming on Databricks

December 19, 2016

I’m frequently asked, ‘How does StreamSets Data Collector integrate with Spark Streaming? How about on Databricks?’ In this blog entry, I’ll explain how to use Data Collector to ingest data into a Spark Streaming app running on Databricks, but the principles apply to Spark apps running anywhere. This is one approach to continuous data integration on cloud data platforms.

Creating a Custom Origin for StreamSets Data Collector

December 12, 2016

Since writing tutorials for creating custom destinations and processors for StreamSets Data Collector (SDC), I’ve been looking for a good use case for a custom origin tutorial. It’s been trickier than I expected, partly because the list of out-of-the-box origins is so extensive, and partly because the HTTP Client origin can access most web service APIs, rendering a custom origin redundant. Then, last week, StreamSets software engineer Jeff Evans suggested Git. Creating a custom origin for StreamSets to read the Git commit log turned into the perfect tutorial.
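For context, the commit-log read at the heart of such an origin can be done with the JGit library; the snippet below is a standalone sketch of that read (not the origin code from the tutorial itself), with the repository path as a placeholder.

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.revwalk.RevCommit;

public class GitLogSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder path to a local clone.
        try (Git git = Git.open(new File("/path/to/repo"))) {
            // Walk the commit log, newest first -- each commit is the kind of
            // item a custom origin would emit as one record.
            for (RevCommit commit : git.log().call()) {
                System.out.printf("%s %s <%s> %s%n",
                        commit.getName(),                       // commit hash
                        commit.getAuthorIdent().getName(),      // author name
                        commit.getAuthorIdent().getEmailAddress(),
                        commit.getShortMessage());              // first line of message
            }
        }
    }
}
```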

Running Apache Spark Code in StreamSets Data Collector

December 8, 2016

New in StreamSets Data Collector (SDC) 2.2.0.0 is the Spark Evaluator, a processor stage that allows you to run an Apache Spark application, termed a Spark Transformer, as part of an SDC pipeline. With the Spark Evaluator, you can build a pipeline to ingest data from any supported origin, apply transformations, such as filtering and lookups, using existing SDC processor stages, and have the Spark Evaluator hand off the data to your Java or Scala code as a Spark Resilient Distributed Dataset (RDD). Your Spark Transformer can then operate on the records, creating an output RDD, which is passed through the remainder of the pipeline to any supported destination.
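The code the Spark Evaluator calls is ordinary Spark. As a self-contained illustration, the sketch below applies the kind of RDD filter a Transformer’s transform step might perform, using plain strings in place of Data Collector Record objects so it runs on its own.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class TransformSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("transform-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Stand-in for the RDD of records the Spark Evaluator hands to a Transformer.
            JavaRDD<String> input = sc.parallelize(
                    Arrays.asList("visa:4111", "mastercard:5500", "amex:3400"));

            // A Transformer produces an output RDD; here we simply keep Visa records.
            JavaRDD<String> output = input.filter(r -> r.startsWith("visa:"));

            List<String> results = output.collect();
            results.forEach(System.out::println);
        }
    }
}
```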

Upgrading From Apache Flume to StreamSets Data Collector

December 1, 2016

Apache Flume “is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data”. The typical use case is collecting log data and pushing it to a destination such as the Hadoop Distributed File System. In this blog entry, we’ll look at a couple of Flume use cases and see how they can be implemented with StreamSets Data Collector as an alternative to Apache Flume.
