Pat Patterson

Pat is StreamSets' Community Champion. An outgoing personality and self-described 'articulate techie', Pat is an accomplished international public speaker, presenting at events such as Dreamforce, JavaOne (both San Francisco and Tokyo), the RSA Conference, Defrag and more.

Streaming Data from Twitter for Analysis in Spark

Happy New Year! Our first blog entry of 2018 is a guest post from Josh Janzen, a data scientist based in Minnesota. Josh wanted to ingest tweets referencing NFL games into Spark, then run some analysis to look for a correlation between Twitter activity and game winners. Josh originally posted this entry on his personal blog, and kindly allowed us to repost it here. Over to you, Josh:

’Tis the season of NFL football, and one way to capture the excitement is through Twitter data. I’ve tinkered with Twitter’s Developer API before, but this time I wanted to use a streaming product I’ve heard good things about: StreamSets Data Collector.

Read More

Speed up Hive Data Retrieval using Spark, StreamSets and Predera

In this guest blog, Predera's Kiran Krishna Innamuri (Data Engineer), and Nazeer Hussain (Head of Platform Engineering and Services) focus on building a data pipeline to perform lookups or run queries on Hive tables with the Spark execution engine using StreamSets Data Collector and Predera's custom Hive-JDBC lookup processor.
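For context, the lookup pattern boils down to running SQL against HiveServer2 over JDBC with Spark as Hive's execution engine. The sketch below is a minimal plain-JDBC illustration, not Predera's custom processor; the host, credentials, and table are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSparkLookup {
    public static void main(String[] args) throws Exception {
        // Requires the Hive JDBC driver on the classpath.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder HiveServer2 endpoint; adjust for your cluster.
        String url = "jdbc:hive2://hiveserver2.example.com:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // Ask Hive to run queries on the Spark execution engine.
            stmt.execute("SET hive.execution.engine=spark");

            // A simple lookup-style query against a hypothetical Hive table.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT customer_id, name FROM customers WHERE customer_id = 42")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " -> " + rs.getString(2));
                }
            }
        }
    }
}
```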

Read More

Generate your Avro Schema – Automatically!

In a previous blog post, I explained how StreamSets Data Collector (SDC) can work with Apache Kafka and Confluent Schema Registry to handle data drift via Avro schema evolution. In that blog post, I mentioned SDC's Schema Generator processor; today I'll explain how you can use the Schema Generator to automatically create Avro schemas.

Read More

Bulk Loading Data into Snowflake Data Warehouse

Mike Fuller, a consultant at Red Pill Analytics, has been busy integrating an Oracle RDS database with Snowflake's cloud data warehouse via StreamSets Data Collector. His blog post on bulk loading data into Snowflake is a great description of a real-world data integration use case. We're reposting it here courtesy of Mike and Red Pill.

Read More

Fun with FileRefs – Manipulating Whole File Data

As well as parsing incoming data into records, many StreamSets Data Collector (SDC) origins can be configured to ingest Whole Files. The blog entry Whole File Transfer with StreamSets Data Collector provides a basic introduction to the concept.

Although the initial release of the Whole File feature did not allow file content to be accessed in the pipeline, we soon added the ability for Script Evaluator processors to read the file, a feature exploited in the custom processor tutorial to read metadata from incoming image files. In this blog post, I'll show you how a custom processor can both create new records with Whole File content, and replace the content in existing records.
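The post's running example is a ROT13 processor. As a rough, self-contained sketch of the core transformation (plain Java streams rather than SDC's FileRef API, which the full post covers), replacing whole-file content amounts to reading the incoming stream and writing a transformed copy:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class Rot13Stream {
    // Apply ROT13 to each ASCII letter while copying from in to out.
    static void rot13(InputStream in, OutputStream out) throws IOException {
        int b;
        while ((b = in.read()) != -1) {
            if (b >= 'a' && b <= 'z') {
                b = 'a' + (b - 'a' + 13) % 26;
            } else if (b >= 'A' && b <= 'Z') {
                b = 'A' + (b - 'A' + 13) % 26;
            }
            out.write(b);
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(
            "Hello, FileRef!".getBytes(StandardCharsets.US_ASCII));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        rot13(in, out);
        System.out.println(out.toString("US-ASCII")); // prints: Uryyb, SvyrErs!
    }
}
```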

Read More

Evolving Avro Schemas with Apache Kafka and StreamSets Data Collector

Apache Avro is widely used in the Hadoop ecosystem for efficiently serializing data so that it may be exchanged between applications written in a variety of programming languages. Avro allows data to be self-describing; when data is serialized via Avro, its schema is stored with it. Applications reading Avro-serialized data at a later time read the schema and use it when deserializing the data.

While StreamSets Data Collector (SDC) frees you from the tyranny of schema, it can also work with tools that take a more rigid approach. In this blog, I'll explain a little of how Avro works and how SDC can integrate with Confluent Schema Registry’s distributed Avro schema storage layer when reading and writing data to Apache Kafka topics and other destinations.
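To make "self-describing" concrete, here is a minimal sketch using Avro's Java API: the schema is embedded in the data file when writing, and a reader recovers it without any out-of-band knowledge. The record type and field names are invented for illustration.

```java
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroRoundTrip {
    public static void main(String[] args) throws Exception {
        // A tiny, illustrative schema; in practice it would describe your data
        // (or be produced for you by SDC's Schema Generator).
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Game\",\"fields\":["
            + "{\"name\":\"home\",\"type\":\"string\"},"
            + "{\"name\":\"score\",\"type\":\"int\"}]}");

        File file = new File("games.avro");

        // Write: the schema is stored in the file header alongside the data.
        GenericRecord rec = new GenericData.Record(schema);
        rec.put("home", "Vikings");
        rec.put("score", 24);
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, file);
            writer.append(rec);
        }

        // Read: no schema is supplied; it is recovered from the file itself.
        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
            System.out.println("Embedded schema: " + reader.getSchema());
            for (GenericRecord r : reader) {
                System.out.println(r);
            }
        }
    }
}
```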

Read More

Ask StreamSets: Questions and Answers for the StreamSets Community

It's fair to say that most developers are familiar with Stack Overflow and the Stack Exchange network of question and answer sites. Q&A sites such as Stack Overflow serve communities of users focused around a particular topic or discipline – in the case of Stack Overflow, programming. Today, we're launching Ask StreamSets, a Q&A site for the StreamSets community.

Read More

Cache Salesforce Data in Redis with StreamSets Data Collector

Redis is an open-source, in-memory, NoSQL database implementing a networked key-value store with optional persistence to disk. Perhaps the most popular key-value database, Redis is widely used for caching web pages, sessions and other objects that require blazingly fast access – lookups are typically in the millisecond range.

At RedisConf 2017 I presented a session, Cache All The Things! Data Integration via Jedis (slides), looking at how the open source Jedis library provides a small, sane, easy-to-use Java interface to Redis. The session also showed how a StreamSets Data Collector (SDC) pipeline can read data from a platform such as Salesforce, write it to Redis via Jedis, and keep Redis up to date by subscribing to change notifications from Salesforce and writing new and updated records as they arrive. In this blog entry, I'll describe how I built the SDC pipeline I showed during my session.
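The Jedis calls involved are compact. Here's a minimal sketch of caching a record and reading it back; the host, port, key, and field names are placeholders, and in the real pipeline SDC's Redis destination does the writing for you.

```java
import redis.clients.jedis.Jedis;

public class CacheContact {
    public static void main(String[] args) {
        // Placeholder host/port for a local Redis instance.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Cache a Salesforce-style record as a Redis hash, keyed by its Id.
            String key = "Contact:0035500000BtPxoAAF"; // hypothetical Salesforce Id
            jedis.hset(key, "FirstName", "Pat");
            jedis.hset(key, "LastName", "Patterson");
            jedis.expire(key, 3600); // optional TTL, in seconds

            // Fast lookup by key.
            System.out.println(jedis.hgetAll(key));
        }
    }
}
```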

Read More

Triggering Databricks Notebook Jobs from StreamSets Data Collector

Last December, I covered Continuous Data Integration with StreamSets Data Collector and Spark Streaming on Databricks. In StreamSets Data Collector (SDC) version 2.5.0.0 we added the Spark Executor, allowing your pipelines to trigger a Spark application, running on Apache YARN or Databricks. I'm going to cover the latter in this blog post, showing you how to trigger a notebook job on Databricks from events in a pipeline, generating analyses and visualizations on demand.
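Under the hood, this kind of trigger amounts to a call to the Databricks Jobs REST API. The sketch below is a rough illustration of that call, not SDC's Spark Executor itself; the workspace URL, access token, and job id are placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RunNotebookJob {
    public static void main(String[] args) throws Exception {
        // Placeholders: your workspace URL, a personal access token, and a notebook job id.
        URL url = new URL("https://example.cloud.databricks.com/api/2.0/jobs/run-now");
        String token = System.getenv("DATABRICKS_TOKEN");
        String body = "{\"job_id\": 123}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // HTTP 200 with a run_id in the response body means the notebook job was queued.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```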

Read More

Visualizing and Analyzing Salesforce Data with Neo4j

Graph databases represent and store data in terms of nodes, edges and properties, allowing quick, easy retrieval of complex hierarchical structures that may be difficult to model in traditional relational databases. Neo4j is an open source graph database widely deployed in the community; in this blog entry I'll show you how to use StreamSets Data Collector (SDC) to read case data from Salesforce and load it into the graph database using Neo4j's JDBC driver.
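As a rough illustration of the JDBC side only (in the pipeline, SDC's JDBC destination issues the statements), loading a case node through Neo4j's JDBC driver looks roughly like this. The Bolt URL, credentials, and field values are placeholders, and the driver must be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LoadCase {
    public static void main(String[] args) throws Exception {
        // Placeholder Bolt URL and credentials for a local Neo4j instance.
        String url = "jdbc:neo4j:bolt://localhost:7687";
        try (Connection conn = DriverManager.getConnection(url, "neo4j", "secret")) {
            // Cypher goes through the standard JDBC API; MERGE keeps reloads idempotent.
            String cypher = "MERGE (c:Case {CaseNumber: ?}) SET c.Subject = ?, c.Status = ?";
            try (PreparedStatement stmt = conn.prepareStatement(cypher)) {
                stmt.setString(1, "00001026");      // hypothetical Salesforce case number
                stmt.setString(2, "Broken widget");
                stmt.setString(3, "New");
                stmt.executeUpdate();
            }
        }
    }
}
```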

Read More