In this blog post, I will present a step-by-step guide on how to scale Data Collector instances on Azure Kubernetes Service (AKS) using provisioning agents, which help automate upgrading and scaling resources on demand without stopping the execution of pipeline jobs. AKS removes the complexity of implementing, installing, and maintaining Kubernetes in Azure and you only […]
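As a rough illustration of what the provisioning agent ends up managing, a Data Collector deployment on AKS might look like the following minimal sketch; the image tag, replica count, and labels here are assumptions for illustration, not taken from the post.

```yaml
# Minimal sketch of a Data Collector Deployment on AKS.
# Image tag, replicas, and labels are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datacollector
spec:
  replicas: 3
  selector:
    matchLabels:
      app: datacollector
  template:
    metadata:
      labels:
        app: datacollector
    spec:
      containers:
        - name: datacollector
          image: streamsets/datacollector:latest
          ports:
            - containerPort: 18630  # default Data Collector UI port
```

Scaling then becomes a matter of changing `replicas` (or letting the provisioning agent do so) rather than touching individual instances.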
In my previous blog post, I demonstrated how to achieve low-latency inference using Databricks ML models in StreamSets. Now let’s say you have a dataflow pipeline that is ingesting data, enriching it, performing transformations, and, based on certain condition(s), you’d like to (re)train the Databricks ML model. For instance, using a different value for the hyperparameter n_estimators […]
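To make the hyperparameter change concrete, here is a minimal sketch of retraining with a different `n_estimators` value. The original post uses Databricks ML; scikit-learn’s `RandomForestRegressor` (and the synthetic data) stand in here purely to show the retrain step, and are not from the post.

```python
# Illustrative sketch: retraining with a different n_estimators value.
# scikit-learn stands in for the Databricks ML model in the post.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=42)

# Initial model.
model = RandomForestRegressor(n_estimators=10, random_state=42).fit(X, y)

# Condition met (e.g. new data arrived, score dropped): retrain with a new value.
retrained = RandomForestRegressor(n_estimators=50, random_state=42).fit(X, y)

print(model.n_estimators, retrained.n_estimators)  # 10 50
```

In the pipeline scenario described above, the retrain would be triggered by a condition evaluated on the flowing data rather than run by hand.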
Although the recent public preview of Amazon Managed Streaming for Kafka (MSK) certainly made headlines, Kinesis remains Amazon’s fully supported, production-grade real-time streaming service. In this blog post, I’ll show you how to get started using StreamSets Data Collector to build dataflow pipelines that send data to and receive data from Amazon Kinesis Data Streams.
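For readers unfamiliar with the Kinesis API that Data Collector drives under the hood, each record sent to a stream is a binary payload plus a partition key. The sketch below shapes records into the entries that a `PutRecords` call expects; the helper name, the commented `boto3` call, and the stream name are assumptions for illustration.

```python
# Sketch of shaping records into Kinesis PutRecords entries
# (binary Data payload plus a PartitionKey). A real pipeline would
# hand these entries to boto3's kinesis.put_records().
import json

def to_kinesis_entries(records, key_field):
    """Serialize each record as JSON and use one of its fields as the partition key."""
    return [
        {"Data": json.dumps(r).encode("utf-8"), "PartitionKey": str(r[key_field])}
        for r in records
    ]

entries = to_kinesis_entries(
    [{"sensor": "a1", "temp": 21.5}, {"sensor": "b2", "temp": 19.0}],
    key_field="sensor",
)
# kinesis_client.put_records(StreamName="my-stream", Records=entries)
print(entries[0]["PartitionKey"])  # a1
```

The partition key determines which shard a record lands on, so choosing a field with good cardinality matters for throughput.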
This post is summarized content from a full tutorial at https://academy.datastax.com/content/ingesting-data-relational-databases-cassandra-streamsets How do you ingest from an existing relational database (RDBMS) to an Apache Cassandra or DataStax Enterprise cluster? What about a one-time batch loading of historical data vs. streaming changes? I know what some of you are […]
Mike Fuller, a consultant at Red Pill Analytics, has been working on ingesting data into Snowflake’s cloud data warehouse using StreamSets for Snowflake. In this guest blog post, Mike explains how he was able to replicate an Oracle database to Snowflake using the new functionality, both for initial load and with change data capture.
StreamSets is proud to announce its new partnership with Snowflake and the general availability release of StreamSets for Snowflake. As enterprises move more of their big data workloads to the cloud, it becomes imperative that data operations be more resilient and adaptive in order to continue serving the business’s needs. This is why StreamSets has partnered with […]
Overview You have options when bulk loading data into Redshift from relational database (RDBMS) sources. These options include manual processes or one of the numerous hosted as-a-service offerings. But if you have broader requirements than simply importing, you need another option. Your company may have requirements such as adhering to enterprise security policies which […]
The Encrypt and Decrypt processor, introduced in StreamSets Data Collector 3.5.0, uses the Amazon AWS Encryption SDK to encrypt and decrypt data within a dataflow pipeline, and a variety of mechanisms, including the Amazon AWS Key Management Service, to manage encryption keys. In this blog post, I’ll walk through the basics of working with encryption […]
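To illustrate the encrypt/decrypt round trip the processor performs on field data, here is a minimal sketch. Note the assumptions: the processor itself uses the AWS Encryption SDK (with key management options such as AWS KMS), whereas this sketch uses `Fernet` from the `cryptography` package purely as a stand-in to show the round trip, and the locally generated key stands in for a managed key.

```python
# Illustrative encrypt/decrypt round trip. The processor in the post uses the
# AWS Encryption SDK; Fernet is a stand-in here, and the locally generated key
# stands in for a key from a service such as AWS KMS.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"credit_card=4111-1111-1111-1111")
plaintext = f.decrypt(ciphertext)
print(plaintext)  # b'credit_card=4111-1111-1111-1111'
```

In the pipeline, this happens per record on the configured fields, so sensitive values can travel through intermediate stages encrypted and be decrypted only where needed.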
StreamSets is excited to announce the immediate availability of StreamSets for Snowflake, the first DataOps platform for Snowflake. Now users can extend their DataOps environments to the popular Snowflake service. StreamSets makes copying data from databases, streams, and event processing directly into your cloud EDW simple, without complex schema design and hand-coding. Users get high […]
Introduction Data warehouses are a critical component of modern data architecture in enterprises that leverage massive amounts of data to drive the quality of their products and services. A data warehouse is an OLAP (Online Analytical Processing) database that collects data from transactional databases such as billing, CRM, and ERP systems, and provides a layer on top […]