Upload Files to Snowflake

Upload Data to Snowflake with Pre-defined Processors for Free

Moving data from storage or directories to Snowflake or any other destination? Standardize, normalize, and load file data automatically with over 60 pre-built processors.

Copy Pristine Datasets into Snowflake on Autopilot

Receiving files with little to no standardization or consistency in structure? It’s a common frustration for data engineers. When new files land in a location such as AWS S3 or a local directory behind your firewall, let StreamSets go to work, automatically reading the files and applying processors to the data to ensure it’s complete, consistent, and valid. Best of all, you’re automatically notified of processing errors when something falls outside the boundaries you’ve set.

Field Remover to Eliminate Unnecessary Data

Got a file from your sales team in Germany with junky personal notes included? Or maybe a file with duplicate data? Remove a local team’s “notes” field along with any other fields that aren’t needed for downstream analytics. With Field Remover, easily select and keep the fields you want and discard those you don’t before copying into Snowflake.
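If it helps to see the idea in code, here’s a minimal Python sketch of the allowlist-style logic behind Field Remover. The field names are hypothetical; in StreamSets you select fields in the processor’s configuration rather than writing code.

```python
# Minimal sketch of the allowlist idea behind Field Remover (field names
# are hypothetical; in StreamSets you pick fields in the processor's UI).

def keep_fields(record: dict, fields_to_keep: set) -> dict:
    """Return a copy of the record containing only the allowed fields."""
    return {name: value for name, value in record.items() if name in fields_to_keep}

record = {"order_id": 1042, "amount": 99.50, "notes": "call Hans re: lunch"}
print(keep_fields(record, {"order_id", "amount"}))
# {'order_id': 1042, 'amount': 99.5}
```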

Try Field Remover Now

Expression Evaluator to Normalize Fields or Perform Calculations

Our uniquely flexible Expression Evaluator can perform calculations and write the results to new or existing fields, add or modify record header attributes and field attributes, and more. Easily drop in custom Jython, Java, or Groovy libraries to apply virtually any processing logic to your data.
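As a rough illustration, the Python sketch below mimics what an expression might do: compute a new field from existing ones and set a header attribute. The field names, attribute name, and values are invented for this example; in StreamSets you’d express this as an expression in the processor.

```python
# Rough Python equivalent of expression-evaluator-style logic (illustrative
# only; StreamSets expressions are configured in the processor, not coded).

def evaluate(record: dict, headers: dict) -> None:
    # Calculation written to a new field
    record["total"] = round(record["quantity"] * record["unit_price"], 2)
    # Record header attribute added alongside the data (name is made up)
    headers["source.region"] = "EMEA"

record, headers = {"quantity": 3, "unit_price": 4.25}, {}
evaluate(record, headers)
print(record, headers)
# {'quantity': 3, 'unit_price': 4.25, 'total': 12.75} {'source.region': 'EMEA'}
```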

Try Expression Evaluator Now

Stream Selector for Conditional Routing

Pulling in CSV data from several different customer reports in a single data pipeline? It’s unlikely you’ll need to process these files in the same way before uploading the data to Snowflake. Select and route records through your pipeline based on pre-set conditions with our Stream Selector.
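Conceptually, the routing works like the Python sketch below: each record follows the first condition it matches, and anything unmatched falls through to a default stream. The conditions and stream names here are invented for illustration.

```python
# Illustrative sketch of conditional routing: each record goes to the first
# stream whose condition it matches; unmatched records fall through to a
# default stream. Conditions and stream names are invented for this example.

def route(record: dict) -> str:
    if record.get("country") == "DE":
        return "german_reports"   # e.g. reports needing locale-specific handling
    if record.get("amount", 0) > 10_000:
        return "large_orders"     # e.g. high-value records needing extra checks
    return "default"              # everything else

for rec in [{"country": "DE"}, {"amount": 25_000}, {"country": "US", "amount": 5}]:
    print(route(rec))  # german_reports, large_orders, default
```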

Try Stream Selector Now

Field Renamer to Rename Record Fields

Matching source and destination fields is a constant battle. You could explicitly specify each field, but that’s laborious, not to mention brittle in the face of data drift. Field Renamer handles new fields appearing in the input data automatically: configure its behavior when a source field does not exist, when a target field with a matching name already exists, when a source field matches multiple source field expressions, and more.
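The Python sketch below approximates the idea: pattern-based renaming automatically covers new fields that follow the same naming convention, and a simple keep-the-original policy stands in for one of Field Renamer’s configurable collision behaviors. The pattern and policy are illustrative, not the processor’s actual API.

```python
import re

# Sketch of pattern-based renaming (illustrative, not the StreamSets API):
# any field matching the pattern is renamed, so new fields that follow the
# same naming convention are picked up automatically as data drifts.

def rename_fields(record: dict, pattern: str, replacement: str) -> dict:
    renamed = {}
    for name, value in record.items():
        new_name = re.sub(pattern, replacement, name)
        if new_name in renamed:          # target field already exists
            renamed[name] = value        # keep the source field's original name
        else:
            renamed[new_name] = value
    return renamed

print(rename_fields({"cust_name": "Ada", "cust_city": "Berlin"}, r"^cust_", "customer_"))
# {'customer_name': 'Ada', 'customer_city': 'Berlin'}
```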

Try Field Renamer Now

Field Flattener to Ensure Files Meet Conditions Before Loading

Data formats such as Avro and JSON sometimes represent hierarchical structures, where records contain fields that are themselves a collection of fields, such as an address field. Many destinations, such as Snowflake or Delta Lake, however, require a ‘flat’ record, where each field is a single string, integer, etc. Use Field Flattener to flatten the structure of your entire record, or just a specific field, automatically before uploading data to Snowflake.
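Here’s a minimal Python sketch of what flattening does to a record: nested fields are collapsed into top-level fields with dotted names. The dot separator and record layout are illustrative.

```python
# Minimal sketch of flattening a nested record into the 'flat' shape a
# Snowflake table expects. The dot separator and record layout are
# illustrative; Field Flattener can target the whole record or one field.

def flatten(record: dict, prefix: str = "", sep: str = ".") -> dict:
    flat = {}
    for name, value in record.items():
        key = f"{prefix}{sep}{name}" if prefix else name
        if isinstance(value, dict):
            flat.update(flatten(value, key, sep))  # recurse into nested fields
        else:
            flat[key] = value
    return flat

nested = {"id": 7, "address": {"street": "Main St", "city": "Springfield"}}
print(flatten(nested))
# {'id': 7, 'address.street': 'Main St', 'address.city': 'Springfield'}
```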

Try Field Flattener Now

Our customers run millions of data pipelines using StreamSets

StreamSets Does More Than Simplify Data Processing

Easy to Start, Easy to Run, Easy to Expand

Handle Data Drift Automatically

Deploy Securely in Any Environment

What Our Customers Say

“There is no data drift coming into play and I usually never get a surprise about how the data changes. StreamSets picks up the hiccups in the transactional system.”
Jeffrey Jennings, VP of Data, Acertus
"With StreamSets, we've helped customers in multiple verticals to collect data from a wide variety of different sources outside of just databases, including data lakes, APIs, and sensors. The time between data collection and delivery reduces from overnight to near real-time while increased monitoring capabilities result in cost savings and resiliency."
George Barrett, Solutions Engineer, SME