Post Upgrade Tasks

In some situations, you must complete tasks within Data Collector or your Control Hub on-premises installation after you upgrade.

Update Value Replacer Pipelines

With this version, Data Collector introduces a new Field Replacer processor and deprecates the Value Replacer processor.

The Field Replacer processor lets you define more complex conditions to replace values. For example, unlike the Value Replacer, the Field Replacer can replace values that fall within a specified range.

You can continue to use the deprecated Value Replacer processor in pipelines. However, the processor will be removed in a future release, so we recommend that you update pipelines to use the Field Replacer as soon as possible.

To update your pipelines, replace the Value Replacer processor with the Field Replacer processor. The Field Replacer replaces values in fields with nulls or with new values. In the Field Replacer, use field path expressions to replace values based on a condition.

For example, let's say that your Value Replacer processor is configured to replace null values in the product_id field with "NA" and to replace the "0289" store ID with "0132".

In the Field Replacer processor, you can configure the same replacements using field path expressions.
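The replacement rules from the example can be sketched in plain Python. This is an illustrative summary of the logic only, not Data Collector code; the field names come from the example above:

```python
def replace_fields(record):
    """Mimic the example rules: null product_id -> "NA",
    store_id "0289" -> "0132"."""
    if record.get("product_id") is None:
        record["product_id"] = "NA"
    if record.get("store_id") == "0289":
        record["store_id"] = "0132"
    return record

print(replace_fields({"product_id": None, "store_id": "0289"}))
# {'product_id': 'NA', 'store_id': '0132'}
```

In the actual Field Replacer, each rule becomes a field path expression with a replacement value, and conditional rules use expressions evaluated against the field value.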

Update Einstein Analytics Pipelines

With this version, the Einstein Analytics destination introduces a new append operation that lets you combine data into a single dataset. Configuring the destination to use dataflows to combine data into a single dataset is now deprecated.

You can continue to configure the destination to use dataflows. However, dataflows will be removed in a future release, so we recommend that you update pipelines to use the append operation as soon as possible.

Update Control Hub On-premises

If you use StreamSets Control Hub on-premises and you upgrade registered Data Collectors to a version higher than your current version of Control Hub, you must modify the Data Collector version range within your Control Hub installation.

By default, Control Hub works with registered Data Collectors from a minimum supported version up to the current version of Control Hub. You can customize this Data Collector version range. For example, if you use Control Hub on-premises version 2.7.2 and you upgrade registered Data Collectors to a later version, you must set the maximum Data Collector version that can work with Control Hub to the upgraded Data Collector version.

To modify the Data Collector version range:

  1. Log in to Control Hub as the default system administrator - the admin@admin user account.
  2. In the Navigation panel, click Administration > Data Collectors.
  3. Click the Component Version Range icon.
  4. Enter the upgraded Data Collector version as the maximum Data Collector version that can work with Control Hub.

Update Pipelines using Legacy Stage Libraries

With this version, a set of older stage libraries is no longer included with Data Collector. Pipelines that use these legacy stage libraries will not run until you perform one of the following tasks:
Use a current stage library
We strongly recommend that you upgrade your system and use a current stage library in the pipeline:
  1. Upgrade the system to a more current version.
  2. Install the stage library for the upgraded system.
  3. In the pipeline, edit the stage and select the appropriate stage library.
Install the legacy library
Though not recommended, you can still download and install the older stage libraries as custom stage libraries. For more information, see Legacy Stage Libraries.

Disable Cloudera Navigator Integration

With this version, the beta version of Cloudera Navigator integration is no longer available with Data Collector. Cloudera Navigator integration now requires a paid subscription. For more information about purchasing Cloudera Navigator integration, contact us.

When upgrading from a Data Collector version with Cloudera Navigator integration enabled to this version without a paid subscription, perform the following post-upgrade task:

Do not include the Cloudera Navigator properties when you configure the Data Collector configuration file. The properties to omit are:
  • lineage.publishers
  • lineage.publisher.navigator.def
  • All other properties with the lineage.publisher.navigator prefix
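As a sketch of the omission rule above, assuming the configuration is a standard line-oriented properties file (the helper function is illustrative, not a StreamSets tool):

```python
def strip_navigator_properties(lines):
    """Drop lineage.publishers and any property with the
    lineage.publisher.navigator prefix; keep everything else."""
    omitted_prefixes = ("lineage.publishers", "lineage.publisher.navigator")
    return [line for line in lines
            if not line.strip().startswith(omitted_prefixes)]

props = [
    "lineage.publishers=navigator",
    "lineage.publisher.navigator.def=streamsets-datacollector-lib::...",
    "http.port=18630",
]
print(strip_navigator_properties(props))  # ['http.port=18630']
```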

JDBC Multitable Consumer Query Interval Change

With this version, the Query Interval property is replaced by the new Queries per Second property.

Upgraded pipelines with the Query Interval specified using a constant or the default format and unit of time, ${10 * SECONDS}, have the new Queries per Second property calculated and defined as follows:
Queries per Second = Number of Threads / Query Interval (in seconds)
For example, say the origin uses three threads and Query Interval is configured for ${15 * SECONDS}. Then, the upgraded origin sets Queries per Second to 3 divided by 15, which is 0.2. This means the origin runs a maximum of two queries every 10 seconds.

The upgrade would occur the same way if Query Interval were set to 15.
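The conversion can be checked with a few lines of Python; this is plain arithmetic illustrating the upgrade rule, not Data Collector code:

```python
def queries_per_second(num_threads, query_interval_seconds):
    """Upgrade rule: Queries per Second = threads / interval (in seconds)."""
    return num_threads / query_interval_seconds

qps = queries_per_second(3, 15)
print(qps)       # 0.2
print(qps * 10)  # 2.0 -> at most two queries every 10 seconds
```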

Pipelines with a Query Interval configured to use other units of time, such as ${.1 * MINUTES}, or configured with a different expression format, such as ${SECONDS * 5}, are upgraded to use the default for Queries per Second, which is 10. This means the pipeline runs a maximum of 10 queries per second. The fact that these expressions are not upgraded correctly is noted in the Data Collector log.

Update the Queries per Second property as needed after the upgrade.

Update JDBC Query Consumer Pipelines used for SQL Server CDC Data

With this version, the Microsoft SQL Server CDC functionality in the JDBC Query Consumer origin has been deprecated and will be removed in a future release.

For pipelines that use the JDBC Query Consumer to process Microsoft SQL Server CDC data, replace the JDBC Query Consumer origin with another origin.

Update MongoDB Destination Upsert Pipelines

With this version, the MongoDB destination supports the replace and update operation codes, and no longer supports the upsert operation code. You can use a new Upsert flag in conjunction with the replace and update operations.

After upgrading from an earlier version, update the pipeline as needed to ensure that records passed to the destination do not use the upsert operation code (sdc.operation.type = 4). Records that use the upsert operation code are sent to error.

In previous releases, records flagged for upsert were treated in the MongoDB system as Replace operations with the Upsert flag set.

If you want to replicate the upsert behavior from earlier releases, perform the following steps:
  1. Configure the pipeline to use the Replace operation code.

Make sure that sdc.operation.type is set to 7 (Replace) instead of 4 (Upsert).

  2. In the MongoDB destination, enable the new Upsert property.
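The record-header change this implies can be sketched as follows, using the operation codes from the text (4 = upsert, 7 = replace); the helper name is illustrative, not Data Collector code:

```python
UPSERT = 4   # no longer supported by the MongoDB destination
REPLACE = 7  # use with the destination's new Upsert property enabled

def remap_operation(headers):
    """Replicate pre-upgrade upsert behavior: rewrite upsert codes
    to replace, leaving all other operation codes untouched."""
    if headers.get("sdc.operation.type") == UPSERT:
        headers["sdc.operation.type"] = REPLACE
    return headers

print(remap_operation({"sdc.operation.type": 4}))
# {'sdc.operation.type': 7}
```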

Time Zones in Stages

With this version, time zones have been organized and updated to use JDK 8 names. This should make it easier to select time zones in stage properties.

In the rare case that an upgraded pipeline uses a format not supported by JDK 8, edit the pipeline to select a compatible time zone.

Update Kudu Pipelines

Consider the following upgrade tasks for Kudu pipelines, based on the version that you are upgrading from:

Upgrade from versions earlier than
With this version, if the Kudu destination receives a change data capture log from one of the following source systems, you must specify the source system so that the destination can determine the format of the log: Microsoft SQL Server, Oracle CDC Client, MySQL Binary Log, or MongoDB Oplog.
Previously, the Kudu destination could not directly receive changed data from these source systems. You either had to include a scripting processor in the pipeline to modify the field paths in the record to a format that the destination could read, or you had to add multiple Kudu destinations to the pipeline, one per operation type, and include a Stream Selector processor to route records to the destinations by operation type.
If you implemented one of these workarounds, then after upgrading, modify the pipeline to remove the scripting processor or the Stream Selector processor and the multiple destinations. In the Kudu destination, set the Change Log Format to the appropriate format of the log: Microsoft SQL Server, Oracle CDC Client, MySQL Binary Log, or MongoDB Oplog.
Upgrade from versions earlier than
With this version, Data Collector supports Apache Kudu version 1.0.x and no longer supports earlier Kudu versions. To upgrade pipelines that contain a Kudu destination from an earlier Data Collector version, upgrade your Kudu cluster and then add a stage alias for the earlier Kudu version to the Data Collector configuration file, $SDC_CONF/

The configuration file includes stage aliases to enable backward compatibility for pipelines created with earlier versions of Data Collector.

To update Kudu pipelines:

  1. Upgrade your Kudu cluster to version 1.0.x.

    For instructions, see the Apache Kudu documentation.

  2. Open the $SDC_CONF/ file and locate the following comment:
    # Stage aliases for mapping to keep backward compatibility on pipelines when stages move libraries
  3. Below the comment, add a stage alias for the earlier Kudu version as follows:
    stage.alias.streamsets-datacollector-apache-kudu-<version>-lib, com_streamsets_pipeline_stage_destination_kudu_KuduDTarget = streamsets-datacollector-apache-kudu_1_0-lib, com_streamsets_pipeline_stage_destination_kudu_KuduDTarget
    Where <version> is the earlier Kudu version: 0_7, 0_8, or 0_9. For example, if you previously used Kudu version 0.9, add the following stage alias:
    stage.alias.streamsets-datacollector-apache-kudu-0_9-lib, com_streamsets_pipeline_stage_destination_kudu_KuduDTarget = streamsets-datacollector-apache-kudu_1_0-lib, com_streamsets_pipeline_stage_destination_kudu_KuduDTarget
  4. Restart Data Collector to enable the changes.

Update JDBC Multitable Consumer Pipelines

With this version, the JDBC Multitable Consumer origin can read from views in addition to tables. The origin now reads from all tables and all views that are included in the defined table configurations.

When upgrading pipelines that contain a JDBC Multitable Consumer origin from an earlier Data Collector version, review the table configurations to determine whether any views are included. If a table configuration includes views that you do not want to read, exclude them from the configuration.
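Exclusion is typically pattern-based. As a sketch of the idea only (the "v_" view-naming convention and the table list below are assumptions, not anything from your database):

```python
import re

# Assumption: views in this hypothetical schema follow a "v_" naming convention.
exclusion_pattern = re.compile(r"v_.*")

tables = ["orders", "customers", "v_daily_sales", "v_inventory"]
included = [t for t in tables if not exclusion_pattern.fullmatch(t)]
print(included)  # ['orders', 'customers']
```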

Update Vault Pipelines

With this version, Data Collector introduces a credential store API and credential expression language functions to access HashiCorp Vault secrets.

In addition, the Data Collector Vault integration now relies on Vault's App Role authentication backend.

Previously, Data Collector used Vault functions to access Vault secrets and relied on Vault's App ID authentication backend. StreamSets has deprecated the Vault functions, and HashiCorp has deprecated the App ID authentication backend.

After upgrading, update pipelines that use Vault functions in one of the following ways:

Use the new credential store expression language functions (recommended)
To use the new credential functions, install the Vault credential store stage library and define the configuration properties used to connect to Vault. Then, update each upgraded pipeline that includes stages using Vault functions to use the new credential functions to retrieve the credential values.
For details on using the Vault credential store system, see Vault Credential Store.
Continue to use the deprecated Vault functions
You can continue to use the deprecated Vault functions in pipelines. However, the functions will be removed in a future release, so we recommend that you update pipelines to use the credential functions as soon as possible.
To continue to use the Vault functions, make the following changes after upgrading:
  • Uncomment the single Vault EL property in the $SDC_CONF/ file.
  • The remaining Vault configuration properties have been moved to the $SDC_CONF/ file. The properties use the same names, with an added "credentialStore.vault.config" prefix. Copy any values that you customized in the previous file into the corresponding properties in the new file.
  • In the same file, define the Vault Role ID and Secret ID that Data Collector uses to authenticate with Vault. Defining an App ID for Data Collector is deprecated and will be removed in a future release.
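As an illustrative fragment only: the "credentialStore.vault.config" prefix comes from the text above, but the specific property names and values below are assumptions, so check them against your installation's configuration file:

```properties
# Copied from the previous Vault configuration, now using the new prefix
# (assumed property name):
credentialStore.vault.config.addr=https://vault.example.com:8200

# Role ID and Secret ID replace the deprecated App ID authentication
# (assumed property names):
credentialStore.vault.config.role.id=<role-id>
credentialStore.vault.config.secret.id=<secret-id>
```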
For details on using the Vault functions, see Accessing Vault Secrets with Vault Functions (Deprecated).

Configure JDBC Producer Schema Names

With this Data Collector version, you can use a Schema Name property to specify the database or schema name. In previous releases, you specified the database or schema name in the Table Name property.

Upgrading from a previous release does not require any configuration changes at this time. However, we recommend using the new Schema Name property, because the ability to specify a database or schema name with the table name might be deprecated in the future.

Evaluate Precondition Error Handling

With this Data Collector version, precondition error handling has changed.

The Precondition stage property allows you to define conditions that must be met for a record to enter the stage. Previously, records that did not meet all specified preconditions were passed to the pipeline for error handling. That is, the records were processed based on the Error Records pipeline property.

With this version, records that do not meet the specified preconditions are handled by the error handling configured for the stage, based on the On Record Error property on the General tab of the stage.

Review pipelines that use preconditions to verify that this change does not adversely affect the behavior of the pipelines.

Authentication for Docker Image

With this Data Collector version, the Docker image uses the form type of file-based authentication by default. As a result, you must use a Data Collector user account to log in to Data Collector. If you haven't set up custom user accounts, you can use the admin account shipped with Data Collector. The default login is: admin / admin.

Earlier versions of the Docker image used no authentication.

Configure Pipeline Permissions

This Data Collector version is designed for multitenancy and enables you to share and grant permissions on pipelines. Permissions determine the access level that users and groups have on pipelines.

In earlier versions of Data Collector without pipeline permissions, pipeline access is determined by roles. For example, any user with the Creator role could edit any pipeline.

In this version, roles are augmented with pipeline permissions. In addition to having the necessary role, users must also have the appropriate permissions to perform pipeline tasks.

For example, to edit a pipeline, a user with the Creator role must also have read and write permission on the pipeline. Without write permission, the user cannot edit the pipeline. Without read permission, the user cannot see the pipeline at all; it does not display in the list of available pipelines.

Note: With pipeline permissions enabled, all upgraded pipelines are initially visible only to users with the Admin role and the pipeline owner - the user who created the pipeline. To enable other users to work with pipelines, have an Admin user configure the appropriate permissions for each pipeline.

In this Data Collector version, pipeline permissions are disabled by default. To enable pipeline permissions, set the pipeline.access.control.enabled property to true in the Data Collector configuration file.
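For example, as a one-line properties fragment (the property name comes from the text above):

```properties
# Enable pipeline access control; it is disabled by default after upgrade.
pipeline.access.control.enabled=true
```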

Tip: You can configure pipeline permissions while the property is disabled, then enable the property after pipeline permissions are properly configured.

For more information about roles and permissions, see Roles and Permissions. For details about configuring pipeline permissions, see Sharing Pipelines.

Update Elasticsearch Pipelines

This Data Collector version includes an enhanced Elasticsearch destination that uses the Elasticsearch HTTP API. When upgrading pipelines that use the Elasticsearch destination from an earlier Data Collector version, review the value of the Default Operation property.

Review all upgraded Elasticsearch destinations to ensure that the Default Operation property is set to the correct operation. Upgraded Elasticsearch destinations have the Default Operation property set based on the configuration for the Enable Upsert property:

  • With upsert enabled, the default operation is set to INDEX.
  • With upsert not enabled, the default operation is set to CREATE, which requires a document ID.
Note: The Elasticsearch version 5 stage library is compatible with all versions of Elasticsearch. Earlier stage library versions have been removed.
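The upgrade rule for Default Operation can be summarized in a couple of lines; this is a plain restatement of the mapping above, not destination code:

```python
def upgraded_default_operation(upsert_enabled):
    """Upsert enabled -> INDEX; otherwise CREATE (which requires a document ID)."""
    return "INDEX" if upsert_enabled else "CREATE"

print(upgraded_default_operation(True))   # INDEX
print(upgraded_default_operation(False))  # CREATE
```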