Redis

Supported pipeline types:
  • Data Collector

The Redis destination writes data to Redis.

When you configure the destination, you specify the mode that the destination uses to process records. You also specify properties to connect to Redis, including the URI of the Redis server.

In batch mode, the Redis destination can use CRUD operations defined in the sdc.operation.type record header attribute to process data. When the CRUD operation is not specified in a record, the destination treats the record as an upsert. For information about Data Collector changed data processing and a list of CDC-enabled origins, see Processing Changed Data.

Mode

The Redis destination writes data to Redis using one of the following modes:

Batch mode
In batch mode, the destination writes data to Redis key-value pairs. You configure each key-value pair by selecting the incoming fields to use as the key and the value. You also select the data type of the Redis value. You can configure the destination to write to multiple key-value pairs.
For example, you are processing records that contain a String city field and a Map latitude_longitude field. Let’s assume that one record contains the following data:
city: {String} "San Francisco"
latitude_longitude: {Map}
    latitude: {String} "37.7749"
    longitude: {String} "-122.4194"
You select the city field as the Redis key, and the latitude_longitude field as the Redis value. Data Collector can convert the Map data type to the Redis Hash data type, so you select Hash as the data type of the Redis value.

When you run the pipeline, the Redis destination writes the key "San Francisco" to Redis with a Hash value containing the latitude and longitude.
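
For reference, that write is equivalent to a single Redis HSET command. The following is a minimal sketch with the redis-py client, assuming a local server; the client library and connection details are illustrations, not part of the destination configuration:

    import redis

    # Connect to a local Redis server; host and port are assumptions.
    r = redis.Redis(host="localhost", port=6379, db=0)

    # The destination maps the Map field to a Redis Hash keyed by the city value.
    r.hset("San Francisco", mapping={"latitude": "37.7749", "longitude": "-122.4194"})

    # Returns {b'latitude': b'37.7749', b'longitude': b'-122.4194'}
    print(r.hgetall("San Francisco"))
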
Publish mode
In publish mode, the destination publishes data as messages to a Redis channel, pushing each record as one message. You specify the channel to use and the data format of the messages.
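
Conceptually, each record results in one Redis PUBLISH call. A minimal sketch with the redis-py client, assuming a channel named weather and the JSON data format; both are placeholders:

    import json

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # One record, serialized with the configured data format, becomes one message.
    record = {"city": "San Francisco", "latitude": "37.7749", "longitude": "-122.4194"}
    r.publish("weather", json.dumps(record))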

Data Types for Batch Mode

When you configure the destination for batch mode, you select the incoming fields to use as the Redis key and value. You also select the data type of the Redis value. If needed, the Redis destination converts the Data Collector data type of the incoming value field to the selected Redis data type.

When appropriate, use a Field Type Converter processor earlier in the pipeline to convert data types.

The following table lists the Data Collector data types that can be converted to Redis data types:

Data Collector Data Type    Redis Data Type
String                      String
List                        List or Set
Map                         Hash
Note: The remaining Data Collector and Redis data types are not supported.
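
To make the conversions concrete, the sketch below pairs each supported Data Collector data type with the natural Redis command, using the redis-py client. It illustrates the table above, not the destination's internal implementation; all keys and values are made up:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    r.set("city", "San Francisco")                 # String -> Redis String
    r.rpush("stops", "Embarcadero", "Montgomery")  # List   -> Redis List
    r.sadd("tags", "west-coast", "bay-area")       # List   -> Redis Set
    r.hset("geo", mapping={"latitude": "37.7749",  # Map    -> Redis Hash
                           "longitude": "-122.4194"})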

Data Formats for Publish Mode

When you configure the destination for publish mode, the destination publishes messages to Redis based on the data format that you select. You can use the following data formats:
Avro
The stage writes records based on the Avro schema. You can use one of the following methods to specify the location of the Avro schema definition:
  • In Pipeline Configuration - Use the schema that you provide in the stage configuration.
  • In Record Header - Use the schema included in the avroSchema record header attribute.
  • Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry. The Confluent Schema Registry is a distributed storage layer for Avro schemas. You can configure the destination to look up the schema in the Confluent Schema Registry by the schema ID or subject.

    If using the Avro schema in the stage or in the record header attribute, you can optionally configure the stage to register the Avro schema with the Confluent Schema Registry. You can also optionally include the schema definition in the message. Omitting the schema definition can improve performance, but requires the appropriate schema management to avoid losing track of the schema associated with the data.

You can also compress data with an Avro-supported compression codec. When using Avro compression, avoid configuring any other compression properties in the stage.
Binary
The stage writes binary data from a single field in the record.
Delimited
The destination writes records as delimited data. When you use this data format, the root field must be list or list-map.
You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • PostgreSQL CSV - PostgreSQL comma-separated file.
  • PostgreSQL Text - PostgreSQL text file.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
  • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
JSON
The destination writes records as JSON data. You can use one of the following formats:
  • Array - Writes records as a single array, in which each element is a JSON representation of a record.
  • Multiple objects - Writes each record as a separate JSON object.
Protobuf
Writes one record in a message. Uses the user-defined message type and the definition of the message type in the descriptor file to generate the message.
For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
SDC Record
The destination writes records in the SDC Record data format.
Text
The destination writes data from a single text field to the destination system. When you configure the stage, you select the field to use.
You can configure the characters to use as record separators. By default, the destination uses a UNIX-style line ending (\n) to separate records.
When a record does not contain the selected text field, the destination can either report the missing field as an error or ignore the missing field. By default, the destination reports an error.
When configured to ignore a missing text field, the destination can discard the record or write the record separator characters to create an empty line for the record. By default, the destination discards the record.
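
Whichever data format you select, subscribers receive the serialized payload as ordinary Redis messages. A minimal consumer-side sketch with the redis-py client, assuming JSON messages published to a channel named weather; the channel name is a placeholder:

    import json

    import redis

    r = redis.Redis(host="localhost", port=6379)
    pubsub = r.pubsub()
    pubsub.subscribe("weather")

    for message in pubsub.listen():
        # Redis also delivers subscription confirmations; only entries of
        # type 'message' carry published data.
        if message["type"] == "message":
            record = json.loads(message["data"])
            print(record)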

Define the CRUD Operation

When using the Redis destination in batch mode, you can use CRUD operations to write to Redis. To use CRUD operations, define the CRUD operation record header attribute for each record earlier in the pipeline. Records without the attribute defined are treated as Upsert: new records are written and existing records are updated.

To use CRUD operations to write records, set the following CRUD operation record header attribute:
sdc.operation.type
When defined, the Redis destination uses the CRUD operation in the sdc.operation.type record header attribute when writing to Redis. The destination supports the following values for the sdc.operation.type attribute:
  • 1 for INSERT
  • 2 for DELETE
  • 3 for UPDATE
  • 4 for UPSERT
If your pipeline includes a CRUD-enabled origin that processes changed data, the destination simply reads the operation type from the sdc.operation.type header attribute that the origin generates. If your pipeline uses a non-CDC origin, you can use the Expression Evaluator or a scripting processor to define the record header attribute. For more information about Data Collector changed data processing and a list of CDC-enabled origins, see Processing Changed Data.
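
As a mental model only, the sketch below shows one plausible mapping from the four operation codes to Redis commands for a String value. It illustrates the semantics of the sdc.operation.type attribute, not the destination's actual code; the helper function is hypothetical:

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def apply_operation(op_type, key, value):
        # sdc.operation.type codes: 1=INSERT, 2=DELETE, 3=UPDATE, 4=UPSERT
        if op_type == 2:
            r.delete(key)      # DELETE removes the key
        elif op_type in (1, 3, 4):
            r.set(key, value)  # SET creates new keys and overwrites existing
                               # ones, so insert, update, and upsert all
                               # reduce to it in this simplified sketch
        else:
            raise ValueError(f"unsupported sdc.operation.type: {op_type}")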

Configuring a Redis Destination

Configure a Redis destination to write data to Redis.

  1. In the Properties panel, on the General tab, configure the following properties:
    • Name - Stage name.
    • Description - Optional description.
    • Stage Library - Library version that you want to use.
    • Required Fields - Fields that must include data for the record to be passed into the stage.
      Tip: You might include fields that the stage uses.
      Records that do not include all required fields are processed based on the error handling configured for the pipeline.
    • Preconditions - Conditions that must evaluate to TRUE to allow a record to enter the stage for processing. Click Add to create additional preconditions.
      Records that do not meet all preconditions are processed based on the error handling configured for the stage.
    • On Record Error - Error record handling for the stage:
      • Discard - Discards the record.
      • Send to Error - Sends the record to the pipeline for error handling.
      • Stop Pipeline - Stops the pipeline.
  2. On the Redis tab, configure the following properties:
    • URI - URI of the Redis server. Use the following format:
      redis://<host name>:<port number>/<database>
      You can omit the database if the server uses the default database. You can optionally include your password to log in to the Redis server. For example:
      redis://:<password>@<host name>:<port number>/<database>
    • Connection Timeout (sec) - Maximum time in seconds to wait for a connection. Default is 1000 seconds.
    • Mode - Mode used to write to Redis:
      • Batch - Writes data to Redis key-value pairs. In batch mode, you can use the CRUD operation header attribute to determine how the destination writes data to Redis.
      • Publish - Publishes data as messages to a Redis channel.
      Default is Batch.
    • Key - Incoming field to use for the Redis key. Used in batch mode only.
    • Value - Incoming field to use for the Redis value. Used in batch mode only.
    • Data Type - Data type of the Redis value. Used in batch mode only. Default is String.
    • Channel - Redis channel to publish the messages to. Used in publish mode only.
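
    To sanity-check a URI before configuring the stage, you can connect with the same string outside the pipeline. A minimal sketch with the redis-py client, which accepts the same redis:// format; the host, password, and database shown are placeholders:

      import redis

      # Same URI shape the destination expects:
      # redis://[:password@]<host name>:<port number>/<database>
      r = redis.Redis.from_url("redis://:myPassword@localhost:6379/0")
      r.ping()  # raises redis.exceptions.ConnectionError if unreachable
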
  3. When using publish mode, on the Data Format tab, configure the following property:
    • Data Format - Data format of the data:
      • Avro
      • Binary
      • Delimited
      • JSON
      • Protobuf
      • SDC Record
      • Text
  4. For Avro data, on the Data Format tab, configure the following properties:
    • Avro Schema Location - Location of the Avro schema definition to use when writing data:
      • In Pipeline Configuration - Use the schema that you provide in the stage configuration.
      • In Record Header - Use the schema in the avroSchema record header attribute. Use only when the avroSchema attribute is defined for all records.
      • Confluent Schema Registry - Retrieve the schema from the Confluent Schema Registry.
    • Avro Schema - Avro schema definition used to write the data. You can optionally use the runtime:loadResource function to use a schema definition stored in a runtime resource file.
    • Register Schema - Select to register a new Avro schema with the Confluent Schema Registry.
    • Schema Registry URLs - Confluent Schema Registry URLs used to look up the schema or to register a new schema. To add a URL, click Add. Use the following format to enter the URL:
      http://<host name>:<port number>
    • Look Up Schema By - Method used to look up the schema in the Confluent Schema Registry:
      • Subject - Look up the specified Avro schema subject.
      • Schema ID - Look up the specified Avro schema ID.
    • Schema Subject - Avro schema subject to look up or to register in the Confluent Schema Registry. If the specified subject has multiple schema versions, the destination uses the latest schema version for that subject. To use an older version, find the corresponding schema ID, and then set the Look Up Schema By property to Schema ID.
    • Schema ID - Avro schema ID to look up in the Confluent Schema Registry.
    • Include Schema - Includes the schema in each message.
      Note: Omitting the schema definition can improve performance, but requires the appropriate schema management to avoid losing track of the schema associated with the data.
    • Avro Compression Codec - The Avro compression type to use. When using Avro compression, do not enable other compression available in the destination.

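    A rough picture of the Include Schema trade-off above: when the schema definition is omitted, the message body is only the encoded datum. The following is a minimal sketch with the fastavro library; the library choice, schema, and field names are assumptions for illustration:

      import io

      import fastavro

      schema = fastavro.parse_schema({
          "type": "record",
          "name": "City",
          "fields": [
              {"name": "city", "type": "string"},
              {"name": "latitude", "type": "string"},
              {"name": "longitude", "type": "string"},
          ],
      })

      # schemaless_writer emits only the encoded datum, with no embedded
      # schema, so the reader must already know the schema, for example
      # through a schema registry.
      buf = io.BytesIO()
      fastavro.schemaless_writer(buf, schema, {
          "city": "San Francisco", "latitude": "37.7749", "longitude": "-122.4194",
      })
      payload = buf.getvalue()
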
  5. For binary data, on the Data Format tab, configure the following property:
    • Binary Field Path - Field that contains the binary data.
  6. For delimited data, on the Data Format tab, configure the following properties:
    • Delimiter Format - Format for delimited data:
      • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
      • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
      • MS Excel CSV - Microsoft Excel comma-separated file.
      • MySQL CSV - MySQL comma-separated file.
      • Tab-Separated Values - File that includes tab-separated values.
      • PostgreSQL CSV - PostgreSQL comma-separated file.
      • PostgreSQL Text - PostgreSQL text file.
      • Custom - File that uses user-defined delimiter, escape, and quote characters.
    • Header Line - Indicates whether to create a header line.
    • Replace New Line Characters - Replaces new line characters with the configured string. Recommended when writing data as a single line of text.
    • New Line Character Replacement - String to replace each new line character. For example, enter a space to replace each new line character with a space. Leave empty to remove the new line characters.
    • Delimiter Character - Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter. Default is the pipe character ( | ).
    • Escape Character - Escape character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. Default is the backslash character ( \ ).
    • Quote Character - Quote character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. Default is the quotation mark character ( " ).
    • Charset - Character set to use when writing data.
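
    To see how a custom delimiter, escape, and quote character interact, here is a standalone sketch using Python's csv module with the documented defaults. It approximates the behavior for illustration; it is not the stage's own writer:

      import csv
      import io

      out = io.StringIO()
      writer = csv.writer(out, delimiter="|", quotechar='"',
                          escapechar="\\", lineterminator="\n")
      writer.writerow(["San Francisco", "37.7749", "-122.4194"])
      # A value containing the delimiter is wrapped in the quote character.
      writer.writerow(["St. Bees|Cumbria", "54.49", "-3.59"])

      print(out.getvalue())
      # San Francisco|37.7749|-122.4194
      # "St. Bees|Cumbria"|54.49|-3.59
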
  7. For JSON data, on the Data Format tab, configure the following properties:
    • JSON Content - Method to write JSON data:
      • JSON Array of Objects - Writes records as a single array, in which each element is a JSON representation of a record.
      • Multiple JSON Objects - Writes each record as a separate JSON object.
    • Charset - Character set to use when writing data.
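
    The difference between the two JSON Content options, sketched with Python's standard json module; the records are made up:

      import json

      records = [{"city": "San Francisco"}, {"city": "Oakland"}]

      # JSON Array of Objects: a single array whose elements are the records.
      as_array = json.dumps(records)
      # -> [{"city": "San Francisco"}, {"city": "Oakland"}]

      # Multiple JSON Objects: one standalone object per record.
      as_objects = "\n".join(json.dumps(r) for r in records)
      # -> {"city": "San Francisco"}
      #    {"city": "Oakland"}
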
  8. For protobuf data, on the Data Format tab, configure the following properties:
    • Protobuf Descriptor File - Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.
      For more information about environment variables, see Data Collector Environment Configuration. For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
    • Message Type - Fully-qualified name for the message type to use when writing data. Use the format <package name>.<message type>. Use a message type defined in the descriptor file.
  9. For text data, on the Data Format tab, configure the following properties:
    • Text Field Path - Field that contains the text data to be written. All data must be incorporated into the specified field.
    • Record Separator - Characters to use to separate records. Use any valid Java string literal. For example, when writing to Windows, you might use \r\n to separate records. By default, the destination uses \n.
    • On Missing Field - When a record does not include the text field, determines whether the destination reports the missing field as an error or ignores the missing field.
    • Insert Record Separator if No Text - When configured to ignore a missing text field, inserts the configured record separator string to create an empty line. When not selected, discards records without the text field.
    • Charset - Character set to use when writing data.