gRPC Client

Supported pipeline types:
  • Data Collector Edge

The gRPC Client origin processes data from a gRPC server by calling gRPC server methods. The origin can call Unary RPC and Server Streaming RPC methods. This origin is a Technology Preview feature. It is not meant for use in production.

Use the gRPC Client origin only in pipelines configured for edge execution mode. Run the pipeline on StreamSets Data Collector Edge (SDC Edge).

When you configure the gRPC Client origin, you specify the resource URL of the gRPC server and the service method that the origin calls. You also define whether the origin uses the unary or server streaming RPC method to call the server.

You can specify optional headers that the origin sends with the request, and configure whether the gRPC server can send default values in the response. You can also optionally configure the origin to use SSL/TLS to securely connect to the gRPC server.

For more information about installing SDC Edge, designing edge pipelines, and running and maintaining edge pipelines, see Meet StreamSets Data Collector Edge.

Prerequisite

Before the gRPC Client origin can process data from a gRPC server, you must enable reflection for the server.

To enable server reflection, import the gRPC reflection package and then register the reflection service on the gRPC server, as described in the gRPC Server Reflection Tutorial.
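
For example, a minimal Go server sketch that enables reflection looks like the following. The listen address is an arbitrary example and the commented-out service registration is a hypothetical placeholder; the call that matters for this origin is reflection.Register.

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/reflection"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051") // arbitrary example port
        if err != nil {
            log.Fatalf("failed to listen: %v", err)
        }

        srv := grpc.NewServer()
        // Register your own service implementations here, for example:
        // pb.RegisterGreeterServer(srv, &greeterServer{})

        // Register the reflection service so that clients such as the
        // gRPC Client origin can discover the available services and methods.
        reflection.Register(srv)

        if err := srv.Serve(lis); err != nil {
            log.Fatalf("failed to serve: %v", err)
        }
    }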

Server Method Type

The gRPC Client origin can use one of the following method types to call the gRPC server:

Unary RPC method
With the unary RPC method, the origin sends a single request to the gRPC server and receives a single response back, just like a normal function call.
Server streaming RPC method
With the server streaming RPC method, the origin sends a request to the gRPC server and receives a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages.

For more information about these gRPC server method types, see the gRPC documentation.
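
As a rough illustration of the difference between the two method types, the following Go sketch shows how a generic gRPC client consumes each one. The generated client, method, and message names (pb.NewExampleClient, GetRecord, StreamRecords, pb.Request) are hypothetical placeholders, not names used by the origin.

    package main

    import (
        "context"
        "io"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        pb "example.com/example/pb" // hypothetical generated stubs
    )

    func main() {
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()
        client := pb.NewExampleClient(conn)

        // Unary RPC: one request, one response.
        resp, err := client.GetRecord(context.Background(), &pb.Request{})
        if err != nil {
            log.Fatalf("unary call: %v", err)
        }
        log.Printf("single response: %v", resp)

        // Server streaming RPC: one request, then a stream of responses
        // that the client reads until there are no more messages.
        stream, err := client.StreamRecords(context.Background(), &pb.Request{})
        if err != nil {
            log.Fatalf("streaming call: %v", err)
        }
        for {
            msg, err := stream.Recv()
            if err == io.EOF {
                break // no more messages in the stream
            }
            if err != nil {
                log.Fatalf("recv: %v", err)
            }
            log.Printf("streamed response: %v", msg)
        }
    }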

Data Formats

The gRPC Client origin processes data differently based on the data format. The origin processes the following types of data:

Delimited
Generates a record for each delimited line. You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • PostgreSQL CSV - PostgreSQL comma-separated file.
  • PostgreSQL Text - PostgreSQL text file.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
  • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
You can use a list or list-map root field type for delimited data, and optionally include field names from a header line, when available. For more information about the root field types, see Delimited Data Root Field Type.
When using a header line, you can enable handling records with additional columns. The additional columns are named using a custom prefix and sequentially increasing integers, such as _extra_1, _extra_2 (see the example after this list). When you disallow additional columns, records that include additional columns are sent to error.
You can also replace a string constant with null values.
When a record exceeds the maximum record length defined for the stage, the stage processes the record based on the error handling configured for the stage.
JSON
Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
When an object exceeds the maximum object length defined for the origin, the origin processes the object based on the error handling configured for the stage.
Text
Generates a record for each line of text or for each section of text based on a custom delimiter.
When a line or section exceeds the maximum line length defined for the origin, the origin truncates it. The origin adds a boolean field named Truncated to indicate if the line was truncated.
For more information about processing text with a custom delimiter, see Text Data Format with Custom Delimiters.
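
For example, with a header line, extra-column handling enabled, and the default _extra_ prefix, a hypothetical two-column header and a data row with two additional values produce the following delimited record fields:

  Header line:  id,name
  Data line:    1,abc,foo,bar
  Record:       id=1, name=abc, _extra_1=foo, _extra_2=bar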

Configuring a gRPC Client Origin

Configure a gRPC Client origin to read from a gRPC server. The gRPC Client origin is a Technology Preview feature. It is not meant for use in production.

  1. In the Properties panel, on the General tab, configure the following properties:
    General Property - Description
    Name - Stage name.
    Description - Optional description.
    On Record Error - Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline.
  2. On the gRPC tab, configure the following properties:
    gRPC Property - Description
    Resource URL - URL of the gRPC server.
    Service Method - Service method on the gRPC server to call. Use the following format:
    <serviceName>/<methodName>
    Request Data - Optional data to send as an argument for the gRPC service method. Enter the data in the format required by the service method.
    Method Type - Method type to call the gRPC server:
    • Unary RPC
    • Server streaming RPC
    Polling Interval (ms) - Milliseconds to wait before checking for new data. Used for the unary RPC server method only. Default is 5,000 milliseconds.
    Connect Timeout (secs) - Maximum number of seconds to wait for a connection. Use 0 to wait indefinitely. Default is 10 seconds.
    Keep Alive Time (secs) - Maximum time in seconds that the connection to the gRPC server can remain idle. After receiving no response for this amount of time, the origin checks with the server to see if the transport is still alive. Minimum value is 10 seconds. If set to less than 10, the origin uses 10.
    Additional Headers - Optional headers to include in the request. Using simple or bulk edit mode, click the Add icon to add additional headers.
    Emit Defaults - Enables the gRPC server to send default values for responses.
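
    For example, to call a server streaming method on a local test server, you might enter values like the following. The server address and the service and method names are hypothetical, and the request data format depends on the service method.
      Resource URL: localhost:50051
      Service Method: helloworld.Greeter/SayHelloStream
      Request Data: {"name": "sdc-edge"}
      Method Type: Server streaming RPC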
  3. To use SSL/TLS, on the TLS tab, configure the following properties:
    TLS Property - Description
    Use TLS - Enables the use of TLS.
    Insecure - Skips verifying trusted certificates in a test or development environment. StreamSets highly recommends that you configure the origin to verify trusted certificates in a production environment.
    Keystore File - Absolute path to the keystore file. By default, no keystore is used.
    Keystore Type - The keystore file must use the PEM format. As a result, this property is ignored.
    Keystore Password - The keystore file must use the PEM format which does not require a password. As a result, this property is ignored.
    Keystore Key Algorithm - This property is ignored.
    Authority - Value of the :authority HTTP/2 header to use in calls to the gRPC server. If no value is entered, the origin uses the resource URL of the gRPC server.
    Server Name - Server name used to verify the host name on the returned certificates from the gRPC server. Overrides the server name specified in the certificates.
    Truststore File - Absolute path to the truststore file. By default, no truststore is used.
    Truststore Type - The truststore file must use the PEM format. As a result, this property is ignored.
    Truststore Password - The truststore file must use the PEM format which does not require a password. As a result, this property is ignored.
    Truststore Trust Algorithm - This property is ignored.
    Use Default Protocols - Uses the default TLSv1.2 protocol.
    Transport Protocols - Only the TLSv1.2 protocol is supported. As a result, this property is ignored.
    Use Default Cipher Suites - Determines the default cipher suite to use when performing the SSL/TLS handshake.
    Cipher Suites - Only the default cipher suites are supported. As a result, this property is ignored.
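
    For example, a configuration that verifies the gRPC server against a PEM truststore and presents a PEM client keystore might use values like the following. The file paths and server name are hypothetical.
      Use TLS: enabled
      Keystore File: /etc/sdc-edge/certs/client.pem
      Truststore File: /etc/sdc-edge/certs/ca.pem
      Server Name: grpc.example.com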
  4. On the Data Format tab, configure the following property:
    Data Format Property - Description
    Data Format - Format of data. Use one of the following options:
    • Delimited
    • JSON
    • Text
  5. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.
    File Name Pattern within Compressed Directory - For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
    Delimiter Format Type - Delimiter format type. Use one of the following options:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • Tab-Separated Values - File that includes tab-separated values.
    • PostgreSQL CSV - PostgreSQL comma-separated file.
    • PostgreSQL Text - PostgreSQL text file.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
    Header Line - Indicates whether a file contains a header line, and whether to use the header line.
    Allow Extra Columns - When processing data with a header line, allows processing records with more columns than exist in the header line.
    Extra Column Prefix - Prefix to use for any additional columns. Extra columns are named using the prefix and sequentially increasing integers as follows: <prefix><integer>. For example, _extra_1. Default is _extra_.
    Max Record Length (chars) - Maximum length of a record in characters. Longer records are not read. This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
    Delimiter Character - Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character. You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter. Default is the pipe character ( | ).
    Multi Character Field Delimiter - Characters that delimit fields for multi-character delimiter format. Default is two pipe characters (||).
    Multi Character Line Delimiter - Characters that delimit lines or records for multi-character delimiter format. Default is the newline character (\n).
    Escape Character - Escape character for a custom or multi-character delimiter format.
    Quote Character - Quote character for a custom or multi-character delimiter format.
    Enable Comments - Allows commented data to be ignored for custom delimiter format.
    Comment Marker - Character that marks a comment when comments are enabled for custom delimiter format.
    Ignore Empty Lines - Allows empty lines to be ignored for custom delimiter format.
    Root Field Type - Root field type to use:
    • List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
    • List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
    Lines to Skip - Number of lines to skip before reading data.
    Parse NULLs - Replaces the specified string constant with null values.
    NULL Constant - String constant to replace with null values.
    Charset - Character encoding of the files to be processed.
    Ignore Control Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
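
    For example, with the Multi Character Delimited format, the default field delimiter (||), and the default line delimiter (\n), the hypothetical input line below produces one record with three fields:
      Input line: 2023-01-01||sensor-1||42
      Record fields: 2023-01-01, sensor-1, 42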
  6. For JSON data, on the Data Format tab, configure the following properties:
    JSON Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.
    File Name Pattern within Compressed Directory - For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
    JSON Content - Type of JSON content. Use one of the following options:
    • Array of Objects
    • Multiple Objects
    Maximum Object Length (chars) - Maximum number of characters in a JSON object. Longer objects are diverted to the pipeline for error handling. This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
    Charset - Character encoding of the files to be processed.
    Ignore Control Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
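
    To illustrate the two JSON Content options, the following hypothetical input shows the same two records provided as multiple objects and as a single array of objects:
      Multiple Objects:
        {"id": 1, "name": "abc"}
        {"id": 2, "name": "def"}
      Array of Objects:
        [{"id": 1, "name": "abc"}, {"id": 2, "name": "def"}]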
  7. For text data, on the Data Format tab, configure the following properties:
    Text Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.
    File Name Pattern within Compressed Directory - For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json. Default is *, which processes all files.
    Max Line Length - Maximum number of characters allowed for a line. Longer lines are truncated. Adds a boolean field to the record to indicate if it was truncated. The field name is Truncated. This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.
    Use Custom Delimiter - Uses custom delimiters to define records instead of line breaks.
    Custom Delimiter - One or more characters to use to define records.
    Include Custom Delimiter - Includes delimiter characters in the record.
    Charset - Character encoding of the files to be processed.
    Ignore Control Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
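
    For example, with Use Custom Delimiter enabled and a hypothetical semicolon entered as the Custom Delimiter, the input below produces three records instead of one:
      Input: alpha;beta;gamma
      Records: "alpha", "beta", "gamma"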