Google Cloud Storage

Supported pipeline types:
  • Data Collector

The Google Cloud Storage destination writes data to objects in Google Cloud Storage. You can use other destinations to write to Google BigQuery, Google Bigtable, and Google Pub/Sub.

The destination creates an object for each batch of data written to Google Cloud Storage.

With the Google Cloud Storage destination, you configure the bucket and common prefix to define where to write objects. You can use a partition prefix to specify the partition to write to. You can configure a prefix for the object name, and a time basis and data time zone for the stage. When using any data format except whole file, you can also configure a suffix for the object name and compress data with gzip before writing to Google Cloud Storage.

The destination can generate events for an event stream. For more information about the event framework, see Dataflow Triggers Overview.

Credentials

Before writing to Google Cloud Storage, the Google Cloud Storage destination must pass credentials to Google Cloud Storage. Configure the destination to retrieve the credentials from the Google Application Default Credentials or from a Google Cloud service account credentials file.

Default Credentials Provider

When configured to use the Google Application Default Credentials, the destination checks for the credentials file defined in the GOOGLE_APPLICATION_CREDENTIALS environment variable. If the environment variable doesn't exist and Data Collector is running on a virtual machine (VM) in Google Cloud Platform (GCP), the destination uses the built-in service account associated with the virtual machine instance.

For more information about the default credentials, see Google Application Default Credentials in the Google Developer documentation.

Complete the following steps to define the credentials file in the environment variable:

  1. Use the Google Cloud Platform Console or the gcloud command-line tool to create a Google service account and have your application use it for API access.
    For example, to use the command line tool, run the following commands:
    gcloud iam service-accounts create my-account
    gcloud iam service-accounts keys create key.json --iam-account=my-account@my-project.iam.gserviceaccount.com
  2. Store the generated credentials file on the Data Collector machine.
  3. Add the GOOGLE_APPLICATION_CREDENTIALS environment variable to the appropriate file and point it to the credentials file.

    Modify environment variables using the method required by your installation type.

    Set the environment variable as follows:

    export GOOGLE_APPLICATION_CREDENTIALS="/var/lib/sdc-resources/keyfile.json"
  4. Restart Data Collector to enable the changes.
  5. On the Credentials tab for the stage, select Default Credentials Provider for the credentials provider.

Service Account Credentials (JSON)

When configured to use the Google Cloud service account credentials file, the destination checks for the file defined in the destination properties.

Complete the following steps to use the service account credentials file:
  1. Generate a service account credentials file in JSON format.

    Use the Google Cloud Platform Console or the gcloud command-line tool to generate and download the credentials file. For more information, see generating a service account credential in the Google Cloud Platform documentation.

  2. Store the generated credentials file on the Data Collector machine.

    As a best practice, store the file in the Data Collector resources directory, $SDC_RESOURCES.

  3. On the Credentials tab for the stage, select Service Account Credentials File for the credentials provider and enter the path to the credentials file.

Partition Prefix

You can use a partition prefix to organize objects by partitions. You can use the partition prefix to write to existing partitions or to create new partitions as needed. When a partition specified in the partition prefix does not exist, the destination creates the partition.

You can specify an exact partition name for the partition prefix, or you can use an expression that evaluates to a partition name.

For example, to write to partitions based on data in the Country field, you can use the following expression as the partition prefix: ${record:value('/Country')}.

With this expression, the destination writes records to partitions based on the country data in the record, and creates partitions for countries that do not already have a partition.
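You can also combine literal text with an expression. For example, a hypothetical prefix such as the following writes records to partitions named country=US, country=DE, and so on, based on the same Country field:
country=${record:value('/Country')}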

If you use datetime variables in the expression, be sure to configure the time basis for the stage.

Time Basis, Data Time Zone, and Time-Based Partition Prefixes

The time basis and the data time zone determine the time that the Google Cloud Storage destination uses to write records to a time-based partition prefix. When the configured partition prefix does not include time-based functions, you can ignore the time basis property.

A partition prefix has a time component when it includes datetime variables, such as ${YYYY()} or ${DD()}, or when it includes an expression that evaluates to a datetime value, such as ${record:value("/Timestamp")}.

For details about datetime variables, see Datetime Variables.

You can use the following times as the time basis:
Processing Time
When you use processing time as the time basis, the destination performs writes based on the processing time and the configured partition prefix. The processing time is the time associated with the Data Collector running the pipeline, by default. You can specify a different time zone by configuring the Data Time Zone property. To use the processing time as the time basis, use the following expression:
${time:now()}
This is the default time basis.
Record Time
When you use the time associated with a record as the time basis, you specify a date field in the record. The destination writes data based on the datetimes associated with the records, adjusting for the value specified for the Data Time Zone property.
To use a time associated with the record, use an expression that calls a field and resolves to a datetime value, such as ${record:value("/Timestamp")}.
For example, say you define the Partition Prefix property using the following datetime variables:
logs-${YYYY()}-${MM()}-${DD()}

If you use the time of processing as the time basis, the destination writes records to partitions based on when it processes each record. If you use the time associated with the data, such as a transaction timestamp, then the destination writes records to the partitions based on that timestamp. If a partition does not exist, the destination creates the needed partition.
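For example, with the logs-${YYYY()}-${MM()}-${DD()} partition prefix above and ${record:value("/Timestamp")} as the time basis, a record with a hypothetical Timestamp value of March 15, 2017, adjusted for the Data Time Zone, is written to the following partition:
logs-2017-03-15
With the default ${time:now()} time basis, the same record is written to the partition for the date on which the destination processes it.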

Object Names

The Google Cloud Storage destination creates an object, or file, for each batch of data written. Objects generally use the following naming convention:
<prefix>-<UUID>

You configure the object name prefix. For example: sdc-c9a2db16-b5d0-44cb-b3f5-d0781cced760.

You can optionally configure an object name suffix for all data formats except whole file. When you configure a suffix, it is added to the object name after a period, as follows:
<prefix>-<UUID>.<optional suffix>

For example: sdc-c9a2db16-b5d0-44cb-b3f5-d0781cced760.txt.

Whole File Names

When you use the whole file data format, the object name prefix is optional. Whole files are named based on the File Name Expression whole file property. If you configure an object name prefix, whole files are named as follows:
<prefix>-<results of the file name expression>
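For example, say you set the File Name Expression property to the following expression, assuming that the origin provides a filename attribute in the fileInfo field of the whole file record:
${record:value('/fileInfo/filename')}
If you also configure sdc as the object name prefix, a source file named orders.csv is written as an object named sdc-orders.csv.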

Event Generation

The Google Cloud Storage destination can generate events that you can use in an event stream. When you enable event generation, the destination generates event records each time it completes writing to an object or completes streaming a whole file.

Google Cloud Storage events can be used in any logical way, such as with an executor to trigger a task or with another destination to store event information.

For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.

Event Records

Event records generated by the Google Cloud Storage destination have the following event-related record header attributes. Record header attributes are stored as String values:
Record Header Attribute Description
sdc.event.type Event type. Uses one of the following types:
  • GCS Object Written - Generated when the destination completes writing to an object.
  • wholeFileProcessed - Generated when the destination completes streaming a whole file.
sdc.event.version An integer that indicates the version of the event record type.
sdc.event.creation_timestamp Epoch timestamp when the stage created the event.
The Google Cloud Storage destination can generate the following types of event records:
Object written
The destination generates an object written event record when it completes writing to an object.
Object written event records have the sdc.event.type record header attribute set to GCS Object Written and include the following fields:
Field Description
bucket Bucket where the object is located.
objectKey Object key name that was written.
recordCount Number of records written to the object.
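For example, an object written event record might include the following hypothetical field values:
bucket: sales-data
objectKey: archive/sdc-c9a2db16-b5d0-44cb-b3f5-d0781cced760.json
recordCount: 1000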
Whole file processed
The destination generates an event record when it completes streaming a whole file. Whole file event records have the sdc.event.type record header attribute set to wholeFileProcessed and include the following fields:
Field Description
sourceFileInfo A map of attributes about the original whole file that was processed.

The attribute names depend on the information provided by the origin system.

targetFileInfo A map of attributes about the whole file written to the destination system. The attributes include:
  • bucket - The bucket where the whole file is written.
  • objectKey - The object key name that was written.
checksum Checksum generated for the written file.

Included only when you configure the destination to include checksums in the event record.

checksumAlgorithm Algorithm used to generate the checksum.

Included only when you configure the destination to include checksums in the event record.
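For example, a whole file processed event record with checksums enabled might include the following hypothetical field values:
targetFileInfo: bucket = sales-data, objectKey = sdc-orders.csv
checksum: e2fc714c4727ee9395f324cd2e7f331f
checksumAlgorithm: MD5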

Data Formats

The Google Cloud Storage destination writes data to Google Cloud Storage based on the data format that you select. You can use the following data formats:
Avro
The destination writes records based on the Avro schema. You can use one of the following methods to specify the location of the Avro schema definition:
  • In Pipeline Configuration - Use the schema that you provide in the stage configuration.
  • In Record Header - Use the schema included in the avroSchema record header attribute.
  • Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry. The Confluent Schema Registry is a distributed storage layer for Avro schemas. You can configure the destination to look up the schema in the Confluent Schema Registry by the schema ID or subject.

    If using the Avro schema in the stage or in the record header attribute, you can optionally configure the destination to register the Avro schema with the Confluent Schema Registry.

The destination includes the schema definition in each file.
You can compress data with an Avro-supported compression codec. When using Avro compression, avoid using other compression properties in the destination.
Delimited
The destination writes records as delimited data. When you use this data format, the root field must be list or list-map.
You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • PostgreSQL CSV - PostgreSQL comma-separated file.
  • PostgreSQL Text - PostgreSQL text file.
  • Tab-Separated Values - File that includes tab-separated values.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
JSON
The destination writes records as JSON data. You can use one of the following formats, as illustrated in the example after this list:
  • Array - Each file includes a single array. In the array, each element is a JSON representation of each record.
  • Multiple objects - Each file includes multiple JSON objects. Each object is a JSON representation of a record.
Protobuf
Writes a batch of messages in each file.
Uses the user-defined message type and the definition of the message type in the descriptor file to generate the messages in the file.
For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
SDC Record
The destination writes records in the SDC Record data format.
Text
The destination writes data from a single text field to the destination system. When you configure the stage, you select the field to use. When necessary, merge record data into the field earlier in the pipeline.
You can configure the characters to use as record separators. By default, the destination uses a UNIX-style line ending (\n) to separate records.
When a record does not contain the selected text field, you can configure the destination to report the missing field as an error or to ignore the missing field. By default, the destination reports an error.
When configured to ignore a missing text field, you can configure the destination to discard the record or to write the record separator characters to create an empty line for the record. By default, the destination discards the record.
Whole File
Streams whole files to the destination system. The destination writes the data to the file and location defined in the stage. If a file of the same name already exists, you can configure the destination to overwrite the existing file or send the current file to error.
Written files use the default permissions defined in the destination system.
You can configure the destination to generate a checksum for the written file and pass checksum information to the destination system in an event record.
For more information about the whole file data format, see Whole File Data Format.
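For example, given two hypothetical records with id and name fields, the JSON Content options described above produce output like the following:
Array - the file contains a single array of records:
[{"id":1,"name":"abc"},{"id":2,"name":"xyz"}]
Multiple objects - the file contains one JSON object per record:
{"id":1,"name":"abc"}
{"id":2,"name":"xyz"}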

Configuring a Google Cloud Storage Destination

Configure a Google Cloud Storage destination to write objects to Google Cloud Storage. A sample set of property values follows these steps.
  1. In the Properties panel, on the General tab, configure the following properties:
    General Property Description
    Name Stage name.
    Description Optional description.
    Produce Events Generates event records when events occur. Use for event handling.
    Required Fields Fields that must include data for the record to be passed into the stage.
    Tip: You might include fields that the stage uses.

    Records that do not include all required fields are processed based on the error handling configured for the pipeline.

    Preconditions Conditions that must evaluate to TRUE to allow a record to enter the stage for processing. Click Add to create additional preconditions.

    Records that do not meet all preconditions are processed based on the error handling configured for the stage.

  2. On the GCS tab, configure the following properties:
    GCS Property Description
    Bucket Bucket to use when writing records.
    Note: The bucket name must be DNS compliant. For more information about bucket naming conventions, see the Google Cloud Storage documentation.
    Common Prefix Common prefix that determines where objects are written.
    Partition Prefix Optional partition prefix to specify the partition to use.

    Use a specific partition prefix or define an expression that evaluates to a partition prefix.

    When using datetime variables in the expression, be sure to configure the time basis for the stage.

    Data Time Zone Time zone for the destination system. Used to resolve datetimes in a time-based partition prefix.
    Time Basis Time basis to use for writing to a time-based bucket or partition prefix. Use one of the following expressions:
    • ${time:now()} - Uses the processing time as the time basis in conjunction with the specified Data Time Zone.
    • An expression that calls a field and resolves to a datetime value, such as ${record:value(<date field path>)}. Uses the time associated with the record as the time basis, adjusted for the specified Data Time Zone.

    When the Partition Prefix property has no time component, you can ignore this property.

    Default is ${time:now()}.

    Object Name Prefix Defines a prefix for object names written by the destination. By default, object names start with "sdc" as follows: sdc-<UUID>.

    Not required for the whole file data format.

    Object Name Suffix Suffix to use for object names, such as txt or json. When used, the destination adds a period and the configured suffix as follows: <object name>.<suffix>.

    You can include periods within the suffix, but do not start the suffix with a period. Forward slashes are not allowed.

    Not available for the whole file data format.

    Compress with Gzip Compresses files with gzip before writing to Google Cloud Storage.

    Not available for the whole file data format.

  3. On the Data Format tab, configure the following property:
    Data Format Property Description
    Data Format Data format to write data:
    • Avro
    • Delimited
    • JSON
    • Protobuf
    • SDC Record
    • Text
    • Whole File
  4. For Avro data, on the Data Format tab, configure the following properties:
    Avro Property Description
    Avro Schema Location Location of the Avro schema definition to use when writing data:
    • In Pipeline Configuration - Use the schema that you provide in the stage configuration.
    • In Record Header - Use the schema in the avroSchema record header attribute. Use only when the avroSchema attribute is defined for all records.
    • Confluent Schema Registry - Retrieve the schema from the Confluent Schema Registry.

    The destination includes the schema definition in each generated file.

    Avro Schema Avro schema definition used to write the data.

    You can optionally use the runtime:loadResource function to use a schema definition stored in a runtime resource file.

    Register Schema Select to register a new Avro schema with the Confluent Schema Registry.
    Schema Registry URLs Confluent Schema Registry URLs used to look up the schema or to register a new schema. To add a URL, click Add. Use the following format to enter the URL:
    http://<host name>:<port number>
    Look Up Schema By Method used to look up the schema in the Confluent Schema Registry:
    • Subject - Look up the specified Avro schema subject.
    • Schema ID - Look up the specified Avro schema ID.
    Schema Subject Avro schema subject to look up or to register in the Confluent Schema Registry.

    If the specified subject to look up has multiple schema versions, the destination uses the latest schema version for that subject. To use an older version, find the corresponding schema ID, and then set the Look Up Schema By property to Schema ID.

    Schema ID Avro schema ID to look up in the Confluent Schema Registry.
    Avro Compression Codec The Avro compression type to use.

    When using Avro compression, do not enable other compression available in the destination.

  5. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property Description
    Delimiter Format Format for delimited data:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • PostgreSQL CSV - PostgreSQL comma-separated file.
    • PostgreSQL Text - PostgreSQL text file.
    • Tab-Separated Values - File that includes tab-separated values.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    Header Line Indicates whether to create a header line.
    Replace New Line Characters Replaces new line characters with the configured string.

    Recommended when writing data as a single line of text.

    New Line Character Replacement String to replace each new line character. For example, enter a space to replace each new line character with a space.

    Leave empty to remove the new line characters.

    Delimiter Character Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

    You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.

    Default is the pipe character ( | ).

    Escape Character Escape character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

    Default is the backslash character ( \ ).

    Quote Character Quote character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

    Default is the quotation mark character ( " ).

    Charset Character set to use when writing data.
  6. For JSON data, on the Data Format tab, configure the following property:
    JSON Property Description
    JSON Content Determines how JSON data is written:
    • JSON Array of Objects - Each file includes a single array. In the array, each element is a JSON representation of each record.
    • Multiple JSON Objects - Each file includes multiple JSON objects. Each object is a JSON representation of a record.
    Charset Character set to use when writing data.
  7. For protobuf data, on the Data Format tab, configure the following properties:
    Protobuf Property Description
    Protobuf Descriptor File Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.

    For more information about environment variables, see Data Collector Environment Configuration. For information about generating the descriptor file, see Protobuf Data Format Prerequisites.

    Message Type The fully-qualified name for the message type to use when writing data.

    Use the following format: <package name>.<message type>.

    Use a message type defined in the descriptor file.
  8. For text data, on the Data Format tab, configure the following properties:
    Text Property Description
    Text Field Path Field that contains the text data to be written. All data must be incorporated into the specified field.
    Record Separator Characters to use to separate records. Use any valid Java string literal. For example, when writing to Windows, you might use \r\n to separate records.

    By default, the destination uses \n.

    On Missing Field When a record does not include the text field, determines whether the destination reports the missing field as an error or ignores the missing field.
    Insert Record Separator if No Text When configured to ignore a missing text field, inserts the configured record separator string to create an empty line.

    When not selected, discards records without the text field.

    Charset Character set to use when writing data.
  9. For whole files, on the Data Format tab, configure the following properties:
    Whole File Property Description
    File Name Expression Expression to use for the file names.

    For tips on how to name files based on input file names, see Writing Whole Files.

    File Exists Action to take when a file of the same name already exists in the output directory. Use one of the following options:
    • Send to Error - Handles the record based on stage error record handling.
    • Overwrite - Overwrites the existing file.
    Include Checksum in Events Includes checksum information in whole file event records.

    Use only when the destination generates event records.

    Checksum Algorithm Algorithm to generate the checksum.
  10. On the Credentials tab, configure the following properties:
    Credentials Property Description
    Project ID Project ID to connect to.
    Credentials Provider Credentials provider to use to connect:
    • Default credentials provider
    • Service account credentials file (JSON)
    Credentials File Path (JSON) When using a Google Cloud service account credentials file, path to the file that the destination uses to connect to Google Cloud Storage. The credentials file must be a JSON file.

    Enter a path relative to the Data Collector resources directory, $SDC_RESOURCES, or enter an absolute path.
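For example, with hypothetical property values of my-logs for the bucket, archive/ for the common prefix, dt=${YYYY()}-${MM()}-${DD()} for the partition prefix, sdc for the object name prefix, and json for the object name suffix, the destination writes objects with keys similar to the following, depending on how the configured prefixes are concatenated:
archive/dt=2017-03-15/sdc-c9a2db16-b5d0-44cb-b3f5-d0781cced760.json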