MapR FS

The MapR FS origin reads files from MapR FS. Use this origin only in pipelines configured for cluster execution mode.

When you configure the MapR FS origin, you specify the input path and data format for the data to be read. You can configure the origin to read from all subdirectories and to generate a single record for records that include multiple objects.

The origin reads compressed data based on file extension for all Hadoop-supported compression codecs.

When necessary, you can enable Kerberos authentication or specify a Hadoop FS user. You can also use Hadoop configuration files and add other Hadoop configuration properties as needed.

Before you use any MapR stage in a pipeline, you must perform additional steps to enable Data Collector to process MapR data. For more information, see MapR Prerequisites.

Kerberos Authentication

You can use Kerberos authentication to connect to MapR. When you use Kerberos authentication, Data Collector uses the Kerberos principal and keytab to connect to MapR. Otherwise, Data Collector connects using the user account that started it.

The Kerberos principal and keytab are defined in the Data Collector configuration file, $SDC_CONF/sdc.properties. To use Kerberos authentication, configure all Kerberos properties in the Data Collector configuration file, and then enable Kerberos in the MapR FS origin.
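
For example, a Kerberos configuration in sdc.properties typically resembles the following sketch, where the principal and keytab path are placeholders for your environment:

  kerberos.client.enabled=true
  kerberos.client.principal=sdc/_HOST@EXAMPLE.COM
  kerberos.client.keytab=/etc/sdc/security/sdc.keytab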

Using a Hadoop User

To read files from MapR FS, Data Collector can use either the currently logged in Data Collector user or a user configured in the origin.

You can set a Data Collector configuration property to require using the currently logged in Data Collector user. When this property is not set, you can specify a user in the origin. For more information about Hadoop impersonation and the Data Collector property, see Hadoop Impersonation Mode.
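
For example, when that requirement is enabled, the Data Collector configuration file contains an entry like the following sketch. The property name is described in Hadoop Impersonation Mode:

  stage.conf_hadoop.always.impersonate.current.user=true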

Note that the origin uses a different user account to connect to MapR FS. By default, Data Collector uses the user account that started it to connect to external systems. When using Kerberos, Data Collector uses the Kerberos principal.

To configure a user in the origin to read from MapR FS, perform the following tasks:
  1. On MapR, configure the user as a proxy user and authorize the user to impersonate the Hadoop user.

    For more information, see the MapR documentation. A configuration sketch follows these steps.

  2. In the MapR FS origin, on the Hadoop FS tab, configure the Hadoop FS User property.
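
For example, if Data Collector runs as the user sdc, the proxy user entries in the cluster core-site.xml might resemble the following sketch. The sdc user name and the wildcard values are placeholders; restrict the hosts and groups as your security policy requires:

  <property>
    <name>hadoop.proxyuser.sdc.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.sdc.groups</name>
    <value>*</value>
  </property>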

Hadoop Properties and Configuration Files

You can configure the MapR FS origin to use individual Hadoop properties or Hadoop configuration files:

Hadoop configuration files
You can use the following Hadoop configuration files with the MapR FS origin:
  • core-site.xml
  • hdfs-site.xml
  • yarn-site.xml
  • mapred-site.xml
To use Hadoop configuration files:
  1. Store the files or a symlink to the files in the Data Collector resources directory.
  2. In the MapR FS origin, specify the location of the files.
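For example, assuming the MapR client configuration files are installed under /opt/mapr/hadoop/<hadoop version>/etc/hadoop and the Data Collector resources directory is $SDC_RESOURCES, you might link the files as follows. The paths are placeholders for your installation:

  mkdir $SDC_RESOURCES/hadoop-conf
  ln -s /opt/mapr/hadoop/<hadoop version>/etc/hadoop/core-site.xml $SDC_RESOURCES/hadoop-conf/
  ln -s /opt/mapr/hadoop/<hadoop version>/etc/hadoop/hdfs-site.xml $SDC_RESOURCES/hadoop-conf/

You would then set the Hadoop FS Configuration Directory property in the origin to hadoop-conf.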
Individual properties
You can configure individual Hadoop properties in the origin. To add a Hadoop property, you specify the exact property name and the value. The MapR FS origin does not validate the property names or values.
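For example, you might add a generic Hadoop client property such as the following hypothetical tuning entry, using whatever property name and value your cluster expects:

  io.file.buffer.size = 65536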
Note: Individual properties override properties defined in the Hadoop configuration files.

Data Formats

The MapR FS origin processes data differently based on the data format that you select. The origin processes the following types of data:

Avro
Generates a record for every Avro record.
You can use one of the following methods to specify the location of the Avro schema definition:
  • Message/Data Includes Schema - Use the schema in the file.
  • In Pipeline Configuration - Use the schema that you provide in the stage configuration.
  • Confluent Schema Registry - Retrieve the schema from Confluent Schema Registry. The Confluent Schema Registry is a distributed storage layer for Avro schemas. You can configure the origin to look up the schema in the Confluent Schema Registry by the schema ID or subject specified in the stage configuration.
Using a schema in the stage configuration or retrieving a schema from the Confluent Schema Registry overrides any schema that might be included in the file and can improve performance.
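For example, when you select In Pipeline Configuration, you might provide a minimal schema such as the following hypothetical order record:

  {
    "type": "record",
    "name": "Order",
    "fields": [
      {"name": "id", "type": "long"},
      {"name": "customer", "type": "string"},
      {"name": "amount", "type": "double"}
    ]
  }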
The origin reads files compressed by Avro-supported compression codecs without requiring additional configuration.
Delimited
Generates a record for each delimited line. You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
You can use a list or list-map root field type for delimited data, optionally including the header information when available.
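For example, with the Default CSV format, a header line, and the List-Map root field type, the following hypothetical input:

  id,name,city
  8,jsmith,oakland

generates a record whose fields can be addressed by header name, such as /id = 8, /name = jsmith, and /city = oakland. Delimited values are read as strings; convert them later in the pipeline if other types are needed.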
You can also replace a string constant with null values.
When a record exceeds the maximum record length defined for the origin, the origin processes the record based on the error handling configured for the stage.
For more information about the root field types, see Delimited Data Root Field Type.
Text
Generates a record for each line of text or for each section of text based on a custom delimiter.
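For example, with a semicolon as a custom delimiter, the hypothetical input rec1;rec2;rec3 generates three records, each holding one value (rec1, rec2, and rec3) in its text field.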
When a line or section exceeds the maximum line length defined for the origin, the origin truncates it. The origin adds a boolean field named Truncated to indicate if the line was truncated.
For more information about processing text with a custom delimiter, see Text Data Format with Custom Delimiters.

Configuring a MapR FS Origin

Configure a MapR FS origin to read files from MapR FS.

  1. In the Properties panel, on the General tab, configure the following properties:
    General Property Description
    Name Stage name.
    Description Optional description.
    Stage Library Library version that you want to use.
    On Record Error Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline. Not valid for cluster pipelines.
  2. On the Hadoop FS tab, configure the following properties:
    Hadoop FS Property Description
    Hadoop FS URI Hadoop URI.

    To connect to a specific cluster, enter maprfs:///mapr/<cluster name>. For example:

    maprfs:///mapr/my.cluster.com/

    Leave empty to use the default value of maprfs:///, which uses the first entry defined in the $MAPR_HOME/conf/mapr-clusters.conf file.

    Input Paths Location of the input data to be read. Enter the path as follows: /<path>.
    For example:
    /user/mapr/directory
    Include All Subdirectories Reads from all directories within the specified input path.
    Produce Single Record Generates a single record when a record includes multiple objects.
    Kerberos Authentication Uses Kerberos credentials to connect to MapR.

    When selected, uses the Kerberos principal and keytab defined in the Data Collector configuration file, $SDC_CONF/sdc.properties.

    Hadoop FS Configuration Directory

    Location of the Hadoop configuration files.

    Use a directory or symlink within the Data Collector resources directory.

    You can use the following files with the MapR FS origin:
    • core-site.xml
    • hdfs-site.xml
    • yarn-site.xml
    • mapred-site.xml
    Note: Properties in the configuration files are overridden by individual properties defined in the stage.
    Hadoop FS User The Hadoop user to use to read from MapR FS. When using this property, make sure MapR is configured appropriately.

    When not configured, the pipeline uses the currently logged in Data Collector user.

    Not configurable when Data Collector is configured to use the currently logged in Data Collector user. For more information, see Hadoop Impersonation Mode.

    Hadoop FS Configuration Additional Hadoop configuration properties to use. To add properties, click Add and define the property name and value.

    Use the property names and values as expected by MapR FS.

    Max Batch Size (records) Maximum number of records processed at one time. Honors values up to the Data Collector maximum batch size.

    Default is 1000. The Data Collector maximum batch size also defaults to 1000.

  3. On the Data Format tab, configure the following property:
    Data Format Property Description
    Data Format
    Type of data to be read. Use one of the following options:
    • Avro
    • Delimited
    • Text
  4. For Avro data, on the Data Format tab, configure the following properties:
    Avro Property Description
    Avro Schema Location Location of the Avro schema definition to use when processing data:
    • Message/Data Includes Schema - Use the schema in the file.
    • In Pipeline Configuration - Use the schema provided in the stage configuration.
    • Confluent Schema Registry - Retrieve the schema from the Confluent Schema Registry.

    Using a schema in the stage configuration or in the Confluent Schema Registry can improve performance.

    Avro Schema Avro schema definition used to process the data. Overrides any existing schema definitions associated with the data.

    You can optionally use the runtime:loadResource function to use a schema definition stored in a runtime resource file.
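
    For example, the following expression loads a schema definition from a hypothetical file named schema.json in the resources directory, without requiring restricted file permissions:

    ${runtime:loadResource("schema.json", false)}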

    Schema Registry URLs Confluent Schema Registry URLs used to look up the schema. To add a URL, click Add. Use the following format to enter the URL:
    http://<host name>:<port number>
    Lookup Schema By Method used to look up the schema in the Confluent Schema Registry:
    • Subject - Look up the specified Avro schema subject.
    • Schema ID - Look up the specified Avro schema ID.
    Overrides any existing schema definitions associated with the data.
    Schema Subject Avro schema subject to look up in the Confluent Schema Registry.

    If the specified subject has multiple schema versions, the origin uses the latest schema version for that subject. To use an older version, find the corresponding schema ID, and then set the Lookup Schema By property to Schema ID.

    Schema ID Avro schema ID to look up in the Confluent Schema Registry.
  5. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property Description
    Delimiter Format Type Delimiter format type. Use one of the following options:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • Tab-Separated Values - File that includes tab-separated values.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    Header Line Indicates whether a file contains a header line, and whether to use the header line.
    Max Record Length (chars) Maximum length of a record in characters. Longer records are not read.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Delimiter Character Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

    You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.

    Default is the pipe character ( | ).

    Escape Character Escape character for a custom file type.
    Quote Character Quote character for a custom file type.
    Root Field Type Root field type to use:
    • List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
    • List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
    Lines to Skip Lines to skip before reading data.
    Parse NULLs Replaces the specified string constant with null values.
    NULL Constant String constant to replace with null values.
    Charset Character encoding of the files to be processed.
    Ignore Ctrl Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  6. For text data, on the Data Format tab, configure the following properties:
    Text Property Description
    Max Line Length Maximum number of characters allowed for a line. Longer lines are truncated.

    Adds a boolean field to the record to indicate if it was truncated. The field name is Truncated.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Use Custom Delimiter Uses custom delimiters to define records instead of line breaks.
    Custom Delimiter One or more characters to use to define records.
    Charset Character encoding of the files to be processed.
    Ignore Ctrl Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.