Directory
The Directory origin reads data from files in a directory. The origin can use multiple threads to enable the parallel processing of files.
The files to be processed must all share a file name pattern and be fully written. To read data from an active file that is still being written to, use the File Tail origin.
When you configure the Directory origin, you define the directory to use, read order, file name pattern, file name pattern mode, and the first file to process. You can use glob patterns or regular expressions to define the file name pattern that you want to use.
When using the Last Modified Timestamp read order, you can configure the origin to read from subdirectories. To use multiple threads for processing, specify the number of threads to use.
You can also enable reading compressed files or files in a late arriving directory. After processing a file, Directory can keep, archive, or delete the file.
When the pipeline stops, Directory notes where it stops reading. When the pipeline starts again, Directory continues processing from where it stopped by default. You can reset the origin to process all requested files.
Directory generates record header attributes that enable you to use information about the originating file of each record in pipeline processing.
The origin can generate events for an event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
File Name Pattern and Mode
Use a file name pattern to define the files that the Directory origin processes. You can use either a glob pattern or a regular expression to define the file name pattern.
The Directory origin processes files based on the file name pattern mode, file name pattern, and specified directory. For example, if you specify a /logs/weblog/ directory, glob mode, and *.json as the file name pattern, the origin processes all files with the json extension in the /logs/weblog/ directory.
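To see how the two pattern modes behave, here is a minimal sketch using Java's built-in PathMatcher, which supports the same glob and regex syntaxes. This only illustrates the matching behavior, not the origin's internal implementation, and the sample file names are assumptions:

```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class PatternModes {
    public static void main(String[] args) {
        // Glob mode: *.json matches any file name ending in .json
        PathMatcher glob = FileSystems.getDefault().getPathMatcher("glob:*.json");
        // Regex mode: an equivalent regular expression
        PathMatcher regex = FileSystems.getDefault().getPathMatcher("regex:.*\\.json");

        System.out.println(glob.matches(Paths.get("web-1.json")));  // true
        System.out.println(glob.matches(Paths.get("web-1.log")));   // false
        System.out.println(regex.matches(Paths.get("web-1.json"))); // true
    }
}
```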
The origin processes files in order based on the specified read order.
For more information about glob syntax, see https://en.wikipedia.org/wiki/Glob_(programming)#Syntax. For more information about regular expressions, see Regular Expressions Overview.
Read Order
The Directory origin reads files in ascending order based on the timestamp or file name:
- Last Modified Timestamp
- The Directory origin can read files in ascending order based on the timestamp associated with the file. The origin checks both the last-modified timestamp and the changed timestamp, and uses the more recent of the two when ordering files for processing.
- Lexicographically Ascending File Names
- The Directory origin can read files in lexicographically ascending order based on file names. Note that lexicographically ascending order reads the numbers 1 through 11 as follows: 1, 10, 11, 2, 3, 4, 5, 6, 7, 8, 9. To read files in numeric order, pad the numbers in file names with leading zeros, for example: web_001.log through web_011.log.
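The following sketch, with assumed file names, shows how a plain string sort produces this ordering:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LexicographicOrder {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of(
            "web_1.log", "web_2.log", "web_10.log", "web_11.log"));
        Collections.sort(names); // lexicographic, not numeric
        // Prints: [web_1.log, web_10.log, web_11.log, web_2.log]
        System.out.println(names);
    }
}
```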
Multithreaded Processing
The Directory origin uses multiple concurrent threads to process data based on the Number of Threads property.
Each thread reads data from a single file, and no more than one thread reads from a file at a time. The file read order is based on the configuration for the Read Order property.
As the pipeline runs, each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner. A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors and destinations in the pipeline and performs all pipeline processing after the origin.
Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline instances, the order that batches are written to destinations is not ensured.
For example, say you configure the origin to read files from a directory using 5 threads and the Last Modified Timestamp read order. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners.
The Directory origin assigns a thread to each of the five oldest files in the directory. Each thread processes its assigned file, creating batches of data and passing each batch to an available pipeline runner.
After each thread completes processing a file, it continues to the next file based on the last-modified timestamp, until all files are processed.
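The claim model described above can be sketched as follows. This is purely illustrative, not Data Collector's implementation; the file names, queue, and thread pool are assumptions for the sketch:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ThreadPerFileSketch {
    public static void main(String[] args) throws InterruptedException {
        int numberOfThreads = 5; // corresponds to the Number of Threads property
        // Files queued in read order; each file is claimed by exactly one thread.
        BlockingQueue<String> files = new LinkedBlockingQueue<>(List.of(
            "log-1.json", "log-0054.json", "log-0055.json", "log-2.json"));
        ExecutorService pool = Executors.newFixedThreadPool(numberOfThreads);
        for (int i = 0; i < numberOfThreads; i++) {
            pool.submit(() -> {
                String file;
                while ((file = files.poll()) != null) {
                    // In the real origin, the thread would read the file here
                    // and pass batches to an available pipeline runner.
                    System.out.println(Thread.currentThread().getName() + " -> " + file);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```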
For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
Reading from Subdirectories
When using the Last Modified Timestamp read order, the Directory origin can read files in subdirectories of the specified file directory.
When you configure the origin to read from subdirectories, it reads files from all subdirectories. It reads files in ascending order based on timestamp, regardless of the location of the file within the directory.
For example, you configure Directory to read from the /logs/ file directory, select the Last Modified Timestamp read order, and enable reading from subdirectories. Directory reads the following files in the following order based on timestamp, even though the files are written to different subdirectories.
File Name | Directory | Last Modified Timestamp |
---|---|---|
log-1.json | /logs/west/ | APR 24 2016 14:03:35 |
log-0054.json | /logs/east/ | APR 24 2016 14:05:03 |
log-0055.json | /logs/west/ | APR 24 2016 14:45:11 |
log-2.json | /logs/ | APR 24 2016 14:45:11 |
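The ordering shown in the table resembles what a recursive walk sorted by last-modified time produces. A minimal sketch, assuming the /logs directory above exists locally:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class WalkByTimestamp {
    public static void main(String[] args) throws IOException {
        // Walk /logs and all subdirectories, listing regular files in
        // ascending last-modified order, regardless of their location.
        try (Stream<Path> files = Files.walk(Paths.get("/logs"))) {
            files.filter(Files::isRegularFile)
                 .sorted(Comparator.comparing((Path path) -> {
                     try {
                         return Files.getLastModifiedTime(path);
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 }))
                 .forEach(System.out::println);
        }
    }
}
```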
Post-Processing Subdirectories
When the Directory origin reads from subdirectories, it uses the subdirectory structure when archiving files during post-processing.
You can archive files when the origin completes processing a file or when it cannot fully process a file. For example, if you configure /processed/ as the archive directory, the origin archives the files from the earlier example as follows:
File Name | Archive Directory |
---|---|
log-1.json | /processed/logs/west/ |
log-0054.json | /processed/logs/east/ |
log-0055.json | /processed/logs/west/ |
log-2.json | /processed/logs/ |
First File for Processing
Configure a first file for processing when you want Directory to ignore one or more existing files in the directory.
When you define a first file to process, Directory starts processing with the specified file and continues based on the read order and file name pattern. When you do not specify a first file, Directory processes all files in the directory that match the file name pattern.
For example, say Directory reads files using the Last Modified Timestamp read order. To ignore all files older than a particular file, use that file name as the first file to process.
Similarly, say you have Directory reading files based on lexicographically ascending file names, and the file directory includes the following files: web_001.log, web_002.log, web_003.log.
If you configure web_002.log as the first file, Directory reads web_002.log and continues to web_003.log. It skips web_001.log.
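As an illustrative sketch of that comparison, assuming the three file names above:

```java
import java.util.List;

public class FirstFileFilter {
    public static void main(String[] args) {
        String firstFile = "web_002.log"; // the configured first file
        List<String> candidates = List.of("web_001.log", "web_002.log", "web_003.log");
        candidates.stream()
                  // Keep only names at or after the first file in lexicographic order.
                  .filter(name -> name.compareTo(firstFile) >= 0)
                  .forEach(System.out::println); // web_002.log, web_003.log
    }
}
```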
Late Directory
You can configure Directory to read files in a late directory - a directory that appears after the pipeline starts.
When reading from a late directory, the origin does not validate the directory path when you start the pipeline. If the directory does not exist when the pipeline starts, the origin waits indefinitely for the appearance of the directory and a file to process.
For example, say you read files in the following directory:
/logs/server/
The directory does not exist when you start the pipeline, so Directory waits until the directory and a file matching the file name pattern appear, and then processes the data.
After /logs/server appears, the origin can then process the following files that are written to the directory:
/logs/server/log.json
/logs/server/log1.json
/logs/server/log2.json
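Conceptually, the wait resembles the following polling sketch. The path and the 5-second interval are assumptions for illustration, not Data Collector settings:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WaitForLateDirectory {
    public static void main(String[] args) throws InterruptedException {
        Path dir = Paths.get("/logs/server");
        // Wait indefinitely for the late directory to appear.
        while (!Files.isDirectory(dir)) {
            Thread.sleep(5_000); // assumed polling interval for this sketch
        }
        System.out.println("Directory appeared: " + dir);
    }
}
```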
Record Header Attributes
The Directory origin creates record header attributes that include information about the originating file for the record.
When the origin processes Avro data, it includes the Avro schema in an avroSchema record header attribute.
You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.
- avroSchema - When processing Avro data, provides the Avro schema.
- baseDir - Base directory containing the file where the record originated.
- filename - Provides the name of the file where the record originated.
- file - Provides the file path and file name where the record originated.
- mtime - Provides the last-modified time for the file.
- offset - Provides the file offset in bytes. The file offset is the location in the file where the record originated.
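For example, in an Expression Evaluator processor you might copy the originating file path into a field with `${record:attribute('file')}`, or supply a fallback value with `${record:attributeOrDefault('file', 'unknown')}`.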
Event Generation
The Directory origin can generate events that you can use in an event stream. When you enable event generation, the origin generates event records each time the origin starts or completes reading a file. It can also generate events when it completes processing all available data and the configured batch wait time has elapsed. You can use the events in any logical way. For example:
- With the Pipeline Finisher executor to stop the pipeline and transition it to a Finished state when the origin completes processing available data.
When you restart a pipeline stopped by the Pipeline Finisher executor, the origin continues processing from the last-saved offset unless you reset the origin.
For an example, see Case Study: Stop the Pipeline.
- With the Email executor to send a custom email after receiving an event.
For an example, see Case Study: Sending Email.
- With a destination to store event information.
For an example, see Case Study: Event Storage.
For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
Event Records
Event records generated by the Directory origin have the following event-related record header attributes:

Record Header Attribute | Description |
---|---|
sdc.event.type | Event type. Uses one of the following types: new-file, finished-file, or no-more-data. |
sdc.event.version | An integer that indicates the version of the event record type. |
sdc.event.creation_timestamp | Epoch timestamp when the stage created the event. |
The Directory origin can generate the following types of event records:
- new-file
- The Directory origin generates a new-file event record when it starts processing a new file.
- finished-file
- The Directory origin generates a finished-file event record when it finishes processing a file.
- no-more-data
- The Directory origin generates a no-more-data event record when the origin completes processing all available records and the number of seconds configured for Batch Wait Time elapses without any new files appearing to be processed.
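For example, to act on the no-more-data event only, you can set a downstream stage precondition such as `${record:eventType() == 'no-more-data'}`, so that records for other event types are not processed by that stage.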
Buffer Limit and Error Handling
The Directory origin passes each record to a buffer. The size of the buffer determines the maximum size of the record that can be processed. Decrease the buffer limit when memory on the Data Collector machine is limited. Increase the buffer limit to process larger records when memory is available.
When a record exceeds the buffer limit, the origin handles the file based on the error handling configured for the stage:
- Discard
- The origin discards the record and all remaining records in the file, and then continues processing the next file.
- Send to Error
- With a buffer limit error, the origin cannot send the record to the pipeline for error handling because it is unable to fully process the record. Instead, the origin creates a message stating that a buffer overrun error occurred. The message includes the file and offset where the error occurred. The information appears in the pipeline history and as an alert when you monitor the pipeline.
If an error directory is configured for the stage, the origin moves the file to the error directory and continues processing the next file.
- Stop Pipeline
- The origin stops the pipeline and creates a message stating that a buffer overrun error occurred. The message includes the file and offset where the error occurred. The information appears as an alert and in the pipeline history.
Data Formats
- Avro
- Generates a record for every Avro record. Includes a "precision" and "scale" field attribute for each Decimal field. For more information about field attributes, see Field Attributes.
- Delimited
- Generates a record for each delimited line. You can use the following delimited format types:
- Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
- RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
- MS Excel CSV - Microsoft Excel comma-separated file.
- MySQL CSV - MySQL comma-separated file.
- PostgreSQL CSV - PostgreSQL comma-separated file.
- PostgreSQL Text - PostgreSQL text file.
- Tab-Separated Values - File that includes tab-separated values.
- Custom - File that uses user-defined delimiter, escape, and quote characters.
- Excel
- Generates a record for every row in the file. Can process .xls or .xlsx files.
- JSON
- Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
- Log
- Generates a record for every log line.
- Protobuf
- Generates a record for every protobuf message.
- SDC Record
- Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
- Text
- Generates a record for each line of text or for each section of text based on a custom delimiter.
- Whole File
- Streams whole files from the origin system to the destination system. You can specify a transfer rate or use all available resources to perform the transfer.
- XML
- Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
Configuring a Directory Origin
Configure a Directory origin to read data from files in a directory.