Executors

Executors perform tasks when they receive event records.

You can use the following executor stages for event handling:
ADLS Gen1 File Metadata executor
Changes file metadata, creates an empty file, or removes a file or directory in Azure Data Lake Storage Gen1 upon receiving an event.
When changing file metadata, the executor can rename and move files, change the owner and group, and update permissions and ACLs. When creating an empty file, the executor can set the owner and group as well as permissions and ACLs for the file. When removing files and directories, the executor performs the task recursively.
You can use the executor in any logical way, such as changing permissions after an Azure Data Lake Storage Gen1 destination closes a file.
ADLS Gen2 File Metadata executor
Changes file metadata, creates an empty file, or removes a file or directory in Azure Data Lake Storage Gen2 upon receiving an event.
When changing file metadata, the executor can rename and move files, change the owner and group, and update permissions and ACLs. When creating an empty file, the executor can set the owner and group as well as permissions and ACLs for the file. When removing files and directories, the executor performs the task recursively.
You can use the executor in any logical way, such as moving a file after an Azure Data Lake Storage Gen2 destination closes it.
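As a rough analogy outside of Data Collector, the move-after-close task can be sketched with the Azure Data Lake Storage Gen2 SDK for Python (azure-storage-filedatalake). All account, filesystem, and path names below are placeholder assumptions, and the "{filesystem}/{path}" rename-target format follows that SDK's convention:

```python
def renamed_path(filesystem, directory, filename):
    """Build the '{filesystem}/{path}' rename target the Gen2 SDK expects."""
    return f"{filesystem}/{directory.strip('/')}/{filename}"

def move_closed_file(account_url, filesystem, source_path, target_dir):
    """Move a file after a destination closes it, as the executor can do.

    A sketch only: assumes azure-storage-filedatalake is installed and
    that credentials resolve via DefaultAzureCredential.
    """
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(account_url, credential=DefaultAzureCredential())
    file_client = service.get_file_system_client(filesystem).get_file_client(source_path)
    filename = source_path.rsplit("/", 1)[-1]
    file_client.rename_file(renamed_path(filesystem, target_dir, filename))
```

In a pipeline, the executor performs this move itself when it receives the file-closed event; no custom code is required.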
Amazon S3 executor
Creates new Amazon S3 objects for the specified content, copies objects within a bucket, or adds tags to existing Amazon S3 objects upon receiving an event.
You can use the executor in any logical way, such as writing information from an event record to a new S3 object, or copying or tagging objects after they are written by the Amazon S3 destination.
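Outside of Data Collector, the tagging task the executor performs can be sketched with boto3, the AWS SDK for Python. The bucket, key, and tag names here are placeholder assumptions:

```python
def build_tag_set(tags):
    """Convert a dict of tags to the TagSet structure the S3 API expects."""
    return [{"Key": k, "Value": v} for k, v in sorted(tags.items())]

def tag_object(bucket, key, tags):
    """Add tags to an existing S3 object, as the executor does on an event.

    A sketch only: assumes boto3 is installed and AWS credentials are
    configured in the environment.
    """
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={"TagSet": build_tag_set(tags)},
    )

# Example tag set for an object closed by the Amazon S3 destination:
# build_tag_set({"pipeline": "sales", "status": "closed"})
```

In a pipeline, the executor applies the tags itself when it receives the object-written event; no custom code is required.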
Databricks Delta Lake executor
Runs a Spark SQL query on a Delta Lake table on Databricks upon receiving an event.
You can use the executor to bulk load data from a storage location to a Delta Lake table, or to merge changed data into a Delta Lake table.
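For example, merging changed data into a Delta Lake table is typically expressed as a Spark SQL MERGE statement, which you could configure the executor to run. The table and column names below are illustrative placeholders:

```sql
-- Merge staged change records into a target Delta Lake table.
-- Table and column names are illustrative placeholders.
MERGE INTO sales.orders AS target
USING sales.orders_updates AS updates
ON target.order_id = updates.order_id
WHEN MATCHED THEN
  UPDATE SET target.status = updates.status,
             target.updated_at = updates.updated_at
WHEN NOT MATCHED THEN
  INSERT (order_id, status, updated_at)
  VALUES (updates.order_id, updates.status, updates.updated_at)
```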
Pipeline Finisher executor
Stops the pipeline when it receives an event, transitioning the pipeline to a Finished state. Allows the pipeline to complete all expected processing before stopping.
You can use the Pipeline Finisher executor in any logical way, such as stopping a pipeline upon receiving a no-more-data event from the JDBC Query Consumer origin. This enables "batch" processing: the pipeline stops when all available data is processed rather than sitting idle indefinitely.
For example, you might use the Pipeline Finisher executor with the JDBC Multitable Consumer origin to stop the pipeline after it processes all queried data in the specified tables.
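The "batch" behavior the Pipeline Finisher enables can be illustrated with a small simulation (this models the behavior, not Data Collector's actual API): an origin yields batches, the source being exhausted stands in for the no-more-data event, and the finisher transitions the pipeline to a Finished state instead of leaving it idle:

```python
def run_pipeline(batches):
    """Simulate a pipeline that stops on a no-more-data event.

    `batches` is a list of record lists; an exhausted source stands in
    for the no-more-data event, which the finisher turns into a
    Finished state. A sketch of the behavior, not Data Collector's API.
    """
    processed = []
    state = "RUNNING"
    queue = list(batches)
    while state == "RUNNING":
        if queue:
            processed.extend(queue.pop(0))  # origin reads the next batch
        else:
            state = "FINISHED"  # Pipeline Finisher handles no-more-data
    return state, processed

# All available data is processed, then the pipeline finishes:
# run_pipeline([[1, 2], [3]]) -> ("FINISHED", [1, 2, 3])
```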