CoAP Server

Constrained Application Protocol (CoAP) is a web transfer protocol designed for use with constrained nodes and networks, such as machine-to-machine (M2M) devices. The CoAP Server origin is a multithreaded origin that listens on a CoAP endpoint and processes the contents of all authorized CoAP requests.

The CoAP Server origin can use multiple threads to enable parallel processing of data from multiple CoAP clients.

Before you configure the origin, perform additional steps to configure the CoAP clients.

When you configure the CoAP Server origin, you specify the maximum number of concurrent requests to determine how many threads to use. You also define the listening port and resource name for the origin. You can optionally override the default values of network configuration properties.

Prerequisites

Before you run a pipeline with the CoAP Server origin, configure the CoAP clients to send data to the CoAP Server listening port and resource.

When you configure the CoAP Server origin, you define a listening port number where the origin listens for data. You also define the resource name used to pass requests to the origin. To pass data to the pipeline, configure each CoAP client to send data to a URL that includes the listening port number and resource name.

Use the following format for the URL:
coap://<sdc_hostname>:<listening_port>/<resource_name>

The URL includes the following components:

  • <sdc_hostname> - The Data Collector host name.
  • <listening_port> - The port number where the origin listens for data.
  • <resource_name> - The resource name used to pass requests to the origin.

For example: coap://localhost:5683/sdc
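As a concrete illustration, the following minimal Java client, written with Eclipse Californium (the CoAP library that the origin itself uses), posts a single payload to that URL. The host name, port, resource name, payload, and media type are illustrative:

    import org.eclipse.californium.core.CoapClient;
    import org.eclipse.californium.core.CoapResponse;
    import org.eclipse.californium.core.coap.MediaTypeRegistry;

    public class CoapSendExample {
        public static void main(String[] args) {
            // The URL combines the CoAP Listening Port (5683) and
            // Resource Name (sdc) configured for the origin.
            CoapClient client = new CoapClient("coap://localhost:5683/sdc");

            // Post one JSON object. The origin parses the payload according
            // to the data format configured on the Data Format tab.
            CoapResponse response = client.post("{\"id\": 1}", MediaTypeRegistry.APPLICATION_JSON);

            // post() returns null if no response arrives before the timeout.
            if (response != null) {
                System.out.println("Response: " + response.getCode());
            }
        }
    }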

Multithreaded Processing

The CoAP Server origin performs parallel processing and enables the creation of a multithreaded pipeline.

The CoAP Server origin uses multiple concurrent threads based on the Max Concurrent Requests property. Each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner.

A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors and destinations in the pipeline and represents all pipeline processing after the origin. Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed.

Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline instances, the order that batches are written to destinations is not ensured.

For example, say you set the Max Concurrent Requests property to 5. When you start the pipeline, the origin creates five threads, and by default Data Collector creates a matching number of pipeline runners. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing.

At any given moment, the five pipeline runners can each process a batch, so this multithreaded pipeline processes up to five batches at a time. When incoming data slows, the pipeline runners sit idle, available for use as soon as the data flow increases.
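The following sketch illustrates this dispatch pattern in plain Java. It is a conceptual analogy, not Data Collector's actual implementation: a fixed pool of five workers stands in for the five pipeline runners, and each submitted batch runs on whichever worker is free:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RunnerPoolSketch {
        // Five workers stand in for five pipeline runners
        // (Max Concurrent Requests = 5).
        private final ExecutorService runners = Executors.newFixedThreadPool(5);

        // Hand each batch to an available runner. Record order is preserved
        // within a batch, but batches may complete in any order.
        void dispatch(List<String> batch) {
            runners.submit(() -> process(batch));
        }

        private void process(List<String> batch) {
            // All processor and destination logic for one batch runs here.
            batch.forEach(record -> { /* ... */ });
        }
    }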

For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.

Network Configuration Properties

The CoAP Server origin uses the default values for network configuration properties as implemented by Eclipse Californium. If needed, you can override the default values of these properties.

The default value of each property is listed in the Eclipse Californium documentation. The origin uses the defaults for all network configuration properties except for the following:
  • COAP_PORT - You set the port number in the CoAP Listening Port property for the origin.
  • NETWORK_STAGE_RECEIVER_THREAD_COUNT - You set the number of threads in the Max Concurrent Requests property for the origin.

For example, the origin uses the default value of 2000 milliseconds for the ACK_TIMEOUT network configuration property. To override the default value, add ACK_TIMEOUT as an additional network configuration property and set the property to the desired value, such as 1000.
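For instance, to tune retransmission behavior, you might add name-value pairs such as the following under the Network Configuration property. ACK_TIMEOUT and MAX_RETRANSMIT are standard Californium network configuration keys; the values shown are illustrative:

    ACK_TIMEOUT        1000
    MAX_RETRANSMIT     2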

Data Formats

The CoAP Server origin processes data differently based on the data format that you select. The origin processes the following types of data:

Binary
Generates a record with a single byte array field at the root of the record.
When the data exceeds the user-defined maximum data size, the origin cannot process the data. Because the record is not created, the origin cannot pass the record to the pipeline to be written as an error record. Instead, the origin generates a stage error.
Datagram
Generates a record for every message. The origin can read collectd messages, NetFlow 5 messages, and the following types of syslog messages:
  • RFC 5424
  • RFC 3164
  • Non-standard common messages, such as RFC 3339 dates with no version digit
Delimited
Generates a record for each delimited line. You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
You can use a list or list-map root field type for delimited data, optionally including the header information when available.
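For example, with the Default CSV format, a header line, and the list-map root field type, input like the following (illustrative) generates two records, each with the fields /id and /name:

    id,name
    1,widget
    2,gadget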
You can also replace a string constant with null values.
When a record exceeds the user-defined maximum record length, the origin cannot continue processing data in the file. Records already processed from the file are passed to the pipeline. The behavior of the origin is then based on the error handling configured for the stage:
  • Discard - The origin continues processing with the next file, leaving the partially-processed file in the directory.
  • To Error - The origin continues processing with the next file. If a post-processing error directory is configured for the stage, the origin moves the partially-processed file to the error directory. Otherwise, it leaves the file in the directory.
  • Stop Pipeline - The origin stops the pipeline.
For more information about the root field types, see Delimited Data Root Field Type.
JSON
Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
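For example, with the Multiple Objects content type, the following input (illustrative) generates two records, one per object:

    {"id": 1}
    {"id": 2}

The same two records result from the Array of Objects content type and the input [{"id": 1}, {"id": 2}].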
When an object exceeds the maximum object length defined for the origin, the origin cannot continue processing data in the file. Records already processed from the file are passed to the pipeline. The behavior of the origin is then based on the error handling configured for the stage:
  • Discard - The origin continues processing with the next file, leaving the partially-processed file in the directory.
  • To Error - The origin continues processing with the next file. If a post-processing error directory is configured for the stage, the origin moves the partially-processed file to the error directory. Otherwise, it leaves the file in the directory.
  • Stop Pipeline - The origin stops the pipeline.
Log
Generates a record for every log line.
When a line exceeds the user-defined maximum line length, the origin truncates longer lines.
You can include the processed log line as a field in the record. If the log line is truncated, and you request the log line in the record, the origin includes the truncated line.
You can define the log format or type to be read.
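For example, with the Common Log Format, a line such as the following (illustrative) generates one record with the client address, user, timestamp, request, status code, and response size parsed into separate fields:

    127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326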
Protobuf
Generates a record for every protobuf message.
Protobuf messages must match the specified message type and be described in the descriptor file.
When the data for a record exceeds 1 MB, the origin cannot continue processing data in the file. The origin handles the file based on file error handling properties and continues reading the next file.
For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
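For example, given a descriptor file compiled from the following hypothetical definition, you would set the message type to inventory.Item:

    syntax = "proto3";
    package inventory;

    message Item {
      int64 id = 1;
      string name = 2;
    }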
SDC Record
Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
For error records, the origin provides the original record as read from the origin in the original pipeline, as well as error information that you can use to correct the record.
When processing error records, the origin expects the error file names and contents as generated by the original pipeline.
Text
Generates a record for each line of text or for each section of text based on a custom delimiter.
When a line or section exceeds the maximum line length defined for the origin, the origin truncates it. The origin adds a boolean field named Truncated to indicate if the line was truncated.
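For example, with a custom delimiter of ;; (illustrative), the input first;;second;;third generates three records containing first, second, and third.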
For more information about processing text with a custom delimiter, see Text Data Format with Custom Delimiters.
XML
Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
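For example, with msg as the delimiter element, the following document (illustrative) generates two records, one per msg element:

    <root>
      <msg><text>hello</text></msg>
      <msg><text>goodbye</text></msg>
    </root>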
Generated records include XML attributes and namespace declarations as fields in the record by default. You can configure the stage to include them in the record as field attributes.
You can include XPath information for each parsed XML element and XML attribute in field attributes. This also places each namespace in an xmlns record header attribute.
Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
When a record exceeds the user-defined maximum record length, the origin cannot continue processing data in the file. Records already processed from the file are passed to the pipeline. The behavior of the origin is then based on the error handling configured for the stage:
  • Discard - The origin continues processing with the next file, leaving the partially-processed file in the directory.
  • To Error - The origin continues processing with the next file. If a post-processing error directory is configured for the stage, the origin moves the partially-processed file to the error directory. Otherwise, it leaves the file in the directory.
  • Stop Pipeline - The origin stops the pipeline.
Use the XML data format to process valid XML documents. For more information about XML processing, see XML Data Format and Data Processing.
Tip: If you want to process invalid XML documents, you can try using the text data format with custom delimiters. For more information, see Processing XML Data with Custom Delimiters.

Configuring a CoAP Server Origin

Configure a CoAP Server origin to generate multiple threads for parallel processing of CoAP client requests.

  1. In the Properties panel, on the General tab, configure the following properties:
    General Property - Description
    Name - Stage name.
    Description - Optional description.
    On Record Error - Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline.
  2. On the CoAP tab, configure the following properties:
    CoAP Property - Description
    CoAP Listening Port - Listening port for the CoAP Server origin. The port number must be included in the URL that the CoAP client uses to pass data.

    For more information, see Prerequisites.

    Max Concurrent Requests - Maximum number of CoAP clients allowed to send requests to the origin at one time.

    If the origin reaches the configured maximum and receives additional requests from different clients, it processes those requests as slots become available.

    This property also determines how many threads the origin generates and uses for multithreaded processing. For more information, see Multithreaded Processing.

    Resource Name - Resource name used to pass requests to the CoAP Server origin. The resource name must be included in the URL that the CoAP client uses to pass data.

    For more information, see Prerequisites.

    Network Configuration - Network configuration properties for which you want to override the default values. To add properties, click Add and define the property name and value.

    Use the property names and values as expected by Eclipse Californium.

  3. On the Data Format tab, configure the following property:
    Data Format Property - Description
    Data Format - Type of data to be processed. Use one of the following options:
    • Binary
    • Datagram
    • Delimited
    • JSON
    • Log
    • Protobuf
    • SDC Record
    • Text
    • XML
  4. For binary data, on the Data Format tab, configure the following properties:
    Binary Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory - File name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Data Size (bytes) - Maximum number of bytes in the message. Larger messages cannot be processed or written to error.
  5. For datagram data, on the Data Format tab, configure the following properties:
    Datagram Property - Description
    Data Format - Message type:
    • collectd
    • NetFlow
    • syslog
    TypesDB File Path - Path to a user-provided types.db file. Overrides the default types.db file.

    For collectd data only.

    Convert Hi-Res Time & Interval - Converts the collectd high resolution time format interval and timestamp to UNIX time, in milliseconds.

    For collectd data only.

    Exclude Interval - Excludes the interval field from the output record.

    For collectd data only.

    Auth File - Path to an optional authentication file. Use an authentication file to accept signed and encrypted data.

    For collectd data only.

    Charset - Character encoding of the messages to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  6. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory - File name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Delimiter Format Type - Delimiter format type. Use one of the following options:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • Tab-Separated Values - File that includes tab-separated values.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    Header Line - Indicates whether a file contains a header line, and whether to use the header line.
    Max Record Length (chars) - Maximum length of a record in characters. Longer records are not read.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Delimiter Character - Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

    You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.

    Default is the pipe character ( | ).

    Escape Character - Escape character for a custom file type.
    Quote Character - Quote character for a custom file type.
    Root Field Type - Root field type to use:
    • List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
    • List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
    Lines to Skip - Lines to skip before reading data.
    Parse NULLs - Replaces the specified string constant with null values.
    NULL Constant - String constant to replace with null values.
    Charset - Character encoding of the files to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  7. For JSON data, on the Data Format tab, configure the following properties:
    JSON Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory - File name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    JSON Content - Type of JSON content. Use one of the following options:
    • Array of Objects
    • Multiple Objects
    Maximum Object Length (chars) - Maximum number of characters in a JSON object.

    Longer objects are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset - Character encoding of the files to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  8. For log data, on the Data Format tab, configure the following properties:
    Log Property - Description
    Log Format - Format of the log files. Use one of the following options:
    • Common Log Format
    • Combined Log Format
    • Apache Error Log Format
    • Apache Access Log Custom Format
    • Regular Expression
    • Grok Pattern
    • Log4j
    Max Line Length - Maximum length of a log line. The origin truncates longer lines.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Retain Original Line - Determines how to treat the original log line. Select to include the original log line as a field in the resulting record.

    By default, the original line is discarded.

    Charset - Character encoding of the files to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
    • When you select Apache Access Log Custom Format, use Apache log format strings to define the Custom Log Format.
    • When you select Regular Expression, enter the regular expression that describes the log format, and then map the fields that you want to include to each regular expression group.
    • When you select Grok Pattern, you can use the Grok Pattern Definition field to define custom grok patterns. You can define a pattern on each line.

      In the Grok Pattern field, enter the pattern to use to parse the log. You can use a predefined grok pattern or create a custom grok pattern using patterns defined in Grok Pattern Definition.

      For more information about defining grok patterns and supported grok patterns, see Defining Grok Patterns.

    • When you select Log4j, define the following properties:
      Log4j Property - Description
      On Parse Error - Determines how to handle information that cannot be parsed:
      • Skip and Log Error - Skips reading the line and logs a stage error.
      • Skip, No Error - Skips reading the line and does not log an error.
      • Include as Stack Trace - Includes information that cannot be parsed as a stack trace to the previously-read log line. The information is added to the message field for the last valid log line.
      Use Custom Log Format - Allows you to define a custom log format.
      Custom Format - Use log4j variables to define a custom log format.
  9. For protobuf data, on the Data Format tab, configure the following properties:
    Protobuf Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory - File name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Protobuf Descriptor File - Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.

    For more information about environment variables, see Data Collector Environment Configuration. For information about generating the descriptor file, see Protobuf Data Format Prerequisites.

    Message Type - The fully-qualified name for the message type to use when reading data.

    Use the following format: <package name>.<message type>.

    Use a message type defined in the descriptor file.
    Delimited Messages - Indicates if a file might include more than one protobuf message.
  10. For SDC Record data, on the Data Format tab, configure the following properties:
    SDC Record Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory - File name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

  11. For text data, on the Data Format tab, configure the following properties:
    Text Property - Description
    Max Line Length - Maximum number of characters allowed for a line. Longer lines are truncated.

    Adds a boolean field to the record to indicate if it was truncated. The field name is Truncated.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Use Custom Delimiter - Uses custom delimiters to define records instead of line breaks.
    Custom Delimiter - One or more characters to use to define records.
    Include Custom Delimiter - Includes delimiter characters in the record.
    Charset - Character encoding of the files to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  12. For XML data, on the Data Format tab, configure the following properties:
    XML Property - Description
    Compression Format - The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    Delimiter Element - Delimiter to use to generate records. Omit a delimiter to treat the entire XML document as one record. Use one of the following:
    • An XML element directly under the root element.

      Use the XML element name without surrounding angle brackets ( < > ). For example, msg instead of <msg>.

    • A simplified XPath expression that specifies the data to use.

      Use a simplified XPath expression to access data deeper in the XML document or data that requires a more complex access method.

      For more information about valid syntax, see Simplified XPath Syntax.

    Include Field XPaths - Includes the XPath to each parsed XML element and XML attribute in field attributes. Also includes each namespace in an xmlns record header attribute.

    When not selected, this information is not included in the record. By default, the property is not selected.

    Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
    Namespaces - Namespace prefix and URI to use when parsing the XML document. Define namespaces when the XML element being used includes a namespace prefix or when the XPath expression includes namespaces.

    For information about using namespaces with an XML element, see Using XML Elements with Namespaces.

    For information about using namespaces with XPath expressions, see Using XPath Expressions with Namespaces.

    Use the Add icon to add additional namespaces.

    Output Field Attributes - Includes XML attributes and namespace declarations in the record as field attributes. When not selected, XML attributes and namespace declarations are included in the record as fields.
    Note: Field attributes are automatically included in records written to destination systems only when you use the SDC RPC data format in the destination. For more information about working with field attributes, see Field Attributes.

    By default, the property is not selected.

    Max Record Length (chars) - The maximum number of characters in a record. Longer records are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset - Character encoding of the files to be processed.
    Ignore Ctrl Characters - Removes all ASCII control characters except for the tab, line feed, and carriage return characters.