HTTP Server

Supported pipeline types:
  • Data Collector

  • Data Collector Edge

The HTTP Server origin listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Use the HTTP Server origin to read high volumes of HTTP POST and PUT requests using multiple threads.

The HTTP Server origin can read requests containing messages with no compression or with the Gzip or Snappy compression format.

The HTTP Server origin can use multiple threads to enable parallel processing of data from multiple HTTP clients. Before you configure the origin, perform additional steps to configure the HTTP clients.

When you configure the HTTP Server origin, you specify the maximum number of concurrent requests to determine how many threads to use. You define the listening port, application ID, and the maximum message size. You can also configure SSL/TLS properties, including default transport protocols and cipher suites.

Tip: Data Collector provides several HTTP origins to address different needs. For a quick comparison chart to help you choose the right one, see Comparing HTTP Origins. Also, if processing Flume events as HTTP requests, you will likely find the TCP Server origin more efficient.

The origin provides request header fields as record header attributes so you can use the information in the pipeline when needed.

Prerequisites

Before you run a pipeline with the HTTP Server origin, complete the following prerequisites to configure the HTTP clients.

Send Data to the Listening Port

Configure the HTTP clients to send data to the HTTP Server listening port.

When you configure the origin, you define a listening port number where the origin listens for data. To pass data to the pipeline, configure each HTTP client to send data to a URL that includes the listening port number.

Note: No other pipelines or processes can already be bound to the listening port. The listening port can be used only by a single pipeline.
Use the following format for the URL:
<http | https>://<sdc_hostname>:<listening_port>/
The URL includes the following components:
  • <http | https> - Use https for secure HTTP connections.
  • <sdc_hostname> - The Data Collector host name.
  • <listening_port> - The port number where the origin listens for data.

For example: https://localhost:8000/
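As a concrete illustration, the following Python sketch (using the third-party requests library) posts a small JSON payload to that URL. The host, port, payload, and application ID are placeholder values taken from the examples in this section; the application ID requirement itself is described in the next section.

import requests

# Placeholder values - replace with your Data Collector host and the
# listening port configured on the HTTP Server origin.
url = "https://localhost:8000/"

# The origin also requires the application ID (see the next section).
headers = {
    "X-SDC-APPLICATION-ID": "sdc_http2kafka",
    "Content-Type": "application/json",
}

# The request body is parsed according to the data format configured
# on the Data Format tab.
response = requests.post(url, json={"id": 1, "status": "ok"}, headers=headers)
print(response.status_code)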

Include the Application ID in Requests

Configure the HTTP clients to include the HTTP Server application ID in each request.

When you configure the HTTP Server origin, you define an application ID that is used to pass requests to the origin. All messages sent to the origin must include the application ID.

Include the application ID for each client request in one of the following ways:

In request headers
Add the following information to the request header for all HTTP requests that you want the origin to process:
X-SDC-APPLICATION-ID: <application_ID>
For example:
X-SDC-APPLICATION-ID: sdc_http2kafka
In a query parameter in the URL
If you cannot configure the client request headers - for example if the requests are generated by another system - then configure each HTTP client to send data to a URL that includes the application ID in a query parameter.
To read the application ID from a query parameter, enable the Application ID in URL property when you configure the origin. Then, configure each HTTP client to include the application ID in a URL query parameter.
Use the following format for the URL:
<http | https>://<sdc_hostname>:<listening_port>/?sdcApplicationId=<application_ID>
The URL includes the following components:
  • <http | https> - Use https for secure HTTP connections.
  • <sdc_hostname> - The Data Collector host name.
  • <listening_port> - The port number where the origin listens for data.
  • <application_ID> - The application ID defined for the HTTP Server origin.
For example: https://localhost:8000/?sdcApplicationId=sdc_http2kafka
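When the Application ID in URL property is enabled, a client that cannot set custom request headers can pass the application ID as a query parameter instead. The following hedged Python sketch reuses the placeholder host, port, and application ID from the examples above and sends a PUT request with a sample delimited record:

import requests

# Same listening port as before; the application ID travels in the
# sdcApplicationId query parameter instead of a request header.
url = "https://localhost:8000/"
params = {"sdcApplicationId": "sdc_http2kafka"}

response = requests.put(
    url,
    params=params,                      # appended as ?sdcApplicationId=...
    data="2024-01-01T00:00:00Z,42\n",   # e.g. one delimited record
    headers={"Content-Type": "text/csv"},
)
print(response.status_code)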

Multithreaded Processing

The HTTP Server origin can use multiple threads to perform parallel processing based on the Max Concurrent Requests property.

Not valid in Data Collector Edge pipelines. In Data Collector Edge pipelines, the HTTP Server origin uses only one thread to process data. The Max Concurrent Requests property is ignored.

When you start a multithreaded pipeline, the origin creates the number of threads specified in the Max Concurrent Requests property. Each thread generates a batch from an incoming request and passes the batch to an available pipeline runner.

A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin. Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.

Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order that batches are written to destinations is not ensured.

For example, say you set the Max Concurrent Requests property to 5. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing. In the batch, HTTP Server includes only the HTTP POST and PUT requests with the specified Application ID.

Each pipeline runner performs the processing associated with the rest of the pipeline. After a batch is written to pipeline destinations, the pipeline runner becomes available for another batch of data. Each batch is processed and written as quickly as possible, independent from other batches processed by other pipeline runners, so batches may be written to destinations in a different order than they are read.

At any given moment, the five pipeline runners can each process a batch, so this multithreaded pipeline processes up to five batches at a time. When incoming data slows, the pipeline runners sit idle, available for use as soon as the data flow increases.

For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
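To observe the effect of the Max Concurrent Requests property from the client side, you can issue several requests in parallel. The sketch below is purely illustrative and assumes the placeholder setup from the earlier examples; with Max Concurrent Requests set to 5, up to five of these requests are handled at once and the rest are processed as slots become available.

import concurrent.futures
import requests

URL = "https://localhost:8000/"
HEADERS = {
    "X-SDC-APPLICATION-ID": "sdc_http2kafka",
    "Content-Type": "application/json",
}

def send(record_id):
    # Each request becomes a batch handled by one of the origin's threads.
    resp = requests.post(URL, json={"id": record_id}, headers=HEADERS)
    return record_id, resp.status_code

# Ten clients posting concurrently; with Max Concurrent Requests = 5,
# the origin serves five at a time and handles the rest as slots free up.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    for record_id, status in pool.map(send, range(10)):
        print(record_id, status)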

Data Formats

The HTTP Server origin processes data differently based on the data format that you select.

In Data Collector Edge pipelines, the origin supports only the Binary, Delimited, JSON, and SDC Record data formats.

The HTTP Server origin processes data formats as follows:

Avro
Generates a record for every Avro record. The origin includes the Avro schema in the avroSchema record header attribute. It also includes a precision and scale field attribute for each Decimal field.
The origin expects each file to contain the Avro schema and uses the schema to process the Avro data.
The origin reads files compressed by Avro-supported compression codecs without requiring additional configuration.
Binary
Generates a record with a single byte array field at the root of the record.
When the data exceeds the user-defined maximum data size, the origin cannot process the data. Because the record is not created, the origin cannot pass the record to the pipeline to be written as an error record. Instead, the origin generates a stage error.
Datagram
Generates a record for every message. The origin can process collectd messages, NetFlow 5 and NetFlow 9 messages, and the following types of syslog messages:
  • RFC 5424
  • RFC 3164
  • Non-standard common messages, such as RFC 3339 dates with no version digit
When processing NetFlow messages, the stage generates different records based on the NetFlow version. When processing NetFlow 9, the records are generated based on the NetFlow 9 configuration properties. For more information, see NetFlow Data Processing.
Delimited
Generates a record for each delimited line. You can use the following delimited format types:
  • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
  • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
  • MS Excel CSV - Microsoft Excel comma-separated file.
  • MySQL CSV - MySQL comma-separated file.
  • Tab-Separated Values - File that includes tab-separated values.
  • PostgreSQL CSV - PostgreSQL comma-separated file.
  • PostgreSQL Text - PostgreSQL text file.
  • Custom - File that uses user-defined delimiter, escape, and quote characters.
  • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
You can use a list or list-map root field type for delimited data, and optionally include field names from a header line, when available. For more information about the root field types, see Delimited Data Root Field Type.
When using a header line, you can enable handling records with additional columns. The additional columns are named using a custom prefix and integers in sequential increasing order, such as _extra_1, _extra_2. When you disallow additional columns, records that include additional columns are sent to error.
You can also replace a string constant with null values.
When a record exceeds the maximum record length defined for the stage, the stage processes the object based on the error handling configured for the stage.
JSON
Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array, as shown in the sketch after this list.
When an object exceeds the maximum object length defined for the origin, the origin processes the object based on the error handling configured for the stage.
Protobuf
Generates a record for every protobuf message. By default, the origin assumes messages contain multiple protobuf messages.
Protobuf messages must match the specified message type and be described in the descriptor file.
When the data for a record exceeds 1 MB, the origin cannot continue processing data in the message. The origin handles the message based on the stage error handling property and continues reading the next message.
For information about generating the descriptor file, see Protobuf Data Format Prerequisites.
SDC Record
Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
For error records, the origin provides the original record as read from the origin in the original pipeline, as well as error information that you can use to correct the record.
When processing error records, the origin expects the error file names and contents as generated by the original pipeline.
XML
Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
Generated records include XML attributes and namespace declarations as fields in the record by default. You can configure the stage to include them in the record as field attributes.
You can include XPath information for each parsed XML element and XML attribute in field attributes. This also places each namespace in an xmlns record header attribute.
Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
When a record exceeds the user-defined maximum record length, the origin skips the record and continues processing with the next record. It sends the skipped record to the pipeline for error handling.
Use the XML data format to process valid XML documents. For more information about XML processing, see Reading and Processing XML Data.
Tip: If you want to process invalid XML documents, you can try using the text data format with custom delimiters. For more information, see Processing XML Data with Custom Delimiters.
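Referring back to the JSON entry above, the origin can parse either of the following request-body shapes, depending on whether the JSON Content property on the Data Format tab is set to Multiple Objects or Array of Objects. This is a hedged sketch with made-up payloads and the placeholder URL and application ID from the earlier examples:

import requests

URL = "https://localhost:8000/"
HEADERS = {
    "X-SDC-APPLICATION-ID": "sdc_http2kafka",
    "Content-Type": "application/json",
}

# Multiple Objects: one JSON object after another in the request body.
multiple_objects = '{"id": 1}\n{"id": 2}\n{"id": 3}'

# Array of Objects: a single JSON array wrapping the objects.
array_of_objects = '[{"id": 1}, {"id": 2}, {"id": 3}]'

# Send whichever shape matches the JSON Content property.
requests.post(URL, data=multiple_objects, headers=HEADERS)
requests.post(URL, data=array_of_objects, headers=HEADERS)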

Record Header Attributes

The HTTP Server origin creates record header attributes that include information about the requested URL.

You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.

The HTTP Server origin creates the following record header attributes:
  • method - The HTTP method for the request, such as GET, POST, or DELETE.
  • path - The path of the URL.
  • queryString - The parameters of the URL that come after the path. Can be empty if there are no query parameters on the URL.
  • remoteHost - The name of the client or proxy that made the request.

The HTTP Server origin also includes HTTP request header fields, such as Host or Content-Type, in records as record header attributes. The attribute names match the original HTTP request header field names.
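For example, a request like the following would yield the method, path, queryString, and remoteHost attributes, plus one attribute per HTTP request header field; downstream stages can then read them with the record:attribute function. The attribute values shown in the comments are illustrative, not guaranteed output, and the URL path and payload are placeholders.

import requests

# POST https://localhost:8000/metrics?source=sensor42
resp = requests.post(
    "https://localhost:8000/metrics",
    params={"source": "sensor42"},
    json={"temp": 21.5},
    headers={"X-SDC-APPLICATION-ID": "sdc_http2kafka"},
)

# Resulting record header attributes (illustrative):
#   method       -> POST
#   path         -> /metrics
#   queryString  -> source=sensor42
#   remoteHost   -> name of the client or proxy that made the request
#   Content-Type -> application/json   (copied from the request header)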

Configuring an HTTP Server Origin

Configure an HTTP Server origin to generate multiple threads for parallel processing of HTTP POST and PUT requests.

  1. In the Properties panel, on the General tab, configure the following properties:
    General Property Description
    Name Stage name.
    Description Optional description.
    On Record Error Error record handling for the stage:
    • Discard - Discards the record.
    • Send to Error - Sends the record to the pipeline for error handling.
    • Stop Pipeline - Stops the pipeline.
  2. On the HTTP tab, configure the following properties:
    HTTP Property Description
    HTTP Listening Port Listening port for the HTTP Server origin. The port number must be included in the URL that the HTTP client uses to pass data.
    Note: No other pipelines or processes can already be bound to the listening port. The listening port can be used only by a single pipeline.

    For more information, see Send Data to the Listening Port.

    Max Concurrent Requests Maximum number of HTTP clients allowed to send messages to the origin at one time.

    If the origin reaches the configured maximum and receives additional requests from different clients, it processes those requests as slots become available.

    This property also determines how many threads the origin generates and uses for multithreaded processing. For more information, see Multithreaded Processing.

    Not valid in Data Collector Edge pipelines. In Data Collector Edge pipelines, the HTTP Server origin uses only one thread to process data, and this property is ignored.

    Application ID Application ID used to pass requests to the HTTP Server origin. The application ID must be included in the header of the HTTP request or in a query parameter of the URL that the HTTP client uses to pass data.

    For more information, see Include the Application ID in Requests.

    Application ID in URL Enables reading the application ID from the URL. Use when HTTP clients include the application ID in the URL query parameter instead of in the request header.

    For more information, see Include the Application ID in Requests.

    Max Request Size (MB) Maximum size of the request body that the origin can process.
  3. On the Data Format tab, configure the following property:
    Data Format Property Description
    Data Format Type of data to be processed. Use one of the following data formats:
    • Avro
    • Binary
    • Datagram
    • Delimited
    • JSON
    • Protobuf
    • SDC Record
    • XML

    In Data Collector Edge pipelines, the origin supports only the Binary, Delimited, JSON, and SDC Record data formats.

  4. For binary data, on the Data Format tab, configure the following properties:
    Binary Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.

    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.

    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Max Data Size (bytes) Maximum number of bytes in the message. Larger messages cannot be processed or written to error.
  5. For datagram data, on the Data Format tab, configure the following properties:
    Datagram Properties Description
    Datagram Packet Format Packet format of the data:
    • collectd
    • NetFlow
    • syslog
    • Raw/separated data
    TypesDB File Path Path to a user-provided types.db file. Overrides the default types.db file.

    For collectd data only.

    Convert Hi-Res Time & Interval Converts the collectd high resolution time format interval and timestamp to UNIX time, in milliseconds.

    For collectd data only.

    Exclude Interval Excludes the interval field from output record.

    For collectd data only.

    Auth File Path to an optional authentication file. Use an authentication file to accept signed and encrypted data.

    For collectd data only.

    Record Generation Mode Determines the type of values to include in the record. Select one of the following options:
    • Raw Only
    • Interpreted Only
    • Both Raw and Interpreted

    For NetFlow 9 data only.

    Max Templates in Cache The maximum number of templates to store in the template cache. For more information about templates, see Caching NetFlow 9 Templates.

    Default is -1 for an unlimited cache size.

    For NetFlow 9 data only.

    Template Cache Timeout (ms) The maximum number of milliseconds to cache an idle template. Templates unused for more than the specified time are evicted from the cache. For more information about templates, see Caching NetFlow 9 Templates.

    Default is -1 for caching templates indefinitely.

    For NetFlow 9 data only.

    Charset Character encoding of the messages to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  6. For delimited data, on the Data Format tab, configure the following properties:
    Delimited Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.

    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.

    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Delimiter Format Type Delimiter format type. Use one of the following options:
    • Default CSV - File that includes comma-separated values. Ignores empty lines in the file.
    • RFC4180 CSV - Comma-separated file that strictly follows RFC4180 guidelines.
    • MS Excel CSV - Microsoft Excel comma-separated file.
    • MySQL CSV - MySQL comma-separated file.
    • Tab-Separated Values - File that includes tab-separated values.
    • PostgreSQL CSV - PostgreSQL comma-separated file.
    • PostgreSQL Text - PostgreSQL text file.
    • Custom - File that uses user-defined delimiter, escape, and quote characters.
    • Multi Character Delimited - File that uses multiple user-defined characters to delimit fields and lines, and single user-defined escape and quote characters.
    Header Line Indicates whether a file contains a header line, and whether to use the header line.
    Allow Extra Columns When processing data with a header line, allows processing records with more columns than exist in the header line.
    Extra Column Prefix Prefix to use for any additional columns. Extra columns are named using the prefix and sequential increasing integers as follows: <prefix><integer>.

    For example, _extra_1. Default is _extra_.

    Max Record Length (chars) Maximum length of a record in characters. Longer records are not read.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Delimiter Character Delimiter character for a custom delimiter format. Select one of the available options or use Other to enter a custom character.

You can enter a Unicode control character using the format \uNNNN, where N is a hexadecimal digit from the numbers 0-9 or the letters A-F. For example, enter \u0000 to use the null character as the delimiter or \u2028 to use a line separator as the delimiter.

    Default is the pipe character ( | ).

    Multi Character Field Delimiter Characters that delimit fields for multi-character delimiter format.

    Default is two pipe characters (||).

    Multi Character Line Delimiter Characters that delimit lines or records for multi-character delimiter format.

    Default is the newline character (\n).

    Escape Character Escape character for a custom or multi-character delimiter format.
    Quote Character Quote character for a custom or multi-character delimiter format.
    Enable Comments Allows commented data to be ignored for custom delimiter format.
    Comment Marker Character that marks a comment when comments are enabled for custom delimiter format.
    Ignore Empty Lines Allows empty lines to be ignored for custom delimiter format.
    Root Field Type Root field type to use:
    • List-Map - Generates an indexed list of data. Enables you to use standard functions to process data. Use for new pipelines.
    • List - Generates a record with an indexed list with a map for header and value. Requires the use of delimited data functions to process data. Use only to maintain pipelines created before 1.1.0.
    Lines to Skip Number of lines to skip before reading data.
    Parse NULLs Replaces the specified string constant with null values.
    NULL Constant String constant to replace with null values.
    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  7. For JSON data, on the Data Format tab, configure the following properties:
    JSON Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.

    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.

    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    JSON Content Type of JSON content. Use one of the following options:
    • Array of Objects
    • Multiple Objects
    Maximum Object Length (chars) Maximum number of characters in a JSON object.

    Longer objects are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  8. For protobuf data, on the Data Format tab, configure the following properties:
    Protobuf Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Protobuf Descriptor File Descriptor file (.desc) to use. The descriptor file must be in the Data Collector resources directory, $SDC_RESOURCES.

    For information about generating the descriptor file, see Protobuf Data Format Prerequisites. For more information about environment variables, see Data Collector Environment Configuration.

    Message Type The fully-qualified name for the message type to use when reading data.

    Use the following format: <package name>.<message type>.

    Use a message type defined in the descriptor file.
    Delimited Messages Indicates if a message might include more than one protobuf message.
  9. For SDC Record data, on the Data Format tab, configure the following properties:
    SDC Record Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.

    In Data Collector Edge pipelines, the origin only supports uncompressed and compressed files, not archive or compressed archive files.

    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

  10. For XML data, on the Data Format tab, configure the following properties:
    XML Property Description
    Compression Format The compression format of the files:
    • None - Processes only uncompressed files.
    • Compressed File - Processes files compressed by the supported compression formats.
    • Archive - Processes files archived by the supported archive formats.
    • Compressed Archive - Processes files archived and compressed by the supported archive and compression formats.
    File Name Pattern within Compressed Directory For archive and compressed archive files, file name pattern that represents the files to process within the compressed directory. You can use UNIX-style wildcards, such as an asterisk or question mark. For example, *.json.

    Default is *, which processes all files.

    Delimiter Element
    Delimiter to use to generate records. Omit a delimiter to treat the entire XML document as one record. Use one of the following:
    • An XML element directly under the root element.

Use the XML element name without surrounding angle brackets ( < > ). For example, msg instead of <msg>.

    • A simplified XPath expression that specifies the data to use.

      Use a simplified XPath expression to access data deeper in the XML document or data that requires a more complex access method.

      For more information about valid syntax, see Simplified XPath Syntax.

    Include Field XPaths Includes the XPath to each parsed XML element and XML attribute in field attributes. Also includes each namespace in an xmlns record header attribute.

    When not selected, this information is not included in the record. By default, the property is not selected.

    Note: Field attributes and record header attributes are written to destination systems automatically only when you use the SDC RPC data format in destinations. For more information about working with field attributes and record header attributes, and how to include them in records, see Field Attributes and Record Header Attributes.
    Namespaces Namespace prefix and URI to use when parsing the XML document. Define namespaces when the XML element being used includes a namespace prefix or when the XPath expression includes namespaces.

    For information about using namespaces with an XML element, see Using XML Elements with Namespaces.

    For information about using namespaces with XPath expressions, see Using XPath Expressions with Namespaces.

    Using simple or bulk edit mode, click the Add icon to add additional namespaces.

    Output Field Attributes Includes XML attributes and namespace declarations in the record as field attributes. When not selected, XML attributes and namespace declarations are included in the record as fields.
    Note: Field attributes are automatically included in records written to destination systems only when you use the SDC RPC data format in the destination. For more information about working with field attributes, see Field Attributes.

    By default, the property is not selected.

    Max Record Length (chars)

    The maximum number of characters in a record. Longer records are diverted to the pipeline for error handling.

    This property can be limited by the Data Collector parser buffer size. For more information, see Maximum Record Size.

    Charset Character encoding of the files to be processed.
    Ignore Control Characters Removes all ASCII control characters except for the tab, line feed, and carriage return characters.
  11. To use SSL/TLS, click the TLS tab and configure the following properties:

In Data Collector Edge pipelines, only the Use TLS and Keystore File properties are valid. After enabling TLS, enter an absolute path to a keystore file that uses the PEM format. The origin always uses the default protocol and cipher suites and ignores all other TLS properties.

    TLS Property Description
    Use TLS

    Enables the use of TLS.

    Keystore File Path to the keystore file. Enter an absolute path to the file or a path relative to the Data Collector resources directory: $SDC_RESOURCES.

    For more information about environment variables, see Data Collector Environment Configuration.

    By default, no keystore is used.

    In Data Collector Edge pipelines, enter an absolute path to the file that uses the PEM format.

    Keystore Type Type of keystore to use. Use one of the following types:
    • Java Keystore File (JKS)
    • PKCS #12 (p12 file)

    Default is Java Keystore File (JKS).

    Keystore Password Password to the keystore file. A password is optional, but recommended.
    Tip: To secure sensitive information such as passwords, you can use runtime resources or credential stores.
    Keystore Key Algorithm The algorithm used to manage the keystore.

    Default is SunX509.

    Use Default Protocols Determines the transport layer security (TLS) protocol to use. The default protocol is TLSv1.2. To use a different protocol, clear this option.
    Transport Protocols The TLS protocols to use. To use a protocol other than the default TLSv1.2, click the Add icon and enter the protocol name. You can use simple or bulk edit mode to add protocols.
    Note: Older protocols are not as secure as TLSv1.2.
    Use Default Cipher Suites Determines the default cipher suite to use when performing the SSL/TLS handshake. To use a different cipher suite, clear this option.
    Cipher Suites Cipher suites to use. To use a cipher suite that is not a part of the default set, click the Add icon and enter the name of the cipher suite. You can use simple or bulk edit mode to add cipher suites.

    Enter the Java Secure Socket Extension (JSSE) name for the additional cipher suites that you want to use.
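On the client side, enabling TLS on the origin typically means pointing the HTTP client at the https URL and, for a self-signed certificate, supplying the certificate to verify against. A minimal hedged sketch, assuming a placeholder host name sdc.example.com and that the server certificate has been exported to a local ca.pem file:

import requests

# verify= points at the certificate (or CA bundle) that can validate the
# certificate in the origin's keystore; use verify=True for a CA-signed cert.
resp = requests.post(
    "https://sdc.example.com:8000/",
    json={"id": 1},
    headers={"X-SDC-APPLICATION-ID": "sdc_http2kafka"},
    verify="ca.pem",
)
print(resp.status_code)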