Deploying a Connector in Transformation Hub (CTH) (Standalone ArcMC)

A Connector in Transformation Hub (CTH) moves the security event normalization, categorization, and enrichment processing performed by connectors into the Docker container environment of Transformation Hub, reducing the work done by the Collector.

Transformation Hub can have a maximum of 50 CTHs.

Note: CTHs cannot be configured with SecureData encryption. By default, CTH is configured with TLS + CA.

To update the CTH port range:

  1. Open logger.properties for editing.

    Create the file if it does not exist.

    /opt/arcmc/userdata/arcmc/logger.properties
    chown <non-root user>:<non-root user> logger.properties
    chmod 660 logger.properties
  2. Add the following information to logger.properties.

    # ============================================================
    # CTH port range
    # ============================================================
    configuration.cth.end.port=39050

    For Transformation Hub 3.3 and later use:

    configuration.cth.end.port.post.th.32=32150
  3. Restart the web process.
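
The whole sequence can also be run from a shell. The following is a minimal sketch based on the steps above; the arcmc user name is an example (use your ArcMC non-root user), and the property shown is the pre-3.3 key, so substitute the post.th.32 key for Transformation Hub 3.3 and later.

    # Create logger.properties if it does not exist, then set ownership and permissions.
    # The 'arcmc' user is an example value; use your ArcMC non-root user.
    touch /opt/arcmc/userdata/arcmc/logger.properties
    chown arcmc:arcmc /opt/arcmc/userdata/arcmc/logger.properties
    chmod 660 /opt/arcmc/userdata/arcmc/logger.properties

    # Append the CTH port range property.
    {
      echo '# ============================================================'
      echo '# CTH port range'
      echo '# ============================================================'
      echo 'configuration.cth.end.port=39050'
    } >> /opt/arcmc/userdata/arcmc/logger.properties

    # Finally, restart the ArcMC web process (step 3) using your
    # installation's documented restart procedure so the new value takes effect.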

To deploy a CTH:

Note: To use the Global ID feature, Generator ID Manager must be enabled in ArcMC so that a Generator ID can be set on the CTH.
  1. Click Dashboard > Deployment View.
  2. In the Transformation Hub column, click the managed Transformation Hub, then click the + icon.
  3. On the Deploy CTH dialog, in CTH Name, specify a name for the CTH.

    The name must be fewer than 256 characters.

  4. Under Acknowledgment mode, click the down arrow, then select the acknowledgment mode for this CTH (none, leader, or all).

    The mode you select affects the safety of stored events in case of immediate system failure.

    Acknowledgment mode descriptions:

    none (Acknowledgment off)

    The producer will not wait for any acknowledgment from the server. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client generally will not know of any failures). The offset given back for each record will always be set to -1.

    leader (Leader mode on)

    The leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case, if the leader fails immediately after acknowledging the record but before the followers have replicated it, the record will be lost.

    all (All acknowledgments on)

    The leader will wait for the full set of in-sync replicas to acknowledge the record, guaranteeing that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee and is equivalent to the acks=-1 setting.

    An illustrative sketch of the equivalent Kafka producer acks settings follows this procedure.

  5. Under Destination Topics, click the down arrow, then select one or more destination topics (CEF, Avro, or binary) for the CTH.
  6. Select the corresponding ESM version. This is required for the CTH to support Global ID when sending events to ESM 7.2.
  7. Click Deploy.

    Note: Please allow a few minutes after deploying or updating the CTH for the new values to be displayed.

    The CTH deployment job status can be viewed in Job Manager.
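
For reference, the three acknowledgment modes above correspond to the standard Kafka producer acks settings: none maps to acks=0, leader to acks=1, and all to acks=all (the same as acks=-1, as noted above). The sketch below is an illustration only and is not part of CTH deployment: it shows the equivalent setting passed to a standalone Kafka console producer. The Kafka installation path, broker host, and topic name are example values, and older Kafka client versions use --broker-list instead of --bootstrap-server.

    # Illustration only: the same 'acks' semantics with a standalone Kafka
    # console producer (path, host, and topic are example values).
    /opt/kafka/bin/kafka-console-producer.sh \
        --bootstrap-server th-host.example.com:9092 \
        --topic th-cef \
        --producer-property acks=all    # none -> acks=0, leader -> acks=1, all -> acks=all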

Once deployed, the CTH appears in Node Management on the Connectors tab, and in the Topology and Deployment View drill-down under the source topic.

Note: Destination topics must always be grouped the same way across multiple CTHs. For example, if a CTH sends events to both the th-cef and th-esm topics, then any other CTH that sends events to either of these topics must also send events to the other, or events will be duplicated.
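
To verify which destination topics exist on the Transformation Hub before grouping CTHs, the standard Kafka topic listing tool can be used. This is a sketch only, assuming shell access to a host with the Kafka client tools and network access to the Transformation Hub Kafka broker; the installation path and broker address are example values.

    # List the topics available on the Transformation Hub Kafka broker
    # (path and broker address are example values).
    /opt/kafka/bin/kafka-topics.sh \
        --bootstrap-server th-host.example.com:9092 \
        --list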