8.3 Deploying ArcSight Fusion

This section provides information about using the CDF Management Portal to deploy ArcSight Fusion.

8.3.1 Configuring the Cluster

  1. Open a new tab in a supported web browser.

  2. Specify the URL for the CDF Management Portal:

    https://<Fusion-server>:3000

    NOTE: Use port 3000 when you are setting up the CDF for the first time. After the initial setup, use port 5443 to access the CDF Management Portal.

    Use the fully qualified domain name of the host that you specified in the Connection step during the CDF configuration. Usually, this is the master node’s FQDN.
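    (Optional) Before logging in, you can confirm from a shell that the host name resolves and that the portal port is reachable. This is a minimal sketch; the host name is a placeholder for your master node FQDN:

      # Check DNS resolution of the CDF Management Portal host
      getent hosts fusion-server.example.com

      # Check that the portal responds on port 3000 (use 5443 after the initial setup)
      curl -k -o /dev/null -w '%{http_code}\n' https://fusion-server.example.com:3000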

  3. Log in to the CDF Management Portal with the credentials of the administrative user that you provided during installation.

  4. Select the metadata file version in the Version field and click Next.

  5. Read the license agreement and select I agree.

  6. Click Next.

  7. On the Capabilities page, select the following and click Next:

    • ArcSight Fusion

    • Analytics

  8. On the Database page, retain the default values and click Next.

  9. On the Deployment Size page, select the required cluster and click Next.

    1. (Conditional) For worker node configuration, select Medium Cluster.

  10. On the Connection page, an external host name is automatically populated. This is resolved from the virtual IP (VIP) specified during the CDF installation (--ha-virtual-ip parameter). Confirm that the VIP is correct and then click Next.
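    (Optional) You can confirm from a shell that the populated external host name resolves to the virtual IP. This is a minimal sketch; the host name is a placeholder:

      # The address returned should match the VIP passed as --ha-virtual-ip
      getent hosts fusion-cluster.example.com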

  11. (Conditional) If you want to set up high availability, select Make master highly available and add at least two additional master nodes on the Master High Availability page.

    IMPORTANT: If you do not configure high availability in this step, you cannot add master nodes or configure high availability after installation.

    On the Add Master Node page, specify the following details:

    • Host: Fully qualified domain name (FQDN) of the node you are adding.

    • Ignore Warnings: If selected, the CDF Management Portal ignores any warnings that occur during the pre-checks on the server. If deselected, the add-node process stops and a window displays any warning messages. We recommend starting with Ignore Warnings deselected so that you can view the warnings, decide whether to rectify or ignore each one, clear the warning dialog, and then click Save again with the option selected so that the process does not stop.

    • User Name: The user name used to log in to the node.

    • Verify Mode: Select Password or Key-based as the verification mode, and then either enter your password or upload a private key file. If you choose Key-based, enter the user name first and then upload the private key file used to connect to the node.

    • Thinpool Device: (Conditional) Enter the Thinpool Device path that you configured for the master node. For example: /dev/mapper/docker-thinpool. You must have already set up the Docker thin pool on all cluster nodes that use thinpools, as described in the CDF Planning Guide.

    • flannel IFace: (Conditional) Enter the flannel IFace value if the master node has more than one network adapter. This must be a single IPv4 address or the name of an existing network interface, and it is used for Docker inter-host communication (see the sketch after this list).

    Click Save. Repeat these steps for each additional master node.
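    (Optional) If you are not sure which value to use for flannel IFace, the following sketch lists the node's IPv4 interfaces; run it on the master node you are adding and choose the interface that carries the cluster (Docker inter-host) traffic:

      # List IPv4 addresses together with their interface names
      ip -4 -o addr show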

  12. Click Next.

  13. (Conditional) For a multi-node deployment, add worker nodes on the Add Worker Node page. To add a worker node, click + (Add), enter the required configuration information, and then click Save. Repeat this process for each worker node.

  14. Click Next.

  15. (Conditional) If you want to run the worker node on the master node, select Allow suite workload to be deployed on the master node and then click Next.

    NOTE: Before selecting this option, ensure that the master node meets the system requirements specified for the worker node.
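    (Optional) The following sketch shows one way to check the master node's resources from a shell before selecting this option; the /opt path is an assumption about where CDF is installed:

      # Report CPU count, available memory, and free disk space
      nproc
      free -h
      df -h /opt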

  16. To configure each NFS volume, complete the following steps:

    1. Navigate to the File Storage page.

    2. For File System Type, select Self-Hosted NFS.

      Self-hosted NFS refers to the external NFS that you created while preparing the environment for CDF installation.

    3. For File Server, specify the IP address or FQDN of the NFS server.

    4. For Exported Path, specify the following paths for the NFS volumes:

      NFS Volume            File Path
      arcsight-volume       <NFS_ROOT_FOLDER>/arcsight-vol
      itom-vol-claim        <NFS_ROOT_FOLDER>/itom-vol
      db-single-vol         <NFS_ROOT_FOLDER>/db-single-vol
      itom-logging-vol      <NFS_ROOT_FOLDER>/itom-logging-vol
      db-backup-vol         <NFS_ROOT_FOLDER>/db-backup-vol

    5. Click Validate.

    Ensure that you have validated all NFS volumes successfully before continuing with the next step.
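    (Optional) If validation fails, you can check the exports directly from a cluster node. This is a minimal sketch; the server name is a placeholder and <NFS_ROOT_FOLDER> is the root folder you created on the NFS server:

      # List the paths exported by the NFS server
      showmount -e nfs-server.example.com

      # Test-mount one exported path, then unmount it
      mkdir -p /mnt/nfs-test
      mount -t nfs nfs-server.example.com:/<NFS_ROOT_FOLDER>/arcsight-vol /mnt/nfs-test
      umount /mnt/nfs-test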

  17. Click Next.

  18. To start deploying master and worker nodes, click Yes in the Confirmation dialog box.

  19. Continue with uploading images to the local registry.

8.3.2 Uploading Images to the Local Registry

To deploy Fusion, CDF requires the following images to be available in the local Docker registry:

  • fusion-x.x.x.x

  • analytics-x.x.x.x

You must upload those images to the local registry as follows:

  1. Launch a terminal session, then log in to the master node as root or a sudo user.

  2. Change to the following directory:

    cd /<cdf_installer_directory>/scripts/

    For example:

    cd /opt/fusion-installer-x.x.x.x/installers/cdf-x.x.x-x.x.x.x/scripts/

  3. Upload the Analytics images to the local registry:

    ./uploadimages.sh -d <download_directory> -u registry-admin -p <cdf_password>

    Example:

    ./uploadimages.sh -d /<download_directory>/fusion-installer-x.x.x.x/suite_images/analytics-x.x.x.x -u registry-admin -p <cdf_password>

  4. Upload the Fusion image to the local registry:

    ./uploadimages.sh -d <download_directory> -u registry-admin -p <cdf_password>

    Example:

    ./uploadimages.sh -d /<download_directory>/fusion-installer-x.x.x.x/suite_images/fusion-x.x.x.x -u registry-admin -p <cdf_password>
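    (Optional) You can verify that the images reached the local registry by querying the Docker registry API. This is a sketch under the assumption that the CDF local registry listens on port 5000 of the master node; adjust the host and port for your environment:

      # List the repositories currently stored in the local registry
      curl -k -u registry-admin:<cdf_password> https://<master-node-FQDN>:5000/v2/_catalog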

  5. Continue with deploying Fusion.

8.3.3 Deploying Fusion

After you upload the images to the local registry, Container Deployment Foundation (CDF) uses these images to deploy the respective software in the cluster.

  1. Switch back to the CDF Management Portal.

  2. On the Download Images page, click Next; all the required packages are already downloaded and uncompressed.

  3. After the Check Image Availability page displays All images are available in the registry, click Next.

    If the page displays a missing image error, upload the missing images as described in Section 8.3.2, Uploading Images to the Local Registry.

  4. After the Deployment of Infrastructure Nodes page displays the status of the node in green, click Next.

    The deployment process can take up to 15 minutes to complete.

  5. (Conditional) If any of the nodes show a red icon on the Deployment of Infrastructure Nodes page, click the retry icon.

    IMPORTANT: CDF might display the red icon if the process times out for a node. Because the retry operation executes the script again on that node, ensure that you click retry only once.

  6. After the Deployment of Infrastructure Services page shows that all the services are deployed and their status is green, click Next.

    The deployment process can take up to 15 minutes to complete.

    (Optional) To monitor the progress of service deployment, complete the following steps:

    1. Launch a terminal session.

    2. Log in to the master node as root.

    3. Execute the command:

      watch 'kubectl get pods --all-namespaces'
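      To watch only the pods that are not yet running, you can use a field selector instead, for example:

      # Show pods in any namespace that are not in the Running phase
      kubectl get pods --all-namespaces --field-selector=status.phase!=Running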

  7. Click Next.

  8. Configure the pre-deployment settings in the CDF Management Portal by making the following changes under ANALYTICS:

    • In the Cluster Configuration section, select 0 from the Hercules Search Engine Replicas drop-down list.

      NOTE: By default, the value for Hercules Search Engine Replicas is 1.

    • In the Vertica Configuration section, disable Vertica.

    • In the Single Sign-on Configuration section, specify the values for Client ID and Client Secret.

  9. To finish the deployment, click Next.

  10. Copy the Management portal link displayed on the Configuration Complete page.

    Some of the pods on the Configuration Complete page might remain in a pending status until the product labels are applied on worker nodes.
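    (Optional) To see which labels are currently applied to the nodes, you can run the following command on the master node:

      # Show each node together with its labels
      kubectl get nodes --show-labels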

  11. (Conditional) For a high availability, multi-master deployment, manually restart the keepalive process after the deployment has completed:

    1. Log in to the master node.

    2. Change to the following directory:

      cd /<K8S_HOME>/bin/

      For example:

      cd /opt/arcsight/kubernetes/bin/

    3. Run the following script:

      ./start_lb.sh
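      (Optional) After the script completes, you can confirm that the virtual IP is active on one of the master nodes; the address below is a placeholder for the VIP passed as --ha-virtual-ip:

        # The VIP should appear on exactly one master node
        ip addr show | grep '<virtual-IP>'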

  12. Continue to the post-installation steps.