Running the ArcSight Platform OMT Upgrade (Azure)

This release requires upgrading the underlying infrastructure of the ArcSight Platform to version 2022.05 (the new OPTIC Management Toolkit, abbreviated OMT). This process can take a significant amount of time, depending on the number of master and worker nodes that need to be updated, so schedule the upgrade for a time when the environment is least busy.

To upgrade OMT on Azure:

  1. Log in to the secure network location where you stored the ArcSight Platform Cloud Installers.
  2. Unzip the arcsight-platform-cloud-installer-<VERSION>.zip OMT installer file with the following commands:
    cd /tmp/upgrade-download
    unzip arcsight-platform-cloud-installer-<VERSION>.zip
  3. Upload new images to Azure Container Registry (ACR).
    1. Go to the Azure management portal, open ACR, and click Access keys > Login Server. A username and password are required to upload images. For more information, see Uploading Product Images.
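
      If you prefer the command line, you can retrieve the same values with the Azure CLI. This is only a convenience sketch: it assumes the Azure CLI is installed, the registry's admin user is enabled, and <REGISTRY_NAME> and <RESOURCE_GROUP> are placeholders for your own names.

      az acr show --name <REGISTRY_NAME> --resource-group <RESOURCE_GROUP> --query loginServer --output tsv
      az acr credential show --name <REGISTRY_NAME> --query "{username:username,password:passwords[0].value}"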

    2. Change to the deployer scripts directory:

      cd cdf-deployer/scripts/
    3. Run uploadimages.sh with the credentials from ACR:

      ./uploadimages.sh -o $(kubectl get cm -n core base-configmap --output=jsonpath={.data.REGISTRY_ORGNAME}) -F arcsight-platform-cloud-installer-<VERSION>.zip/cdf-byok-images.tar -c 4

      To increase the speed of the upload, adjust the value of the -c parameter (4 in the command above) up to half the number of CPU cores on the node (the default value is 8).
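
      For example, a quick way to check how many CPU cores the node has before choosing the -c value (an optional convenience check, not part of the documented procedure):

      nproc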

  4. Run the upgrade using the following steps.
    1. Ensure all pods in the core namespace are Running or Completed.
      kubectl get pods -n core

      Example output:

      cdf-apiserver-7965dcf689-4qvkx                         2/2     Running     0          145m
      fluentd-7q4dw                                          2/2     Running     0          136m
      fluentd-kkf2p                                          2/2     Running     0          136m
      fluentd-mwqh8                                          2/2     Running     0          136m
      idm-77b4f9fbfb-cfwkg                                   2/2     Running     0          136m
      idm-77b4f9fbfb-g5pcb                                   2/2     Running     0          136m
      itom-cdf-deployer-2020.05-2.2-2.3-3.1-tncp8            0/1     Completed   0          137m
      itom-cdf-deployer-xg6cw                                0/1     Completed   0          147m
      itom-cdf-ingress-frontend-56c9987b7-bvrsn              2/2     Running     0          145m
      itom-cdf-ingress-frontend-56c9987b7-n8tbc              2/2     Running     0          145m
      itom-logrotate-deployment-6cf9546f8b-rbcvs             1/1     Running     0          136m
      itom-postgresql-default-77479dfbff-t87tv               2/2     Running     0          137m
      itom-vault-6f558dc6cc-bz52l                            1/1     Running     0          146m
      kubernetes-vault-67f8698568-csd54                      1/1     Running     0          145m
      mng-portal-7cfc584db5-hcmjf                            2/2     Running     0          133m
      nginx-ingress-controller-6f6d4c95b9-7fhbs              2/2     Running     0          133m
      nginx-ingress-controller-6f6d4c95b9-nv2zw              2/2     Running     0          133m
      suite-conf-pod-arcsight-installer-86c9687b69-kctjz     2/2     Running     0          132m
      suite-db-68bfc4fbd5-v6nvm                              2/2     Running     0          145m
      suite-installer-frontend-6f49f88797-msb7j              2/2     Running     0          145m
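
      As an optional sanity check (the field selector below is standard kubectl, not part of the documented procedure), you can list only the pods that are not Running or Completed; the command should return no pods:

      kubectl get pods -n core --field-selector=status.phase!=Running,status.phase!=Succeeded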
    2. Change to the deployer directory:
      cd cdf-deployer/
    3. Run the upgrade process:
      ./upgrade.sh -u

    4. At the end of the upgrade, ensure all pods are Running or Completed:
      kubectl get pods -A
  5. Fix your Azure load balancing rules after the upgrade.

    The upgrade recreated the resources to which the load balancing rules were mapped. You need to recreate all of the health probe and load balancing rules.

    1. Find the IP assigned to your external access host for the OMT by pinging it from the jumphost:
      ping installer.arcsight.private.com
      PING installer.arcsight.private.com (10.1.1.101) 56(84) bytes of data.
      If you do not know your hostname, you can retrieve it with the following command:
      kubectl get cm -n core base-configmap -o yaml | grep EXTERNAL_ACCESS_HOST:
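
      Alternatively, you can read the hostname directly and ping it in one step (a convenience sketch assuming a bash shell on the jumphost):

      EXTERNAL_HOST=$(kubectl get cm -n core base-configmap -o jsonpath='{.data.EXTERNAL_ACCESS_HOST}')
      ping -c 4 "$EXTERNAL_HOST"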
    2. Patch the load balancer service. For more information, see Configuring the Load Balancer.
      kubectl patch services -n core itom-cdf-ingress-frontend-svc -p '{"spec":{"type":"LoadBalancer","loadBalancerIP": "PUBLIC_IP"}}'
      Replace the placeholder PUBLIC_IP with the IP assigned to your external access host.
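
      To confirm that the patch took effect, you can check that the service now reports the expected external IP (a standard kubectl check):

      kubectl get service -n core itom-cdf-ingress-frontend-svc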
    3. After successfully patching the service, continue with creating the health probe and load balancer rules for ports 443 and 5443. For more information, see Configuring the Load Balancer.
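
      If you manage the load balancer with the Azure CLI instead of the portal, the commands below sketch one way to recreate a health probe and load balancing rule for port 443 (repeat for port 5443). This is only an illustration: <RESOURCE_GROUP>, <LB_NAME>, <FRONTEND_IP_CONFIG>, and <BACKEND_POOL> are placeholders for your own resource names, the probe and rule names are examples, and Configuring the Load Balancer remains the authoritative reference.

      az network lb probe create --resource-group <RESOURCE_GROUP> --lb-name <LB_NAME> \
          --name omt-443-probe --protocol Tcp --port 443
      az network lb rule create --resource-group <RESOURCE_GROUP> --lb-name <LB_NAME> \
          --name omt-443-rule --protocol Tcp --frontend-port 443 --backend-port 443 \
          --frontend-ip-name <FRONTEND_IP_CONFIG> --backend-pool-name <BACKEND_POOL> \
          --probe-name omt-443-probe
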
Once the OMT upgrade is complete, you can check the new platform version by clicking the ? icon in the upper-right corner of the OMT UI.

Next Step: Executing the Post-upgrade Script