Tuning Your Deployment for Recon or Intelligence

This section describes tuning your deployment for Recon or Intelligence. Skip this section if you have not deployed Recon or Intelligence.

Verifying Recon cron Jobs

After deployment, verify in Recon that the corresponding cron jobs are running, as follows:

  1. In Recon, browse to INSIGHT > Data Timeseries and Source Agents and Hourly Event Volume. If no information is displayed after an hour, the cron job events_quality.sh is not running.

  2. Go to DASHBOARD > Data Processing Monitoring and Health and Performance Monitoring. If no information is displayed after an hour, the cron job events_hourly_rate.sh is not running.

If either of these cron jobs is not running, then restart fusion-db-adm-schema-mgmt, as follows:

  1. Connect to the first master node.

  2. Run the following commands:

PODS=$(kubectl get pods -A | grep fusion-db-adm-schema-mgmt | awk '{print $1, $2}')
kubectl delete pods -n $PODS
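Before deleting anything, it can help to preview what the pipeline above selects. The following is a minimal sketch that runs the same grep/awk filter against simulated `kubectl get pods -A` output; the namespace and pod names here are illustrative, not values from your cluster:

```shell
# Simulated `kubectl get pods -A` output; names are illustrative only.
sample_output='NAMESPACE            NAME                               READY   STATUS    RESTARTS   AGE
arcsight-installer   fusion-db-adm-schema-mgmt-abc123   1/1     Running   0          2d
arcsight-installer   fusion-search-web-app-def456       1/1     Running   0          2d'

# Same filter as the restart procedure: $1 is the namespace, $2 the pod name.
PODS=$(printf '%s\n' "$sample_output" | grep fusion-db-adm-schema-mgmt | awk '{print $1, $2}')
echo "$PODS"
# In the real procedure you would then run: kubectl delete pods -n $PODS
```

Because `$PODS` expands to "namespace pod-name", the delete command receives the namespace after -n followed by the pod name, which is the form kubectl expects.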

Updating Event Topic Partition Number

Refer to the Technical Requirements for ArcSight Platform, section entitled System Hardware Sizing and Tuning Guidelines to determine an appropriate event topic partition number for your workload.

To update the topic partition numbers, run the following commands from the first master node:

  1. Find the namespace ($NS) of the th-kafka-0 pod:
    NS=$(kubectl get ns | awk '/arcsight/ {print $1}')
  2. Update the Enrichment Stream Processor source topic (th-arcsight-avro or mf-event-avro-esmfiltered) and the mf-event-avro-enriched topic to the same partition number for both topics:

    kubectl exec -n $NS th-kafka-0 -- /usr/bin/kafka-topics --bootstrap-server th-kafka-svc:9092 --alter --topic <ENRICHMENT_SP_SOURCE_TOPIC> --partitions $number

where <ENRICHMENT_SP_SOURCE_TOPIC> is the name of your enrichment source topic; run the command again with mf-event-avro-enriched as the topic to update both.

  3. Use the Kafka Manager to verify that the partition numbers of the th-cef topic, the enrichment stream processor source topic (th-arcsight-avro or mf-event-avro-esmfiltered), and the mf-event-avro-enriched topic have been updated to $number, where $number is the target number of partitions.

  4. Update the th-cef, th-arcsight-avro, and mf-event-avro-enriched topics to the same partition number for all three topics:
    kubectl exec -n $NS th-kafka-0 -- /usr/bin/kafka-topics --bootstrap-server th-kafka-svc:9092 --alter --topic th-arcsight-avro --partitions $number
    kubectl exec -n $NS th-kafka-0 -- /usr/bin/kafka-topics --bootstrap-server th-kafka-svc:9092 --alter --topic mf-event-avro-enriched --partitions $number
    kubectl exec -n $NS th-kafka-0 -- /usr/bin/kafka-topics --bootstrap-server th-kafka-svc:9092 --alter --topic th-cef --partitions $number

    where $number is the target number of partitions.

    Standard Kafka topic settings only permit increasing the number of partitions, not decreasing them, so keep this in mind when performing steps 2 and 4.
  5. Use the Kafka Manager to verify that the partition numbers of the th-cef, th-arcsight-avro, and mf-event-avro-enriched topics have been updated to $number.

For Recon, the partition number equals the number of DB nodes multiplied by 12; for a 3-node DB cluster, the partition number is therefore 36.
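The Recon sizing rule above can be expressed as a quick calculation; a minimal sketch, where the node counts passed in are examples only:

```shell
# Recon sizing rule: partitions = number of DB nodes * 12
partitions_for() {
  echo $(( $1 * 12 ))
}

partitions_for 3   # 3-node DB cluster -> prints 36, matching the example above
```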

Updating the OMT Hard Eviction Policy

To maximize disk usage, update the Kubernetes hard eviction policy from the default of 15% to 100 GB.

To update the OMT hard eviction policy, perform the following steps on each worker node after deployment has completed successfully. Verify that the operation succeeds on one worker node before proceeding to the next.

The eviction-hard threshold can be defined as either a percentage or a specific amount; which form to use is determined by the volume storage.

To update the policy:

  1. Run the following commands:
cp /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kubelet.service.orig
    vim /usr/lib/systemd/system/kubelet.service
  2. In the file, after ExecStart=/usr/bin/kubelet \, add the following line:
    --eviction-hard=memory.available<100Mi,nodefs.available<100Gi,imagefs.available<2Gi \
  3. Save your change to the file.

  4. To activate the change, run the following command:

    systemctl daemon-reload ; systemctl restart kubelet
  5. To verify the change, run:

    systemctl status kubelet

    No error should be reported.
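For reference, after steps 2 and 3 the ExecStart stanza of kubelet.service should look roughly like the following; the other flags on your system vary by installation and are elided here:

```ini
[Service]
ExecStart=/usr/bin/kubelet \
  --eviction-hard=memory.available<100Mi,nodefs.available<100Gi,imagefs.available<2Gi \
  ...
```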