These issues apply to common components or to several components in your ArcSight Platform deployment. For more information about issues related to a specific product, see that product's release notes.
Micro Focus strives to ensure that our products provide quality solutions for your enterprise software needs. If you need assistance with any issue, visit Micro Focus Support, and then select the appropriate product category.
In the documentation, if you perform a text search and then use the COPY button to copy a command block that contains highlighted search results, the pasted text will contain invalid commands.
Workaround: If the command block you want to copy includes highlighted text, you must remove the highlights before copying. At the end of the URL, remove everything after the .htm text. Then click COPY to correctly copy the code in the gray box.
For example, if you searched for the text vault_pod, remove ?Highlight=vault_pod from the URL (highlighted in example below):
https://wwwtest.microfocus.com/documentation/arcsight/arcsight-platform-23.1/arcsight-admin-guide-23.1/#deployment_manual/database_setup.htm?Highlight=vault_pod
Issue: A FIPS-enabled database node fails to reboot, preinstall, install prerequisites, and update. The problem is with /etc/default/grub.
Workaround: After running ./arcsight-install --cmd preinstall, execute these commands on all database nodes:
resume=$(grep swap /etc/fstab | awk '{ print $1 }')
boot=$(grep '/boot' /etc/fstab | awk '{ print $1 }')
sed -i "s/ resume=UUID / resume=$resume /g" /etc/default/grub
sed -i "s/ boot=UUID / boot=$boot /g" /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
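To confirm that the placeholders were replaced with the actual device values, you can inspect the file, for example:
grep -E 'resume=|boot=' /etc/default/grub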
Issue: After running the command to perform post-installation configurations, you might see the following error:
create scheduler under: default_secops_adm_scheduler
scheduler: create target topic
ERROR. Failed to create scheduler's target topic. For more details, please check log file.
Rolling back scheduler creation...
Workaround: Delete the "fusion-db-adm-schema-mgmt" pod, for example:
kubectl delete pod -n <arcsight-installer-namespace> fusion-db-adm-schema-mgmt-xxxx
Wait for the pod to be fully running, and then re-run the post-install.
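For example, you can check the pod status with a command such as the following (using the same namespace placeholder as above), and re-run the post-install once the pod shows a Running status:
kubectl get pods -n <arcsight-installer-namespace> | grep fusion-db-adm-schema-mgmt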
See Using ArcSight Platform Installer for an Automated On-Premises Installation in the Administrator's Guide to ArcSight Platform 23.1 for more installation information.
OCTCR33I411123 — Event Integrity Query Indicates Insufficient Disk Space (AWS/Azure)
OCTCR33I610053 — Event Integrity: The Check Progress Percentage Might Exceed 100%
OCTCR33I534015 — Autopass container crashing with exception: relation "mysequence" already exists
OCTCR33I751100 — fluentbit pod on some worker nodes may experience crashes
Issue: After you undeploy the Fusion capability and then redeploy Fusion into the same cluster, pods might remain in CrashLoopBackOff or PodInitializing status. The root cause of the issue is that the redeploy causes the system to forget the password for the rethinkdb database.
Workaround: Delete all of the files in the NFS folder before redeploying Fusion: arcsight-nfs/arcsight-volume/investigate/search/rethinkdb/hercules-rethinkdb-0. This will cause the rethinkdb database to be automatically recreated when Fusion is redeployed.
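For example, assuming the NFS root is mounted at /opt/arcsight-nfs (a hypothetical mount point; adjust the path for your environment), a command such as the following clears that folder:
rm -rf /opt/arcsight-nfs/arcsight-volume/investigate/search/rethinkdb/hercules-rethinkdb-0/*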
Issue: There is an intermittent "insufficient disk space" error when running an Event Integrity query in an Amazon Web Services (AWS) or Azure environment. A related known issue exists for insufficient disk space behavior.
Workaround: See View Event Integrity Check Results to help troubleshoot this issue.
Issue: If a large time range is selected (for example, 1/31-2/22), there is an intermittent "Other" error when running an Event Integrity query in an Amazon Web Services (AWS) or Azure environment. (Known Issue OCTCR33I411123 is a related issue for insufficient disk space behavior.)
Workaround: We recommend selecting a one-day time range for the Event Integrity check.
Issue: This defect tracks issues that affect the left navigation menu display until a proper fix is available. A related defect (OCTCR33I465016), in which the Event Integrity user interface features became disabled after installing the 22.1.1 patch, had only a temporary solution. For now, we intend to perform a periodic menu registration in the nodejs and java containers that register their menu items, and to revert certain files.
Issue: Logout does not work properly when the product is configured to use an external SAML provider.
Workaround: Access the product UI (prior to login) using a private/incognito browser tab and, after logout, close all private browser tabs. This will ensure the session created at login is cleared from the browser.
Issue: Two cases where the check progress percentage might be misleading are:
If there are duplicate verification events, the check progress percentage might exceed 100% (because the duplicate events are also counted toward the total).
When the search engine is restarted and new verification events are ingested during the downtime, the progress of checking verification events can be shown as 100% (even though the process is still running).
Workaround: If the status is "In Progress" with a percentage of 100% or greater, wait until the status shows as "Completed".
relation "mysequence" already existsIssue: Due to a race condition in a resource constrained cluster node, your autopass pod may crash with the following error:
kubectl logs -n arcsight-installer-xxxxx autopass-lm-xxxxxxxx-xxxx -c autopass-lm -p
starting DB with paramaters
.. <> ...
org.postgresql.util.PSQLException: ERROR: relation "mysequence" already exists
Workaround: If this occurs, use the following procedure.
Log in to the cdfapiserver database pod to recover the password, and then use that password to log in to the itom-default database as follows:
kubectl exec -it -n core cdfapiserver-postgresql-xxxxxxxxxx-xxxxx -c itom-postgresql -- bash
# get_secret ITOM_DB_DEFAULT_PASSWD_KEY | cut -d "=" -f2-
# psql --host=itom-postgresql --dbname=defaultdbapsdb --username=postgres
List the relations to find the mysequence sequence, drop it, and then exit psql with "\q" and the pod shell with "exit":
defaultdbapsdb=# \ds public.*
drop sequence public.mysequence;
Restart the autopass pod using kubectl delete pod, and then make sure the container starts correctly with 2/2 Ready status.
kubectl delete pod -n arcsight-installer-xxxxx autopass-lm-xxxxxxxx-xxxx
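For example, a check such as the following (using the same placeholder naming as above) should show the container with a 2/2 Ready status once it has restarted:
kubectl get pods -n arcsight-installer-xxxxx | grep autopass-lm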
Platform (OMT) upgrade from 22.1 to 23.1 may fail while uploading images when the internal certificate has been renewed. The process may end with this error:
** Upgrade Infrastructure components ... (Step 8/10)
Update Apphub components images successfully.
Pushing apphub images ...
Failed to push apphub images...
Workaround: Use the following procedure.
Go to the unpacked ArcSight installer script subfolder.
cd <arcsight installer package>/installers/cdf/cdf/scripts/
Renew the certificates. In the command below, replace the <days> value with your own value in days. (The minimum value is one year, 365 days; 3 years is recommended.)
./renewCert --renew -t internal -V <days> -n core
Go to the charts folder. (Your command may differ if you do not use default values.)
cd /opt/arcsight/kubernetes/charts/
Store the renewed certificate and the key in variables.
cert=$(kubectl get secret kube-registry-cert -n core -o json|jq '.data."kube-registry.crt"' -r)
key=$(kubectl get secret kube-registry-cert -n core -o json|jq '.data."kube-registry.key"' -r)
Get the chart name with version and store it in a variable.
chart=$(helm list -n core -o json|jq -c -r '.[] | select( .name == "kube-registry" )'.chart)
Set the new certificate and key in chart.
helm upgrade kube-registry ${chart}.tgz -n core --set tls.cert=${cert} --set tls.key=${key} --reuse-values
Apply your changes.
kubectl rollout restart deployment kube-registry -n core
Wait until the cluster is healthy (and all pods are running).
kubectl get pods -A
kubectl delete pod -n <namespace> <pod name>
Run the upgrade again (it will continue where it failed).
cd <arcsight installer package>/
arcsight-install --cmd upgrade
Issue: On nodes with a higher pod count, fluentbit may experience increased load and repeatedly crash with an out-of-memory (OOM) condition due to its strict default memory limit.
Workaround: This behavior does not affect log collection and can be considered normal. Kubernetes restarts the container promptly after the memory consumption threshold is reached, so log collection experiences minimal downtime. No action is necessary by the user.
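If you want to confirm this behavior, a command such as the following (assuming the pod names contain "fluentbit") shows the restart count in the RESTARTS column:
kubectl get pods -A | grep fluentbit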
OCTCR33I186007 — An Exported Report Might Have Format Issues
OCTCR33I372067 — Contract & Usage Page Throws an Ingress Router Error and Does Not Load
OCTCR33I409268 — Reporting Shows an Error When Single Sign On Secrets are Changed (Azure)
OCTCR33I566085 — Network Chart Data Presented in Portions and Cut
OCTCR33I589121 — Brush Option Does Not Highlight Parabox Charts
OCTCR33I71158 — Scheduled Tasks Do Not Allow Default Printer Selection
Issue: When you edit an asset using the Edit Wizard option, you cannot preview the report or dashboard.
Workaround: To preview your changes, select the metadata option from the Edit Wizard.
Issue: When using the Dashboard wizard, the chart intermittently fails to load when the same type of data is selected at the same time.
Workaround: When this issue occurs, select one event data type from the left panel and use the option located in the top right corner to continue creating the dashboard.
Issue: In the chart editor, when you remove an X or Y field, the Reports Portal displays an error message. This issue occurs intermittently.
Workaround: When this issue occurs, try again or avoid removing fields from the Axis.
Issue: When using the Export Asset feature, the formatting for the reports might have issues such as dark backgrounds, dark fonts, and dark table cells.
Workaround: Manually change the formatting for the exported report.
Issue: The start and end times for your reports and dashboards use UTC time instead of your local time zone.
Workaround: When you run a report or dashboard and pick start and end times, ensure they use the UTC time zone format.
Issue: Open two browser tabs, one with Fusion User Management (FUM) and another with any other capability (Reporting or Recon). If you log out from the capability tab, any subsequent operation performed on that tab does not complete.
Workaround: Refresh the browser to complete the log out process.
Issue: When the user tries to navigate from My Profile to Contract & Usage, the page throws an ingress router error message as follows and does not load:
The Route You Reach Does not Exist Please check your router configuration and the path in your address bar.
Workaround: Refresh the page to load the Contract & Usage page.
Issue: Reporting runs into an OpenID or HTTP 500 error when single sign-on secrets are changed. The Reporting app can take a few minutes to fully start, so this error does not happen immediately after applying the change.
Issue: The Network chart tends to truncate data, such as IP addresses, to the point where the displayed content is not useful.
Workaround: There is no workaround. Micro Focus recommends that you do not use the Network chart at this time.
Issue: The brush option does not highlight parabox charts.
Workaround: There is no workaround at this time.
Issue: The default printer field is a textbox that allows any value instead of being a list of valid entries.
Issue: After following the configuration data restoration process, opening Fusion ArcMC from the Fusion dashboard produces a 503 Service temporarily unavailable error.
Workaround: Correct the permissions of the ArcMC folder by executing the following commands:
cd /mnt/efs/<nfs_folder>/
sudo chown -R 1999:1999 arcsight-volume/arcmc
kubectl delete pods -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) $(kubectl get pods -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) | grep arcmc | cut -d ' ' -f1)
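After the pods restart, a check such as the following (reusing the namespace lookup from the command above) should show the ArcMC pod in a Running state:
kubectl get pods -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) | grep arcmc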
Issue: When a user attempts to import a hosts file into Fusion ArcMC, they may encounter an issue where the log folder being pointed to does not match the Fusion ArcMC NFS. This mismatch can occur for a variety of reasons and can lead to confusion and difficulties for the user in accessing and interpreting the log data.
Workaround: No known workaround for this release.
Issue: When the Fusion license expires during a session, a spurious error message will be displayed: "Unable to retrieve CSRF token. Got status code:0". Click OK to dismiss this error.
Workaround: No known workaround for this release.
OCTCR33I611096 — Analytics Fails to Load Data Sources Except for AD and Proxy
OCTCR33I614050 - Special Characters for Database Credentials
OCTCR33I616054 - Changing a BOT User to a NOTBOT User Has No Effect on Inactive Projects
OCTCR33I614049 - Uninstalling Intelligence Does Not Delete All Files
OCTCR33I613051 - Unable to Retrieve Indices When Elasticsearch Cluster is Unstable
Issue: For AWS and Azure deployments, after the Intelligence upgrade from 22.1.0 to 23.1, analytics does not detect the custom SQL loader scripts of the previous version of Intelligence. Instead, it proceeds with the default SQL loader scripts present in <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.12.4.27/
Workaround: Follow the steps below:
Step 1: Perform the following steps before the upgrade:
Launch a terminal session and as a root user, log in to the node where NFS is present.
Navigate to the following directory:
cd /<arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/
Execute the following command to create the 1.1.9.1.9 directory:
mkdir 1.1.9.1.9
Navigate to the following directory:
cd <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0
Execute the following command to move the SQL loader scripts from <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0 to <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.1.9.1.9:
mv *.md5 *.sql 1.1.9.1.9
Execute the following command to grant permissions to the 1.1.9.1.9 directory:
chown -R 1999:1999 1.1.9.1.9
Step 2: Upgrade the Intelligence capability.
For more information, see Upgrading your Environment in the Administrator’s Guide for ArcSight Platform.
Step 3: Perform the following steps after the upgrade:
Run Analytics to start the next analytics run. For more information, see Running Analytics on Demand in the Administrator’s Guide for ArcSight Platform.
During the analytics run, the 1.12.4.27 folder is created in the following directory with the default SQL loader scripts:
cd <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.12.4.27
(Conditional) If you have been using custom SQL loader scripts in 22.1.0, then the SQL loader scripts with inconsistent md5 sums between the current and previous versions are displayed in the Analytics logs. Perform the following steps to review and modify the SQL loader scripts:
Execute the following command to check the logs of the analytics pod:
export NS=$(kubectl get namespaces |grep arcsight|cut -d ' ' -f1)
pn=$(kubectl get pods -n $NS | grep -e 'interset-analytics' | awk '{print $1}')
kubectl logs -f $pn -n $NS -c interset-analytics
Review and add the necessary modifications to the new SQL loader scripts present in the following directory:
cd <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.12.4.27
Update the md5 files with the md5 sums corresponding to the modified SQL loader scripts (see the example command after this procedure).
If you are upgrading from 22.1.0 to 23.1, execute the following command:
cd <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.1.9.1.9
If you are upgrading from 22.1.2 to 23.1, execute the following command:
cd <arcsight_nfs_vol_path>/interset/analytics/vertica_loader_sql/0/1.1.9.2.9
Analytics is triggered automatically after all the SQL loader scripts with inconsistent md5 sums are updated.
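The following is a minimal sketch of how an md5 file could be updated, assuming a hypothetical script name and that each modified script has a matching .md5 file; verify the actual file naming in your vertica_loader_sql directory before running it:
md5sum my_modified_loader.sql | awk '{print $1}' > my_modified_loader.sql.md5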
Issue: When running a suite upgrade to 23.1.x, if you see the pop-up message "System error, please contact system administrator", do the following. Keep tailing the log by running "kubectl logs -n core cdf-apiserver-xxxxxxxxx-xxxxx -c cdf-apiserver --follow | grep RuntimeException" and retry the suite upgrade. The log should return the line "java.lang.RuntimeException: Failed apply suite config pod" after you see the pop-up error message in the UI.
Workaround: Delete the suite-conf service by running "kubectl delete svc -n core suite-conf-svc-arcsight-installer" and retry the upgrade using the panel in the CDF Management Portal.
Issue: If the configuration for the data sources is set to "all" and the input data contains data from AD, Proxy, and other supported data sources, analytics loads only the AD and Proxy data sources and displays the following error message:
Exception in thread "main" java.lang.IllegalArgumentException: Config validation failed: Missing option --action
As a result, analytics is unable to load the other data sources, such as Resource, Share, VPN, and Repository.
Workaround: Perform the following steps to specify each data source for the data source configuration:
Open a certified web browser.
Specify the following URL to log in to the CDF Management Portal: https://<cdf_masternode_hostname_or_virtual_ip_hostname>:5443.
Select Deployment > Deployments.
Click ... (Browse) on the far right and choose Reconfigure. A new screen will be opened in a separate tab.
Click Intelligence.
In the Analytics Configuration - Database section, modify the Database Loader Data Sources field's value to ad,pxy,res,sh,vpn,repo.
Issue: The Intelligence Sharing URL functionality does not work if the user does not have an active session. If a user is not logged in, then after a successful sign-in, the shared URL lands on the default Intelligence landing page instead of the shared page.
Workaround: When sharing a link using the Share Short URL functionality in Intelligence, the recipient needs to be logged into an active session (as described in Known Issue OCTCR33I616036) in order to be taken to the intended page.
Issue: Logging in to Intelligence dashboard https://<hostname>/interset by using a web browser fails in the first attempt.
Workaround: Perform the following steps:
Log in to Fusion dashboard https://<hostname>/dashboard.
Navigate to Insights > Entities at Risk. It will redirect you to the Intelligence dashboard.
After performing the above steps, subsequent attempts to log in to the Intelligence dashboard https://<hostname>/interset will be successful.
Issue: Either the Intelligence Search API or login to the Intelligence UI or both fail with the IOException: Listener Timeout after waiting for 30 seconds while querying a large data set (approximately 2 billion records) in the database.
Workaround: Perform the following steps:
Open a certified web browser.
Log in to the CDF Management portal as the administrator.
https://<virtual_FQDN>:5443
Click CLUSTER > Dashboard. You are redirected to the Kubernetes Dashboard.
In Namespace, search and select the arcsight-installer-xxxx namespace.
In Config and Storage, click Config Maps.
Click the filter icon, then search for investigator-default-yaml.
In the db-elasticsearch section of the YAML tab, modify the esListenerTimeout value based on the data size.
For example, if the Intelligence search API takes 150 seconds to retrieve data from the database, then ensure that you set the esListenerTimeout value to more than 150 seconds to avoid the exception.
Click Update.
Restart the interset-api pods:
Launch a terminal session and log in to the master or worker node.
Execute the following command to retrieve the namespace:
export NS=$(kubectl get namespaces | grep arcsight|cut -d ' ' -f1)
Execute the following commands to restart the interset-api pods:
kubectl -n $NS scale deployment interset-api --replicas=0
kubectl -n $NS scale deployment interset-api --replicas=2
Issue: Intelligence Search API fails with the esSocketTimeout exception while querying a large data set (approximately 4 billion records) in the database, along with ingestion and analytics running simultaneously.
Workaround: Perform the following steps:
Open a certified web browser.
Log in to the CDF Management portal as the administrator.
https://<virtual_FQDN>:5443
Click CLUSTER > Dashboard. You are redirected to the Kubernetes Dashboard.
In Namespace, search and select the arcsight-installer-xxxx namespace.
In Config and Storage, click Config Maps.
Click the filter icon, then search for investigator-default-yaml.
In the db-elasticsearch section of the YAML tab, modify the esSocketTimeout value based on the data size.
For example, if the Intelligence search API takes 150 seconds to retrieve data from the database, then ensure that you set the esSocketTimeout value to more than 150 seconds to avoid the exception.
Click Update.
Restart the interset-api pods:
Launch a terminal session and log in to the master or worker node.
Execute the following command to retrieve the namespace:
export NS=$(kubectl get namespaces | grep arcsight|cut -d ' ' -f1)
Execute the following commands to restart the interset-api pods:
kubectl -n $NS scale deployment interset-api --replicas=0
kubectl -n $NS scale deployment interset-api --replicas=2
Issue: In the CDF Management Portal > Configure/Deploy page > Intelligence > KeyStores section > KeyStore Password field, if you specify a password that starts with a space character, most pods enter into the CrashLoopBackOff state.
Workaround: For the KeyStore Password field, do not specify a password that starts with a space character.
Issue: When configuring the EFS for deploying Intelligence in AWS, even after setting the permissions in the arcsight-volume folder to 1999:1999, the Elasticsearch and Logstash pods enter into a CrashLoopBackOff state from a Running state.
Workaround: If the pods enter into the CrashLoopBackOff state, perform the following steps:
cd /mnt/efs/<parent_folder_name>
chown -R 1999:1999 arcsight-volume
Issue: When preparing the NFS server for deploying Intelligence in Azure, even after setting the permissions in the arcsight-volume folder to 1999:1999, the Elasticsearch and Logstash pods enter into a CrashLoopBackOff state from a Running state.
Workaround: If the pods enter into the CrashLoopBackOff state, perform the following steps:
cd /nfs
chown -R 1999:1999 arcsight-volume
df -h
The directory corresponding to <IP address of the NetApp Files server>/volume is the directory on which the Azure NetApp Files server is mounted.
cd /<Azure NetApp Files server directory>
chown -R 1999:1999 arcsight-volume
Issue: In an AWS deployment of Intelligence, when data is ingested, the Logstash pod enters into a CrashLoopBackOff state from a Running state. This issue occurs if you have configured CDF in the cloud (AWS) environment with self-signed certificates.
Workaround: Perform the following steps:
Connect to the bastion.
kubectl -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) scale statefulset interset-logstash --replicas=0
kubectl -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) edit configmaps logstash-config-pipeline
kubectl -n $(kubectl get namespaces | grep arcsight | cut -d ' ' -f1) scale statefulset interset-logstash --replicas=<number_of_replicas>
Issue: In an ArcSight Platform deployment that has Intelligence with an MSSP license, you will receive the usual notifications that the licenses are about to expire. However, if the MSSP license expires, the Platform erroneously displays a warning that the Recon license has expired even though Recon is not deployed. This issue does not occur when Recon is deployed, with or without the MSSP license.
Workaround: There is no workaround for this issue.
Issue: The following characters are not supported for the database credentials:
Workaround: There is no workaround at this time.
Issue: When anomalies are identified because few users access a specific project, and one or more of the users are flagged as bots, changing the BOT users to NOTBOT users — and therefore increasing the number of non-bot users accessing the project — will not impact the project's identification as 'inactive'. Anomalies will therefore continue to be identified when the project is accessed, even though more non-bot users are now regularly accessing the project.
Workaround: There is no workaround at this time.
Issue: During the weeks immediately following Daylight Savings Time (DST) clock changes, you may observe an increase in reported Normal Working Hours anomalies. These anomalies, which are due to automatic software clock changes, will usually have risk scores of zero (0), and are reflective of the perceived Normal Working Hours pattern shift.
Workaround: There is no workaround needed.
Issue: In the CDF Management Portal > Configure/Deploy page > Intelligence, when you specify a value for the Repartition Percentage Threshold field, the installer does not validate the value. However, Intelligence Analytics fails if the value is not set between 0.7 and 1.0 as stated in the tooltip.
Workaround: Ensure that you set a value between 0.7 and 1.0.
Issue: In the CDF Management Portal > Configure/Deploy page > Intelligence, when you change the value of the HDFS NameNode field to deploy the HDFS NameNode container on another worker node, the older instance of the HDFS NameNode container goes into a pending state instead of being terminated.
Workaround: Perform the following steps after changing the value in the field:
NAMESPACE=$(kubectl get namespaces | grep arcsight-installer | awk '{ print $1}')
kubectl get pods -n $NAMESPACE | grep -e 'hdfs\|interset-analytics' | awk '{print $1}' | xargs kubectl delete pod -n $NAMESPACE --force --grace-period=0
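You can then confirm that the pods are recreated and reach a Running state with a command such as:
kubectl get pods -n $NAMESPACE | grep -e 'hdfs\|interset-analytics'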
Issue: When you view the Logstash logs, you might come across the following warnings:
Workaround: There is no workaround needed. You can ignore these warnings as there is no impact in the functionality.
Issue: In the CDF Management Portal > Configure/Deploy page > Intelligence > Elasticsearch Configuration section, the installer does not validate the value you specify for the Elasticsearch Data Retention Period field. The tool-tip for the Elasticsearch Data Retention Period field suggests that you should specify a value greater than 30 for indices retention. However, there is no validation preventing you from entering a value that is less than 30. If you specify a value that is less than 30, the value for Elasticsearch Data Retention Period will be set to the minimum default value of 30 days.
Workaround: There is no workaround at this time.
Issue: When you uninstall Intelligence, some files are not deleted from the /opt/arcsight/k8s-hostpath-volume/interset directory of all the worker nodes. Therefore, when you install Intelligence again, the intelligence pods stay in Init state.
Workaround: Before installing Intelligence again, manually delete the remaining files from the /opt/arcsight/k8s-hostpath-volume/interset directory of all the worker nodes. If you have modified the value of the Elasticsearch Node Data Path field in the Intelligence tab of the CDF Management Portal, check and manually delete the remaining files from the directory you have specified for the Elasticsearch Node Data Path field for all the worker nodes.
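For example, on each worker node a command such as the following clears the default directory (adjust the path if you changed the Elasticsearch Node Data Path):
rm -rf /opt/arcsight/k8s-hostpath-volume/interset/*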
Issue: When your Elasticsearch Cluster is not stable and you run the reindex jobs, the jobs run successfully but display the following error message in the job details:
Error occurred while getting all ES indices: Request cannot be executed; I/O reactor status: STOPPED
Workaround: You must restart the Elasticsearch cluster to refresh the Elasticsearch environment.
Issue: In the CDF Management Portal > Configure/Deploy page > Intelligence > KeyStores section > KeyStore Password field, if you specify a password that starts with a special character, most pods enter into the CrashLoopBackOff state.
Workaround: For the KeyStore Password field, do not specify a password that starts with a special character.
Issue: If the cookie request size exceeds the cookie size limit, your screen displays a HTTP Status 400 - Bad Request message when you try to open the CDF Management Portal.
Workaround: Perform the following steps:
Open a certified web browser.
Login to the Management portal as the administrator.
https://<virtual_FQDN>:5443
Click CLUSTER > Dashboard. You will be redirected to the Kubernetes Dashboard.
Under Namespace, search and select the arcsight-installer-xxxx namespace.
Under Config and Storage, click Config Maps.
Click the filter icon, and search for investigator-default-yaml.
Click the three dot icon and select Edit.
In the YAML tab, under the interset-cookie section, add the following:
path: /interset;SameSite=Lax
Click Update.
To apply the changes, restart the interset-api pods by either deleting the interset-api pods or scaling down the interset-api deployments using the following commands:
kubectl delete pods -n <arcsight-installer-namespace> <interset-api-pod-1> <interset-api-pod-2>
OR
kubectl scale deployment -n <arcsight-installer-namespace> interset-api --replicas=0
kubectl scale deployment -n <arcsight-installer-namespace> interset-api --replicas=2
Log in to Intelligence or other application user interfaces available for this domain such as the CDF Management Portal or the Fusion dashboard.
Using the Developer tools option in your browser, ensure that the INTERSET_SESSION cookie is only sent with requests that include /interset in the path.
HERC-9865 — Fieldset Fails to Revert to its Original Setting
OCTCR33I113040 — CSV File Export Fails after You Change the Date and Time Format
OCTCR33I167004 — Scheduled Tasks Can be Saved Even if the User Closes the Dialog Box
OCTCR33I179782 — Scheduled Search Appends Erroneous Values to the Run Interval
OCTCR33I369029 — Load Modal Does Not Load Search Criteria When the Fieldset is Deleted
OCTCR33I379056 — Cannot Change the Start or End Date While a Notification Banner is Present
OCTCR33I385042 — Issues Loading a Saved Search Criteria Using # in the Search Input Box
OCTCR33I549094 — Intermittent Failure of .csv File Containing Scheduled Search Results
OCTCR33I576073 — Switching Tabs While Saving Searches Causes an Error
OCTCR33I576083 — Outlier Detection: Outlier History Display is Incorrect When No Score Exists
OCTCR33I603036 — The Application Displays an Error When You Try to Save Search Criteria
OCTCR33I608090 — After Installation, the Search Tab is Intermittently Visible
OCTCR33I608115 — Vulnerabilities: System Query is Duplicated With Two Different Names
Issue: If you change a fieldset after running a search, then leave the Search web page or navigate to a different feature, Search fails to revert the fieldset to the original setting. For example, you choose the Base Event Fields fieldset and run the search, then change the fieldset to All Fields. Next you navigate to the Saved Searches page. When you return to the Search page, the fieldset is still All Fields rather than reverting to Base Event Fields as it should.
Workaround: To revert the fieldset to its original setting, press F5 while viewing the Search page.
Issue: After you modify the date and time format in preferences, the CSV export fails for saved searches that were run before the preference change.
Workaround: Run the scheduled search again, then save it. Select the CSV icon to download the file.
Issue: When you click the Close button during the scheduler task creation process, the modal dialog box closes, but the task is still being saved.
Workaround: If you do not intend to save the task in the scheduler table, select the task and manually delete it.
Issue: When creating a scheduled search, if you select Every 2 hours in the Pattern section, the search runs every two hours at every even hour (such as 0, 2, 4, 6, and so on), appending the minutes setting from the Starting From value. The system ignores the hour setting in Starting From.
For example, you might select 2 hours and choose at 01:15 am. Search will run every 2 hours at 2:15 am, 4:15 am, 6:15 am, and so on.
Workaround: To run the Search at a selected hour and minutes, specify a specific hour for the Starting From setting.
Issue: Search criteria does not load under the circumstances described below.
The user creates a custom fieldset.
The user creates a search criteria and assigns the custom fieldset to it.
The user deletes the fieldset that was just created.
The search criteria fieldset reverts to the one set in the user preferences.
The user tries to load the search criteria from the feature table, but it does not load and a red "Failed to load search list" error message is displayed.
Workaround: Load the search criteria from the Load modal dialog box in the main search page.
Issue: If you save a Query or Criteria and use the same name as a previously saved search Results, the system overwrites the query in that saved search results rather than saving a new Query or Criteria with the specified name. For example, you execute a search and save the results as Checking Log4J Vulnerabilities. If you create and save a new search Query or Criteria with that same name, you have changed the query in the saved Results. The next time that you run Checking Log4J Vulnerabilities, Search will use the newly saved query instead of your original query.
Workaround: Before saving a new Query or Criteria, review the existing saved Results to ensure that you do not use the same name.
Issue: If the application currently displays a notification banner, Search fails to accept a change to the Start time or End time for a custom date range.
Workaround: Clear the notifications, then change the date range.
Issue: If you load a saved search criteria in the search input box using #, the system fails to load the saved fieldset or time range.
Workaround: Load the saved criteria from the Saved Search Criteria page:
Select Search > Criteria.
Click the box next to the search criteria that you want to load.
Click Load.
Issue: When the user sets DD/MM/YY hh:mm:ss:ms in user preferences and loads a search criteria, the time range is reported incorrectly.
Workaround: Manually change the time range that was set in the search criteria.
Issue: Exporting the results of a Scheduled search from the Completed tab might intermittently result in an empty .csv file.
Workaround: If this happens, export the data to a .csv file again from the table.
Issue: If you create a scheduled search that contains an expiration option, such as “Search expires in” = 7 days, then change the value in User Preferences to “Search expires in” = 10 weeks, the scheduled search fails to complete and shows an incorrect setting (“Search expires in” = 7 weeks). The issue also occurs if you switch the settings from weeks to days, weeks to “Never Expire,” even with a fresh install.
Issue: If you switch tabs while saving a search, the system throws an error that states "Results do not match the specified search query."
Workaround: Refresh the browser.
Issue: In Outlier Detection, when no score exists, the outlier history widgets post zeros (0) and display empty charts.
Additionally, if you click a zero score, the related charts also display empty charts.
Issue: If you add a field from the Event Inspector to an active search, and the field is not available in the fieldset of the active search, an error will occur. A red line will display under any field in the search query that's not in the active fieldset. Hover your cursor over the field to display the following error message: Columns only from fieldset are permitted.
Workaround: Either add the field to the active fieldset or choose a fieldset that includes the field you wish to add to the active search.
Issue: The following field groups are not supported in a | where any...contains query because they are not string data: custom float, float, ip, ip6, mac, port, path, timestamp, and url. If you want to include a non-string datatype field group in such a query, the field datatype must first be converted to string (using eval). Otherwise, the software might display an error alerting you about non-applicable field groups.
Issue: The user encounters an error when they try to save specific search criteria prior to running a query, even though they have entered correct syntax and parameters.
Issue: After you perform an installation of the application, the Search tab may become intermittently visible.
Workaround: Follow the process described below.
Perform a fresh install of the application.
Log in as an administrator or system administrator.
If the Search tab does not display, do the following:
In MobaXterm or a terminal session, execute the following commands to shut down these two pods:
kubectl scale deploy -n $( kubectl get namespaces | grep arcsight | cut -d ' ' -f1) fusion-search-web-app --replicas=0
kubectl scale deploy -n $( kubectl get namespaces | grep arcsight | cut -d ' ' -f1) fusion-ui-services --replicas=0
Issue: Queries that use the top/bottom search operator along with fields that begin with "Device" may fail completely or partially.
Cases that fail all the time contain fields that begin with "Device" and use the other fields listed below.
| top Device Receipt Time
| top Device Event Class ID
| top Device Event Category
Cases that fail intermittently also use another pipe operator or fail when the user keeps typing words not present in the fields, such as below:
| top Source Address
| top Agent Severity
Example: Begin entering the query below. Anything after the word "Device" clears out after you press the space bar.
#Vulnerabilities | top Device Event Class ID
Workaround: To avoid this behavior, select the field from the drop-down options for that query while you are entering it. This applies to any field the user is not able to type in.
Issue: You can run into a search error when using the "All Fields" fieldset with more than 5 pipe operations.
Issue: Migrations or upgrade issues from the 22.1.x releases may cause searches that use the Fieldset "All Fields" and Time Range = "All Time" to become disabled. The button may also become disabled. Additionally, if the user clicks the button, the search will not complete.
Workaround: Post-migration, create a new search that uses the same details.
Issue: Queries that use the search operators top, bottom, rename, eval, and wheresql do not recognize the "Id" field as a column, regardless of the Fieldset used.
For the eval search operator, the search will execute but "Id" will be treated as a string.
For top, bottom, rename, and wheresql search operators, the search execution will fail and you see the error message "Fix error in query first: Unknown column "Id."
For the wheresql search operator, the error message "An error occurred while executing the search. Execution could not complete" displays.
Workaround: Although there is no workaround, we recommend removing the use of the "Id" field from the query to avoid a search execution failure.
Issue: Queries that filter specific "id" field values will not return correct results. For example: id = "123456789" or id != "123456789"
Workaround: Although there is no workaround, we suggest you do not use the "Id" field in queries to avoid getting incorrect results because of the issue.
Issue: Due to a known issue related to authentication, the integration with Trend Micro Apex Central fails.
Workaround: There is no workaround at this time.
Issue: If multiple SOAR timeline widgets are present in a dashboard, then data is displayed for only one widget.
Workaround: There is no workaround at this time.
Issue: The SOAR message broker pod backup file cannot be created automatically.
Workaround: Complete the following procedure to create the message broker pod backup file manually:
Enter the following Kubernetes command in your terminal:
kubectl edit cm soar-artemis-pod-tools-cm -n <arcsight-installer-namespace>
Replace the line /usr/sbin/cron start with /usr/sbin/crond start.
Replace the line folder_to_be_deleted=$(ls -dt $source_directory/backup/*/ | sed -e '1,24d' | xargs) with folder_to_be_deleted=$(echo $(ls -dt $source_directory/backup/*/ | sed -e '1,24d'))
Replace the line files_to_be_deleted=$(ls -dt $source_directory/backup-logs/* | sed -e '1,120d' | xargs) with files_to_be_deleted=$(echo $(ls -dt $source_directory/backup-logs/* | sed -e '1,120d'))
Replace the line rsync -avrHAXS $source_directory/restore/* $source_directory/ with rsync -avrHAXS $source_restore_root_dir/* $source_directory/
To save and exit, press Esc, then type :wq and press Enter.
Enter the following command to restart the pod:
kubectl delete pod <soar-message-broker-pod-name> -n <arcsight-installer-namespace>
OCTCR33I377141 — If Event Integrity is Enabled, Enrichment Stream Processor Pods Stop Working
OCTCR33I408161 — After Upgrade to 3.6, Some ArcMC Fields No Longer Display Event Data
OCTCR33I409142 — After Upgrade to 3.6, Kafka Manager May Fail to Show Metrics
OCTCR33I409228 — In Multi-Node Scenario, Schema Registry Instances May Be Allocated to a Single Node
If the Event Integrity feature is enabled and the number of partitions of the Enrichment Stream Processor (SP) source topic is then changed, the Enrichment SP pods will stop working.
Workaround: In Kafka Manager, change the number of partitions of the Event Integrity changelog internal topic (named with the following format and pattern: com.arcsight.th.AVRO_ENRICHMENT_1-integrityMessageStore-changelog) to match the source topic's number of partitions. Then, restart the Enrichment pods.
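If you prefer the command line over Kafka Manager, the partition count can also be increased from a Kafka broker pod. The following is a sketch that assumes the broker pod and service names used elsewhere in these notes (th-kafka-0 and th-kafka-svc:9092), with <N> as a placeholder for the source topic's partition count:
namespace=$( kubectl get namespaces | awk '/^arcsight-installer-/{print $1}' )
kubectl -n $namespace exec th-kafka-0 -- kafka-topics --bootstrap-server th-kafka-svc:9092 --alter --topic com.arcsight.th.AVRO_ENRICHMENT_1-integrityMessageStore-changelog --partitions <N>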
After upgrading Transformation Hub to 3.6, in some cases the following ArcMC fields no longer display any data and instead show the message 'No data returned at this time': Event Parsing Error, Stream Processing EPS, and Stream Processing Lag.
Workaround: Restart the Transformation Hub Web Service pod after the upgrade by running the following commands:
namespace=$( kubectl get namespaces | awk '/^arcsight-installer-/{print $1}' )
pod=$( kubectl -n $namespace get pods | awk '/^th-web-service-/{print $1}' )
kubectl -n $namespace delete pod $pod
After upgrading to Transformation Hub 3.6, Kafka Manager may fail to show Kafka consumers or metrics, and the Kafka Manager log may contain warnings that a broker may not be available. This happens when Kafka Manager starts before the Kafka brokers after the upgrade; timing allows Kafka Manager to connect to pre-upgrade brokers before they exit.
Workaround: Restart the Kafka Manager pod. Perform these steps on the master node to restart Kafka Manager:
namespace=$( kubectl get namespaces | awk '/^arcsight-installer-/{print $1}' )
pod=$( kubectl -n $namespace get pods | awk '/^th-kafka-manager-/{print $1}' )
kubectl -n $namespace delete pod $pod
This issue applies to deployments where Transformation Hub is deployed in a multi-node scenario. After deploying Transformation Hub in a multi-node scenario, Schema Registry instances may all be allocated to a single worker node. Instances should be distributed across worker nodes to ensure that failover provides high availability. Check the distribution of Schema Registry instances across worker nodes to make sure instances run on more than one node, using the following procedure:
Identify the worker nodes that are running Schema Registry instances by running the following commands on the master node:
namespace=$( kubectl get namespaces | awk '/^arcsight-installer-/{print $1}' )
fmt="custom-columns=NODE:.spec.nodeName,NAME:.metadata.name,STATUS:.status.phase"
kubectl -n $namespace get pods -o "$fmt" --sort-by=".spec.nodeName" | grep -E "NODE|th-schemaregistry"
If the output shows all instances are running on the same worker node, do the following:
2a. Restart Schema Registry using this command in order to spread the instances across worker nodes:
kubectl -n $namespace rollout restart deployment th-schemaregistry
2b. Verify that the restart has completed by running the following command and waiting until all Schema Registry pods have a status of "Running" and a small "AGE" value (the minutes or seconds since you performed the restart):
kubectl -n $namespace get pods | grep -E "STATUS|schemaregistry"
2c. After the restart completes, verify that the instances are now running on different worker nodes by running this command:
kubectl -n $namespace get pods -o "$fmt" --sort-by=".spec.nodeName" | grep -E "NODE|th-schemaregistry"
In a multi-node scenario, a topic used internally by Schema Registry may get configured with too few replicas, which reduces reliability and can make the registry fail during failover. Check the topic's configuration to verify it has the proper replica count (replication factor) using the following procedures on the master node.
Set the topic to be used in later commands:
topic="_schemas"
Print the replication factor for the topic, as follows:
topicinfo=$( kubectl -n $namespace exec th-kafka-0 -- kafka-topics --bootstrap-server th-kafka-svc:9092 --describe --topic $topic )
echo "$topicinfo" | sed -n -re '/ReplicationFactor:/s/^.*(ReplicationFactor:\s*\S+)\s.*/\1/p'
If the replication factor is not equal to 3, perform the following steps (5a-5f) to change the configuration. If it equals 3, skip to Step 6.
5a. Get the list of brokers to set as replicas, including the topic's partition leader. If the cluster has more than three brokers, limit the replicas to three using the following commands.
leader=$( echo "$topicinfo" | sed -n -re '/Leader:/s/^.*Leader:\s*(\S+)\s.*/\1/p' )
allbrokerids=$( kubectl exec -n $namespace th-zookeeper-0 -- zookeeper-shell th-zook-svc:2181 ls /brokers/ids | grep -E '^[[][0-9]+' | tr -d '[ ]' )
n=1; blist=$leader; for b in ${allbrokerids//,/ } ; do if [[ $n -lt 3 && ! $blist =~ $b ]]; then n=$((++n)); blist="$blist,$b"; fi; done
5b. Generate a replica configuration file, as follows:
topicfile=/tmp/topic.json
assignfile=/tmp/assign.json
printf '{"topics": [{"topic": "%s"}], "version":1}' $topic > $topicfile
kubectl cp $topicfile $namespace/th-kafka-0:$topicfile
kubectl -n $namespace exec th-kafka-0 -- kafka-reassign-partitions --broker-list "$allbrokerids" --bootstrap-server th-kafka-svc:9092 --generate --topics-to-move-json-file $topicfile > $assignfile
sed -i '1,/Proposed partition reassignment/d' $assignfile
sed -i -r "s/(,.replicas.:\[)([0-9,]+)/\1$blist/" $assignfile
sed -i 's/,\s*"log_dirs"\s*:\s*[[][^]]*[]]//' $assignfile
kubectl cp $assignfile $namespace/th-kafka-0:$assignfile
rm -f "$assignfile" "$topicfile"
5c. Use the file to add the replica configuration with this command:
kubectl -n $namespace exec th-kafka-0 -- kafka-reassign-partitions --bootstrap-server th-kafka-svc:9092 --reassignment-json-file $assignfile --execute |& grep -v "Save this to use"
The output should end with this message:
Successfully started reassignment of partitions.
5d. Verify that the reassignment completes by running a verification command with the same input file, as follows:
kubectl -n $namespace exec th-kafka-0 -- kafka-reassign-partitions --bootstrap-server th-kafka-svc:9092 --reassignment-json-file $assignfile --verify
When reassignment has completed, the output will show the following:
Reassignment of partition _schemas-0 completed successfully.
5e. Since the replicas have changed, run a preferred leader election for the topic's partition with the following commands:
electfile=/tmp/election.json
printf '{"partitions": [{"topic": "%s","partition":0}]}\n' $topic > $electfile
kubectl cp $electfile $namespace/th-kafka-0:$electfile
rm -f "$electfile"
kubectl exec -n $namespace th-kafka-0 -- kafka-leader-election --bootstrap-server th-kafka-svc:9092 --election-type preferred --path-to-json-file $electfile
5f. Verify that the topic now has three replicas by running this command:
kubectl -n $namespace exec th-kafka-0 -- kafka-topics --bootstrap-server th-kafka-svc:9092 --describe --topic $topic | sed -n -re '/ReplicationFactor:/s/^.*(ReplicationFactor:\s*\S+)\s.*/\1/p'
Also in a multi-node scenario, an internal ArcSight topic may get configured with too few replicas, which reduces reliability of Stream Processor metrics and can prevent ArcMC from displaying the metrics. Check the topic's configuration to verify it has the proper replica count. Perform the following procedures on the master node.
Set the topic to be used in later commands:
topic=th-arcsight-avro-sp_metrics
Repeat Steps 4 and 5 above to check the topic and then modify it if needed. The topic needs to have the same replica count as the previous topic: 3.
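For convenience, the Step 4 check for this topic looks like the following (same pod and service names as above):
kubectl -n $namespace exec th-kafka-0 -- kafka-topics --bootstrap-server th-kafka-svc:9092 --describe --topic $topic | sed -n -re '/ReplicationFactor:/s/^.*(ReplicationFactor:\s*\S+)\s.*/\1/p'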
When routing CEF events, if a routing rule tests a numeric field with a "less than" condition ("<" or "<="), a CEF event that does not contain that field will match the condition and will be routed to the destination topic. The result is that the destination topic may contain unintended CEF events.
When routing CEF events, if a routing rule tests a numeric field, a CEF event that has a value in that field may be routed in an unintended way. Numbers are compared as strings instead of numerically. The result is that destination topics for affected CEF rules may not receive intended events, or may receive unintended events.