Changing the Elasticsearch Node Data Path

To change the Elasticsearch node data path, perform the following steps:

  1. Launch a terminal session and, as the root user, log in to a worker node labeled as intelligence:yes.

  2. Execute the following commands to scale down the Elasticsearch master node and Elasticsearch data nodes:

    export NS=$(kubectl get namespaces | grep arcsight | cut -d ' ' -f1)

    kubectl -n $NS scale statefulset elasticsearch-master --replicas=0

    kubectl -n $NS scale statefulset elasticsearch-data --replicas=0
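
    The namespace lookup in the export command above can be sketched against canned `kubectl get namespaces` output (the namespace name arcsight-installer-abcde is a made-up example):

```shell
# Simulated `kubectl get namespaces` output; grep selects the arcsight
# namespace row and cut keeps only the first column (the namespace name).
printf 'NAME                       STATUS   AGE\narcsight-installer-abcde   Active   10d\n' \
  | grep arcsight | cut -d ' ' -f1
# → arcsight-installer-abcde
```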

  3. (Conditional) To create the new Elasticsearch data directory on the NFS server, log in to the NFS server.

  4. (Conditional) To create the new Elasticsearch data directory on a worker node labeled as intelligence:yes, log in to that node.

  5. Execute the following commands to create a new directory:

    cd <path to create the new directory>

    mkdir <new directory in the path>

    If you are creating the new directory on the NFS server, ensure that the directory is accessible from, or mounted on, all the worker nodes labeled as intelligence:yes.

    Note: Hosting the Elasticsearch data directory on the NFS server might impact system performance.
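
    A minimal sketch of the directory-creation step, using a hypothetical path under /tmp (substitute your own target path):

```shell
# mkdir -p creates the directory and any missing parent directories.
NEW_DIR=/tmp/arcsight-es-demo/elasticsearch-data   # hypothetical path
mkdir -p "$NEW_DIR"
ls -ld "$NEW_DIR"
```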
  6. Execute the following command to copy data from the existing directory to the new directory:

    • To copy the data to a worker node labeled as intelligence:yes:

      cp -rf <existing_directory_path> <new_directory_path>

      For example:

      cp -rf /opt/arcsight/k8s-hostpath-volume/interset /opt/arcsight/testpath/

      In this example, the existing directory path is /opt/arcsight/k8s-hostpath-volume/interset and the new directory path is /opt/arcsight/testpath/.

    • To copy the data to the NFS server:

      scp -r <existing_directory_path> root@<ip address or hostname of the NFS server>:<new_directory_path>
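
    The local copy can be sketched on scratch directories (the paths and file below are made up); cp -rf replicates the whole tree into the new location:

```shell
SRC=/tmp/es-demo-src    # stands in for the existing data directory
DST=/tmp/es-demo-dst    # stands in for the new data directory
mkdir -p "$SRC/nodes/0"
echo lock > "$SRC/nodes/0/node.lock"
rm -rf "$DST"           # ensure a clean target for the sketch
cp -rf "$SRC" "$DST"    # recursive copy; -f overwrites existing files
find "$DST" -type f
# → /tmp/es-demo-dst/nodes/0/node.lock
```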

  7. Execute the following command to change the ownership of the new directory and its contents:

    chown -R 1999:1999 <new_directory_path>

    For example:

    chown -R 1999:1999 /opt/arcsight/testpath/

  8. If you created the new Elasticsearch directory on a worker node labeled as intelligence:yes, repeat Steps 4 to 7 on all the worker nodes labeled as intelligence:yes.

  9. Open a certified web browser.

  10. Specify the following URL to log in to the CDF Management Portal: https://<cdf_masternode_hostname or virtual_ip hostname>:5443.

  11. Select Deployment > Deployments.

  12. Click ... (Browse) on the far right and choose Reconfigure. A new screen opens in a separate tab.

  13. Click Intelligence and provide the new value of the Elasticsearch directory path in the Elasticsearch Node Data Path to persist data to field.

  14. Click Save.

  15. Launch a terminal session and, as the root user, log in to a worker node labeled as intelligence:yes.

  16. Execute the following commands to scale up the Elasticsearch master node and Elasticsearch data nodes:

    export NS=$(kubectl get namespaces | grep arcsight | cut -d ' ' -f1)

    kubectl -n $NS scale statefulset elasticsearch-master --replicas=1

    kubectl -n $NS scale statefulset elasticsearch-data --replicas=<number_of_replicas>

  17. Execute the following curl command on any Kubernetes node to verify the status of the Elasticsearch cluster:

    curl -k "https://<Elasticsearch_username>:<Elasticsearch_password>@<ip address or hostname of the CDF>:31092/_cluster/health"
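
    A healthy cluster reports a status of green (yellow can be acceptable for a single-node cluster). The canned response below (illustrative values) shows what to look for in the command's output:

```shell
# Example /_cluster/health response body; values are illustrative.
RESPONSE='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":3}'
echo "$RESPONSE" | grep -o '"status":"[a-z]*"'
# → "status":"green"
```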