Changing the Elasticsearch Node Data Path
To change the Elasticsearch node data path, perform the following steps:
1. Launch a terminal session and, as the root user, log in to a worker node labeled intelligence:yes.
2. Execute the following commands to scale down the Elasticsearch master node and Elasticsearch data nodes:
   export NS=$(kubectl get namespaces |grep arcsight|cut -d ' ' -f1)
   kubectl -n $NS scale statefulset elasticsearch-master --replicas=0
   kubectl -n $NS scale statefulset elasticsearch-data --replicas=0
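The namespace lookup used in the scale-down commands can be sanity-checked locally. A minimal simulation, using an invented namespace name in place of real `kubectl` output:

```shell
# Simulated `kubectl get namespaces` output; the namespace name is hypothetical.
sample='NAME                       STATUS   AGE
arcsight-installer-abcde   Active   10d
default                    Active   10d'

# Same grep/cut pipeline as in the procedure.
NS=$(printf '%s\n' "$sample" | grep arcsight | cut -d ' ' -f1)
echo "$NS"
```

If `echo "$NS"` prints an empty string on the real cluster, the arcsight namespace does not exist and the subsequent `kubectl -n $NS` commands will fail.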
3. (Conditional) To create the Elasticsearch data directory on the NFS server, log in to the NFS server.
4. (Conditional) To create the new Elasticsearch data directory on a worker node labeled intelligence:yes, log in to that node.
5. Execute the following commands to create the new directory:
   cd <path to create the new directory>
   mkdir <new directory in the path>
   If you create the new directory on the NFS server, ensure that the directory is accessible from, or mounted on, all worker nodes labeled intelligence:yes. Note that placing the Elasticsearch data directory on the NFS server might degrade system performance.
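The directory creation can be rehearsed on a scratch location first. In the sketch below, the base path and directory name are placeholders, not the real values from your environment:

```shell
# Stand-in for the real base path and directory name (both hypothetical).
base=$(mktemp -d)
cd "$base"
mkdir -p new-es-data
[ -d "$base/new-es-data" ] && echo "directory created"
```

`mkdir -p` also succeeds if the directory already exists, which makes the step safe to re-run.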
6. Execute the following command to copy data from the existing directory to the new directory:
   - To copy the data to a worker node labeled intelligence:yes:
     cp -rf <existing_directory_path> <new_directory_path>
     For example:
     cp -rf /opt/arcsight/k8s-hostpath-volume/interset /opt/arcsight/testpath/
     In this example, the existing directory path is /opt/arcsight/k8s-hostpath-volume/interset and the new directory path is /opt/arcsight/testpath/.
   - To copy the data to the NFS server:
     scp -r <existing_directory_path> root@<ip address or hostname of the NFS server>:<new_directory_path>
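The copy can be verified before the old directory is retired: `diff -r` compares the two trees recursively. The sketch below rehearses this on temporary directories with an invented file layout:

```shell
# Invented source tree standing in for the existing Elasticsearch data directory.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/nodes/0"
echo "segment-data" > "$src/nodes/0/data.bin"

# Copy the contents, then confirm source and destination match.
cp -rf "$src/." "$dst/"
diff -r "$src" "$dst" && echo "copy verified"
```

If the copy is incomplete, `diff -r` exits non-zero and lists the files that differ or are missing.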
7. Execute the following command to change the ownership of the new directory:
   chown -R 1999:1999 <new_directory_path>
   For example:
   chown -R 1999:1999 /opt/arcsight/testpath/
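After the chown, it is worth confirming that the ownership actually changed; `stat -c '%u:%g'` prints the numeric owner and group. Because assigning UID 1999 requires root, the check below is demonstrated on a temp directory owned by the current user:

```shell
# On the real node you would run: stat -c '%u:%g' <new_directory_path>
# and expect 1999:1999. Demonstrated here on a temp dir (no root needed).
d=$(mktemp -d)
owner=$(stat -c '%u:%g' "$d")
echo "$owner"
```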
8. If you created the new Elasticsearch directory on a worker node labeled intelligence:yes, repeat Steps 4 to 7 on all the worker nodes labeled intelligence:yes.
9. Open a certified web browser.
10. Specify the following URL to log in to the CDF Management Portal: https://<cdf_masternode_hostname or virtual_ip hostname>:5443.
11. Select Deployment > Deployments.
12. Click ... (Browse) on the far right and choose Reconfigure. A new screen opens in a separate tab.
13. Click Intelligence and provide the new Elasticsearch directory path in the Elasticsearch Node Data Path to persist data to field.
14. Click Save.
15. Launch a terminal session and, as the root user, log in to a worker node labeled intelligence:yes.
16. Execute the following commands to scale up the Elasticsearch master node and Elasticsearch data nodes:
    export NS=$(kubectl get namespaces |grep arcsight|cut -d ' ' -f1)
    kubectl -n $NS scale statefulset elasticsearch-master --replicas=1
    kubectl -n $NS scale statefulset elasticsearch-data --replicas=<number_of_replicas>
17. Execute the following curl command on any Kubernetes node to verify the status of the Elasticsearch cluster:
    curl -k "https://<Elasticsearch_username>:<Elasticsearch_password>@<ip address or hostname of the CDF>:31092/_cluster/health"
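The health endpoint returns JSON whose `status` field should read green (or at least yellow) once the nodes rejoin the cluster. One way to pull that field out with `sed`, shown here on a canned sample response rather than a live cluster:

```shell
# Canned sample of a _cluster/health response (values are illustrative).
resp='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":3}'

# Extract the "status" field; on a real cluster, pipe the curl output instead.
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"
```

A status of red means some primary shards are unassigned and the data copy or directory permissions should be rechecked.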