Create Directories

Elasticsearch, ZooKeeper, and Kafka pods use a hostPath volume that is mapped to the same path on each node where they run. Deployments also require HDFS data and HDFS name directories. To enable use of a hostPath volume, create the required directories, then set their permissions.
 
Note: You can specify any directory name. However, the directory name and path must match on all nodes where Elasticsearch, ZooKeeper, and Kafka are running.
  1. Create a data directory for each of the following components:
    • Elasticsearch
    • ZooKeeper
    • Kafka
    • hdfsdata
    • hdfsname
    Create the Elasticsearch, ZooKeeper, and Kafka directories on each node where those components run. If you have a multi-node cluster, the directories must have the same path on each node.
    Create the HDFS data and HDFS name directories only on the master node (master-data-1).
    Examples:
    /var/data/elasticsearch
    /var/data/zookeeper
    /var/data/kafka
    /var/data/hdfsname
    /var/data/hdfsdata
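    The directories above can be created in one pass. This sketch assumes the example /var/data paths; substitute your own path if it differs, and run the commands as root:

    ```shell
    # Create the Elasticsearch, ZooKeeper, and Kafka data directories
    # (run this on every node where those components run).
    mkdir -p /var/data/elasticsearch /var/data/zookeeper /var/data/kafka

    # Create the HDFS name and data directories on the master node
    # (master-data-1) only.
    mkdir -p /var/data/hdfsname /var/data/hdfsdata
    ```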
  2. Change ownership of each directory to user 1010.
    Digital Operational Intelligence containers require user 1010 to run internal processes and to access certain paths. You do not need to create this user. If user 1010 already exists in your environment, changing ownership to that user will not cause issues.
    For example, enter the following command for the Elasticsearch directory: 
    chown 1010:1010 /var/data/elasticsearch
    Repeat the command for the ZooKeeper, Kafka, hdfsdata, and hdfsname directories. For example:
    chown 1010:1010 /var/data/zookeeper
    chown 1010:1010 /var/data/kafka
    chown 1010:1010 /var/data/hdfsdata
    chown 1010:1010 /var/data/hdfsname
  3. Change the permissions of the parent data directory (/var/data) by running the following command:
    chmod 775 /var/data
  4. Repeat step 2 on each node that runs Elasticsearch, ZooKeeper, and Kafka.
    Do not create the hdfsdata and hdfsname directories on each node. Those directories are required on the master node only.

If SELinux is in enforcing or permissive mode, run the following commands on the data directories:
chcon -Rt svirt_sandbox_file_t /var/data/zookeeper
chcon -Rt svirt_sandbox_file_t /var/data/elasticsearch
chcon -Rt svirt_sandbox_file_t /var/data/kafka
chcon -Rt svirt_sandbox_file_t /var/data/hdfsdata
chcon -Rt svirt_sandbox_file_t /var/data/hdfsname
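If you script this step, you can guard the chcon commands with a mode check so they are skipped on hosts where SELinux is disabled or not installed. A minimal sketch, assuming the example /var/data paths (getenforce is part of the standard SELinux utilities):

```shell
# Determine the SELinux mode; treat a missing getenforce as "Disabled".
if command -v getenforce >/dev/null 2>&1; then
    selinux_mode="$(getenforce)"
else
    selinux_mode="Disabled"
fi

# Relabel the data directories only when SELinux is enforcing or permissive.
if [ "$selinux_mode" = "Enforcing" ] || [ "$selinux_mode" = "Permissive" ]; then
    for dir in zookeeper elasticsearch kafka hdfsdata hdfsname; do
        chcon -Rt svirt_sandbox_file_t "/var/data/$dir"
    done
fi
echo "SELinux mode: $selinux_mode"
```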