OpenShift Installation

OpenShift Docker Installation for a Distributed SpectroSERVER
: Ensure that you have at least two VMs, one as the master node VM and the other as the worker node VM. You can scale up the VM count later.
Prerequisites
  1. Ensure that all machines are registered with Red Hat Subscription Manager and that the following repositories are enabled:
    • rhel-7-server-rpms/7Server/x86_64
    • rhel-7-server-extras-rpms/x86_64
    • rhel-7-server-rt-rpms/7Server/x86_64
    Run the following commands to enable the repositories:
    subscription-manager config --rhsm.manage_repos=1
    subscription-manager repos --enable=rhel-7-server-rpms
    subscription-manager repos --enable=rhel-7-server-extras-rpms
    subscription-manager repos --enable=rhel-7-server-optional-rpms
Installation Procedure
Mandatory
: The root password on all the VMs included in the OpenShift cluster must be the same. OpenShift can create containers on any node/VM, so the same root password is required across all the VMs.
Following are the installation steps: 
  1. Add the Domain Name Server (DNS) '<LOCALIP>' to the /etc/resolv.conf file, where <LOCALIP> refers to the DNS server IP. Skip this step if it is already configured. The following services should be enabled and running on all master and worker nodes:
    • systemctl status NetworkManager
    • systemctl status dnsmasq
  2. If the services are not enabled and running, execute the following commands: 
    yum -y install NetworkManager
    yum -y install dnsmasq
    service NetworkManager start
    service dnsmasq start
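    If the hosts are systemd-managed (the default on RHEL 7), the services can typically also be enabled so that they persist across reboots:
    systemctl enable NetworkManager
    systemctl enable dnsmasq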
  3. Run the following commands on all the master and worker node hosts:
    yum -y update
    subscription-manager repos --enable rhel-7-server-ansible-2.5-rpms
    yum -y install vim  wget git net-tools bind-utils iptables-services bridge-utils bash-completion pyOpenSSL docker
    yum -y install ansible
  4. Enable and start the Docker service on the master and worker nodes, for example as shown below.
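    A minimal sketch, assuming systemd manages the Docker service on these hosts:
    systemctl enable docker
    systemctl start docker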
  5. Set up the SSH keys for access on all nodes. Perform this step on the MASTER NODE. Perform this step manually or use the script that is mentioned: 
    sed "s/#PermitRootLogin yes/PermitRootLogin yes/g" -i /etc/ssh/sshd_config  ; systemctl restart sshd
    ssh-keygen
    for host in master.example.com \
    node1.example.com \
    node2.example.com; \
        do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done
    When running the Ansible playbook from the master node, ssh-copy-id must also be run from the master to the master itself; otherwise the playbook fails for localhost.
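    Passwordless access can then be verified from the master node, for example with the sample host name used above:
    ssh node1.example.com hostname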
  6. Clone the Git repository for the OpenShift release, on the master node only.
    cd ~ ; git clone https://github.com/openshift/openshift-ansible
    cd openshift-ansible
    git checkout release-1.5
  7. Create the hosts file in '/etc/ansible/hosts' on the master node only.
    : Replace <master.com> with the master node host name and <worker.com> with the worker node host name. Replace <address> with the respective master node or worker node IP.
    [OSEv3:children]
    masters
    nodes
    etcd
    [OSEv3:vars]
    ansible_ssh_user=root
    deployment_type=origin
    openshift_disable_check=docker_storage
    containerized=true
    openshift_release=v1.5
    openshift_image_tag=v1.5.0
    osm_cluster_network_cidr=10.163.0.0/16
    enable_excluders=false
    openshift_master_identity_providers=[{'name': 'htpasswd_auth','login': 'true', 'challenge': 'true','kind': 'HTPasswdPasswordIdentityProvider','filename': '/etc/origin/master/htpasswd'}]
    [masters]
    <master.com> openshift_ip=<address> openshift_public_ip=<address> openshift_public_hostname=<master.com> openshift_schedulable=true
    [nodes]
    <master.com> openshift_ip=<address> openshift_public_ip=<address> openshift_public_hostname=<master.com> openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
    <worker.com> openshift_ip=<address> openshift_public_ip=<address> openshift_public_hostname=<worker.com> openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_schedulable=true
    [etcd]
    <master.com>
  8. Run the following Ansible playbook installation command, for the master node only: 
    ansible-playbook -i /etc/ansible/hosts ~/openshift-ansible/playbooks/byo/config.yml
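    Once the playbook completes, the cluster state can typically be verified from the master node, for example:
    oc get nodes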
  9. Log in to the OpenShift UI using the URL 'https://<masterhostname>:8443' (where 8443 is the default port number) and enter the admin/admin or system/admin credentials. If you want to create your own root credentials, execute the following command on the master node and set a new password for root:
    htpasswd /etc/origin/master/htpasswd root
Post Installation
  1. Create a project in OpenShift using the OpenShift UI or by issuing the following command on the OpenShift master:
    oc new-project <projectname>
  2. Create a local Docker image registry on the OpenShift cluster so that Spectrum images can be pushed to it and are globally accessible across the cluster. To create a local Docker registry on OpenShift, execute the following command on the master node:
    vi /etc/docker/daemon.json
    {
    "insecure-registries" : ["master.com:5000"]
    }
    Replace 'master.com' with the master node host name.
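    Note that after changing /etc/docker/daemon.json, the Docker daemon typically needs to be restarted for the setting to take effect:
    systemctl restart docker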
    To roll out/create the local Docker registry:
    oc rollout latest docker-registry
  3. The service IP of the Docker registry created in the preceding step is needed to push Spectrum images into it. To get the service IP of the local Docker registry, run the following commands. Logging in as system:admin is mandatory for the service fetch command to work:
    oc login -u system:admin
    oc project <project-name>
    ip=$(oc get svc -n default | grep docker-registry | awk '{print $2;}')
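    The captured value can then be substituted for <ip> in the steps that follow; for example, to confirm it:
    echo $ip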
  4. After fetching the Docker registry IP, log in to OpenShift using your user-defined credentials:
    oc login -u <username> -p <pwd>
    Then log in to the registry service:
    docker login -u openshift -p $(oc whoami -t) <ip>:5000
  5. After logging in, tag the image and push it to the local Docker registry:
    docker tag spectrumspectroserverimage <ip>:5000/<project-name>/ssocsimage
    docker push <ip>:5000/<project-name>/ssocsimage
  6. Configuration change to allow images to run as the root user:
    This step is mandatory for the image to run. Here, system:admin carries the cluster admin privileges.
    oc login -u system:admin   
    oadm policy add-scc-to-group anyuid system:authenticated
General Commands
  1. To get container details for OpenShift, run the following command:
    oc get pods
    NAME                     READY     STATUS    RESTARTS   AGE
    blog-django-py-1-5bv76   1/1       Running   0          3d
    command-demo             1/1       Running   0          2h
    t3image-1-4991j          1/1       Running   0          4h
  2. Command to log in to an OpenShift container:
    oc exec -it command-demo -- sh
Troubleshooting
Q3: OneClick WebApp is not supported in Docker. 
A:   Follow these steps to troubleshoot: 
  1. Copy the package from the Docker host into the container, using the 'docker cp <filename> <containerName>:<path>' command; for example:
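    A minimal sketch, assuming a hypothetical package file name and container name:
    docker cp oneclick-webapp.rpm oneclick-container:/tmp/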
  2. After copying the file to the container, install the package using 'yum localinstall <pkgName>'. When creating the container, create a port mapping as is done for the OneClick port, as shown in the following example:
    docker run -e ROOT_PASSWORD=???.qaperf184 -e MAIN_LOCATION_SERVER=719de9a39c46 -e MAIN_LOCATION_SERVER_IP=172.17.0.2 \
    -e TOMCAT_PORT=8080 \
    -p 9090:8080 \
    -e MASTER_NODE=docker-rh74vm2 -it 1032ocimage
    For OneClick WebApp: 
    docker run -e ROOT_PASSWORD=???.qaperf184 -e MAIN_LOCATION_SERVER=719de9a39c46 -e MAIN_LOCATION_SERVER_IP=172.17.0.2 \
    -e TOMCAT_PORT=8080 \
    -p 9090:8080 -p 9099:9443 \
    -e MASTER_NODE=docker-rh74vm2 -it 1032ocimage
    Here, 9443 is the port number that the WebApp uses once the OneClick (OC) container is created.
  3. Launch the Spectrum WebApp using the following URL: