Autoinstall DX NetOps Spectrum DSS - Kubernetes

Kubernetes Installation
Dockerized DX NetOps Spectrum components such as the OneClick server, SpectroSERVER, SRM, and SDC can be deployed separately, which helps set up a distributed DX NetOps Spectrum deployment. Earlier, dockerization was supported only on OpenShift. From 10.4.1, dockerization is also supported on Kubernetes to deploy a distributed SpectroSERVER.
Ensure that you have at least two VMs, one as the master node VM and the other as the worker node VM. Subsequently, you can scale the VM count.
About Autoinstall DX NetOps Spectrum DSS
The current release introduces the autoinstallation of distributed SpectroSERVER, thereby reducing deployment time and storage.
The following images display the Autoinstall DX NetOps Spectrum DSS setup:
(Figures: Autoinstall Spectrum DSS; Autoinstall Spectrum DSS 1)
By default, the fault-tolerant SpectroSERVER setup is not supported.
Prerequisites to Install Kubernetes
Ensure that you have a Kubernetes cluster. The cluster can be set up with variants such as kubespray or kubeadm.
Create a Namespace
Create a namespace using the kubectl create ns spectrum command. The containers, services, and storage are grouped within the namespace.
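For example, a minimal sketch that creates the namespace and verifies that it exists (standard kubectl commands; the namespace name is the one used throughout this page):
    # create the namespace that groups the Spectrum containers, services, and storage
    kubectl create ns spectrum
    # confirm that the namespace was created
    kubectl get ns spectrum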
Configuring Persistent Storage
Persistent storage is required to save the SpectroSERVER database (SSdb) so that data is not lost whenever a container fails. For example, in a Kubernetes cluster with two RHEL VMs (one master node and one worker node), you can designate one of the master/worker nodes as an 'NFS Server'. The 'NFS Server' retains the persistent data. Configure the VMs of the cluster as follows:
  1. Execute the following commands on all VMs of the cluster:
    yum install nfs-utils
    systemctl start nfs
    systemctl status nfs
  2. Execute the following on the NFS Server:
    1. Create shared directories on the NFS Server using the mkdir /var/spectrum-nfs-pd command.
    2. Edit or create the /etc/exports file on the VM designated as the NFS Server to grant access to the NFS shared directory. In the following example, <ip1> is the IP of a VM that needs to access the NFS Server. You can add many such VM IPs.
      /var/spectrum-nfs-pd <ip1>(rw,sync,no_root_squash,no_all_squash) <ip2>(rw,sync,no_root_squash,no_all_squash)
      If the file is changed, issue the exportfs -ra command on the NFS Server.
  3. Add iptables rules on the master and the worker nodes, including the NFS Server node, by running the following commands on all the VMs of the cluster:
    iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
    iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
    iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 2049 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 20048 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 111 -j ACCEPT
  4. Debugging: To check the NFS connectivity of a host from the NFS Server, run the showmount -e <hostname> command.
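To manually verify that a client VM can reach the export, a quick check can be run with standard NFS client commands (the /mnt mount point here is just a temporary test location):
    # list the exports visible from this client
    showmount -e <nfs-server-ip>
    # temporarily mount and unmount the export to confirm access
    mount -t nfs <nfs-server-ip>:/var/spectrum-nfs-pd /mnt
    umount /mnt
Kubernetes mounts the export itself through the PersistentVolume; this check only confirms that the NFS export and the iptables rules are configured correctly.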
Creating Persistent Storage
To create persistent storage, copy the following files, present in the sdic/linux/Docker_Kubernetes folder of the DX NetOps Spectrum package, to a master node:
  • PersistentVolume.yaml
  • PersistentVolumeClaim.yaml
Understanding Persistent Volume: Under the metadata name, mention a user-intuitive name for the PersistentVolume. The PV's label is used as the identifier that associates the PersistentVolume with the PersistentVolumeClaim, so it is critical to include it. For example, the label can be spectrumPVName: spectrum-nfs-pv-1.
  1. In the nfs > path field, mention the exact directory name that has been created on the NFS Server. For example, /var/spectrum-nfs-pd.
  2. Replace <nfs-server-ip> with the actual IP of the NFS Server.
  3. Run the following command on the master node to create a PV:
    kubectl create -f PersistentVolume.yaml -n <namespace>
    To check whether the PV has been created, use the following command:
    kubectl get pv
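For reference, a minimal PersistentVolume.yaml sketch that matches the names used above (the capacity and access mode values are illustrative assumptions; use the file shipped with the package as the starting point):
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: spectrum-nfs-pv-1
      labels:
        spectrumPVName: spectrum-nfs-pv-1   # label that the PVC selector matches
    spec:
      capacity:
        storage: 10Gi                       # illustrative size; adjust to your SSdb needs
      accessModes:
        - ReadWriteMany                     # assumption; typical for NFS-backed volumes
      nfs:
        path: /var/spectrum-nfs-pd          # exact directory created on the NFS Server
        server: <nfs-server-ip>             # replace with the actual NFS Server IP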
Persistent Volume Claim: Once a PV is created, associate the PV with a PersistentVolumeClaim to allow the deployment to use it for storage. Most of the fields are metadata-specific and self-intuitive. For example, the selector:
    selector:
      matchLabels:
        spectrumPVName: spectrum-nfs-pv-1
This is the same as the label mentioned in the PersistentVolume yaml.
kubectl create -f PersistentVolumeClaim.yaml -n spectrum
To check whether the PVC has been created, use the following command:
kubectl get pvc -n spectrum (here, spectrum is the namespace)
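A minimal PersistentVolumeClaim.yaml sketch along the same lines (the storage request is an illustrative assumption and must fit within the PV capacity):
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: spectrum-nfs-claim-1
      namespace: spectrum
    spec:
      accessModes:
        - ReadWriteMany                       # must match the PV access mode
      resources:
        requests:
          storage: 10Gi                       # illustrative; not larger than the PV capacity
      selector:
        matchLabels:
          spectrumPVName: spectrum-nfs-pv-1   # same label as in PersistentVolume.yaml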
Distributed SpectroSERVER Autoinstaller
Following are the steps to autoinstall the DSS:
  1. From the sdic/linux folder of the DX NetOps Spectrum package, copy the following files to any directory on the master node:
    autoinstall.sh
    deploy.ini
    autoinstall-deployment.sh
    deploymenttemplate.yaml
    deploymenttemplateocs.yaml
    etc_hosts.sh
    routetemplate.yaml
    servicetemplate.yaml
  2. Update the deploy.ini file as shown below, and run the Auto Installer script to set up the DSS environment. All the key attributes (on the left side of =) remain the same; change only the corresponding value attributes (see the following table for details, and the filled-in sketch after the table).
    [MainLocationServer]
    mls1=<mainss>
    imagename=<ss-released-image>
    rootpwd=<rt_passwd>
    mls1replicas=2
    persistentstorageloc=<project-name>\/<value of mls1>
    persistentclaimname=<spectrum-nfs-claim-1>
    namespace=spectrum
    enablebackup=false
    [LocationServer]
    lscount=1
    imagename=<ss-released-image>
    ls1=<name>
    ls1replicas=1
    ls1persistentstorageloc=<project-name>\/<value of ls1>
    ls1persistentclaimname=<spectrum-nfs-claim-1>
    ls2=<name>
    ls2replicas=1
    ls2persistentstorageloc=<project-name>\/<value of ls2>
    ls2persistentclaimname=<spectrum-nfs-claim-1>
    [OneClickServer]
    ocs1=<ocsone>
    imagename=<ocs-released-image>
    servicename=ocs1
    routename=ocs1
    hostname=<masternode hostname>
    The following list shows each key attribute and describes its value attribute.
    [MainLocationServer]
      mls1: Specify any user-needed MLS name. The MLS deployment is created using this name.
      imagename: Specify the imagename as the ss-released-image here.
      rootpwd: Specify the rt_passwd here.
      mls1replicas: Specify the number of main location server containers. '1' is the value that is defined here.
      persistentstorageloc: Specify the project-name and the value of mls1. Do not replace \/, as it is essential for the script to run. Keep the projectname\/deployment-name form as is, for example: 'spectrum\/mls1'.
      persistentclaimname: Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
      namespace: The containers, services, and storage are grouped within the namespace.
      enablebackup: Default: false. If set to true, the fault-tolerant pair for each container is created.
    [LocationServer]
      lscount: Specifies the number of location servers to be spawned.
      imagename: Specify the ss-released-image here.
      ls1: Specify the name of the first location server. It can be something intuitive like lsone.
      ls1replicas: Specify the number of replicas of ls1 to be spawned. The default value is 1.
      ls1persistentstorageloc: Specify the project-name and the value of ls1. Do not replace \/, as it is essential for the script to run. Keep the projectname\/deployment-name form as is, for example: 'spectrum\/ls1'.
      ls1persistentclaimname: Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
      ls2: Specify the name of the second location server. It can be something intuitive like lstwo.
      ls2replicas: Specify the number of replicas of ls2 to be spawned. The default value is 1.
      ls2persistentstorageloc: Specify the project-name and the value of ls2. Do not replace \/, as it is essential for the script to run. Keep the projectname\/deployment-name form as is, for example: 'spectrum\/ls2'.
      ls2persistentclaimname: Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
    [OneClickServer]
      ocs1: Specify the OCS name, like ocsone, here.
      imagename: Specify the imagename as the ocs-released-image here.
      servicename: Specify the servicename as ocs1 here.
      routename: Specify the routename as ocs1 here.
      hostname: Specify the hostname as the master node hostname. The value of this variable is the hostname of the master node.
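    For example, a filled-in deploy.ini sketch using the sample names from this page (the image names and password remain placeholders; lscount=2 is an assumption here because two location servers are defined):
    [MainLocationServer]
    mls1=mlsone
    imagename=<ss-released-image>
    rootpwd=<rt_passwd>
    mls1replicas=1
    persistentstorageloc=spectrum\/mlsone
    persistentclaimname=spectrum-nfs-claim-1
    namespace=spectrum
    enablebackup=false
    [LocationServer]
    lscount=2
    imagename=<ss-released-image>
    ls1=lsone
    ls1replicas=1
    ls1persistentstorageloc=spectrum\/lsone
    ls1persistentclaimname=spectrum-nfs-claim-1
    ls2=lstwo
    ls2replicas=1
    ls2persistentstorageloc=spectrum\/lstwo
    ls2persistentclaimname=spectrum-nfs-claim-1
    [OneClickServer]
    ocs1=ocsone
    imagename=<ocs-released-image>
    servicename=ocs1
    routename=ocs1
    hostname=<masternode hostname>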
  3. To run the autoinstaller script with deploy.ini, run the following command:
    ./autoinstall.sh --ini deploy.ini
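    To verify that the deployments came up, standard kubectl checks can be used (the namespace is the one created earlier):
    kubectl get deployments -n spectrum
    kubectl get pods -n spectrum -o wide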
Post-installation Tasks
Perform the following post-installation tasks.
  1. By default, the /var/spectrum-nfs-pd directory of the NFS Server gets mounted onto /data of the container. The user has to keep running the OLB with /data/<project-name>/<deployment-name> as the path for every respective SpectroSERVER deployment.
    Run chmod 777 on /data/spectrum/<deploymentname> before running the OLB, for example:
    chmod -R 777 /data/spectrum/mlsone
  2. To launch Jasper reports, use the following JDBC connection URL:
    jdbc:mysql://<kubemasternode>:<nodeport-ephemeral port>/reporting
    For example, jdbc:mysql://<mastername>:45673/reporting
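    To find the node port to substitute for <nodeport-ephemeral port>, the exposed services can be listed with a standard kubectl command (this assumes the reporting port is exposed as a NodePort service in the spectrum namespace):
    kubectl get svc -n spectrum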
Adding a New Landscape (Location Server)
Run autoinstall-deployment.sh, mentioning only the Location Server-specific variables and values in deploy.ini.
To run the autoinstall-deployment.sh script with deploy.ini, add the new Location Server details to deploy.ini and execute the following command:
autoinstall-deployment.sh -ini deploy.ini
Because the MLS and other Location Servers already exist, you get warnings such as "mlsone already exists". You can ignore these warnings.
Upgrade Kubernetes
This section describes the steps to upgrade the current implementation of Kubernetes to a newer version.
  • Ensure that the containers run in a dedicated namespace other than default/kube-system:
    kubectl create ns spectrum
    kubectl create -f deployment.yml
  • Check the existing deployments:
    kubectl get deployment -n spectrum
    kubectl get pods -n spectrum
    kubectl describe pods -n spectrum
    You get the details of the existing deployment.
  • To start the upgrade, run the following command:
    kubectl set image deployment mlsone lstwo *=SPECTRUM_HOME/spectrum/ssimage_new_version
  • Once the new image is deployed on the local registry and is available, run the following command:
    kubectl set image deployment <deployment1mls> <deployment2ls> *=SPECTRUM_HOME/spectrum/ssimage_new_version
    When your deployment has many instances, each deployment is upgraded one after the other and not at the same time.
  • To roll back to the previous deployment if there is an error in the new version, use the following commands:
    kubectl set image deployment <deployment1mls> <deployment2ls> *=localhost/spectrum/ssimage_old_version
    kubectl set image deployment mainserver *=localhost/spectrum/ssimage_old_version
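    Standard Kubernetes rollout commands can also be used to watch or revert an upgrade (a sketch; the deployment and namespace names are the ones used in this guide):
    # watch the rolling update progress
    kubectl rollout status deployment mlsone -n spectrum
    # list the recorded revisions
    kubectl rollout history deployment mlsone -n spectrum
    # revert to the previous revision
    kubectl rollout undo deployment mlsone -n spectrum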
Fault Tolerance Scenario
This section discusses Kubernetes deployment in the fault-tolerant scenario.
  • Run the Ftprimary.sh script on the primary deployment. The script stops SpectroSERVER, saves the SSdb, and copies the SSdb into the <deploymentname>-backup folder.
  • Start SpectroSERVER.
  • Run the Ftsecondary.sh script on the secondary deployment node. The script stops SpectroSERVER, copies the SSdb file into the SpectroSERVER folder, runs chmod 777 on the file, loads the database (SSdbload with prec 20), and starts SpectroSERVER.
  • Run the Ftinstall.sh script on the master node.
  • Populate the ft.ini file in the <mlsprimarypodname> <mlssecpodname> format (see the sketch after this list).
  • Execute the ./ftinstall.sh ft.ini spectrum command.
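For example, a minimal ft.ini sketch (the pod names here are hypothetical; use the actual primary and secondary MLS pod names reported by kubectl get pods -n spectrum):
    mlsone-5f7b9c6d8-abcde mlsone-backup-6d9c8b7f5-fghij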