Autoinstall DX NetOps Spectrum DSS - Kubernetes
DX NetOps Spectrum components such as the OneClick server, SpectroSERVER, SRM, and SDC can be deployed separately, which helps set up a distributed DX NetOps Spectrum deployment. Earlier, dockerization was supported only by using OpenShift. From 10.4.1, dockerization is also supported by using Kubernetes to deploy the distributed DX NetOps Spectrum DSS.
Ensure that you have at least two VMs, one as the master node VM and the other as the worker node VM. You can scale the VM count later.
The current release introduces the autoinstallation of the distributed SpectroSERVER, thereby reducing deployment time and storage.
The following images display the Autoinstall DX NetOps Spectrum DSS setup:
By default, a fault-tolerant SpectroSERVER setup is not supported.
Prerequisites to Install Kubernetes
You need a Kubernetes cluster. The cluster can be set up using variants such as kubespray or kubeadm.
Create a Namespace
Create a namespace using the `kubectl create ns spectrum` command. The containers, services, and storage are grouped within the namespace.
Configuring a Persistent Storage
Persistent storage is required to save the SpectroSERVER DB so that data is not lost whenever a container fails. For example, consider a Kubernetes cluster with two RHEL VMs (one master node and one worker node), where one of the worker/master nodes is designated as the 'NFS Server'. The NFS Server retains the persistent data. Configure the VMs of the cluster as follows:
- Execute the following commands on all VMs of the cluster: `yum install nfs-utils`, `systemctl start nfs`, `systemctl status nfs`
- Execute the following on the NFS Server:
- Create the shared directory on the NFS Server using the `mkdir /var/spectrum-nfs-pd` command.
- Edit or create the `/etc/exports` file on the VM designated as the NFS Server to grant access to the NFS shared directory. In the example below, `<ip1>` is the IP of a VM that needs to access the NFS Server; more such VM IPs can be added: `/var/spectrum-nfs-pd <ip1>(rw,sync,no_root_squash,no_all_squash) <ip2>(rw,sync,no_root_squash,no_all_squash)`. If the file is changed, issue the `exportfs -ra` command on the NFS Server.
- Add `iptables` rules on the master and worker nodes, including the NFS Server node, by running the following commands on all VMs of the cluster: `iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT`, `iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT`, `iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT`, `iptables -I INPUT 1 -p udp --dport 2049 -j ACCEPT`, `iptables -I INPUT 1 -p udp --dport 20048 -j ACCEPT`, `iptables -I INPUT 1 -p udp --dport 111 -j ACCEPT`
- Debugging: To check NFS connectivity to the host, run the `showmount -e <hostname>` command.
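The `/etc/exports` entry above can be built with a small helper, shown here as a minimal sketch. The export path matches the directory created earlier; the client IPs are illustrative assumptions.

```shell
#!/bin/sh
# Build an /etc/exports entry for the Spectrum NFS share.
# The client IPs used below are illustrative assumptions.
EXPORT_DIR=/var/spectrum-nfs-pd
OPTS="(rw,sync,no_root_squash,no_all_squash)"

exports_line() {
    # $@ = client IPs allowed to mount the share
    line="$EXPORT_DIR"
    for ip in "$@"; do
        line="$line ${ip}${OPTS}"
    done
    printf '%s\n' "$line"
}

exports_line 10.0.0.11 10.0.0.12
# Append the printed line to /etc/exports on the NFS Server,
# then run: exportfs -ra
```

After editing `/etc/exports`, remember to re-export with `exportfs -ra` as noted above.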
Creating a Persistent Storage
To create a persistent storage, copy the following files, present in the `sdic/linux/Docker_Kubernetes` folder of the DX NetOps Spectrum package, to a master node:
Understanding Persistent Volume: Under the metadata name, specify a user-intuitive name for the PersistentVolume. The PV's label is used as the identifier that associates the PersistentVolume with a PersistentVolumeClaim, so it is critical to include it. For example, the label can be spectrumPVName:
- In the nfs > path field, mention the exact directory name that has been created on the NFS Server, for example, `/var/spectrum-nfs-pd`.
- Replace `<nfs-server-ip>` with the actual IP of the NFS Server.
- Run the following command on the master node to create a PV: `kubectl create -f persistentvolume.yaml -n <namespace>`. To check whether the PV has been created, use the following command: `kubectl get pv`
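A minimal `persistentvolume.yaml` might look like the following sketch. The PV name, label value, and storage size are illustrative assumptions; the NFS path is the directory created earlier.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: spectrum-nfs-pv-1              # user-intuitive PV name (assumed)
  labels:
    spectrumPVName: spectrum-nfs-pv-1  # label matched by the PVC selector
spec:
  capacity:
    storage: 50Gi                      # assumed size; adjust to your SSdb needs
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/spectrum-nfs-pd         # directory created on the NFS Server
    server: <nfs-server-ip>            # replace with the NFS Server IP
```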
Persistent Volume Claim: Once a PV is created, associate it with a PersistentVolumeClaim to allow the deployment to use it for storage. Most of the fields are metadata-specific and self-intuitive. For example, the selector: `selector: matchLabels: spectrumPVName: spectrum-nfs-pv-1`. This is the same as the label mentioned in the PersistentVolume yaml.
Run the following command to create the PVC: `kubectl create -f persistentvolumeclaim.yaml -n spectrum`. To check whether the PVC has been created, use the following command: `kubectl get pvc -n spectrum` (here, spectrum is the namespace)
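A matching `persistentvolumeclaim.yaml` might look like the following sketch; the claim name and requested size are illustrative assumptions, and the selector label must match the one in the PV yaml.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spectrum-nfs-claim-1           # claim name later referenced in deploy.ini
  namespace: spectrum
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi                    # assumed; must fit within the PV capacity
  selector:
    matchLabels:
      spectrumPVName: spectrum-nfs-pv-1  # same label as in persistentvolume.yaml
```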
Following are the steps to autoinstall the DSS:
- From the `sdic/linux` folder of the DX NetOps Spectrum package, copy the following files to any directory on the master node: `autoinstall.sh`, `deploy.ini`, `autoinstall-deployment.sh`, `deploymenttemplate.yaml`, `deploymenttemplateocs.yaml`, `etc_hosts.sh`, `routetemplate.yaml`, `servicetemplate.yaml`
- Update the `deploy.ini` file as shown below, then run the Auto Installer script to set up the DSS environment. All the key attributes (on the left side of =) stay the same; change only the corresponding value attributes (see the following table for details).

[MainLocationServer]
mls1=<mainss>
imagename=<ss-released-image>
rootpwd=<rt_passwd>
mls1replicas=2
persistentstorageloc=<project-name>\/<value of mls1>
persistentclaimname=<spectrum-nfs-claim-1>
namespace=spectrum
enablebackup=false
[LocationServer]
lscount=1
imagename=<ss-released-image>
ls1=<name>
ls1replicas=1
ls1persistentstorageloc=<project-name>\/<value of ls1>
ls1persistentclaimname=<spectrum-nfs-claim-1>
ls2=<name>
ls2replicas=1
ls2persistentstorageloc=<project-name>\/<value of ls2>
ls2persistentclaimname=<spectrum-nfs-claim-1>
[OneClickServer]
ocs1=<ocsone>
imagename=<ocs-released-image>
servicename=ocs1
routename=ocs1
hostname=<masternode hostname>

This table displays the key attributes and the descriptions of their values.

| Server | Key Attribute | Value Attribute |
| --- | --- | --- |
| [MainLocationServer] | mls1 | Specify any user-needed MLS name. The MLS deployment is created using this name. |
| | imagename | Specify the ss-released-image here. |
| | rootpwd | Specify the rt_passwd here. |
| | mls1replicas | Specify the number of main location server containers to spawn. |
| | persistentstorageloc | Specify the project-name and the value of mls1. Do not replace \/, as it is essential for the script to run. For example: 'spectrum\/mls1'. |
| | persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name that is created in the Autoinstaller Prerequisite section. |
| | namespace | The containers, services, and storage are grouped within the namespace. |
| | enablebackup | Default: false. If set to true, a fault-tolerant pair for each container is created. |
| [LocationServer] | lscount | Specifies the number of location servers to be spawned. |
| | imagename | Specify the ss-released-image here. |
| | ls1 | Specify the name of the first location server; something intuitive like lsone. |
| | ls1replicas | Specify the number of replicas of ls1 to be spawned. The default value is 1. |
| | ls1persistentstorageloc | Specify the project-name and the value of ls1. Do not replace \/, as it is essential for the script to run. For example: 'spectrum\/ls1'. |
| | ls1persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name that is created in the Autoinstaller Prerequisite section. |
| | ls2 | Specify the name of the second location server; something intuitive like lstwo. |
| | ls2replicas | Specify the number of replicas of ls2 to be spawned. The default value is 1. |
| | ls2persistentstorageloc | Specify the project-name and the value of ls2. Do not replace \/, as it is essential for the script to run. For example: 'spectrum\/ls2'. |
| | ls2persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name that is created in the Autoinstaller Prerequisite section. |
| [OneClickServer] | ocs1 | Specify the OCS name, for example, ocsone. |
| | imagename | Specify the ocs-released-image here. |
| | servicename | Specify the servicename as ocs1 here. |
| | routename | Specify the routename as ocs1 here. |
| | hostname | Specify the hostname of the master node. |
- To run the autoinstaller script with `deploy.ini`, run the following command: `./autoinstall.sh --ini deploy.ini`
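A filled-in `deploy.ini` based on the template above might look like the following sketch. All names, image paths, and the password are illustrative assumptions; this example spawns a single location server, so the ls2 keys are omitted.

```ini
[MainLocationServer]
mls1=mlsone
imagename=localhost/spectrum/ssimage
rootpwd=MySecret123
mls1replicas=2
persistentstorageloc=spectrum\/mlsone
persistentclaimname=spectrum-nfs-claim-1
namespace=spectrum
enablebackup=false

[LocationServer]
lscount=1
imagename=localhost/spectrum/ssimage
ls1=lsone
ls1replicas=1
ls1persistentstorageloc=spectrum\/lsone
ls1persistentclaimname=spectrum-nfs-claim-1

[OneClickServer]
ocs1=ocsone
imagename=localhost/spectrum/ocsimage
servicename=ocs1
routename=ocs1
hostname=master-node.example.com
```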
Perform the following post-installation tasks.
- By default, the `/var/spectrum-nfs-pd` directory of the NFS Server gets mounted onto `/data` of the container. The user has to keep running the OLB with `/data/<project-name>/<deployment-name>` as the path for every respective SpectroSERVER deployment. Run chmod 777 on `/data/spectrum/<deploymentname>` before running the OLB, for example: `chmod -R 777 /data/spectrum/mlsone`
- To launch Jasper reports, use the JDBC URL `jdbc:mysql://<kubemasternode>:<nodeport-ephemeral port>/reporting`. For example: `jdbc:mysql://<mastername>:45673/reporting`
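The JDBC URL above is composed from the master-node hostname and the service's NodePort (which you can read from `kubectl get svc -n spectrum`). A minimal sketch, where the hostname and port values are illustrative assumptions:

```shell
#!/bin/sh
# Compose the Jasper reporting JDBC URL from a host and a NodePort.
reporting_url() {
    # $1 = Kubernetes master-node hostname, $2 = NodePort of the reporting service
    printf 'jdbc:mysql://%s:%s/reporting\n' "$1" "$2"
}

# The hostname and port below are illustrative assumptions.
reporting_url master-node.example.com 45673
```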
Adding a New Landscape (Location Server)
Run `autoinstall-deployment.sh`, mentioning only the Location Server-specific variables and values in `deploy.ini`. Add the new Location Server details to `deploy.ini` and execute the following command:
`./autoinstall-deployment.sh -ini deploy.ini`
Because the MLS and other Location Servers already exist, you get warnings such as "mlsone already exists". You can ignore these warnings.
This section describes the steps to upgrade the current Kubernetes deployment to a newer image version.
- Ensure that the required containers are running in a namespace other than default/kube-system. For example: `kubectl create ns spectrum`, `kubectl create -f deployment.yml`
- Check the existing deployments to get their details: `kubectl get deployment -n spectrum`, `kubectl get pods -n spectrum`, `kubectl describe pods -n spectrum`
- To start the upgrade, run the following command: `kubectl set image deployment mlsone lstwo *=SPECTRUM_HOME/spectrum/ssimage_new_version`
- Once the new image is deployed on the local registry and is available, run the following command: `kubectl set image deployment <deployment1mls> <deployment2ls> *=SPECTRUM_HOME/spectrum/ssimage_new_version`. When your deployment has many instances, each deployment is upgraded one after the other, not at the same time.
- To roll back to the previous deployment if there is an error in the new version, use the following commands: `kubectl set image deployment <deployment1mls> <deployment2ls> *=localhost/spectrum/ssimage_old_version`, `kubectl set image deployment mainserver *=localhost/spectrum/ssimage_old_version`
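The upgrade and rollback steps above can be sketched as a dry-run script that only prints the `kubectl set image` commands instead of executing them. The deployment names, namespace, and image paths are illustrative assumptions.

```shell
#!/bin/sh
# Dry-run sketch: print the upgrade (or rollback) commands without executing them.
# Deployment names, namespace, and image paths are illustrative assumptions.
NAMESPACE=spectrum

set_images() {
    # $1 = image to roll out; remaining args = deployments to update
    image="$1"; shift
    for dep in "$@"; do
        printf 'kubectl set image deployment %s "*=%s" -n %s\n' \
            "$dep" "$image" "$NAMESPACE"
    done
}

# Upgrade the MLS and LS deployments one after the other:
set_images localhost/spectrum/ssimage_new_version mlsone lstwo
# Roll back if the new version misbehaves:
set_images localhost/spectrum/ssimage_old_version mlsone lstwo
```

Piping the output to `sh` would execute the printed commands; keeping it as a dry run lets you review each rollout first.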
Fault Tolerance Scenario
This section discusses Kubernetes deployment in the fault-tolerant scenario.
- Run `Ftprimary.sh` on the primary deployment. This stops the SpectroSERVER, saves the SSdb, and copies the SSdb into the `<deploymentname>-backup` folder.
- Run `Ftsecondary.sh` on the secondary deployment node. This stops the SpectroSERVER, copies the SSdb file into the SpectroSERVER folder, and runs `chmod 777` on the file. SSdload with precedence 20 then starts the SpectroSERVER.
- Run `Ftinstall.sh` on the master node.
- Populate the `ft.ini` file in the `<mlsprimarypodname> <mlssecpodname>` format.
- Execute the `./ftinstall.sh ft.ini spectrum` command.