Autoinstall DX NetOps Spectrum DSS - OpenShift
DX NetOps Spectrum DSS 10.3.1 introduces the autoinstallation of the distributed SpectroSERVER, thereby reducing installation time and storage.
The following images display the Autoinstall DX NetOps Spectrum DSS setup:
Fault-Tolerant SpectroSERVER setup is not supported in this release.
Configuring Persistent Storage
Persistent storage is required to save the SpectroSERVER DB so that data is not lost whenever a container fails. For example, consider an OpenShift cluster with three RHEL VMs (one master node and two worker nodes), where one of the worker/master nodes is designated as the 'NFS Server'. The 'NFS Server' retains the persistent data. The following configuration must be made on the VMs of the cluster; each command indicates where it has to be run.
- Execute these commands on all VMs of the cluster:
- yum install nfs-utils
- systemctl start nfs
- systemctl status nfs
- Execute the following on the NFS Server:
- Create the shared directory on the NFS Server: mkdir /var/spectrum-nfs-pd
- Edit or create the /etc/exports file on the VM designated as the NFS Server to grant access to the NFS shared directory. In the example below, <ip1> is the IP of a VM that needs to access the NFS Server; more such VM IPs can be added:
  /var/spectrum-nfs-pd <ip1>(rw,sync,no_root_squash,no_all_squash) <ip2>(rw,sync,no_root_squash,no_all_squash)
  If the file is changed, then issue this command on the NFS Server:
  exportfs -ra
- Add iptables rules on the master and worker nodes, including the NFS Server node, by running the following commands on all the VMs of the cluster:
  iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
  iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
  iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
  iptables -I INPUT 1 -p udp --dport 2049 -j ACCEPT
  iptables -I INPUT 1 -p udp --dport 20048 -j ACCEPT
  iptables -I INPUT 1 -p udp --dport 111 -j ACCEPT
- Debugging: to check the NFS connectivity of a host to the NFS Server, run the command:
  showmount -e <hostname>
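As a quick sanity check (a minimal sketch; the mount point /mnt/nfs-test is illustrative and not part of the product setup), the export can be test-mounted from any worker node:

  # List the exports published by the NFS Server
  showmount -e <nfs-server-ip>
  # Mount the shared directory, verify it is writable, then clean up
  mkdir -p /mnt/nfs-test
  mount -t nfs <nfs-server-ip>:/var/spectrum-nfs-pd /mnt/nfs-test
  touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test
  umount /mnt/nfs-test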
Creating Persistent Storage
To create persistent storage, copy the following files (persistentvolume.yaml and persistentvolumeclaim.yaml) from the sdic/linux/Docker_Openshift folder of the DX NetOps Spectrum vcd onto a Master Node.
Understanding the PersistentVolume: under metadata > name, mention the name of the PersistentVolume; this can be any user-intuitive name. The PV's label is used as the identifier that associates the PersistentVolume with a PersistentVolumeClaim, so it is critical to include it. For example, the label can be spectrumPVName: spectrum-nfs-pv-1.
- Under nfs > path, mention the exact directory that was created on the NFS Server, for example /var/spectrum-nfs-pd.
- Replace <nfs-server-ip> with the actual IP of the NFS Server.
- Run this command on the master node to create the PV:
  oc create -f persistentvolume.yaml
  To check whether the PV has been created, use the following commands:
  oc login -u system:admin
  oc get pv
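Putting the fields above together, a minimal persistentvolume.yaml might look like the following sketch (the name, label value, capacity, and access mode are illustrative assumptions; the nfs path and server fields come from this procedure):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: spectrum-nfs-pv-1
    labels:
      spectrumPVName: spectrum-nfs-pv-1   # label used to bind the PVC to this PV
  spec:
    capacity:
      storage: 10Gi                       # illustrative size
    accessModes:
      - ReadWriteMany
    nfs:
      path: /var/spectrum-nfs-pd          # directory created on the NFS Server
      server: <nfs-server-ip>             # replace with the actual NFS Server IP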
PersistentVolumeClaim: once a PV is created, it must be associated with a PersistentVolumeClaim so that a pod/deployment can use it for storage. Most of the fields are metadata-specific and self-explanatory. For example, the selector:
  selector:
    matchLabels:
      spectrumPVName: spectrum-nfs-pv-1
This is the same label that is mentioned in the PersistentVolume yaml.
Run this command on the master node to create the PVC:
  oc create -f persistentvolumeclaim.yaml
To check whether the PVC has been created, use the following commands:
  oc login -u system:admin
  oc get pvc
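A matching persistentvolumeclaim.yaml could then look like this sketch (the claim name spectrum-nfs-claim-1 matches the name used later in deploy.ini; the requested size and access mode are illustrative and must be compatible with the PV):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: spectrum-nfs-claim-1
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 10Gi                     # illustrative size
    selector:
      matchLabels:
        spectrumPVName: spectrum-nfs-pv-1 # must match the PV label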
Distributed SpectroSERVER Autoinstaller
Following are the steps to autoinstall the DSS:
- From the sdic/linux folder of the DX NetOps Spectrum vcd, copy the following files onto any directory on the OpenShift Master Node:
  autoinstall.sh
  deploy.ini
  deploymenttemplate.yaml
  deploymenttemplateocs.yaml
  etc_hosts.sh
  routetemplate.yaml
  servicetemplate.yaml
- Update the deploy.ini file as described below and run the Auto Installer script, which sets up the DX NetOps Spectrum DSS environment at once. All the key attributes (on the left side of =) stay the same; the user has to change only the corresponding value attribute (as shown in the table).

  [MainLocationServer]
  mls1=<mainss>
  imagename=<ss-released-image>
  rootpwd=<rt_passwd>
  mls1replicas=2
  persistentstorageloc=<project-name>\/<value of mls1>
  persistentclaimname=<spectrum-nfs-claim-1>

  [LocationServer]
  lscount=1
  imagename=<ss-released-image>
  ls1=<name>
  ls1replicas=1
  ls1persistentstorageloc=<project-name>\/<value of ls1>
  ls1persistentclaimname=<spectrum-nfs-claim-1>
  ls2=<name>
  ls2replicas=1
  ls2persistentstorageloc=<project-name>\/<value of ls2>
  ls2persistentclaimname=<spectrum-nfs-claim-1>

  [OneClickServer]
  ocs1=<ocsone>
  imagename=<ocs-released-image>
  servicename=ocs1
  routename=ocs1
  hostname=<masternode hostname>

This table displays the key attributes and the corresponding value attributes that are to be specified by the user.

| Server | Key Attribute | Value Attribute |
| --- | --- | --- |
| [MainLocationServer] | mls1 | Specify any user-defined MLS name. The MLS deployment is created using this name. |
| | imagename | Specify the ss-released-image here. |
| | rootpwd | Specify the rt_passwd here. |
| | mls1replicas | Specifies the number of main location server containers to spawn. |
| | persistentstorageloc | Specify the project-name and the value of mls1. Do not replace \/ as it is essential for the script to run; for example: spectrum\/mls1. |
| | persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name created in the Autoinstaller Prerequisite section. |
| [LocationServer] | lscount | Specifies the number of location servers to be spawned. |
| | imagename | Specify the ss-released-image here. |
| | ls1 | Name of the first location server; can be something intuitive like lsone. |
| | ls1replicas | Specifies the number of replicas of ls1 to be spawned. The default value is 1. |
| | ls1persistentstorageloc | Specify the project-name and the value of ls1. Do not replace \/ as it is essential for the script to run; for example: spectrum\/ls1. |
| | ls1persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name created in the Autoinstaller Prerequisite section. |
| | ls2 | Name of the second location server; can be something intuitive like lstwo. |
| | ls2replicas | Specifies the number of replicas of ls2 to be spawned. The default value is 1. |
| | ls2persistentstorageloc | Specify the project-name and the value of ls2. Do not replace \/ as it is essential for the script to run; for example: spectrum\/ls2. |
| | ls2persistentclaimname | Specify spectrum-nfs-claim-1, the persistent volume claim name created in the Autoinstaller Prerequisite section. |
| [OneClickServer] | ocs1 | Specify the OCS name, like ocsone, here. |
| | imagename | Specify the ocs-released-image here. |
| | servicename | Specify the servicename as ocs1 here. |
| | routename | Specify the routename as ocs1 here. |
| | hostname | Specify the master node's hostname. |
- To run the autoinstaller script with the deploy.ini updated as described above, execute the following command:
  ./autoinstall.sh --ini deploy.ini
- Post-installation, by default, the /var/spectrum-nfs-pd directory of the NFS Server is mounted onto /data of the container. The user has to keep running the OLB (online backup) with /data/<project-name>/<deployment-name> as the path for every respective SpectroSERVER deployment, for example, /data/spectrum/mls1.
Adding a New Landscape (Location Server)
- Run autoinstallnonmls.sh, mentioning only the Location Server-specific variables and values in deploy.ini. Add the new Location Server details to deploy.ini (see the illustrative excerpt after this step) and execute the following command:
  ./autoinstallnonmls.sh --ini deploy.ini
  Since the MLS and the other Location Servers already exist, you get warnings such as "mlsone already exists". You can ignore these warnings.
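For illustration, assuming a third Location Server named lsthree is being added, the new deploy.ini entries might look like the following sketch (the name lsthree is hypothetical; the keys follow the [LocationServer] pattern described in the table above):

  [LocationServer]
  lscount=3
  ls3=lsthree
  ls3replicas=1
  ls3persistentstorageloc=<project-name>\/lsthree
  ls3persistentclaimname=<spectrum-nfs-claim-1>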
Upgrade Procedure
Following is the upgrade procedure:
- Push the new upgraded image into the docker repository with the previous image name; the upgrade then starts automatically on all the corresponding containers. For example, if the imagename is <spectrum-image>, the upgraded image must have the same name for the old container to be killed and a new one to be created. The new container picks up the old container's data from the persistent storage, which prevents loss of data. (A tag-and-push sketch follows this list.)
- When the upgrade begins, run the etc_hosts.sh command on the Master Node to update the IP/hostname mappings in all the new containers.
- In case of a DX NetOps Spectrum container failure, a new container is created after the docker/application failure in the old container:
- Run the etc_hosts.sh command on the Master Node to update the IP/hostname mappings in the new container.
- If, during an upgrade, a container is terminated before its database is saved, keep a backed-up or synced-up DB using the OLB so that the last successfully saved DB is picked up on container failure or restart.
- Whenever a container is recreated, whether due to new image availability or any other scenario (such as recovery from a crash), the IP address and the hostname of the container change.
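As a sketch of the image push in the first step above (the registry path and tag are illustrative; only the requirement that the image keeps its previous name, such as <spectrum-image>, comes from this procedure):

  # Tag the upgraded build with the existing image name, then push it
  docker tag <new-build-id> <registry>/<spectrum-image>:latest
  docker push <registry>/<spectrum-image>:latest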
Post Installation Tasks
To receive traps in a DSS environment, enable the Trap Director by navigating to the VNM model handle > Information tab > Trap Management > Enable Trap Director. By default, it is disabled.
Create a NodePort service for the MLS container where the SS is running. A NodePort service maps a Master Node port to the container port, allowing SNMP trap traffic to reach the MLS. Once a trap is received by the MLS, it sends traps across landscapes in the DSS environment based on the IP. Execute the following yaml config file on the master node:
  kind: Service
  apiVersion: v1
  metadata:
    name: <mls deployment config name>
  spec:
    ports:
      - name: "trap"
        port: 162
        protocol: UDP
        targetPort: 162
        nodePort: 162
    selector:
      name: <mls deployment config name>
    type: NodePort
    sessionAffinity: None
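The service can then be created in the same way as the other resources, for example (the file name mls-trap-service.yaml is illustrative):
  oc create -f mls-trap-service.yaml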
In the example above, the nodePort value 162 falls outside the default service node port range, so the port is unavailable on the master node and the traps are not forwarded to the MLS. To change the service node port range:
- Open the /etc/origin/master/master-config.yaml file for editing (vi /etc/origin/master/master-config.yaml).
- Change the 'servicesNodePortRange' parameter; for example, servicesNodePortRange can be set between 80 and 32767 (minimum value to maximum value), as shown in the excerpt after this list.
- To verify that the node port changes are reflected, restart the service using the systemctl restart origin-master.service command.
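A minimal sketch of the resulting setting (placing it under kubernetesMasterConfig is an assumption based on the OpenShift 3.x master-config.yaml layout):

  kubernetesMasterConfig:
    servicesNodePortRange: "80-32767"   # minimum-maximum node port range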