Autoinstall DX NetOps Spectrum DSS - OpenShift

About Autoinstall 
DX NetOps Spectrum 10.3.1 introduces the autoinstallation of the distributed SpectroSERVER (DSS), thereby reducing installation time and storage.
The following image displays the Autoinstall DX NetOps Spectrum DSS setup:
[Image: Autoinstall Spectrum DSS]
Fault-Tolerant SpectroSERVER setup is not supported in this release.
Configuring a Persistent Storage
Persistent storage is required to save the SpectroSERVER DB so that data is not lost whenever a container fails. For example, in an OpenShift cluster with three RHEL VMs (one master node and two worker nodes), you can designate one of the worker or master nodes as the 'NFS Server'. The 'NFS Server' retains the persistent data. The following configuration must be made on the VMs of the cluster; each command has instructions as to where it has to be run.
  1. Execute these commands on all VMs of the cluster:
    1. yum install nfs-utils
    2. systemctl start nfs
    3. systemctl status nfs
  2. Execute the following on the NFS Server:
    1. Create the shared directory on the NFS Server: 'mkdir /var/spectrum-nfs-pd'.
    2. Edit or create the file /etc/exports on the VM designated as the NFS Server to grant access to the NFS shared directory. In the example below, '<ip1>' is the IP of a VM that is allowed to access the NFS Server. Many such VM IPs can be added.
      /var/spectrum-nfs-pd <ip1>(rw,sync,no_root_squash,no_all_squash) <ip2>(rw,sync,no_root_squash,no_all_squash)
      If the file is changed, then issue this command on the NFS Server:
      exportfs -ra
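The /etc/exports line above can be generated for any number of client VMs with a small shell sketch (the client IPs and the sample output path are hypothetical; on a real NFS Server you would append the line to /etc/exports and run exportfs -ra):

```shell
# Build an /etc/exports entry for the Spectrum shared directory.
# 192.0.2.10 and 192.0.2.11 are example client IPs; the result is
# written to a sample file here, not to the real /etc/exports.
EXPORT_DIR=/var/spectrum-nfs-pd
CLIENTS="192.0.2.10 192.0.2.11"
line="$EXPORT_DIR"
for ip in $CLIENTS; do
  line="$line $ip(rw,sync,no_root_squash,no_all_squash)"
done
echo "$line" > /tmp/exports.sample
cat /tmp/exports.sample
```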
  3. Add iptables rules on the master and worker nodes, including the NFS Server node, by running the following commands on all the VMs of the cluster:
    iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
    iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
    iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 2049 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 20048 -j ACCEPT
    iptables -I INPUT 1 -p udp --dport 111 -j ACCEPT
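The six rules above follow one pattern (ports 2049, 20048, and 111 over both TCP and UDP), so they can be generated for review with a short shell sketch before being run as root on each node:

```shell
# Generate the NFS-related iptables commands (NFS 2049, mountd 20048,
# portmapper 111, each over TCP and UDP) for review; run the printed
# commands as root on every VM of the cluster.
rules=""
for port in 2049 20048 111; do
  for proto in tcp udp; do
    rules="${rules}iptables -I INPUT 1 -p $proto --dport $port -j ACCEPT
"
  done
done
printf '%s' "$rules"
```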
  4. Debugging: To verify NFS connectivity from a host, run the following command, where <hostname> is the NFS Server hostname:
    showmount -e <hostname>
Creating a Persistent Storage
To create the Persistent Storage, copy the following files, present in the sdic/linux/Docker_Openshift folder of the DX NetOps Spectrum vcd, onto a Master Node:
  • PersistentVolume.yaml
  • PersistentVolumeClaim.yaml 
Understanding PersistentVolume: Under metadata > name, mention the name of the PersistentVolume; this can be any user-intuitive name. The PV's label is used as the identifier that associates the PersistentVolume with the PersistentVolumeClaim, and it is critical to include it. For example, the label can be spectrumPVName: spectrum-nfs-pv-1.
  1. Under nfs > path, mention the exact directory name that was created on the NFS Server, for example, /var/spectrum-nfs-pd.
  2. Replace <nfs-server-ip> with the actual IP of the NFS Server.
  3. Run this command on the master to create a PV
    oc create -f PersistentVolume.yaml
    To check whether the PV was created, use the following commands:
    oc login -u system:admin
    oc get pv
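The shipped PersistentVolume.yaml is not reproduced in this section; a minimal sketch of the fields it describes could look as follows (the storage size and access mode are assumptions, not values from the shipped file):

```yaml
# Hypothetical sketch of PersistentVolume.yaml; capacity and
# accessModes are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: spectrum-nfs-pv-1
  labels:
    spectrumPVName: spectrum-nfs-pv-1   # label used to match the PVC selector
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/spectrum-nfs-pd          # directory created on the NFS Server
    server: <nfs-server-ip>             # replace with the NFS Server IP
```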
PersistentVolumeClaim:
Once a PV is created, it should be associated with a PersistentVolumeClaim so that a pod/deployment can use it for storage. Most of the fields are metadata-specific and self-explanatory. For example, the selector (matchLabels: spectrumPVName: spectrum-nfs-pv-1) is the same as the label mentioned as part of the PersistentVolume yaml. Run this command on the master to create the PVC:
oc create -f PersistentVolumeClaim.yaml
To check whether the PVC was created, use the following commands:
oc login -u system:admin
oc get pvc
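For reference, a minimal sketch of what PersistentVolumeClaim.yaml describes (the requested size and access mode are assumptions; the claim name and selector label come from this section):

```yaml
# Hypothetical sketch of PersistentVolumeClaim.yaml; requested size
# and accessModes are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spectrum-nfs-claim-1            # claim name referenced by deploy.ini
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      spectrumPVName: spectrum-nfs-pv-1 # must match the PV's label
```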
Distributed SpectroSERVER Autoinstaller
Following are the steps to autoinstall the DSS: 
  1. From the sdic/linux/Docker_Openshift folder of the DX NetOps Spectrum vcd, copy the autoinstaller files onto any directory on the OpenShift Master Node.
  2. Update the deploy.ini file as mentioned below and run the autoinstaller script, which sets up the DX NetOps Spectrum DSS environment at once. All the key attributes (on the left side of =) stay the same; the user has to change only the corresponding value attributes (as shown in the table).
    persistentstorageloc=<project-name>\/<value of mls1>
    ls1persistentstorageloc=<project-name>\/<value of ls1>
    ls2persistentstorageloc=<project-name>\/<value of ls2>
    hostname=<masternode hostname>
    This table displays the key attributes and the corresponding value attributes that are to be specified by the user. The key attribute names appear on the left side of = in deploy.ini and must not be changed; only their values, described below, are edited.
    Main Location Server (MLS):
    • Specify any user-needed MLS name. The MLS deployment is created using this name.
    • Specify the imagename as the ss-released-image here.
    • Specify the rt_passwd here.
    • Specifies the number of main location server containers. '1' is the value that is defined here.
    • Specify the project-name and the value of mls1. Do not replace \/ as it is essential for the script to run; replace the projectname\/deployment-name as is, for example: 'spectrum\/mls1'.
    • Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
    Location Servers:
    • Specifies the number of location servers to be spawned.
    First Location Server (ls1):
    • Specify the ss-released-image here.
    • Name of the first location server; could be something intuitive like lsone.
    • Specifies the number of replicas of ls1 to be spawned. The default value is 1.
    • Specify the project-name and the value of ls1. Do not replace \/ as it is essential for the script to run; replace the projectname\/deployment-name as is, for example: 'spectrum\/ls1'.
    • Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
    Second Location Server (ls2):
    • Name of the second location server; could be something intuitive like lstwo.
    • Specifies the number of replicas of ls2 to be spawned. The default value is 1.
    • Specify the project-name and the value of ls2. Do not replace \/ as it is essential for the script to run; replace the projectname\/deployment-name as is, for example: 'spectrum\/ls2'.
    • Specify spectrum-nfs-claim-1, which is the persistent volume claim name that is created in the Autoinstaller Prerequisite section.
    OneClick Server (OCS):
    • Specify the ocs name like ocsone here.
    • Specify the imagename as the ocs-released-image here.
    • Specify the servicename as ocs1 here.
    • Specify the routename as ocs1 here.
    • Specify the hostname as the masternode hostname. The value of this variable is the master node's hostname.
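Putting the table together, a filled-in deploy.ini fragment might look as follows (the project name 'spectrum', the deployment names mls1/lsone/lstwo, and the hostname are hypothetical example values):

```
persistentstorageloc=spectrum\/mls1
ls1persistentstorageloc=spectrum\/lsone
ls2persistentstorageloc=spectrum\/lstwo
hostname=master.example.com
```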
  3. To run the autoinstaller script, from the directory mentioned above, run the following command (substitute the autoinstaller script name for the placeholder):
    ./<autoinstaller-script> --ini deploy.ini
  4. Post-installation, by default, the shared directory of the NFS Server (/var/spectrum-nfs-pd in this example) gets mounted onto the corresponding directory of the container. The user has to keep running the OLB with the mounted directory as the path for every respective SpectroSERVER deployment.
Adding a new Landscape (Location Server) Procedure
  1. Run the autoinstalleronmls script, mentioning only the Location Server-specific variables and values in deploy.ini. From the directory mentioned above, add the new Location Server details to deploy.ini and execute the following command (substitute the script name for the placeholder):
    ./<autoinstalleronmls-script> --ini deploy.ini
    Since the mls and other Location Servers already exist, you get warnings such as "mlsone already exists". You can ignore these warnings. 
Upgrading Procedure
Following is the upgrade procedure:
  1. Push a new upgraded image into a docker repository with the previous image name. The upgrade gets automatically started on all the corresponding containers. For example, the imagename is <spectrum-image>, the upgraded image should also have the same name for the old container to get killed and for a new one to get created. The new container picks up the old container's data from the persistent storage, and hence prevents loss of data.
  2. When the upgrade begins, run the command on the Master Node to update the IP/hostname mappings in all the new containers.
  3. In case of a DX NetOps Spectrum container failure, a new container gets created after the docker/application failure in the old container:
    1. Run the command on the Master Node to update the IP/hostname mappings in the new container.
Known Anomalies
  1. If, during an upgrade, a container gets terminated before its DB is saved, keep a backed-up or synced-up DB using the OLB so that the last successfully saved DB is picked up on container failure or restart.
  2. When a container is recreated due to new image availability or any other scenario (such as recovery from a crash), the IP address and the hostname of the container change.
Post Installation Tasks 
To support traps in a DSS environment, enable the Trap Director by navigating to the VNA Model handle > Info tab > Trap Management > Enable Trap Director. By default, it is disabled.
Create a NodePort service for the MLS container where the SpectroSERVER is running. A NodePort service maps a Master Node port to the container port, allowing SNMP traffic to reach the MLS. Once a trap is received by the MLS, based on the IP, it forwards traps across landscapes in the DSS environment. Execute the following yaml configuration file on the master node:
kind: Service
apiVersion: v1
metadata:
  name: <mls deployment config name>
spec:
  ports:
  - name: "trap"
    port: 162
    protocol: UDP
    targetPort: 162
    nodePort: 162
  selector:
    name: <mls deployment config name>
  type: NodePort
  sessionAffinity: None
In the example above, node port 162 falls outside the default service node port range (30000-32767) on the master node, so the port is unavailable and the traps are not forwarded to the MLS. To change the service node port range:
  1. Open the master configuration file for editing:
    vi /etc/origin/master/master-config.yaml
  2. Change the 'servicesNodePortRange' parameter; for example, servicesNodePortRange can be set to 80-32767 (minimum value to maximum value).
  3. Restart the service for the node port changes to take effect:
    systemctl restart origin-master.service
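Step 2 above edits the servicesNodePortRange setting; a hypothetical excerpt of /etc/origin/master/master-config.yaml after the change (the surrounding key is based on the OpenShift 3.x master configuration layout):

```yaml
# Hypothetical excerpt of /etc/origin/master/master-config.yaml
kubernetesMasterConfig:
  servicesNodePortRange: "80-32767"   # widened so nodePort 162 is allowed
```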