Deployment Procedure for a Kubernetes Cluster for the Demo Footprint with VMware vSAN or Block Storage
This topic provides instructions to deploy the Kubernetes cluster with VMware vSAN or Block Storage enabled for the Demo footprint.
- If the deployment host does not have Internet connectivity to download the required files from My Downloads, follow the steps described in the DarkSite Deployment for Kubernetes section to get the files onto the deployment host. Then proceed with the remaining steps in this section.
- If the Harbor password has changed, you must redeploy VMware Telco Cloud Service Assurance for the VMware Telco Cloud Service Assurance applications to run without failing. For more information, see Procedure to Redeploy If the Harbor Credentials Are Changed.
- Log in to the deployment host.
- Download the tar.gz file of the deployment container from My Downloads onto the deployment host under the home directory. The package is named VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-Deployment-Container-2.4.1-167.tar.gz. To verify the downloaded package, run the following command on your deployment host.
This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file on the My Downloads site and ensure that they match.

$ sha256sum VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz

Load the deployment container image:

# On deployment host
$ docker load -i <dir/on/deployment host>/VMware-Deployment-Container-2.4.1-167.tar.gz

Verify the deployment container image:

# On deployment host
$ docker images

- Download the K8s Installer from VMware Customer Connect onto the deployment host under the home directory. Typically this package is named VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz. For example, VMware-K8s-Installer/VMware-K8s-Installer-2.1.0-509.tar.gz. To verify the downloaded package, run the following command on your deployment host.
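The checksum comparison above can also be scripted so that a mismatch fails loudly. This is a minimal sketch, not part of the official procedure; the helper name verify_sha256 is ours, and the throwaway file below stands in for the real tarball.

```shell
# verify_sha256 FILE EXPECTED: compare the file's SHA256 fingerprint against
# the value published next to the file on the download site.
verify_sha256() {
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "Checksum OK"
    else
        echo "Checksum MISMATCH for $1 - re-download the file" >&2
        return 1
    fi
}

# Intended use (placeholders for the real tarball and published fingerprint):
#   verify_sha256 VMware-Deployment-Container-<VERSION>-<BUILD_ID>.tar.gz <published-sha256>
# Demonstrated here on a throwaway file:
printf 'demo' > /tmp/demo.tar.gz
verify_sha256 /tmp/demo.tar.gz "$(sha256sum /tmp/demo.tar.gz | awk '{print $1}')"
```

Scripting the check avoids eyeballing two 64-character strings, which is an easy place to miss a single differing character.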
This command displays the SHA256 fingerprint of the file. Compare this string with the SHA256 fingerprint provided next to the file on the VMware Customer Connect download site and ensure that they match.

$ sha256sum VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz

- Extract the K8s Installer as follows. This creates a folder called k8s-installer under the home directory.

$ tar -xzvf VMware-K8s-Installer-<VERSION>-<BUILD_ID>.tar.gz

Always extract the K8s Installer within the /root directory.
- Navigate to the k8s-installer directory and verify that there are two directories named scripts and cluster. By default, the Kubernetes install logs are stored under $HOME/k8s-installer/ansible.log. If you want to change the log location, update the log_path variable in the file $HOME/k8s-installer/scripts/ansible/ansible.cfg.
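If you do relocate the logs, the change in ansible.cfg is a single line; the path below is an example of an alternative location, not a required value.

```ini
# $HOME/k8s-installer/scripts/ansible/ansible.cfg
[defaults]
# Default is $HOME/k8s-installer/ansible.log; any path writable by the
# installer works.
log_path = /var/log/k8s-installer/ansible.log
```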
- Launch the Deployment Container as follows:

docker run \
  --rm \
  -v $HOME:/root \
  -v $HOME/.ssh:/root/.ssh \
  -v $HOME/.kube:/root/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/local/bin/docker:ro \
  -v $HOME/.docker:/root/.docker:ro \
  -v /etc/docker:/etc/docker:rw \
  -v /opt:/opt \
  --network host \
  -it projects.registry.broadcom.com/tcx/deployment:2.4.1-167 \
  bash
- Update the deployment parameters by editing the /root/k8s-installer/scripts/ansible/vars.yml file inside the Deployment Container.
- Configure the general parameters. Set the values according to your environment.

cluster_name: <your-cluster-name> # Unique name for your cluster
ansible_user: <your-SSH-username> # SSH username for the Cluster Node VMs
ansible_become_password: <your-password> # SSH password for the Cluster Node VMs

Update the parameter admin_public_keys_path with the path of the public key generated during SSH key generation.

admin_public_keys_path: /root/.ssh/id_rsa.pub # Path to the SSH public key. This will be a .pub file under $HOME/.ssh/

Update control_plane_ips and worker_node_ips in the following format. For the Demo footprint, refer to the System Requirements for Demo Footprint section to get the number of Control Node and Worker Node VMs.

control_plane_ips: # The list of control plane IP addresses of your VMs. This should be a YAML list.
  - <IP1>
worker_node_ips: # The list of worker node IP addresses of your VMs. This should be a YAML list.
  - <IP2>
  - <IPn>
- Update the deployment host IP and the YUM server IP address.

## Deployment host IP address
## Make sure firewall is disabled in deployment host
# The IP address of your deployment host
deployment_host_ip: <your-deployment-host-ip>
## default value is http. Use https for secure communication.
yum_protocol: http
# The IP address/hostname of your yum/package repository
yum_server: <your-yum-server-ip>
- For the Harbor Container Registry, uncomment and update the harbor_registry_ip parameter with the selected static IP address. The free static IP must be in the same subnet as the management IPs of the Cluster Nodes.

### Harbor parameters ###
## The static IP address to be used for Harbor Container Registry
## This IP address must be in the same subnet as the VM IPs.
harbor_registry_ip: <static-IPAddress>
- Set the following parameter to a location that has sufficient storage space for storing all application data. In the example below, the /mnt file system must have 200 GB of storage space and 744 permissions.

storage_dir: /mnt

Create the directory or partition specified in storage_dir on all nodes if it does not already exist.
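Preparing the storage directory on each node can be sketched as follows. The helper name prepare_storage_dir is ours, and the /mnt path comes from the example above; this is an illustration, not the official tooling.

```shell
# prepare_storage_dir DIR: create the application-data directory with the
# 744 permissions called out above and print the available space in GB.
prepare_storage_dir() {
    mkdir -p "$1" && chmod 744 "$1" || return 1
    df -BG --output=avail "$1" | tail -1 | tr -dc '0-9'
}

# On each cluster node you would run it against the vars.yml storage_dir
# and confirm the printed value is at least 200 (GB):
#   prepare_storage_dir /mnt
# Demonstrated here on a temporary directory:
prepare_storage_dir "$(mktemp -d)"
```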
- For storage-related parameters, uncomment and set the following parameters to true.

### Storage related parameters ###
use_external_storage: true
install_vsphere_csi: true
- If using VMware vSAN or Block Storage, uncomment and update the following VMware vCenter parameters.
- If you do not want to provide the VMware vCenter password in plain text, you can comment out the vcenter_password: <your-vCenter-password> line. You will then be prompted for the vCenter password during Kubernetes cluster creation.
### vCenter parameters for using vSAN storage or Block Storage ###
vcenter_ip: <your-vCenter-IP>
vcenter_name: <your-vCenter-name>
vcenter_username: <your-vCenter-username>
vcenter_password: <your-vCenter-password>
## List of data centers that are part of your vSAN cluster
vcenter_data_centers:
  - <your-datacenter>
vcenter_insecure: true # True, if using self signed certificates
## The datastore URL. To locate, go to your vCenter -> datastores -> your vSAN datastore or Block Storage -> Summary -> URL
datastore_url: <your-datastore-url>

Ensure that the VMware vSAN or Block Storage has a minimum of 1.5 TB of storage space.
Here is a sample snippet of the vars.yml file:

### General parameters ###
cluster_name: vmbased-oracle-demo-vsan
ansible_user: root
ansible_become_password: <root-password>
admin_public_keys_path: /root/.ssh/id_rsa.pub
control_plane_ips:
  - 10.220.143.240
worker_node_ips:
  - 10.220.143.163
  - 10.220.143.245
  - 10.220.143.182
  - 10.220.143.113
  - 10.220.143.37
  - 10.220.143.38
## Deployment host IP address
## Make sure firewall is disabled in deployment host
deployment_host_ip: 10.10.10.1
## default value is http. Use https for secure communication.
yum_protocol: http
## IP address/hostname of yum/package repo
yum_server: 10.10.10.2
### Harbor parameters ###
## (Optional) The IP address to be used for the Harbor container registry, if static IPs are available.
## This IP address must be in the same subnet as the VM IPs.
harbor_registry_ip: 10.220.143.x
## When using local storage (Direct Attached Storage), set this to a location that has sufficient storage space for storing all application data
storage_dir: /mnt
### Storage related parameters ###
use_external_storage: "true"
install_vsphere_csi: "true"
### vCenter parameters for using external storage (VMFS or vSAN datastores or Block Storage) ###
vcenter_ip: 10.10.10.10
vcenter_name: hostname1.vmware.com
vcenter_username: <vcenter-username>
vcenter_password: <vcenter-password>
## List of data centers that are part of your cluster
vcenter_data_centers:
  - test-datacenter
vcenter_insecure: "true"
## The datastore URL. To locate, go to your vCenter -> datastores -> your datastore -> Summary -> URL
## Note: All VMs must be on the same datastore!
datastore_url: ds:///vmfs/volumes/vsan:527e4e6193eacd65-602e106ffe383d68/

- vcenter_ip: IP address or the FQDN of the vCenter.
- vcenter_name: Name of the vCenter as shown in the vSphere Console (after logging in to the vCenter using vSphere Console).
- Execute the prepare command inside the Deployment Container. If you have used a non-empty passphrase for SSH key generation (required for passwordless SSH communication), then you must execute the following commands inside the Deployment Container before running the Ansible script.

[root@wdc-10-214-147-149 ~]# eval "$(ssh-agent -s)"
Agent pid 3112829
[root@wdc-10-214-147-149 ~]# ssh-add ~/.ssh/id_rsa
Enter passphrase for /root/.ssh/id_rsa: <== Enter the non-empty passphrase provided during SSH key generation
Identity added: /root/.ssh/id_rsa ([email protected])

cd /root/k8s-installer/
export ANSIBLE_CONFIG=/root/k8s-installer/scripts/ansible/ansible.cfg
ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become

Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
- Execute the Kubernetes cluster installation command inside the Deployment Container. If the vCenter password is commented out in the vars.yml file, you will be prompted to provide the vCenter password when the following Ansible script is executed.

cd /root/k8s-installer/
ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/deploy_k8s.yml -u <your-SSH-username> --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml
- Some fatal messages are displayed on the console and ignored by the Ansible script during execution. These messages do not have any functional impact and can be safely ignored.
- After the Kubernetes cluster installation completes, the kubeconfig file is generated under /root/.kube/<your-cluster-name>. Export the kubeconfig file using export KUBECONFIG=/root/.kube/<your-cluster-name> and proceed with the following steps to verify that the deployment is successful.
- Ensure that the Kubernetes installation is successful and the following message is displayed on the console: k8s Cluster Deployment successful.

kubectl get nodes

Ensure that all the nodes are in the Ready state before starting the VMware Telco Cloud Service Assurance deployment.
- Verify that the Harbor pods are up and running.

kubectl get pods | grep harbor
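The readiness check above can be wrapped in a small gate before starting the VMware Telco Cloud Service Assurance deployment. This is a sketch under the assumption that kubectl is on the PATH and KUBECONFIG is exported; the helper name all_nodes_ready is ours, not part of the installer.

```shell
# all_nodes_ready: reads `kubectl get nodes --no-headers` output on stdin and
# succeeds only when every node reports the Ready status.
all_nodes_ready() {
    awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# Intended use on the deployment host:
#   kubectl get nodes --no-headers | all_nodes_ready && echo "all nodes Ready"
#   kubectl get pods | grep harbor   # then confirm the Harbor pods are Running
# Demonstrated here on sample `kubectl get nodes` output:
printf '%s\n' \
    'node-1   Ready   control-plane   10m   v1.26.5' \
    'node-2   Ready   <none>          9m    v1.26.5' \
    | all_nodes_ready && echo "all nodes Ready"
```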
- Once the Kubernetes deployment is complete, the next step is to deploy VMware Telco Cloud Service Assurance. Refer to the section Start the VMware Telco Cloud Service Assurance Deployment.
- If the Kubernetes deployment fails while waiting for the nodelocaldns pods to come up, re-run the Kubernetes installation script. The Kubernetes deployment will resume from that point.
- If the Kubernetes deployment fails because the Python HTTP server is getting killed every time, refer to Python HTTP Server Running on the Deployment Host Is Stopped in the VMware Telco Cloud Service Assurance Troubleshooting Guide to manually bring up the HTTP server.
- If you have changed your VMware vCenter credentials, run the following scripts to update them in the deployment container on the deployment host.
- Execute the prepare command inside the deployment container.

ansible-playbook scripts/ansible/prepare.yml -e @scripts/ansible/vars.yml --become
- Execute the post-install command to update the credentials inside the deployment container.

ansible-playbook -i inventory/<your-cluster-name>/hosts.yml scripts/ansible/post_install.yml -u root --become -e @scripts/ansible/internal_vars.yml -e @scripts/ansible/vars.yml --tags vsphere-csi