This topic contains release notes for Tanzu Kubernetes Grid Integrated Edition (TKGI) v1.16.
TKGI v1.16.8
Release Date: March 26, 2024
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.8 | |
| Release date | March 26, 2024 | |
| Internal Component Versions | | |
| Antrea | v1.6.1 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.28 Windows: v1.6.28 | |
| CoreDNS | v1.9.3+vmware.19 | |
| CSI Driver for vSphere | v2.7.3 | Release Notes |
| etcd | v3.5.9 | |
| Harbor | v2.7.4 | Release Notes |
| Kubernetes | v1.25.16 | Release Notes |
| Metrics Server | v0.6.4 | |
| NCP | v4.1.0.5 | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.108* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.69 or later | |
| Interoperability | | |
| Ops Manager | v3.0.25* or later, v2.10.71* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX*** | See VMware Product Interoperability Matrices****. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** As of May 7, 2024, NSX networking and firewall components are sold separately from TKGI.
**** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.8 are from TKGI v1.16.7 and earlier TKGI v1.16 patches, and TKGI v1.15.8 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.8 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.8 does not include any enhancements.
Resolved Issues
TKGI v1.16.8 resolves the following issues:
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.7 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.8. For more information, see TKGI v1.16.7 Known Issues below.
TKGI v1.16.8 does not include any additional known issues.
TKGI Management Console v1.16.8
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: March 26, 2024
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.8 | |
| Release date | March 26, 2024 | |
| Installed TKGI version | v1.16.8 | |
| Installed Ops Manager version | v2.10.71* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.16 | Release Notes |
| Installed Harbor Registry version | v2.7.4 | Release Notes |
| Ubuntu Jammy stemcell | v1.406* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.8 are from TKGI MC v1.16.7 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.8 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.8 resolves the following issue:
- Fixes an issue where the LDAP password was not hidden in the MC-generated TKGI configuration file.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.7 are also in TKGI MC v1.16.8. For more information, see TKGI MC v1.16.7 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.8 does not include any additional known issues.
TKGI v1.16.7
Release Date: February 27, 2024
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.7 | |
| Release date | February 27, 2024 | |
| Internal Component Versions | | |
| Antrea | v1.6.1 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.28* Windows: v1.6.28* | |
| CoreDNS | v1.9.3+vmware.19 | |
| CSI Driver for vSphere | v2.7.3 | Release Notes |
| etcd | v3.5.9 | |
| Harbor | v2.7.4 | Release Notes |
| Kubernetes | v1.25.16 | Release Notes |
| Metrics Server | v0.6.4 | |
| NCP | v4.1.0.5* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.102* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.69* or later | |
| Interoperability | | |
| Ops Manager | v3.0.24* or later, v2.10.70* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.7 are from TKGI v1.16.6 and earlier TKGI v1.16 patches, and TKGI v1.15.8 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.7 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.7 does not include any enhancements.
Resolved Issues
TKGI v1.16.7 resolves the following issues:
- Fixes Error Scaling Clusters when Compute Profile Lacks Node Pool.
- Containerd update to v1.6.28 fixes issue High-Severity CVE-2024-21626 in runc 1.1.11 and earlier.
- NCP update to v4.1.0.5 fixes the following issues:
  - Issue 3332908: Failure of load balancer during an edit of VS instance in TKGI.
  - The parameter `cookie_name` is configurable in the network profile when ingress `persistence_type` is set as `cookie` (see the sketch after this list).
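As a hedged illustration only: a minimal network profile showing where a named persistence cookie could be set. The profile name, cookie value, and exact field placement are assumptions rather than TKGI-documented values; see cni_configurations Extensions Parameters in Creating and Managing Network Profiles for the authoritative schema.

```json
{
  "name": "np-cookie-persistence",
  "description": "Illustrative sketch: named cookie for ingress persistence",
  "parameters": {
    "cni_configurations": {
      "type": "ncp",
      "parameters": {
        "ingress_persistence_settings": {
          "persistence_type": "cookie"
        },
        "extensions": {
          "ncp": {
            "nsx_v3": {
              "cookie_name": "EXAMPLE-SESSION-COOKIE"
            }
          }
        }
      }
    }
  }
}
```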
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.6 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.7. For more information, see TKGI v1.16.6 Known Issues below.
TKGI v1.16.7 does not include any additional known issues.
TKGI Management Console v1.16.7
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: February 27, 2024
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.7 | |
| Release date | February 27, 2024 | |
| Installed TKGI version | v1.16.7 | |
| Installed Ops Manager version | v2.10.70* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.16 | Release Notes |
| Installed Harbor Registry version | v2.7.4 | Release Notes |
| Ubuntu Jammy stemcell | v1.379* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.7 are from TKGI MC v1.16.6 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.8 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.7 does not include any enhancements or resolve any issues.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.6 are also in TKGI MC v1.16.7. For more information, see TKGI MC v1.16.6 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.7 does not include any additional known issues.
TKGI v1.16.6
Release Date: January 23, 2024
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.6 | |
| Release date | January 23, 2024 | |
| Internal Component Versions | | |
| Antrea | v1.6.1 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.24 Windows: v1.6.24 | |
| CoreDNS | v1.9.3+vmware.19* | |
| CSI Driver for vSphere | v2.7.3 | Release Notes |
| etcd | v3.5.9 | |
| Harbor | v2.7.4* | Release Notes |
| Kubernetes | v1.25.16* | Release Notes |
| Metrics Server | v0.6.4 | |
| NCP | v4.1.0.4* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.99* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.65 or later | |
| Interoperability | | |
| Ops Manager | v3.0.22* or later, v2.10.68* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.6 are from TKGI v1.16.5 and earlier TKGI v1.16 patches, and TKGI v1.15.8 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.6 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.6 includes the following enhancement:
- Prevents broker APIs from including base64-encoded PNG images in broker service metadata, to save syslog space.
Resolved Issues
TKGI v1.16.6 resolves the following issues:
- Fixes vSphere CSI Failure When Backslash in User Name.
- Fixes an issue where updating Kubernetes clusters loses `ClusterMetricSink` configuration details, preventing Prometheus from contacting Telegraf pods.
- NCP update to v4.1.0.4 fixes the issues listed in the NCP 4.1.0.4 Release Notes:
  - Issue 3305356: Kubernetes LoadBalancer service is not accessible after being created.
  - Issue 3310860: UDP connections with the same port may fail when endpoints of clusterIP Service restart.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.5 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.6. For more information, see TKGI v1.16.5 Known Issues below.
TKGI v1.16.6 does not include any additional known issues.
Important: To address CVE-2024-21626 by patching TKGI with a runc upgrade, see High-Severity CVE-2024-21626 in runc 1.1.11 and earlier below.
TKGI Management Console v1.16.6
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: January 23, 2024
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.6 | |
| Release date | January 23, 2024 | |
| Installed TKGI version | v1.16.6 | |
| Installed Ops Manager version | v2.10.64* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.16* | Release Notes |
| Installed Harbor Registry version | v2.7.4* | Release Notes |
| Ubuntu Jammy stemcell | v1.340* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.6 are from TKGI MC v1.16.5 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.8 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.6 does not include any enhancements or resolve any issues.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.5 are also in TKGI MC v1.16.6. For more information, see TKGI MC v1.16.5 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.6 does not include any additional known issues.
TKGI v1.16.5
Release Date: November 9, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.5 | |
| Release date | November 9, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.1* | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.24* Windows: v1.6.24* | |
| CoreDNS | v1.9.3+vmware.18* | |
| CSI Driver for vSphere | v2.7.3* | Release Notes |
| etcd | v3.5.9* | |
| Harbor | v2.7.3* | Release Notes |
| Kubernetes | v1.25.15* | Release Notes |
| Metrics Server | v0.6.4 | |
| NCP | v4.1.0.3 | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.91 | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.65 or later* | |
| Interoperability | | |
| Ops Manager | v3.0.18* or later, v2.10.64* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.5 are from TKGI v1.16.4 and earlier TKGI v1.16 patches, and TKGI v1.15.8 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.5 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.5 does not include any enhancements.
Resolved Issues
TKGI v1.16.5 resolves the following issues:
- Component bumps fix the following:
  - Upgrades Antrea to include downstream v1.9.1+vmware.1 enhancements: fixes an Antrea agent pod restart issue when FQDN-based rules or network policy logging is used.
  - Upgrades CSI Driver for vSphere to v2.7.3: fixes "Prevent node cache update during attach and detach".
- Fixes Node Drain Operation Ignores the TKGI Deployment Plan Settings.
- Fixes Cluster Update Operations Fail Due to Duplicate Tag Keys.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.4 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.5. For more information, see TKGI v1.16.4 Known Issues below.
TKGI Management Console v1.16.5
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: November 9, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.5 | |
| Release date | November 9, 2023 | |
| Installed TKGI version | v1.16.5 | |
| Installed Ops Manager version | v2.10.64* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.15* | Release Notes |
| Installed Harbor Registry version | v2.7.3 | Release Notes |
| Ubuntu Jammy stemcell | v1.260* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.5 are from TKGI MC v1.16.4 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.8 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.5 does not include any enhancements or resolve any issues.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.4 are also in TKGI MC v1.16.5. For more information, see TKGI MC v1.16.4 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.5 does not include any additional known issues.
TKGI v1.16.4
Release Date: September 21, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.4 | |
| Release date | September 21, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.0 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.18 Windows: v1.6.18 | |
| CoreDNS | v1.9.3+vmware.16* | |
| CSI Driver for vSphere | v2.7.2 | Release Notes |
| etcd | v3.5.6 | |
| Harbor | v2.7.2 | Release Notes |
| Kubernetes | v1.25.13* | Release Notes |
| Metrics Server | v0.6.4 | |
| NCP | v4.1.0.3* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.87* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Broadcom Support. | |
| Windows stemcells | v2019.61 or later | |
| Interoperability | | |
| Ops Manager | v3.0.15* or later, v2.10.61* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.4 are from TKGI v1.16.3 and earlier TKGI v1.16 patches, and TKGI v1.15.7 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.4 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.4 does not include any enhancements.
Resolved Issues
TKGI v1.16.4 resolves the following issues:
- [Security Fix] Component bumps fix the following:
  - Fixes CVE-2023-3676 and CVE-2023-3955.
  - Upgrades NCP to v4.1.0.3:
    - Fixes `FailedCreatePodSandBox` error during cluster creation.
- Fixes Cluster Might Fail to Send the `cluster_name` Tag to Logging after Cluster Upgrade.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.3 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.4. For more information, see TKGI v1.16.3 Known Issues below.
TKGI v1.16.4 does not include any additional known issues.
TKGI Management Console v1.16.4
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: September 21, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.4 | |
| Release date | September 21, 2023 | |
| Installed TKGI version | v1.16.4 | |
| Installed Ops Manager version | v2.10.61* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.13* | Release Notes |
| Installed Harbor Registry version | v2.7.2 | Release Notes |
| Ubuntu Jammy stemcell | v1.222* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.4 are from TKGI MC v1.16.3 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.7 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.4 does not include any enhancements or resolve any issues.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.3 are also in TKGI MC v1.16.4. For more information, see TKGI MC v1.16.3 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.4 does not include any additional known issues.
TKGI v1.16.3
Release Date: August 10, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.3 | |
| Release date | August 10, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.0 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.18 Windows: v1.6.18 | |
| CoreDNS | v1.9.3+vmware.13* | |
| CSI Driver for vSphere | v2.7.2 | Release Notes |
| etcd | v3.5.6 | |
| Harbor | v2.7.2 | Release Notes |
| Kubernetes | v1.25.12* | Release Notes |
| Metrics Server | v0.6.4* | |
| NCP | v4.1.0.2* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.83* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.61 or later | |
| Interoperability | | |
| Ops Manager | v3.0.14* or later, v2.10.60* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.3 are from TKGI v1.16.2 and earlier TKGI v1.16 patches, and TKGI v1.15.6 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.3 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.3 includes the following enhancement:
- The Fluentd default refresh interval has been decreased from 60 seconds to 30 seconds. This ensures that all logs are forwarded to VMware vRealize Log Insight consistently.
Resolved Issues
TKGI v1.16.3 resolves the following issues:
- [Security Fix] Component bumps fix the following:
  - Upgrades Kubernetes to v1.25.12:
    - Fixes CVE-2023-2728.
  - Upgrades Metrics Server to v0.6.4:
    - Fixes CVE-2022-28948 and CVE-2022-41721.
- Fixes Some Telegraf Metric-Sink Pods Crash After TKGI Upgrade.
- Fixes API Server Audit Logs Leak Tokens.
- Fixes Telemetry Does Not Report Large Metrics.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.2 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.3. For more information, see TKGI v1.16.2 Known Issues below.
TKGI v1.16.3 does not include any additional known issues.
TKGI Management Console v1.16.3
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: August 10, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.3 | |
| Release date | August 10, 2023 | |
| Installed TKGI version | v1.16.3 | |
| Installed Ops Manager version | v2.10.60* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.12* | Release Notes |
| Installed Harbor Registry version | v2.7.2 | Release Notes |
| Ubuntu Jammy stemcell | v1.181* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.3 are from TKGI MC v1.16.2 and earlier TKGI MC v1.16 patches, and TKGI MC v1.15.6 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.3 does not include any enhancements or resolve any issues.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.2 are also in TKGI MC v1.16.3. For more information, see TKGI MC v1.16.2 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.3 does not include any additional known issues.
TKGI v1.16.2
Release Date: June 29, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.2 | |
| Release date | June 29, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.0 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.18 Windows: v1.6.18 | |
| CoreDNS | v1.9.3+vmware.11* | |
| CSI Driver for vSphere | v2.7.2* | Release Notes |
| etcd | v3.5.6 | |
| Harbor | v2.7.2 | Release Notes |
| Kubernetes | v1.25.10* | Release Notes |
| Metrics Server | v0.6.1 | |
| NCP | v4.1.0.1 | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.77* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.61 or later* | |
| Interoperability | | |
| Ops Manager | v3.0.10* or later, v2.10.58* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.2 are from TKGI v1.16.1 and TKGI v1.15.5 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.2 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.2 includes the following features and enhancements:
Compute Profile Enhancements
TKGI v1.16.2 includes the following compute profile enhancements:
- Supports configuring compute profiles with a node pool description, as sketched below. For more information, see the `node_pools` block in Creating and Managing Compute Profiles with the CLI (vSphere).
- Supports optionally skipping compute profile validation. For more information, see Assign a Compute Profile in Using Compute Profiles (vSphere).
- Prevents updating an existing compute profile with the following unsupported configuration changes:
  - Changing the number of control plane nodes.
  - Changing the node pool `name` property.
  - Adding a new node pool while deleting an existing node pool.
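As a sketch under stated assumptions: a minimal compute profile carrying a node pool description. The `cluster_customization`/`node_pools` layout follows Creating and Managing Compute Profiles with the CLI (vSphere); the profile name, pool name, and instance counts below are illustrative.

```json
{
  "name": "example-compute-profile",
  "description": "Illustrative sketch: one described node pool",
  "parameters": {
    "cluster_customization": {
      "node_pools": [
        {
          "name": "pool-general",
          "description": "General-purpose Linux workers",
          "instances": 3,
          "max_worker_instances": 5
        }
      ]
    }
  }
}
```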
Additional Enhancements
TKGI v1.16.2 includes the following additional enhancements:
- Supports resizing clusters that have not been upgraded to the current TKGI control plane version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About TKGI Upgrades.
Resolved Issues
TKGI v1.16.2 resolves the following issues:
- Fixes Rotated TKGI Certificates Remain Listed as Expiring on the Ops Manager Certificates List.
- Fixes HTTPS Ingress Outage During VMware NSX Certificate Rotation.
- Fixes TKGI Sets the Maximum Persistent Volumes per Node to 59 Instead of 45.
- Component bumps fix the following:
  - Upgrades CSI Driver for vSphere to v2.7.2.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.1 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.2. For more information, see TKGI v1.16.1 Known Issues below.
TKGI v1.16.2 does not include any additional known issues.
TKGI Management Console v1.16.2
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: June 29, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.2 | |
| Release date | June 29, 2023 | |
| Installed TKGI version | v1.16.2 | |
| Installed Ops Manager version | v2.10.58* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.10* | Release Notes |
| Installed Harbor Registry version | v2.7.2* | Release Notes |
| Ubuntu Jammy stemcell | v1.125* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.2 are from TKGI MC v1.16.1 and TKGI MC v1.15.5 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.2 includes the following enhancements:
- Prevents deploying multiple TKGI instances on a vCenter Server. By using the TKGI MC, you can now deploy only one instance of TKGI on a vCenter Server.
TKGI Management Console v1.16.2 resolves the following issues:
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.1 are also in TKGI MC v1.16.2. For more information, see TKGI MC v1.16.1 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.2 does not include any additional known issues.
TKGI v1.16.1
Release Date: May 17, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.1 | |
| Release date | May 17, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.0 | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.18* Windows: v1.6.18* | |
| CoreDNS | v1.9.3+vmware.10* | |
| CSI Driver for vSphere | v2.7.0 | Release Notes |
| etcd | v3.5.6 | |
| Harbor | v2.7.1* | Release Notes |
| Kubernetes | v1.25.9* | Release Notes |
| Metrics Server | v0.6.1 | |
| NCP | v4.1.0.1* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.71* | |
| Velero | v1.9.5 | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0 Wavefront Proxy: v12.0 | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | See Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.58* or later | |
| Interoperability | | |
| Ops Manager | v3.0.8* or later, v2.10.56* or later. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.1 are from TKGI v1.16.0 and TKGI v1.15.4 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.1 does not have any new breaking changes.
Features and Enhancements
TKGI v1.16.1 includes the following enhancements:
Supports configuring additional NCP Network Profiles parameters: `nsx_v3.cookie_name`, `nsx_v3.members_per_medium_lbs`, `nsx_v3.members_per_small_lbs`, `nsx_v3.natfirewallmatch`, `nsx_v3.ncp_enforced_pool_member_limit`, and `nsx_v3.relax_scale_validation`.
For more information, see cni_configurations Extensions Parameters in Creating and Managing Network Profiles.
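A hedged sketch of where these parameters sit in a network profile's `cni_configurations` extensions block; the profile name and parameter values below are placeholders, not recommended settings.

```json
{
  "name": "np-ncp-tuning",
  "description": "Illustrative sketch: NCP load balancer tuning",
  "parameters": {
    "cni_configurations": {
      "type": "ncp",
      "parameters": {
        "extensions": {
          "ncp": {
            "nsx_v3": {
              "members_per_small_lbs": 500,
              "members_per_medium_lbs": 2000,
              "relax_scale_validation": true
            }
          }
        }
      }
    }
  }
}
```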
TKGI v1.16.1 includes the following minor enhancements:
- Increases the length of the `insecure_registries` column to 4K characters.
- Supports configuring the Fluent Bit memory limit.
- Improves NSX resource clean-up when deleting a Kubernetes cluster in NSX Policy API mode.
- Optimizes the query API usage for APIs that might respond slowly in large-scale environments.
Resolved Issues
TKGI v1.16.1 resolves the following issues:
- Fixes The ‘kube-state-metrics’ ClusterRole Is Deleted during Cluster Upgrade.
- Fixes ‘Input not an X.509 certificate’ When Applying Change on the TKGI Tile.
- Fixes Kubernetes API Server and etcd Daemon Occasionally Fail to Start During BBR Restore.
- Fixes `NullPointerException` error when creating a Compute Profile configured with `instances: 0` and the `max_worker_instances` parameter.
- Fixes The Validator Secret Certificate Is Not Rotated.
- Component bumps:
  - Upgrades NCP to v4.1.0.1:
    - Fixes NCP cannot be restarted following `tkgi update-cluster` while using Ubuntu Jammy stemcell v1.84 and later.
    - Fixes `ingress_persistence_settings.persistence_type` and `x_forwarded_for` Network Profile parameters do not accept `"none"` values.
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition v1.16.0 are also in Tanzu Kubernetes Grid Integrated Edition v1.16.1. For more information, see TKGI v1.16.0 Known Issues below.
TKGI v1.16.1 does not include any additional known issues.
TKGI Management Console v1.16.1
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Release Date: May 17, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
| Release Details | | |
|---|---|---|
| Version | v1.16.1 | |
| Release date | May 17, 2023 | |
| Installed TKGI version | v1.16.1 | |
| Installed Ops Manager version | v2.10.56* | Release Notes |
| Component Versions | | |
| Installed Kubernetes version | v1.25.9* | Release Notes |
| Installed Harbor Registry version | v2.7.1* | Release Notes |
| Ubuntu Jammy stemcell | v1.108* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.1 are from TKGI MC v1.16.0 and TKGI MC v1.15.4 and earlier TKGI v1.15 patches.
Features and Resolved Issues
TKGI Management Console v1.16.1 has the following resolved issue:
Known Issues
Except where noted, the known issues in Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.0 are also in TKGI MC v1.16.1. For more information, see TKGI MC v1.16.0 Known Issues below.
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.1 does not include any additional known issues.
TKGI v1.16.0
Warning: Use only Ubuntu Jammy stemcell v1.83 and Ops Manager v3.0.4 or v2.10.53 through v2.10.56 with TKGI v1.16.0. NCP cannot be restarted following `tkgi update-cluster` while using TKGI with later versions.
Release Date: February 28, 2023
Product Snapshot
| Release Details | | |
|---|---|---|
| Version | v1.16.0 | |
| Release date | February 28, 2023 | |
| Internal Component Versions | | |
| Antrea | v1.6.0* | Release Notes |
| cAdvisor | v0.39.1 | |
| Containerd | Linux: v1.6.6 Windows: v1.6.6 | |
| CoreDNS | v1.9.3+vmware.4* | |
| CSI Driver for vSphere | v2.7.0* | Release Notes |
| etcd | v3.5.6* | |
| Harbor | v2.6.2* | Release Notes |
| Kubernetes | v1.25.4* | Release Notes |
| Metrics Server | v0.6.1 | |
| NCP | v4.1.0* | Release Notes |
| Percona XtraDB Cluster (PXC) (in BOSH pxc-release) | v5.7.38-31.59 pxc-release: v0.44.0 | Release Notes: PXC pxc-release |
| UAA | v74.5.63* | |
| Velero | v1.9.5* | Release Notes |
| Wavefront | Wavefront Collector: v1.12.0* Wavefront Proxy: v12.0* | |
| Stemcell Compatibility | | |
| Ubuntu Jammy stemcells | v1.83 only. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| Windows stemcells | v2019.55* or later | |
| Interoperability | | |
| Ops Manager | v3.0.4 or later, v2.10.53 - v2.10.56 only. For more information, see Retrieve Product Version Compatibilities from the Tanzu API in the Broadcom Support KB. | |
| VMware Aria Operations Management Pack for Kubernetes | v2.0, v1.9 | Release Notes: v2.0, v1.9 |
| VMware Cloud Foundation (VCF) | v4.5.1** or later | Release Notes: v4.5.2, v4.5.1 |
| VMware NSX | See VMware Product Interoperability Matrices***. | |
| VMware vSphere | ||
* Components marked with an asterisk have been updated.
** VCF v4.5.1 and later are supported but have not been tested with TKGI v1.16.
*** In-Tree vSphere Storage Volume support requires vSphere 7.0u2 and later. Migration from NSX Management Plane API to NSX Policy API requires VMware NSX v4.0.1.1 or later.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition v1.16.0 are from TKGI v1.15.2 and earlier TKGI v1.15 patches.
Breaking Changes
TKGI v1.16.0 has the following breaking changes:
- Existing Telemetry Program configuration settings are ignored and telemetry must be reconfigured.
  The terms of the Telemetry & Customer Experience Improvement Program have been updated. Your previous selection will be reset upon upgrading to TKGI v1.16. Review VMware's Customer Experience Improvement Program and indicate your willingness to participate in TKGI's CEIP tab.
  To reconfigure Telemetry, see VMware CEIP in the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS.
  For more information on the Telemetry enhancements in this release, see Telemetry Enhancements.
- Upgrades Kubernetes to v1.25:
  - Kubernetes no longer serves the following:
    - `batch/v1beta1` API version of CronJob.
    - `discovery.k8s.io/v1beta1` API version of EndpointSlice.
    - `events.k8s.io/v1beta1` API version of Event.
    - `autoscaling/v2beta1` API version of HorizontalPodAutoscaler.
    - `policy/v1beta1` API version of PodDisruptionBudget.
    - PodSecurityPolicy in the `policy/v1beta1` API.
      Note: Pod Security Policy configurations must be migrated to Pod Security Admission before upgrading to TKGI v1.16. For more information, see Migrate from PSP to PSA Controller in Pod Security Admission in TKGI, and the sketch after this list.
    - RuntimeClass in the `node.k8s.io/v1beta1` API.
    For more information on Kubernetes v1.25 API removals, see Deprecated API Migration Guide - v1.25 in the Kubernetes documentation.
  - No longer supports In-Tree vSphere Storage Volumes on vSphere 7.0u1 and earlier. Kubernetes v1.25 supports In-Tree vSphere Storage Volumes on vSphere 7.0u2 and later only.
  For information about additional changes in Kubernetes v1.25, see CHANGELOG-1.25 in the Kubernetes GitHub repository.
- Support for the Xenial stemcell has been removed. For more information, see Supports the Ubuntu Jammy Stemcell below.
- The TKGI API requires a CA certificate with a SAN field: The custom CA certificate used to secure TKGI API connections must include a SAN field. If the TKGI API certificate does not include a SAN field, TKGI CLI commands return the following error:

  ```
  An error occurred in the PKS API when processing
  ```
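For orientation, the sketch below shows the upstream Pod Security Admission model that replaces PSP: a namespace opts into a policy level through standard `pod-security.kubernetes.io` labels. The namespace name and the `restricted` level are illustrative; see Pod Security Admission in TKGI for the supported migration steps.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-app
  labels:
    # Reject pods that violate the "restricted" Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    # Also surface warnings, which is useful while migrating from PSP.
    pod-security.kubernetes.io/warn: restricted
```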
Features and Enhancements
TKGI v1.16.0 has the following features:
Telemetry Enhancements
Customers who participate in the CEIP receive proactive support benefits that include a weekly report based on telemetry data. Contact your Customer Success Manager to subscribe to this report. You can view a sample report at TKGI Platform Operations Report.
vSphere CSI Driver Enhancements
TKGI v1.16.0 includes the following vSphere CSI Driver enhancements:
- Supports the snapshot and restore feature for persistent volumes. For more information, see Customize the Maximum Number of Volume Snapshots.
- Supports using the vSphere Container Storage Interface (CSI) Driver on cluster worker nodes that are distributed across multiple data centers. For more information, see Configure CNS Data Centers in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
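As an illustration of the snapshot feature, persistent volume snapshots use the standard Kubernetes CSI snapshot API. The names below are placeholders, and a VolumeSnapshotClass backed by the vSphere CSI driver is assumed to exist; see Customize the Maximum Number of Volume Snapshots for TKGI-specific limits.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
  namespace: default
spec:
  # Assumed to exist: a VolumeSnapshotClass for csi.vsphere.vmware.com.
  volumeSnapshotClassName: example-vsphere-snapclass
  source:
    # The PVC to snapshot; assumed bound to a vSphere CSI volume.
    persistentVolumeClaimName: example-pvc
```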
Supports the Ubuntu Jammy Stemcell
Support for the Ubuntu Jammy stemcell replaces support for the Xenial stemcell. TKGI Kubernetes cluster node VMs and the TKGI Control Plane now use the Ubuntu Jammy stemcell.
Upgrading an existing TKGI cluster to TKGI v1.16 automatically switches the cluster to the Ubuntu Jammy stemcell.
You must import a supported Ubuntu Jammy stemcell before upgrading TKGI to TKGI v1.16.0 or later. For more information, see Download and Import Stemcells in Upgrading Tanzu Kubernetes Grid Integrated Edition.
Compatible with Ops Manager v3.0
TKGI v1.16 supports Ops Manager v3.0 and Ops Manager v2.10. For more information about the new features and improvements in Ops Manager v3.0, see Ops Manager v3.0 Release Notes in the Ops Manager documentation.
For information about the Ops Manager v3.0 and v2.10 patch release versions supported by TKGI v1.16.0, see Product Snapshot above.
Supports Migrating TKGI to NSX Policy API
Supports promoting TKGI and TKGI Kubernetes clusters and workloads from NSX Management Plane API to NSX Policy API on vSphere with VMware NSX v4.0.1.1 or later.
For more information, see Migrating the NSX Management Plane API to NSX Policy API - Overview.
Supports the Velero vSphere Plugin
Supports backing up and restoring TKGI and TKGI Kubernetes clusters on vSphere using the Velero vSphere Plugin.
For more information, see Installing Velero vSphere Plugin.
vSphere CSI Supports Multiple Data Centers
Supports using the vSphere Container Storage Interface (CSI) Driver on cluster worker nodes that are distributed across multiple data centers. For more information, see Configure CNS Data Centers in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
Additional Features
TKGI v1.16.0 includes the following additional features:
- Supports nested LDAP group searching and configuring search depth. For more information, see Group Max Search Depth configuration in Integrate UAA with an LDAP Server in Connecting Tanzu Kubernetes Grid Integrated Edition to an LDAP Server or Use an External LDAP Server in Deploy TKGI by Using the Configuration Wizard.
- Supports adding new node subnets to an existing Network Profile Node Network IP Block configuration. For more information, see Configurable Node Network IP Blocks in Customize Node Networks.
- Supports running the vROPs cAdvisor DaemonSet without Privileged permission. See Interoperability with VMware Aria Operations Management Pack for Kubernetes Is Unavailable below for more information about VMware Aria interoperability.
- Enables the VMware vSphere Container Storage Plug-in ability to suspend a specific datastore for volume provisioning using Cloud Native Storage Manager. For more information, see VMware vSphere Container Storage Plug-in 2.5 Release Notes.
- Component bumps:
  - Upgrades the `fluent-plugin-vmware-loginsight` Fluentd output plugin to v1.3.1. `fluent-plugin-vmware-loginsight` forwards logs to VMware Log Insight.
Resolved Issues
TKGI v1.16.0 has the following resolved issues:
- [Security Fix] Component bumps fix the following:
  - Upgrades Kubernetes to v1.25.4:
  - Upgrades Spring to v2.5.14:
    - Fixes `spring-security` CVEs: CVE-2021-22112, CVE-2022-22976, CVE-2022-22978.
    - Fixes `spring-framework` CVEs: CVE-2022-22950, CVE-2022-22965, CVE-2022-22968, CVE-2022-22970.
    - Fixes `tomcat` CVEs: CVE-2020-9484, CVE-2020-17527, CVE-2021-24122, CVE-2021-25122, CVE-2021-25329, CVE-2021-30640, CVE-2021-33037, CVE-2021-41079, CVE-2022-23181, CVE-2022-29885, CVE-2022-34305.
- Component bumps fix the following Known Issues:
  - Fluent Bit:
  - Upgrades NCP to v4.1.0:
    - Fixes TKGI Certificate Rotation Might Remove NSX Ingress Certificates.
    - Fixes Misconfigured Ingress Invalidates All Ingresses.
  - Upgrades CSI Driver for vSphere to v2.7.0:
- Fixes Pods on Clusters Using the containerd-Runtime Enter a CrashLoopBackOff State.
- Fixes Timeout While Switching Container Runtimes If the Docker Directory Is Too Large.
- Fixes Windows Worker Nodes Are Unresponsive after Update-Cluster and Upgrade-Cluster.
- Fixes MP2P Migration does not migrate a cluster's SSL Profile if the cluster's Network Profile is configured to use the default SSL Profile `nsx-default-client-ssl-profile`.
- Fixes Telegraf In-Host Monitoring Does Not Collect kubelet Metrics.
- Fixes Updating and Upgrading a Cluster Can Timeout While Stopping Containerd.
- Fixes CSI Driver Image Missing After High Disk Utilization.
- Fixes Switching Your Default CNI to Antrea is Not Supported.
Deprecations
The following TKGI features have been deprecated or removed from TKGI v1.16:
- Google Cloud Platform: Support for the Google Cloud Platform (GCP) is deprecated. Support for GCP will be entirely removed in TKGI v1.19.
- The `log_dropped_traffic` CNI Configuration parameter: In TKGI v1.16.0 and later, the `log_dropped_traffic` CNI Configuration parameter is ignored. To configure logging in a Network Profile, modify the `log_firewall_traffic` parameter. For more information, see `log_settings` in the `cni_configurations` Parameters section in Creating and Managing Network Profiles.
- Pod Security Policy Support: Support for Kubernetes Pod Security Policy (PSP) has been entirely removed in Kubernetes v1.25. Kubernetes v1.25 instead supports Pod Security Admission. For more information, see Enabling and Configuring Pod Security Admission.
- Flannel Support: Support for the Flannel Container Networking Interface (CNI) is deprecated. Support for Flannel will be entirely removed in TKGI v1.19. VMware recommends that you switch your Flannel CNI-configured clusters to the Antrea CNI. For more information about Flannel CNI deprecation, see About Switching from the Flannel CNI to the Antrea CNI in About Tanzu Kubernetes Grid Integrated Edition Upgrades.
- In-Tree vSphere Storage Volume Support: In-Tree vSphere Storage volume support has been deprecated and will be entirely removed in a future Kubernetes version. The TKGI v1.17 upgrade will automatically migrate TKGI clusters from in-tree vSphere storage to vSphere CSI. VMware strongly recommends that you migrate your in-tree vSphere storage volumes to vSphere CSI volumes as soon as possible. For information on how to manually migrate In-Tree vSphere Storage volumes on existing TKGI clusters to the automatically installed vSphere CSI Driver, see Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
- SecurityContextDeny Admission Controller Support: TKGI support for the SecurityContextDeny admission controller will be removed in TKGI v1.18. SecurityContextDeny has been deprecated, and the Kubernetes community recommends the controller not be used. Pod Security Admission (PSA) is the preferred method for providing a more secure Kubernetes environment. For more information about PSA, see Pod Security Admission in TKGI.
Known Issues
TKGI v1.16.0 has the following known issues.
High-Severity CVE-2024-21626 in runc 1.1.11 and earlier
To address CVE-2024-21626, which impacts runc v1.1.11 and earlier, follow Instructions to address CVE-2024-21626 for TKGI in the VMware Knowledge Base.
TKGI MC Unable to Manage TKGI after Restoring the TKGI Control Plane from Backup
Symptom
After you restore Ops Manager and the TKGI API VM from backup, TKGI functions normally, but your TKGI MC tabs include the following error: “…product ‘pivotal-container service’ is not deployed…”.
Explanation
TKGI MC is associated with an Ops Manager with a specific name. If you rename Ops Manager with a new name while restoring, your TKGI MC will not recognize the restored Ops Manager and cannot manage it.
PVC multi-attach errors when upgrading to 1.16
Symptom
When upgrading a cluster that uses deprecated in-tree vSphere storage volumes to TKGI v1.16, the upgrade fails with the error `Warning FailedAttachVolume [...] Multi-Attach error for volume "pvc-[<ID-STRING>]"`.
Explanation
On Kubernetes v1.24 and later, a cluster's `vsphere-legacy-cloud-provider` component needs ServiceAccount permissions defined in order to update node objects. Upgrading to TKGI v1.16, which runs Kubernetes v1.25, does not automatically create these permissions.
Workaround
Before you upgrade a cluster to TKGI 1.16, create and apply the following ClusterRoleBinding and ClusterRole objects within the cluster context:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:serviceaccount:kube-system:vsphere-legacy-cloud-provider
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:serviceaccount:kube-system:vsphere-legacy-cloud-provider
subjects:
- kind: ServiceAccount
  name: vsphere-legacy-cloud-provider
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:serviceaccount:kube-system:vsphere-legacy-cloud-provider
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - patch
  - update
```
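A usage sketch, assuming you save the two manifests above together in a file named `vsphere-legacy-rbac.yaml` (the file name and context placeholder are illustrative):

```
kubectl config use-context CLUSTER-CONTEXT
kubectl apply -f vsphere-legacy-rbac.yaml
```

Where `CLUSTER-CONTEXT` is the kubectl context of the cluster you plan to upgrade.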
TKGI version upgrade without new stemcell fails for Containerd runtime clusters with Istio CNI
Symptom
On clusters configured to use a containerd registry and Istio CNI, upgrading the TKGI version without also upgrading the stemcell fails with the errors `kubelet cannot find istio-cni binary` and `nsx fails to recieve message header`.
This error does not occur when you upgrade to a new stemcell along with the new TKGI version.
Explanation
When a TKGI cluster upgrade drains a node, it leaves the cluster nodes' Istio CNI agent and CNI configuration in a corrupted state.
If the cluster nodes are not automatically re-created by a stemcell change, the corrupted Istio CNI state remains.
Workaround
For clusters that use both Containerd and Istio CNI:
- If you have already encountered this issue, re-create all worker nodes using the `bosh recreate` command:

  1. Run the `bosh vms` command to list the cluster VMs:

     ```
     bosh -d service-instance-DEPLOYMENT-ID vms
     ```

     Where `DEPLOYMENT-ID` is the BOSH-generated ID of your Kubernetes cluster deployment.

  2. For each VM instance listed as `worker/UUID` in the output, run `bosh recreate`:

     ```
     bosh -d service-instance-DEPLOYMENT-ID recreate worker/UUID
     ```

- In the future, you can avoid this issue by upgrading a cluster's stemcell whenever you upgrade its TKGI version.
vSphere CSI Failure When Backslash in User Name
This issue is fixed in TKGI v1.16.6.
Symptom
When installing or upgrading TKGI with the vSphere Container Storage Plug-in (CSI) enabled, CSI pods fail with the error `ErrImageNeverPull` and cluster logs show the error `unknown escape sequence`.
Explanation
The CSI driver cannot correctly parse the vCenter username configuration setting if it contains a backslash (\) character.
Workaround
When entering a vCenter user name in the TKGI Configuration Wizard or Ops Manager tile, use the format `user@domainname`. The user name cannot contain a backslash (`\`) character.
Kubernetes Pods on NSX-T Become Stuck in a Creating State
This issue is fixed by using NSX-T v3.2 or later.
Symptom
The pods in your TKGI Kubernetes clusters on NSX-T become stuck in a creating state. The connections between `nsx-node-agent` and `hyperbus` repeatedly close, log `Couldn't connect to 'tcp://...' (error: 111-Connection refused)`, and have a status of `COMMUNICATION_ERROR`.
Explanation
For information and workaround steps for this Known Issue, see Issue 2795268: Connection between nsx-node-agent and hyperbus flips and Kubernetes pod is stuck at creating state in NSX Container Plugin 3.1.2 Release Notes in the VMware documentation.
Error: Could Not Execute “Apply-Changes” in Azure Environment
Symptom
After clicking Apply Changes on the TKGI tile in an Azure environment, you experience an error ‘…could not execute “apply-changes”…’ with either of the following descriptions:
- {“errors”:{“base”:[“undefined method ‘location’ for nil:NilClass”]}}
- FailedError.new(“Resource Groups in region ‘#{location}’ do not support Availability Zones”))
For example:
```
INFO | 2020-09-21 03:46:49 +0000 | Vessel::Workflows::Installer#run | Install product (apply changes)
2020/09/21 03:47:02 could not execute "apply-changes": installation failed to trigger: request failed: unexpected response from /api/v0/installations:
HTTP/1.1 500 Internal Server Error
Transfer-Encoding: chunked
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Mon, 21 Sep 2020 17:51:50 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Referrer-Policy: strict-origin-when-cross-origin
Server: Ops Manager
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Download-Options: noopen
X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none
X-Request-Id: f5fc99c1-21a7-45c3-7f39
X-Runtime: 9.905591
X-Xss-Protection: 1; mode=block
44
{"errors":{"base":["undefined method `location' for nil:NilClass"]}}
0
```
Explanation
The Azure CPI endpoint used by Ops Manager has been changed and your installed version of Ops Manager is not compatible with the new endpoint.
Workaround
Run the following Ops Manager CLI command:

```
om --skip-ssl-validation --username USERNAME --password PASSWORD --target https://OPSMAN-API curl --silent --path /api/v0/staged/director/verifiers/install_time/IaasConfigurationVerifier -x PUT -d '{ "enabled": false }'
```

Where:

- `USERNAME` is the account to use to run Ops Manager API commands.
- `PASSWORD` is the password for the account.
- `OPSMAN-API` is the IP address for the Ops Manager API.
For more information, see Error ‘undefined method location’ is received when running Apply Change on Azure in the VMware Tanzu Knowledge Base.
VMware vRealize Operations Does Not Support Windows Worker-Based Kubernetes Clusters
VMware vRealize Operations (vROPs) does not support Windows worker-based Kubernetes clusters and cannot be used to manage TKGI-provisioned Windows workers.
TKGI Wavefront Requires Manual Installation for Windows Workers
To monitor Windows-based worker node clusters with a Wavefront collector and proxy, you must first install Wavefront on the clusters manually, using Helm. For instructions, see the Wavefront section of the Monitoring Windows Worker Clusters and Nodes topic.
Pinging Windows Worker Kubernetes Clusters Does Not Work
TKGI-provisioned Windows worker-based Kubernetes clusters inherit a Kubernetes limitation that prevents outbound ICMP communication from workers. As a result, pinging Windows workers does not work.
For information about this limitation, see Limitations > Networking in the Windows in Kubernetes documentation.
Velero Does Not Support Backing Up Stateful Windows Workloads
You can use Velero to back up stateless TKGI-provisioned Windows workers only. You cannot use Velero to back up stateful Windows applications. For more information, see Velero on Windows in Basic Install in the Velero documentation.
NSX pod creation fails when using Tanzu Application Platform
Symptom
When you deploy a workload on a TKGI-provisioned cluster with NSX networking that is running Tanzu Application Platform (TAP), you see an error Failed to create pod sandbox and no resources are created in the cluster’s nsx-system namespace.
Explanation
The total number of Kubernetes object labels and other tags created by both TKGI and TAP can exceed the number that is allowed by NSX.
Workaround
Create or update your network profile as described in Creating and Managing Network Profiles (NSX Only), setting the cni_configurations parameter extensions.ncp.k8s.label_filtering_regex_list as described under cni_configurations Extensions Parameters.
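As an illustrative sketch only, a network profile of roughly the following shape sets that parameter. The profile name and regex values here are placeholders, and the exact nesting should be verified against cni_configurations Extensions Parameters before use:

{
  "name": "tap-label-filtering",
  "description": "Drop high-cardinality labels before NCP creates NSX tags",
  "parameters": {
    "cni_configurations": {
      "type": "nsxt",
      "parameters": {
        "extensions": {
          "ncp": {
            "k8s": {
              "label_filtering_regex_list": [
                "app.kubernetes.io/*",
                "apps.tanzu.vmware.com/*"
              ]
            }
          }
        }
      }
    }
  }
}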
Tanzu Mission Control Integration Not Supported on GCP
TKGI on Google Cloud Platform (GCP) does not support Tanzu Mission Control (TMC) integration, which is configured in the Tanzu Kubernetes Grid Integrated Edition tile > the Tanzu Mission Control pane.
If you intend to run TKGI on GCP, skip this pane when configuring the Tanzu Kubernetes Grid Integrated Edition tile.
TMC Data Protection Feature Requires Privileged TKGI Containers
The TMC Data Protection feature supports privileged TKGI containers only. For more information, see Plans in the Installing TKGI topic for your IaaS.
Windows Worker Kubernetes Clusters with Group Managed Service Account Do Not Support Compute Profiles
Windows worker-based Kubernetes clusters integrated with group Managed Service Account (gMSA) cannot be managed using compute profiles.
Windows Worker Kubernetes Clusters on Flannel Do Not Support Compute Profiles
On vSphere with NSX-T networking, you can use compute profiles with both Linux and Windows worker-based Kubernetes clusters. On vSphere with Flannel networking, you can apply compute profiles only to Linux clusters.
TKGI CLI Does Not Prevent Reducing the Control Plane Node Count
TKGI CLI does not prevent accidentally reducing a cluster’s control plane node count using a compute profile.
Warning: Reducing a cluster’s control plane node count can destroy the cluster. Do not scale out or scale in existing control plane nodes by reconfiguring the TKGI tile or by using a compute profile. Reducing a cluster’s number of control plane nodes might remove a control plane node and cause the cluster to become inactive.
Windows Cluster Nodes Not Deleted After VM Deleted
Symptom
After you delete a VM using the management console of your infrastructure provider, you notice a Windows worker node that had been on that VM is now in a notReady state.
Solution
1. To identify the leftover node, run:

   kubectl get no -o wide

2. Locate nodes on the returned list that are in a notReady state and have the same IP address as another node in the list.

3. To manually delete a notReady node, run:

   kubectl delete node NODE-NAME

   Where NODE-NAME is the name of the node in the notReady state.
502 Bad Gateway After OIDC Login
Symptom
You experience a “502 Bad Gateway” error from the NSX load balancer after you log in to OIDC.
Explanation
A large response header has exceeded your NSX-T load balancer maximum response header size. The default maximum response header size is 10,240 characters and should be resized to 16,384.
Workaround
If you experience this issue, manually reconfigure your NSX-T request_header_size to 4096 characters and your response_header_size to 16384. For information about configuring NSX default header sizes, see OIDC Response Header Overflow in the Knowledge Base.
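For illustration, a sketch of that change through the NSX Policy API. The application profile ID is a placeholder, and the field names follow the LBHttpProfile resource type, so confirm both against the Knowledge Base article before use:

curl -k -u 'admin:NSX-PASSWORD' \
  -X PATCH 'https://NSX-MANAGER/policy/api/v1/infra/lb-app-profiles/PROFILE-ID' \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "LBHttpProfile",
        "request_header_size": 4096,
        "response_header_size": 16384
      }'

Where NSX-MANAGER is your NSX Manager address and PROFILE-ID is the HTTP application profile used by the cluster load balancer.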
Difficulty Changing Proxy for Windows Workers
You must configure a global proxy in the Tanzu Kubernetes Grid Integrated Edition tile > Networking pane before you create any Windows workers that use the proxy.
You cannot change the proxy configuration for Windows workers in an existing cluster.
Character Limitations in HTTP Proxy Password
For vSphere with NSX-T, the HTTP Proxy password field does not support the following special characters: & or ;.
Error After Modifying Your Harbor Storage Configuration
Symptom
You receive the following error after modifying your existing Harbor installation’s storage configuration:
Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown
Explanation
Harbor does not support modifying an existing Harbor installation’s storage configuration.
Workaround
To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.
Ingress Controller Statefulset Fails to Start After Resizing Worker Nodes
Symptom
Permissions are removed from your cluster’s files and processes after resizing the persistent disk during a cluster upgrade. The ingress controller statefulset fails to start.
Explanation
When resizing a persistent disk, BOSH migrates the data from the old disk to the new disk but does not copy the files' extended attributes.
Workaround
To resolve the problem, complete the steps in [Ingress controller statefulset fails to start after resize of worker nodes with permission denied](https://knowledge.broadcom.com/external/article/298618/ingress-controller-statefulset-fails-to.html?language=en_US) in the VMware Tanzu Knowledge Base.
Azure Default Security Group Is Not Automatically Assigned to Cluster VMs
Symptom
You experience issues when configuring a load balancer for a multi-control plane node Kubernetes cluster or creating a service of type LoadBalancer. Additionally, in the Azure portal, the VM > Networking page does not display any inbound and outbound traffic rules for your cluster VMs.
Explanation
As part of configuring the Tanzu Kubernetes Grid Integrated Edition tile for Azure, you enter Default Security Group in the Kubernetes Cloud Provider pane. When you create a Kubernetes cluster, Tanzu Kubernetes Grid Integrated Edition automatically assigns this security group to each VM in the cluster. However, on Azure the automatic assignment might not occur.
As a result, your inbound and outbound traffic rules defined in the security group are not applied to the cluster VMs.
Workaround
If you experience this issue, manually assign the default security group to each VM NIC in your cluster.
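For example, a scripted sketch of the manual assignment with the Azure CLI, using placeholder resource names:

# List the NICs in the cluster's resource group
az network nic list --resource-group MY-RESOURCE-GROUP --query "[].name" --output tsv

# Assign the default security group to a cluster VM NIC; repeat for each NIC
az network nic update \
  --resource-group MY-RESOURCE-GROUP \
  --name CLUSTER-VM-NIC \
  --network-security-group DEFAULT-SECURITY-GROUP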
One Plan ID Longer than Other Plan IDs
Symptom
One of your plan IDs is one character longer than your other plan IDs.
Explanation
In TKGI, each plan has a unique plan ID. A plan ID is normally a UUID consisting of 32 alphanumeric characters and 4 hyphens. However, the Plan 4 ID consists of 33 alphanumeric characters and 4 hyphens.
Solution
You can safely configure and use Plan 4. The length of the Plan 4 ID does not affect the functionality of Plan 4 clusters.
If you require all plan IDs to have identical length, do not activate or use Plan 4.
Database Cluster Stops After a Database Instance is Stopped
Symptom
After you stop one instance in a multiple-instance database cluster, the cluster stops, or communication between the remaining databases times out, and the entire cluster becomes unreachable.
The following might be in your UAA log:
WSREP has not yet prepared node for application use
Explanation
The database cluster is unable to recover automatically because a member is no longer available to reconcile quorum.
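To check whether a surviving node still sees quorum, one diagnostic sketch is to query the Galera status variables from the MySQL client on a database VM; a value other than Primary indicates lost quorum:

mysql -e "SHOW STATUS LIKE 'wsrep_cluster_status';"   # Primary = quorum intact
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"     # number of members this node can see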
Velero Back Up Fails for vSphere PVs Attached to Clusters on Kubernetes v1.20 and Later
Symptom
Backing up vSphere persistent volumes using Velero fails and your Velero backup log includes the following error:
rpc error: code = Unknown desc = Failed during IsObjectBlocked check: Could not translate selfLink to CRD name
Explanation
This is a known issue when backing up clusters on Kubernetes v1.20 and later using the Velero Plugin for vSphere v1.1.0 or earlier.
Workaround
To resolve the problem, complete the steps in Velero backups of vSphere persistent volumes fail on Kubernetes clusters version 1.20 or higher (83314) in the VMware Tanzu Knowledge Base.
Creating Two Windows Clusters at the Same Time Fails
Symptom
The first time that you try to create two Windows clusters at the same time, the creation of one of the clusters fails. If you run tkgi cluster CLUSTER-NAME to examine the last action taken on the cluster, you see the following:
Last Action: Create
Last Action State: failed
Last Action Description: Instance provisioning failed: There was a problem completing your request. ... operation: create, error-message: Failed to acquire lock ... locking task id is 111, description: 'create deployment'
Explanation
This is a known issue that occurs the first time that you create two Windows clusters concurrently.
Workaround
Recreate the failed cluster. This issue only occurs the first time that you create two Windows clusters concurrently.
Deleted Clusters are Listed in Cluster Lists
Symptom
After running tkgi delete-cluster and cluster deletion has completed, the deleted cluster continues to be listed when running tkgi clusters.
Workaround
You must manually remove the deleted cluster using a customized version of the ncp_cleanup script. For more information, see Deleting a Tanzu Kubernetes Grid Integrated Edition cluster with “tkgi delete-cluster” stuck “in progress” status in the VMware Tanzu Knowledge Base.
BOSH Director Logs the Error ‘Duplicate vm extension name’
Symptom
After you uninstall TKGI, then reinstall TKGI in the same environment, BOSH Director logs errors similar to the following:
.../gems/bosh-director-0.0.0/lib/bosh/director/deployment_plan/cloud_manifest_parser.rb:120:in `parse_vm_extensions': Duplicate vm extension name 'disk_enable_uuid' (Bosh::Director::DeploymentDuplicateVmExtensionName)
Explanation
The pivotal-container-service cloud-config was not removed when you uninstalled the TKGI tile, and it remained active. When you reinstalled the TKGI tile, an additional pivotal-container-service cloud-config was created, causing the metrics_server to fall into a crash-loop state.
Workaround
You must manually remove the pivotal-container-service cloud-config after removing your TKGI deployment, including after removing the TKGI tile from Ops Manager.
For more information, see “Duplicate vm extension name” error when metrics_server runs on Director VM in Tanzu Kubernetes Grid Integrated Edition in the VMware Tanzu Community Knowledge Base.
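As a sketch of the cleanup with the BOSH CLI, where the exact config name must be taken from the bosh configs output rather than from this example:

# List cloud configs and locate the leftover pivotal-container-service cloud-config
bosh configs --type cloud

# Delete the stale cloud-config by the name shown in the listing
bosh delete-config --type cloud --name pivotal-container-service-EXAMPLE-GUID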
The TKGI API FQDN Must Not Include Trailing Whitespace
Symptom
Your TKGI logs include the following error:
'uaa'. Errors are:- Error filling in template 'uaa.yml.erb' (line 59: Client redirect-uri is invalid: uaa.clients.pks_cli.redirect-uri Client redirect-uri is invalid: uaa.clients.pks_cluster_client.redirect-uri)
Explanation
The TKGI API fully-qualified domain name (FQDN) for your cluster contains leading or trailing whitespace.
Workaround
Do not include whitespace in the TKGI tile API Hostname (FQDN) field.
TMC Cluster Data Protection Backup Fails After Upgrading TKGI
The TMC Cluster Data Protection Backup fails in TKGI environments upgraded from an earlier version.
Symptom
The TMC Cluster Data Protection Backup fails to back up your existing clusters and logs the following error:
error executing custom action (groupResource=customresourcedefinitions.apiextensions.k8s.io, namespace=, name=ncpconfigs.nsx.vmware.com): rpc error: code = Unknown desc = error fetching v1beta1 version of ncpconfigs.nsx.vmware.com: the server could not find the requested resource
Explanation
Kubernetes v1.22 disallows the spec.preserveUnknownFields: true configuration in your existing clusters, and the creation of a v1 CustomResourceDefinition configuration fails.
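To check whether an existing cluster still carries the disallowed field, one diagnostic sketch:

# Prints 'true' if the CRD still sets the field that Kubernetes v1.22 disallows
kubectl get crd ncpconfigs.nsx.vmware.com -o jsonpath='{.spec.preserveUnknownFields}'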
TMC Cluster Data Protection Restore Fails When Using Antrea CNI
The TMC Cluster Data Protection Restore operation can fail when restoring multiple Antrea resources.
Symptom
The TMC Cluster Data Protection Restore fails and logs errors that requests to restore the admission webhook have been denied.
Explanation
Velero has encountered a race condition while operating a resource. For more information, see Allow customizing restore order for Kubernetes controllers and their managed resources in the Velero GitHub repository.
TKGI Does Not Support CVDS / NVDS Mixed Environments
TKGI does not support environments where there are multiple matching networks, such as a mixed CVDS/NVDS environment.
Symptom
TKGI logs errors similar to the following in an environment with multiple matching networks:
LastOperation status='failed', description='Instance provisioning failed:
There was a problem completing your request. Please contact your operations team providing the following information:
service: p.pks, service-instance-guid: ..., broker-request-id: ..., task-id: ..., operation: create,
error-message: Unknown CPI error 'Unknown' with message 'undefined method `mob' for <VimSdk::Vim::OpaqueNetwork:' in create_vm' CPI method
Explanation
TKGI cannot identify which of the matching networks you intend to use and has selected the wrong network.
Occasionally update-cluster Does Not Complete for Windows Workers
Occasionally, tkgi update-cluster hangs while updating a Windows worker node instance and the BOSH task cannot finish and exits.
Symptom
The ovsdb-server service has stopped but other processes report that it is running.
Explanation
The ovsdb-server.pid file uses the pid for a process that is not the ovsdb-server.
To confirm that this is the root cause for tkgi update-cluster to hang:
1. To verify the ovsdb-server service has actually stopped, run the PowerShell Get-Service command on the Windows worker node.

2. To verify that other processes report the ovsdb-server service is still running:

    1. Review the ovsdb-server job-service-wrapper.err.log log file, located at:

       C:\var\vcap\sys\log\openvswitch-windows\ovsdb-server\job-service-wrapper.err.log

    2. Confirm that after the flushing processes, the log includes an error similar to the following:

       Pid-Guard : ovsdb-server is already runing, please stop it first
       At C:\var\vcap\jobs\openvswitch-windows\bin\ovsdb-server_ctl.ps1:30 char:5
       +     Pid-Guard $PIDFILE "ovsdb-server"
       +     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
           + CategoryInfo          : NotSpecified: ( [Write-Error], WriteErrorException
           + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Pid-Guard

3. To verify the root cause:

    1. Run the following PowerShell commands on the Windows worker node:

       $RUN_DIR = "C:\var\vcap\sys\run\openvswitch-windows"
       $PIDFILE = "$RUN_DIR\ovsdb-server.pid"
       $pid1 = Get-Content $PidFile -First 1
       echo $pid1
       $rst = Get-Process -Id $pid1 -ErrorAction SilentlyContinue
       echo $rst

    2. Confirm the returned ProcessName is not ovsdb-server.
Workaround
To resolve this issue for a single Windows worker:
1. SSH to the affected worker node.

2. Run the following:

   rm C:\var\vcap\sys\run\openvswitch-windows\ovsdb-server.pid

3. Wait for the ovsdb-server process to start.

4. Confirm the dependent services also start.
Harbor Private Projects Are Inaccessible after Upgrading to TKGI v1.13.0
If LDAP is enabled, Harbor private projects are inaccessible after upgrading to TKGI v1.13.0. For more information, see Private projects become inaccessible after upgrading Harbor for TKGI to v2.4.x with LDAP feature enabled in the VMware Tanzu Knowledge Base.
Deployments Fail on TKGI Windows Worker-based Kubernetes Clusters after the January 2022 Microsoft Windows Security Patch
Microsoft changed Microsoft Windows’ support for tar file commands in the January 2022 Microsoft Windows security patch.
Packaging scripts that use tar commands for Windows worker-based Kubernetes Cluster deployments can fail after the Microsoft tar command patch update has been applied.
The BOSH agent used by vSphere stemcells built by stembuild v2019.43 and earlier uses tar commands that are no longer supported and will fail if the Microsoft Windows security patch has been applied.
Workaround
stembuild v2019.44 and later include a version of the BOSH agent that does not use unsupported tar commands.
If you use vSphere stemcells, use stembuild 2019.44 or later to avoid the BOSH agent tar error.
TKGI Clusters Fail after NSX Upgrade If They Use NSGroup Policy API Resources
TKGI supports clusters that use NSGroup Policy API resources, but Policy API NSGroups created in one NSX version will be empty after upgrading NSX to a newer version.
Workaround
BOSH reconfigures a deployment’s NSGroup members if the deployment is redeployed.
After upgrading NSX, redeploy affected deployments to reconfigure their NSGroup members:
- Re-Apply Changes on the Ops Manager UI to redeploy TKGI tile deployments.
- Re-deploy the affected cluster deployments, as sketched after this list.
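A minimal sketch of a no-op cluster redeploy with the BOSH CLI, assuming the TKGI convention of naming cluster deployments service-instance_GUID:

# Redeploy the unchanged manifest to trigger NSGroup membership reconfiguration
bosh -d service-instance_GUID manifest > /tmp/cluster-manifest.yml
bosh -d service-instance_GUID deploy /tmp/cluster-manifest.yml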
Kubernetes API Server and etcd Daemon Occasionally Fail to Start During BBR Restore
This issue is fixed in TKGI v1.16.1.
The Kubernetes API server or the etcd daemon on a cluster control plane node might not start during a BBR restore, stopping the restore.
Symptom
During a BBR restore, the post-restore-unlock script occasionally times out while starting the etcd daemon or Kubernetes API server.
For example, the post-restore-unlock script shows the following when the etcd daemon fails to start:
Error attempting to run post-restore-unlock for job bbr-etcd on master...
+ NAME=post-restore-unlock
+ LOG_DIR=/var/vcap/sys/log/bbr-etcd
+ exec
++ tee -a /var/vcap/sys/log/bbr-etcd/post-restore-unlock.stdout.log
...
monit has started etcd
+ timeout 1200 /bin/bash
waiting for etcd daemon to start
Process 'etcd' not monitored - start pending
...
waiting for etcd daemon to start
Process 'etcd' initializing
etcd daemon was unable to start after 1200 seconds
+ exit 1 - exit code 1
Workaround
Restart the BBR restore if the Kubernetes API server or the etcd daemon fails to start.
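For reference, restarting the restore means re-running the bbr restore command, sketched here with the standard BBR deployment flags and placeholder values (the client secret is supplied through the BOSH_CLIENT_SECRET environment variable):

BOSH_CLIENT_SECRET=BBR-CLIENT-SECRET bbr deployment \
  --target BOSH-DIRECTOR-ADDRESS \
  --username bbr-client \
  --ca-cert PATH-TO-BOSH-CA-CERT \
  --deployment service-instance_GUID \
  restore --artifact-path PATH-TO-BACKUP-ARTIFACT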
‘Input not an X.509 certificate’ When Applying Change on the TKGI Tile
This issue is fixed in TKGI v1.16.1.
The TKGI tile might report an error similar to the following when Applying Changes with a correctly formatted certificate.
Setting up key store, trust store and installing certs.
keytool error: java.lang.Exception: Input not an X.509 certificate
This error appears in the pre-start.stdout.log log file.
Explanation
The certificate contains one or more certificate keywords, for example, BEGIN or END, and does not validate.
Interoperability with VMware Aria Operations Management Pack for Kubernetes Is Unavailable
This issue has been resolved: VMware Aria Operations Management Pack for Kubernetes v1.9 provides interoperability with TKGI v1.16. For more information, see the VMware Aria Operations for Integrations Release Notes.
Description
Interoperability with VMware Aria Operations Management Pack for Kubernetes is temporarily unavailable.
VMware Aria Operations Management Pack for Kubernetes is currently not compatible with TKGI v1.16. Interoperability between VMware Aria Operations Management Pack for Kubernetes and TKGI v1.16 is expected at a later time.
Interoperability with VMware Cloud Foundation Is Unavailable
This issue is fixed by using VMware Cloud Foundation v4.5.1 or later.
Interoperability with VMware Cloud Foundation (VCF) is temporarily unavailable.
VCF is currently not compatible with TKGI v1.16. Interoperability between VCF and TKGI v1.16 is expected at a later time.
NSX pod creation fails when using Tanzu Application Platform
Symptom
When you deploy a workload on a TKGI-provisioned cluster with NSX networking that is running Tanzu Application Platform (TAP), you see an error Failed to create pod sandbox and no resources are created in the cluster’s nsx-system namespace.
Explanation
The total number of Kubernetes object labels and other tags created by both TKGI and TAP can exceed the number that is allowed by NSX.
Workaround
Create or update your network profile as described in Creating and Managing Network Profiles (NSX Only), setting the cni_configurations parameter extensions.ncp.k8s.label_filtering_regex_list as described under label_filtering Settings.
Error Scaling Clusters when Compute Profile Lacks Node Pool
Symptom
When you scale up a cluster by passing --num-nodes to tkgi update-cluster, you see the error An error occurred in the PKS API.
The pks-api.log includes: Request processing failed; nested exception is java.lang.NullPointerException, which triggers within the nested ClusterService methods extractCustomizationNodePoolNames < validateComputeProfileUuidAndKubernetesWorkerInstances < updateCluster.
Explanation
The cluster’s Compute Profile lacks a node-pool specification, and setting --num-nodes for cluster updating does not work when the Compute Profile does not specify a node pool.
Workaround
Create and apply a new compute profile that specifies a node pool:
1. Create a new JSON compute profile definition that defines a node pool with a worker node instance count. For example, this code defines a compute profile cp-1 with target instance count 16:

   cat cp-1.json
   {
     "name": "cp-1",
     "description": "compute profile 1",
     "parameters": {
       "cluster_customization": {
         "control_plane": {
           "cpu": 2,
           "memory_in_mb": 8192,
           "persistent_disk_in_mb": 28240,
           "ephemeral_disk_in_mb": 28240,
           "instances": 3
         },
         "node_pools": [
           {
             "name": "pool-1",
             "description": "node pool 1",
             "instances": 16
           }
         ]
       }
     }
   }
2. Apply the new compute profile to the cluster:

   tkgi update-cluster my-cluster --compute-profile cp-1.json

   When you apply the new profile, the worker nodes migrate from instance group worker to the new instance group, worker-pool-1. If the instance count exceeds the number specified in the plan, TKGI adds instances to the new group before updating existing instances.
After the new node pool is created, you can scale the cluster by running:
tkgi update-cluster cluster-1 --node-pool-instances "POOL:COUNT"
Where POOL is the node pool name and COUNT is its instance count.
Wrong Node Scale after Updating Cluster with New Compute Profile
This issue is fixed in TKGI v1.16.8.
Symptom
For clusters created with or assigned a compute profile as described in Using Compute Profiles, scaling the cluster by updating its compute profile and then performing additional cluster update operations may leave the cluster with the wrong node count.
For example, after the following steps, the cluster may have a node count of 12 instead of 18 in its updated node pool:
1. Create cluster my-cluster with a compute profile cp-1 that defines a node pool pool-1 with 6 nodes.
2. Scale the cluster to 12 nodes by running tkgi update-cluster my-cluster --node-pool-instances "pool-1:12".
3. Create a new compute profile cp-2 that sets the pool-1 node pool to 18 nodes.
4. Update the cluster with the profile cp-2 by running tkgi update-cluster my-cluster --compute-profile cp-2.
5. Rotate the cluster's certificates or perform other tkgi update-cluster operations.
After the last step, the cluster pool’s node count may erroneously revert to 12.
Workaround
After you update a cluster with a new compute profile that changes node counts, run tkgi update-cluster CLUSTER-NAME --node-pool-instances "NODEPOOL-NAME:NODE-COUNT" where NODE-COUNT is the new node count.
With the example above, after Step 3 run tkgi update-cluster my-cluster --node-pool-instances "pool-1:18".
Interoperability with Tanzu Mission Control is Unavailable
This issue is fixed by using the June 29, 2023 or later releases of Tanzu Mission Control.
Tanzu Mission Control (TMC) is not compatible with Kubernetes v1.25 at the time of the TKGI v1.16 release and temporarily cannot manage TKGI v1.16 Kubernetes clusters. Interoperability between TMC and TKGI v1.16 is expected at a later time.
Refer to the VMware Tanzu Mission Control Release Notes for an announcement of compatibility with Kubernetes v1.25.
Limitations on Using the VMware vSphere CSI Driver
The VMware vSphere CSI Driver supports a limited set of VMware vSphere features. Before enabling the vSphere CSI Driver on a TKGI cluster, confirm the cluster and storage configuration are supported by the driver. For more information, see Unsupported Features and Limitations in Deploying and Managing Cloud Native Storage (CNS) on vSphere.
TKGI Sets the Maximum Persistent Volumes per Node to 59 Instead of 45
This issue is fixed in TKGI v1.16.2.
In TKGI, the maximum number of persistent volumes for a node is set to 59, instead of 45.
Explanation
The three available SCSI controllers in a TKGI node support 45 persistent volumes in total. However, on vSphere CSI nodes, TKGI sets the maximum number of supported persistent volumes to 59 instead of 45.
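To see the volume limit each node actually advertises, one diagnostic sketch reads the CSINode allocatable count for the vSphere CSI driver:

kubectl get csinode -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.drivers[?(@.name=="csi.vsphere.vmware.com")].allocatable.count}{"\n"}{end}'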
Limitations on Using a Public Cloud CSI Driver
TKGI supports using a public cloud CSI Driver on a TKGI-provisioned cluster.
Installing a Public Cloud CSI Driver on a TKGI Cluster
If you plan to use a public cloud CSI Driver on a TKGI-provisioned cluster, VMware recommends you take additional steps before installing the CSI Driver:
-
For most public clouds, VMware recommends you follow the CSI Driver installation procedure recommended by the public cloud provider.
-
For installing the Azure CSI Driver on a TKGI cluster, VMware recommends you follow the procedure in the How to install Azure file/disk CSI driver onto TKGI 1.14 cluster knowledge base article in the VMware Tanzu Support Hub.
Managing a TKGI Cluster That Uses a Public Cloud CSI Driver
If you have enabled a public cloud CSI Driver on a TKGI cluster, you must take additional steps when deleting, upgrading, or updating the cluster:
- Updating a Cluster on a Public Cloud
- Upgrading a Cluster on a Public Cloud
- Deleting a Cluster on a Public Cloud
Updating a Cluster on a Public Cloud
When updating a cluster that uses a public cloud CSI Driver:
- No preparation steps are needed when updating a multi-worker node cluster.
-
To prepare a single-worker node cluster for updating:
- Resize the cluster to two or more worker nodes before updating the cluster. For more information, see Scaling Existing Clusters.
- Update the cluster.
Upgrading a Cluster on a Public Cloud
When upgrading a cluster that uses a public cloud CSI Driver:
- No preparation steps are needed when upgrading a multi-worker node cluster.
-
To prepare a single-worker node cluster for upgrading:
- Resize the cluster to two or more worker nodes before upgrading the cluster. For more information, see Scaling Existing Clusters.
- Upgrade the cluster. For more information on upgrading clusters, see Upgrading Clusters.
Deleting a Cluster on a Public Cloud
When deleting a cluster that uses a public cloud CSI Driver:
- Manually delete the workload PVCs and PVs before deleting the cluster, as sketched after this list.
- Delete the cluster. For more information on deleting clusters, see Deleting Clusters.
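A minimal sketch of the PVC and PV cleanup, assuming the stateful workloads run in a namespace named my-app:

# Delete the workload PVCs; bound PVs are removed too when their reclaim policy is Delete
kubectl delete pvc --all --namespace my-app

# Confirm that no PVs remain before deleting the cluster
kubectl get pv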
The ‘kube-state-metrics’ ClusterRole Is Deleted during Cluster Upgrade
This issue is fixed in TKGI v1.16.1.
The wavefront-proxy-errand deletes the kube-state-metrics ClusterRole during cluster upgrade. The deleted ClusterRole must be manually restored after upgrading a cluster.
The Validator Secret Certificate Is Not Rotated
This issue is fixed in TKGI v1.16.1.
The certificates signed by pks-ca, for example, the pks-system namespace validator, event-controller, and fluent-bit secret certificates, are not rotated by running tkgi rotate-certificates and are not automatically rotated during cluster upgrades.
Workaround
To rotate the certificates signed by pks-ca:
1. Delete the event-controller, fluent-bit, and pks-system namespace validator secrets.

2. If you also want to rotate the pks-ca certificate, delete the pks-ca secret.

3. To generate a new pks-ca certificate and/or leaf certificates, apply the cert-generator job:

    1. Back up the cert-generator job as YAML.
    2. Delete the cert-generator job.
    3. Apply the backup cert-generator YAML.
    4. Restart event-controller, fluent-bit, and the pks-system namespace validator.

A kubectl sketch of these steps is shown below.
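The following kubectl sketch mirrors those steps. The secret names, workload kinds, and the pks-system namespace follow the text above but are assumptions; verify the exact resource names in your cluster before deleting anything:

# 1. Delete the leaf secrets (add the pks-ca secret here to rotate the CA as well)
kubectl -n pks-system delete secret event-controller fluent-bit validator

# 2. Back up, delete, and re-apply the cert-generator job to generate new certificates
kubectl -n pks-system get job cert-generator -o yaml > cert-generator.yaml
kubectl -n pks-system delete job cert-generator
# Strip system-generated fields (status, controller-uid labels/selector) before re-applying
kubectl apply -f cert-generator.yaml

# 3. Restart the consumers so they pick up the new certificates
kubectl -n pks-system rollout restart deployment event-controller
kubectl -n pks-system rollout restart daemonset fluent-bit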
Rotated TKGI Certificates Remain Listed as Expiring on the Ops Manager Certificates List
This issue is fixed in TKGI v1.16.2.
After rotating certificates, the Ops Manager list of certificates shows that the pks_api_internal_2018 certificate on each cluster remains expiring on the original expiration date.
Explanation
The Ops Manager list of certificates is displaying stale data for pks_api_internal_2018 certificates.
The Fluent Bit Pod Restarts Due to Out-of-Memory Issue
When the LogSink feature is enabled, the Fluent Bit Pod can experience an out-of-memory issue during high memory utilization. The Fluent Bit Pod logs an OOMKilled Kubernetes exit code 137 error, and restarts.
Workaround
In TKGI v1.16.1 and later, increase the Fluent Bit Pod memory limit. For more information, see Log Sink Resources in the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS.
HTTPS Ingress Outage During VMware NSX Certificate Rotation
This issue is fixed in TKGI v1.16.2.
When the TKGI CLI rotates VMware NSX certificates, HTTPS Ingress with customer-defined TLS experiences a brief outage.
Explanation
When the TKGI CLI rotates VMware NSX certificates, it updates the default certificate IDs in the HTTPS Load Balancer (LB) virtual server, and removes the server name indicator (SNI) certificate IDs on the LB Virtual Server. This causes a brief outage of HTTPS Ingress with customer-defined TLS. After the certificate rotation, NSX Container Plugin (NCP) restarts and resets the removed SNI.
Cluster Deletion Incomplete If an Error Occurs
If an error occurs during cluster deletion, objects such as load balancers, NAT rules, and the cluster T1 logical router will not be deleted.
Explanation
TKGI cluster deletion uses pks-nsx-t-osb-proxy to delete cluster objects. If an error occurs while deleting one of the objects, the pks-nsx-t-osb-proxy halts and returns an error without deleting the remaining cluster objects.
For example, the following was logged by pks-nsx-t-osb-proxy before stopping for a 409 load balancer error:
{"timestamp":"1677164195.817412615","source":"pks-nsx-t-osb-proxy","message":"pks-nsx-t-osb-proxy.nsxt.delete NSX-T cluster network","log_level":2,"data":{"error":"delete network: delete lb virtual server: unknown error (status 409): {resp:0xc0004efe60} ","session":"3"}}
{"timestamp":"1677164198.642020464","source":"pks-nsx-t-osb-proxy","message":"pks-nsx-t-osb-proxy.nsxt.delete-cluster-network-end","log_level":1,"data":{"instance-id":"....","session":"3"}}
Some Telegraf Metric-Sink Pods Crash After TKGI Upgrade
This issue is fixed in TKGI v1.16.3.
After upgrading to TKGI v1.15.0 and later, some Telegraf metric-sink pods crash with this error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 4m18s (x547 over 129m) kubelet Back-off pulling image "cnabu-docker-local.artifactory.eng.vmware.com/oratos/telegraf:1a3337bb81890b3ca0848b5dd456
Explanation
During the TKGI upgrade, the metric-controller uses the Telegraf image in the previous TKGI version to deploy Telegraf in the new version. After upgrading TKGI to v1.15.x, the old Telegraf image is deleted from some worker nodes due to high disk utilization. The metric-controller is unable to deploy Telegraf on those nodes.
Workaround
Before you perform this procedure, ensure that you have collected the name and the namespace from your metricSink Custom Resource (CR).
To resolve this issue:
1. In TKGI, run the following command to find the new tag for Telegraf:

   kubectl describe deployment observability-manager -n pks-system

   Note the tag that corresponds to the Image field under observability-manager. For example, efcb96f78984d7731kl99hds564e.

2. Run the following command to edit the Telegraf deployment details:

   kubectl edit deployment telegraf-METRICSINK-NAME -n METRICSINK-NAMESPACE

   Where:
   - METRICSINK-NAME is the name of the MetricSink that you collected from the metricSink CR.
   - METRICSINK-NAMESPACE is the namespace of the MetricSink that you collected from the metricSink CR.

3. In the image: cnbu-docker-local.artifactory.eng.vmware.com/oratos/telegraf: field, replace the existing tag with the tag that you noted in Step 1.

4. Save the changes and exit the editor.
Telemetry Does Not Report Large Metrics
This issue is fixed in TKGI v1.16.3.
The Telemetry Server logs a token too long error in /var/vcap/sys/log/telemetry-server/telemetry-server.stderr.log for metrics that exceed 1 MB in size.
For example:
error : {"timestamp":"...","level":"fatal","source":"send-to-vac","message":"send-to-vac.error while reading generic metrics","data":{"error":"error reading generic metrics: failed to scan: bufio.Scanner: token too long","trace":"goroutine 1 [running]:\ncode.cloudfoundry.org/lager.(*logger).Fatal(0xc0000b2420, {0xce062b, 0x23}, {0xdd6c00, 0xc00000e060}, {0x0, 0xc0001fbc70, 0x3})\n\t/var/vcap/data/compile/send-to-vac/telemetry-cmds/vendor/code.cloudfoundry.org/lager/logger.go:138 +0xa5\nmain.main()\n\t/var/vcap/data/compile/send-to-vac/telemetry-cmds/cmd/send-to-vac/main.go:69 +0x339\n"}}
panic: error reading generic metrics: failed to scan: bufio.Scanner: token too long
API Server Audit Logs Leak Tokens
This issue is fixed in TKGI v1.16.3.
The API Server audit logs include clear-text tokens on cluster control plane nodes that use the default audit policy.
Description
JSON Web Tokens in the tokenrequests API body are being written to the API Server audit log /var/vcap/sys/log/kube-apiserver/audit/log on TKGI clusters.
TKGI Does Not Support the Antrea Egress Feature on AWS
In AWS environments, TKGI does not support the Antrea CNI Egress feature. For example, the Egress resource egressIP and externalIPPool fields in an antrea-config configuration are ignored on AWS. For more information about the Antrea Egress feature, see What is Egress in the Antrea documentation.
Cluster Might Fail to Send the cluster_name Tag to Logging after Cluster Upgrade
This issue is fixed in TKGI v1.16.4.
Occasionally, a cluster might fail to send the cluster_name tag to logging after being upgraded.
Description
After upgrading a cluster, the Name record_modifier filter will occasionally be missing from the cluster’s fluent-bit ConfigMap, and the cluster_name is not included in log entries. This problem occurs if the sink-controller process configures the cluster before the observability-manager starts, which overwrites the desired configuration.
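For reference, the missing filter corresponds to a Fluent Bit stanza of roughly this shape in the cluster's fluent-bit ConfigMap; the exact form generated by the observability-manager may differ:

[FILTER]
    Name   record_modifier
    Match  *
    Record cluster_name MY-CLUSTER-NAME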
Cluster Update Operations Fail Due to Duplicate Tag Keys
This issue is fixed in TKGI v1.16.5.
The cluster update operations fail if you reuse the same key in different tags in a cluster.
Description
Tag keys must be unique within a cluster, for example, key1:value1, key2:value2. TKGI does not prevent you from reusing the same key for multiple tags in a cluster, for example, key1:value1, key1:value2. However, the cluster update operations fail.
Workaround
Use different keys for the tags within a cluster, for example, key1:value1, key2:value2. For more information, see Tagging Rules.
Node Drain Operation Ignores the TKGI Deployment Plan Settings
This issue is fixed in TKGI v1.16.5.
When upgrading a cluster, the node drain operation ignores the pod shutdown grace period specified in the deployment plan on the TKGI Tile.
Pods on NSX v3.2.3 Can Enter a NotReady State
When TKGI is deployed on NSX v3.2.3 and there are large numbers of pods with liveness probes, the pods on TKGI-provisioned clusters can enter a NotReady state.
Symptom
In addition to your pods being NotReady, if you restart NSX Manager:
- Your NSX API logs include numerous repetitions of
"POST /nsxapi/api/v1/firewall/sections/.../rules?operation=insert_bottom HTTP/1.1" .... -
Your NCP logs include errors similar to:
"nsx-container-ncp" subcomp="ncp" level="ERROR" security="True" errorCode="NCP00034"] nsx_ujo.ncp.nsx.manager.firewall_service Failed to create health check rule for port ...: Service cluster: 'https://nsx-manager.example.com' is unavailable. Please, check NSX setup and/or configuration.
Description
As pods are created or deleted, DFW firewall rules are replicated for the pod’s liveness probe. In NSX v3.2.3, the firewall rules are unintentionally duplicated during this replication. After numerous pod creation/deletion events, the compounded duplication creates a DFW firewall section large enough to create noticeable delays during pod operations and, eventually, a pod NotReady state.
Workaround
Upgrade NSX to a version that includes the fix, namely NSX v3.2.4, or v4.1.1 and later.
TKGI Management Console v1.16.0
Tanzu Kubernetes Grid Integrated Edition Management Console provides an opinionated installation of TKGI.
Warning: Use only Ubuntu Jammy stemcell v1.83 with TKGI v1.16.0. NCP cannot be restarted following tkgi update-cluster while using TKGI with later stemcell versions.
Release Date: February 28, 2023
Product Snapshot
Note: The component versions supported by TKGI Management Console might differ from or be more limited than the versions supported by TKGI.
Release Details | ||
|---|---|---|
| Version | v1.16.0 | |
| Release date | February 28, 2023 | |
| Installed TKGI version | v1.16.0 | |
| Installed Ops Manager version | v2.10.53 | Release Notes |
Component Versions | ||
| Installed Kubernetes version | v1.25.4* | Release Notes |
| Installed Harbor Registry version | v2.6.2* | Release Notes |
| Ubuntu Jammy stemcell | v1.83* | Release Notes |
* Components marked with an asterisk have been updated.
Upgrade Path
The supported upgrade paths to Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.0 are from TKGI MC v1.15.2 and earlier TKGI MC v1.15 patches.
Breaking Changes
- Existing Telemetry Program configuration settings are ignored and telemetry must be reconfigured.

  To reconfigure Telemetry, see VMware CEIP in the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS.

  For more information on the Telemetry enhancements in this release, see Telemetry Enhancements.
Features and Resolved Issues
TKGI Management Console v1.16.0 has the following features:
Telemetry Enhancements
Customers who participate in the CEIP receive proactive support benefits that include a weekly report based on telemetry data. Contact your Customer Success Manager to subscribe to this report. You can view a sample report at TKGI Platform Operations Report.
Deprecations
The following TKGI features have been deprecated or removed from TKGI Management Console v1.16:
Known Issues
The Tanzu Kubernetes Grid Integrated Edition Management Console v1.16.0 has the following known issues:
vRealize Log Insight Integration Does Not Support HTTPS Connections
Symptom
The Tanzu Kubernetes Grid Integrated Edition Management Console integration to vRealize Log Insight does not support connections to the HTTPS port on the vRealize Log Insight server.
Workaround
1. Use SSH to log in to the Tanzu Kubernetes Grid Integrated Edition Management Console appliance VM.

2. Open the file /lib/systemd/system/pks-loginsight.service in a text editor.

3. Add -e LOG_SERVER_ENABLE_SSL_VERIFY=false.

4. Set -e LOG_SERVER_USE_SSL=true.

   The resulting file should look like the following example:

   ExecStart=/bin/docker run --privileged --restart=always --network=pks -v /var/log/journal:/var/log/journal --name=pks-loginsight -e TYPE=gear2-vm -e LOG_SERVER_HOST=${LOGINSIGHT_HOST} -e LOG_SERVER_PORT=${LOGINSIGHT_PORT} -e LOG_SERVER_ENABLE_SSL_VERIFY=false -e LOG_SERVER_USE_SSL=true -e LOG_SERVER_AGENT_ID=${LOGINSIGHT_ID} pksoctopus/vrli-journald:v07092019

5. Save the file and run systemctl daemon-reload.

6. To restart the vRealize Log Insight service, run systemctl restart pks-loginsight.service.
Tanzu Kubernetes Grid Integrated Edition Management Console can now send logs to the HTTPS port on the vRealize Log Insight server.
vSphere HA causes Management Console ovfenv Data Corruption
Symptom
If you enable vSphere HA on a cluster, if the TKGI Management Console appliance VM is running on a host in that cluster, and if the host reboots, vSphere HA recreates a new TKGI Management Console appliance VM on another host in the cluster. Due to an issue with vSphere HA, the ovfenv data for the newly created appliance VM is corrupted and the new appliance VM does not boot up with the correct network configuration.
Workaround
1. In the vSphere Client, right-click the appliance VM and select Power > Shut Down Guest OS.
2. Right-click the appliance again and select Edit Settings.
3. Select VM Options and click OK.
4. Verify under Recent Tasks that a Reconfigure virtual machine task has run on the appliance VM.
5. Power on the appliance VM.
Base64 encoded file arguments are not decoded in Kubernetes profiles
Symptom
Some file arguments in Kubernetes profiles are base64 encoded. When the management console displays the Kubernetes profile, some file arguments are not decoded.
Workaround
To decode the encoded file arguments manually, run echo "$content" | base64 --decode, where $content is the base64-encoded file argument value.
Network profiles not immediately selectable
Symptom
If you create network profiles and then try to apply them in the Create Cluster page, the new profiles are not available for selection.
Workaround
Log out of the management console and log back in again.
Real-Time IP information not displayed for network profiles
Symptom
In the cluster summary page, only the default IP pool, pod IP block, and node IP block values are displayed, rather than the real-time values from the associated network profile.
Workaround
None
Error After Modifying Your Harbor Storage Configuration
Symptom
You receive the following error after modifying your existing Harbor installation’s storage configuration:
Error response from daemon: manifest for ... not found: manifest unknown: manifest unknown
Explanation
Harbor does not support modifying an existing Harbor installation’s storage configuration.
Workaround
To modify your Harbor storage configuration, re-install Harbor. Before starting Harbor, configure the new Harbor installation with the desired configuration.
Windows Stemcells Must be Re-Imported After Upgrading Ops Manager
Symptom
After upgrading Ops Manager, your Management Console does not recognize a Windows stemcell imported when using the prior version of Ops Manager.
Workaround
If your Management Console does not recognize a Windows stemcell after upgrading Ops Manager:
- Re-import your previously imported Windows stemcell.
- Apply Changes to TKGI MC.
Your New Clusters Are Not Shown In Tanzu Mission Control
Symptom
After you create a cluster, Tanzu Mission Control does not include the cluster in cluster lists. You see a "Resource not found" error similar to the following in your BOSH logs:
Cluster Name in TMC: cluster-1
Cluster Name Prefix: tkgi-my-prefix-
Group Name in TMC: my-prefix-clusters
Cluster Description in TMC: VMware Enterprise PKS
Attaching cluster ''tkgi-my-prefix-cluster-1'' to TMC
Fetching token successful
request POST:/v1alpha1/clusters,
response 404 Not Found:{"error":"Resource not found - clustergroup(my-prefix-clusters)
org id(d859dc9f-g622-426d-8c91-939a9f13dea9)",
"code":5,"message":"Resource not found - clustergroup(my-prefix-clusters)
Explanation
The cluster group you assign a cluster to must be defined in Tanzu Mission Control before you assign your cluster to the cluster group in the TKGI Management Console.
Workaround
To resolve the problem, complete the steps in Attaching a Tanzu Kubernetes Grid Integrated (TKGI) cluster to Tanzu Mission Control (TMC) fails with “Resource not found - clustergroup(cluster-group-name)” in the VMware Tanzu Knowledge Base.
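If you use the TMC CLI, a one-line sketch of pre-creating the group, matching the group name from the example log above:

# Create the cluster group in TMC before assigning clusters to it in the TKGI MC
tmc clustergroup create --name my-prefix-clusters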
Previous nsx-t-superuser-certificate Is Restored during TKGI MC Upgrade
This issue is fixed in TKGI MC v1.16.1.
Upgrading the TKGI MC after rotating the nsx-t-superuser-certificate certificate restores the previous nsx-t-superuser-certificate certificate. For example, this issue occurs if you upgrade TKGI MC after following the steps in How to renew the nsx-t-superuser-certificate used by Principal Identity user (80355).
TKGI MC Unable to Create a Network Profile Configured with Source IP Ingress Persistence
This issue is fixed in TKGI v1.16.2.
The TKGI MC halts and returns the following error when creating a network profile which includes an ingress_persistence_settings.persistence_type configuration:
Failed to save network profile. ingress_persistence_settings.persistence_type in body should be one of [none cookie]