v1.24.X
Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.
Release v1.24.17+rke2r1
This release updates Kubernetes to v1.24.17, and fixes a number of issues.
Important Notes
- ⚠️ This release includes support for remediating CVE-2023-32186, a potential Denial of Service attack vector on RKE2 servers. See https://github.com/rancher/rke2/security/advisories/GHSA-p45j-vfv5-wprq for more information, including mandatory steps necessary to harden clusters against this vulnerability.
- If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup. You may retrieve the token value from any server already joined to the cluster:
  cat /var/lib/rancher/rke2/server/token
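As a sketch of how the retrieved token is used when joining an additional node, the snippet below reads the token and writes a join configuration. The server hostname is a placeholder, and a temporary directory stands in for the real config location (`/etc/rancher/rke2/config.yaml`) so the sketch is safe to run anywhere:

```shell
# Read the join token from an existing server; fall back to a sample value so
# the snippet can also run outside a real cluster.
TOKEN=$(cat /var/lib/rancher/rke2/server/token 2>/dev/null || echo "sample-token")

# On a real node this file would be /etc/rancher/rke2/config.yaml; a temporary
# directory is used here for illustration. "server.example.com" is a placeholder.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/config.yaml" <<EOF
server: https://server.example.com:9345
token: ${TOKEN}
EOF
cat "$CONF_DIR/config.yaml"
```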
Changes since v1.24.16+rke2r1:
- Sync Felix and calico-node datastore (#4578)
- Update Calico and Flannel on Canal (#4566)
- Update cilium to v1.14.0 (#4587)
- Update to whereabouts v0.6.2 (#4593)
- Windows fixes (#4674)
- Bumping k3s version 1.24 (#4680)
- Version bumps and backports for 2023-08 release (#4687)
  - Updated the embedded containerd to v1.7.3+k3s1
  - Updated the embedded runc to v1.1.8
  - Updated the embedded etcd to v3.5.9+k3s1
  - Security bump to docker/distribution
  - Fix static pod UID generation and cleanup
  - Fix default server address for rotate-ca command
- Upgrade multus chart to v4.0.2-build2023081100 (#4682)
- Update to 1.24.17 (#4689)
- Bump K3s version for v1.24 (#4704)
- Added a new --tls-san-security option. This flag defaults to false, but can be set to true to disable automatically adding SANs to the server's TLS certificate to satisfy any hostname requested by a client.
- Add additional static pod cleanup during cluster reset (#4727)
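If enabling the new option above, the server configuration might look like the following sketch; the hostname and IP are placeholders. With tls-san-security set to true, only SANs listed explicitly under tls-san are added to the serving certificate:

```yaml
# /etc/rancher/rke2/config.yaml (sketch; hostname and IP are placeholders)
tls-san-security: true
tls-san:
  - rke2.example.com
  - 203.0.113.10
```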
Release v1.24.16+rke2r1
This release updates Kubernetes to v1.24.16, and fixes a number of issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.15+rke2r1:
- Update Calico to v3.26.1 (#4426)
- Update multus version (#4434)
- Add log files for felix and calico (#4440)
- Update K3s for 2023-07 releases (#4450)
- Bump ingress-nginx charts to v1.7.1 (#4456)
- Add support for cni none on windows and initial windows-bgp backend (#4462)
- Updated Calico crd on Canal (#4469)
- Update to v1.24.16 (#4497)
Release v1.24.15+rke2r1
This release updates Kubernetes to v1.24.15, and fixes a number of issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.14+rke2r1:
- Update canal chart (#4345)
- Bump K3s version for v1.24 (#4359)
- Update rke2 (#4366)
- Bump harvester cloud provider 0.2.2 (#4374)
- Preserve mode when extracting runtime data (#4380)
- Use our own file copy logic instead of continuity (#4391)
Release v1.24.14+rke2r1
This release updates Kubernetes to v1.24.14, and fixes a number of issues.
Important Note
- If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
- Many systems have updated their packages with a newer version of container-selinux (> v2.191.0), which is incompatible with our rke2-selinux policy and requires a policy change. We have updated our policy accordingly; you will notice the rke2-selinux package being upgraded from v0.11.1 to v0.12.0.
Changes since v1.24.13+rke2r1:
- Fix drone dispatch step (#4150)
- Update Cilium to v1.13.2 (#4177)
- Bump golangci-lint for golang 1.20 compat and fix warnings (#4189)
- Enable --with-node-id flag (#4131) (#4192)
- Update Calico image on Canal (#4220)
- Move Drone dispatch pipeline (#4203)
- Backport fixes and bump K3s/containerd/runc versions (#4213)
  - The bundled containerd and runc versions have been bumped to v1.7.1-k3s1/v1.1.7
  - Replace github.com/ghodss/yaml with sigs.k8s.io/yaml
  - Fix hardcoded file mount handling for default audit log filename
- Bump metrics-server to v0.6.3 (#4247)
- V1.24.14+rke2r1 (#4262)
- Bump vsphere csi/cpi charts (#4274)
- Bump vsphere csi to remove duplicate CSI deployment. (#4298)
Release v1.24.13+rke2r1
This release updates Kubernetes to v1.24.13, and fixes a number of issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.12+rke2r1:
- Update whereabouts to v0.6.1 (#4084)
- Updated Calico chart to add crd missing values (#4049)
- Bump ingress-nginx to 1.6.4 (#4095)
- Bump k3s and component versions for 2023-04 release (#4100)
- Automatically add volume mount for audit-log-path dir if set (#4110)
- Update Kubernetes to v1.24.13 (#4117)
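The audit-log-path change above applies to configurations like the following sketch; the log path is an example. When kube-apiserver-arg sets audit-log-path, RKE2 now mounts the containing directory into the kube-apiserver static pod automatically:

```yaml
# /etc/rancher/rke2/config.yaml (sketch; the log path is an example)
kube-apiserver-arg:
  - audit-log-path=/var/log/kube-audit/audit.log
```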
Release v1.24.12+rke2r1
This release updates Kubernetes to v1.24.12, and fixes a number of issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.11+rke2r1:
- Update Flannel version to v0.21.3 on Canal (#3984)
- Remove Root debug + Remove unmounts (#3987)
- Bump K3s (#3991)
- Don't package empty windows folder (#3997)
- Update cilium to v1.13.0 (#4007)
- Bump harvester csi driver to v0.1.16 (#4006)
- Bump k3s and containerd (#4016)
- Improve uninstallation on RHEL based OS (#4020)
- Update 1.24 and Go (#4030)
Release v1.24.11+rke2r1
This release updates Kubernetes to v1.24.11, and fixes a number of issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.10+rke2r1:
- Don't handle kube-proxy in static pod cleanup (#3836)
- Bump cilium images (#3826)
- Update canal chart to v3.25.0-build2023020901 (#3885)
- Remove pod logs as part of killall (#3868)
- Bump wharfie and go-containerregistry (#3865)
- Update Calico to v3.25.0 (#3891)
- Bump k3s version (#3899)
  - Fixed an issue where leader-elected controllers for managed etcd did not run on etcd-only nodes
  - RKE2 now functions properly when the cluster CA certificates are signed by an existing root or intermediate CA. You can find a sample script for generating such certificates before RKE2 starts in the K3s repo at contrib/util/certs.sh.
  - RKE2 now supports kubeadm-style join tokens. rke2 token create now creates join token secrets, optionally with a limited TTL.
  - RKE2 agents joined with an expired or deleted token stay in the cluster using existing client certificates via the NodeAuthorization admission plugin, unless their Node object is deleted from the cluster.
  - ServiceLB now honors the Service's ExternalTrafficPolicy. When set to Local, the LoadBalancer will only advertise addresses of Nodes with a Pod for the Service, and will not forward traffic to other cluster members. (ServiceLB is still disabled by default.)
- Bump K3s commit (#3907)
- Add bootstrap token auth handler (#3922)
- Update to kubernetes v1.24.11 (#3950)
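The join-token support noted above might be used as in the following sketch. It requires a running RKE2 server, and the --ttl flag mirrors the k3s token CLI, so verify flag names against your installed version:

```shell
# On an existing server (sketch; requires a running RKE2 server):
rke2 token create --ttl 24h
# The printed token can be used as the "token:" value in a joining node's
# config.yaml. After the TTL expires, agents that already joined remain in
# the cluster via their existing client certificates.
```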
Release v1.24.10+rke2r1
This release updates Kubernetes to v1.24.10 to backport registry changes and fix two critical issues.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.9+rke2r2:
- Update multus to v3.9.3 and whereabouts to v0.6 (#3792)
- Bump vSphere CPI chart to v1.24.3 (#3763)
- Generate report and upload test results (#3771) (#3795)
- Bump harvester cloud provider and harvester csi driver (#3784)
- Bump containerd to v1.6.15-k3s1 (#3779)
- Bump K3s version for tls-cipher-suites fix (#3798)
Release v1.24.9+rke2r2
This release updates containerd to v1.6.14 to resolve an issue where pods would lose their CNI information when containerd was restarted.
Important Note
If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.
You may retrieve the token value from any server already joined to the cluster:
cat /var/lib/rancher/rke2/server/token
Changes since v1.24.9+rke2r1:
- Bump containerd to v1.6.14-k3s1 (#3747)
  - The embedded containerd version has been bumped to v1.6.14-k3s1. This includes a backported fix for containerd/7843, which caused pods to lose their CNI info when containerd was restarted, which in turn caused the kubelet to recreate the pod.
  - Windows agents now use the k3s fork of containerd, which includes support for registry rewrites.
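The registry rewrite support mentioned above is configured through the containerd registry configuration file. A minimal sketch, with the mirror endpoint and image prefix as placeholder examples:

```yaml
# /etc/rancher/rke2/registries.yaml (sketch; endpoint and prefixes are examples)
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
    rewrite:
      "^rancher/(.*)": "mirrored-rancher-images/$1"
```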