
v1.24.X

Upgrade Notice

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

| Version | Release date | Kubernetes | Etcd | Containerd | Runc | Metrics-server | CoreDNS | Ingress-Nginx | Helm-controller | Canal (Default) | Calico | Cilium | Multus |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| v1.24.17+rke2r1 | Sep 06 2023 | v1.24.17 | v3.5.9-k3s1 | v1.7.3-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.4 | Flannel v0.22.1, Calico v3.26.1 | v3.26.1 | v1.14.0 | v4.0.2 |
| v1.24.16+rke2r1 | Jul 28 2023 | v1.24.16 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.2 | Flannel v0.22.0, Calico v3.25.1 | v3.26.1 | v1.13.2 | v4.0.2 |
| v1.24.15+rke2r1 | Jun 27 2023 | v1.24.15 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.15.0 | Flannel v0.22.0, Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.24.14+rke2r1 | May 30 2023 | v1.24.14 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.14.0 | Flannel v0.21.3, Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.24.13+rke2r1 | Apr 21 2023 | v1.24.13 | v3.5.7-k3s1 | v1.6.19-k3s1 | v1.1.5 | v0.6.2 | v1.10.1 | 4.5.2 | v0.13.3 | Flannel v0.21.3, Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.24.12+rke2r1 | Mar 27 2023 | v1.24.12 | v3.5.4-k3s1 | v1.6.19-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.21.3, Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.24.11+rke2r1 | Mar 10 2023 | v1.24.11 | v3.5.4-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.21.1, Calico v3.25.0 | v3.25.0 | v1.12.5 | v3.9.3 |
| v1.24.10+rke2r1 | Jan 27 2023 | v1.24.10 | v3.5.4-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9.3 |
| v1.24.9+rke2r2 | Jan 10 2023 | v1.24.9 | v3.5.4-k3s1 | v1.6.14-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |
| v1.24.9+rke2r1 | Dec 20 2022 | v1.24.9 | v3.5.4-k3s1 | v1.6.12-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |
| v1.24.8+rke2r1 | Nov 18 2022 | v1.24.8 | v3.5.4-k3s1 | v1.6.8-k3s1 | v1.1.4 | v0.6.1 | v1.9.3 | 4.1.0 | v0.13.0 | Flannel v0.19.1, Calico v3.24.1 | v3.24.1 | v1.12.3 | v3.8 |
| v1.24.7+rke2r1 | Oct 20 2022 | v1.24.7 | v3.5.4-k3s1 | v1.6.8-k3s1 | v1.1.4 | v0.6.1 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.19.1, Calico v3.24.1 | v3.24.1 | v1.12.1 | v3.8 |
| v1.24.6+rke2r1 | Sep 27 2022 | v1.24.6 | v3.5.4 | v1.6.8-k3s1 | v1.1.4 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.19.1, Calico v3.23.3 | v3.24.1 | v1.12.1 | v3.8 |
| v1.24.4+rke2r1 | Aug 26 2022 | v1.24.4 | v3.5.4 | v1.6.6-k3s1 | v1.1.2 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.17.0, Calico v3.22.2 | v3.23.1 | v1.12.0 | v3.8 |
| v1.24.3+rke2r1 | Jul 21 2022 | v1.24.3 | v3.5.4 | v1.6.6-k3s1 | v1.1.2 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.17.0, Calico v3.22.2 | v3.23.1 | v1.11.5 | v3.8 |
| v1.24.2+rke2r1 | Jul 05 2022 | v1.24.2 | v3.5.4 | v1.6.6-k3s1 | v1.1.2 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.17.0, Calico v3.22.2 | v3.23.1 | v1.11.5 | v3.8 |
| v1.24.1+rke2r2 | Jun 13 2022 | v1.24.1 | v3.5.4 | v1.5.13-k3s1 | v1.1.2 | v0.5.0 | v1.9.1 | 4.1.0 | v0.12.1 | Flannel v0.17.0, Calico v3.22.2 | v3.23.1 | v1.11.5 | v3.8 |

Release v1.24.17+rke2r1

This release updates Kubernetes to v1.24.17, and fixes a number of issues.

Important Notes

  • ⚠️ This release includes support for remediating CVE-2023-32186, a potential Denial of Service attack vector on RKE2 servers. See https://github.com/rancher/rke2/security/advisories/GHSA-p45j-vfv5-wprq for more information, including mandatory steps necessary to harden clusters against this vulnerability.

  • If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

    You may retrieve the token value from any server already joined to the cluster:

    cat /var/lib/rancher/rke2/server/token
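
    For reference, a new node can then be joined using that token. A minimal sketch follows; the server address and token value are placeholders, and a local directory is used so the example has no side effects (on a real node the file is /etc/rancher/rke2/config.yaml):

    ```shell
    # Sketch of joining a new node with the retrieved token.
    CONFIG_DIR="${CONFIG_DIR:-./rke2-join-demo}"
    mkdir -p "$CONFIG_DIR"
    cat > "$CONFIG_DIR/config.yaml" <<'EOF'
    # Any existing server node; 9345 is the RKE2 supervisor port
    server: https://192.0.2.10:9345
    # Value read from /var/lib/rancher/rke2/server/token on a server node
    token: K10deadbeef::server:exampletoken
    EOF
    # On the new node you would then start the service, e.g.:
    #   systemctl enable --now rke2-agent.service
    ```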

Changes since v1.24.16+rke2r1:

  • Sync Felix and calico-node datastore (#4578)
  • Update Calico and Flannel on Canal (#4566)
  • Update cilium to v1.14.0 (#4587)
  • Update to whereabouts v0.6.2 (#4593)
  • Windows fixes (#4674)
  • Bumping k3s version 1.24 (#4680)
  • Version bumps and backports for 2023-08 release (#4687)
    • Updated the embedded containerd to v1.7.3+k3s1
    • Updated the embedded runc to v1.1.8
    • Updated the embedded etcd to v3.5.9+k3s1
    • Security bump to docker/distribution
    • Fix static pod UID generation and cleanup
    • Fix default server address for rotate-ca command
  • Upgrade multus chart to v4.0.2-build2023081100 (#4682)
  • Update to 1.24.17 (#4689)
  • Bump K3s version for v1.24 (#4704)
    • Added a new --tls-san-security option. This flag defaults to false, but can be set to true to disable automatically adding SANs to the server's TLS certificate to satisfy any hostname requested by a client.
  • Add additional static pod cleanup during cluster reset (#4727)
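
As a hedged illustration of the new option (the tls-san-security key name comes from the note above; the SAN value is a placeholder), it would sit alongside any explicit tls-san entries in the server config file. The sketch writes to a local file rather than the real /etc/rancher/rke2/config.yaml:

```shell
# Illustrative only: local file stands in for /etc/rancher/rke2/config.yaml.
cat > ./tls-san-demo.yaml <<'EOF'
# SANs you explicitly want on the server's TLS certificate
tls-san:
  - rke2.example.com
# From this release's note: set true to stop the server from automatically
# adding client-requested hostnames as SANs
tls-san-security: true
EOF
```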

Release v1.24.16+rke2r1

This release updates Kubernetes to v1.24.16, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.15+rke2r1:

  • Update Calico to v3.26.1 (#4426)
  • Update multus version (#4434)
  • Add log files for felix and calico (#4440)
  • Update K3s for 2023-07 releases (#4450)
  • Bump ingress-nginx charts to v1.7.1 (#4456)
  • Add support for cni none on windows and initial windows-bgp backend (#4462)
  • Updated Calico crd on Canal (#4469)
  • Update to v1.24.16 (#4497)

Release v1.24.15+rke2r1

This release updates Kubernetes to v1.24.15, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.14+rke2r1:

  • Update canal chart (#4345)
  • Bump K3s version for v1.24 (#4359)
  • Update rke2 (#4366)
  • Bump harvester cloud provider 0.2.2 (#4374)
  • Preserve mode when extracting runtime data (#4380)
  • Use our own file copy logic instead of continuity (#4391)

Release v1.24.14+rke2r1

This release updates Kubernetes to v1.24.14, and fixes a number of issues.

Important Notes

  1. If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

     You may retrieve the token value from any server already joined to the cluster:

     cat /var/lib/rancher/rke2/server/token

  2. Many systems have updated their packages with a newer version of container-selinux (> v2.191.0) that is incompatible with our rke2-selinux policy and requires a policy change. We have updated our policy; you will notice the rke2-selinux package being upgraded from v0.11.1 to v0.12.0.

Changes since v1.24.13+rke2r1:

  • Fix drone dispatch step (#4150)
  • Update Cilium to v1.13.2 (#4177)
  • Bump golangci-lint for golang 1.20 compat and fix warnings (#4189)
  • Enable --with-node-id flag (#4131) (#4192)
  • Update Calico image on Canal (#4220)
  • Move Drone dispatch pipeline (#4203)
  • Backport fixes and bump K3s/containerd/runc versions (#4213)
    • The bundled containerd and runc versions have been bumped to v1.7.1-k3s1/v1.1.7
    • Replace github.com/ghodss/yaml with sigs.k8s.io/yaml
    • Fix hardcoded file mount handling for default audit log filename
  • Bump metrics-server to v0.6.3 (#4247)
  • V1.24.14+rke2r1 (#4262)
  • Bump vsphere csi/cpi charts (#4274)
  • Bump vsphere csi to remove duplicate CSI deployment. (#4298)

Release v1.24.13+rke2r1

This release updates Kubernetes to v1.24.13, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.12+rke2r1:

  • Update whereabouts to v0.6.1 (#4084)
  • Updated Calico chart to add crd missing values (#4049)
  • Bump ingress-nginx to 1.6.4 (#4095)
  • Bump k3s and component versions for 2023-04 release (#4100)
  • Automatically add volume mount for audit-log-path dir if set (#4110)
  • Update Kubernetes to v1.24.13 (#4117)
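
A sketch of configuration that exercises the audit-log-path change above. The paths are illustrative, and passing audit-log-path through kube-apiserver-arg is an assumption based on the usual RKE2 pattern for apiserver flags; the sketch writes a local file rather than /etc/rancher/rke2/config.yaml:

```shell
# Illustrative config: with audit-log-path set, RKE2 now mounts its
# directory into the kube-apiserver static pod automatically.
cat > ./audit-demo.yaml <<'EOF'
kube-apiserver-arg:
  - audit-log-path=/var/log/kube-audit/audit.log
  - audit-policy-file=/etc/rancher/rke2/audit-policy.yaml
EOF
```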

Release v1.24.12+rke2r1

This release updates Kubernetes to v1.24.12, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.11+rke2r1:

  • Update Flannel version to v0.21.3 on Canal (#3984)
  • Remove Root debug + Remove unmounts (#3987)
  • Bump K3s (#3991)
  • Don't package empty windows folder (#3997)
  • Update Cilium to v1.13.0 (#4007)
  • Bump harvester csi driver to v0.1.16 (#4006)
  • Bump k3s and containerd (#4016)
  • Improve uninstallation on RHEL based OS (#4020)
  • Update 1.24 and Go (#4030)

Release v1.24.11+rke2r1

This release updates Kubernetes to v1.24.11, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.10+rke2r1:

  • Don't handle kube-proxy in static pod cleanup (#3836)
  • Bump cilium images (#3826)
  • Update canal chart to v3.25.0-build2023020901 (#3885)
  • Remove pod logs as part of killall (#3868)
  • Bump wharfie and go-containerregistry (#3865)
  • Update Calico to v3.25.0 (#3891)
  • Bump k3s version (#3899)
    • Fixed an issue where leader-elected controllers for managed etcd did not run on etcd-only nodes
    • RKE2 now functions properly when the cluster CA certificates are signed by an existing root or intermediate CA. You can find a sample script for generating such certificates before RKE2 starts in the K3s repo at contrib/util/certs.sh.
    • RKE2 now supports kubeadm style join tokens. rke2 token create now creates join token secrets, optionally with a limited TTL.
    • RKE2 agents joined with an expired or deleted token stay in the cluster using existing client certificates via the NodeAuthorization admission plugin, unless their Node object is deleted from the cluster.
    • ServiceLB now honors the Service's ExternalTrafficPolicy. When set to Local, the LoadBalancer will only advertise addresses of Nodes with a Pod for the Service, and will not forward traffic to other cluster members. (ServiceLB is still disabled by default)
  • Bump K3s commit (#3907)
  • Add bootstrap token auth handler (#3922)
  • Update to kubernetes v1.24.11 (#3950)
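
The kubeadm-style token workflow noted above can be sketched as follows. The flag syntax mirrors `k3s token create` and may differ slightly by release, and the token value shown is a placeholder for the command's real output:

```shell
# On a server node, create a join token with a 24-hour TTL (sketch):
#   rke2 token create --ttl 24h
# The command prints a token; a joining node then references it:
TOKEN="K10aaaa::server:bbbb"   # placeholder for the printed token
cat > ./join-demo.yaml <<EOF
server: https://192.0.2.10:9345
token: ${TOKEN}
EOF
```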

Release v1.24.10+rke2r1

This release updates Kubernetes to v1.24.10 to backport registry changes and fix two critical issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.9+rke2r2:

  • Update multus to v3.9.3 and whereabouts to v0.6 (#3792)
  • Bump vSphere CPI chart to v1.24.3 (#3763)
  • Generate report and upload test results (#3771) (#3795)
  • Bump harvester cloud provider and harvester csi driver (#3784)
  • Bump containerd to v1.6.15-k3s1 (#3779)
  • Bump K3s version for tls-cipher-suites fix (#3798)

Release v1.24.9+rke2r2

This release updates containerd to v1.6.14 to resolve an issue where pods would lose their CNI information when containerd was restarted.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.9+rke2r1:

  • Bump containerd to v1.6.14-k3s1 (#3747)
    • The embedded containerd version has been bumped to v1.6.14-k3s1. This includes a backported fix for containerd/7843 which caused pods to lose their CNI info when containerd was restarted, which in turn caused the kubelet to recreate the pod.
    • Windows agents now use the k3s fork of containerd, which includes support for registry rewrites.

Release v1.24.9+rke2r1

⚠️ WARNING

This release is affected by https://github.com/containerd/containerd/issues/7843, which causes the kubelet to restart all pods whenever RKE2 is restarted. For this reason, we have removed this RKE2 release from the channel server. Please use v1.24.9+rke2r2 instead.

This release updates Kubernetes to v1.24.9, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.8+rke2r1:

  • Add more tests to the windows env (#3606)
  • Update Canal version (#3626)
  • [Backport 1.24] update rke2-calico chart to v3.24.501 (#3630)
  • [Backport 1.24] update multus chart to v3.9-build2022102805 (#3636)
  • Backports for 2022-12 (#3648)
  • Updated cilium version and added new cilium images (#3643)
  • Support autodetection interface methods in windows (#3651)
  • Bump hardened-ingress-nginx to v1.4.1 (#3616)
  • Update to version 1.24.9 (#3659)
  • Bump K3s and containerd version for v1.24 (#3676)
  • [Backport v1.24] Fixed cilium chart when enabled hubble images (#3689)
  • Bump ingress-nginx (#3705)

Release v1.24.8+rke2r1

This release updates Kubernetes to v1.24.8, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.7+rke2r1:

  • Remove the CNI plugin dir when uninstalling rke2 (#3503)
  • Update Cilium to 1.12.3 and use portmap as default (#3511)
  • Read VXLAN_ADAPTER env and use it to create the external network (#3524)
  • Bump K3s version for v1.24 (#3528)
  • Update Calico chart to support PSP for CIS 1.6 (#3536)
  • Bump vsphere charts (#3538)
  • Use the Cilium chart that fixes the portmap issue with system_default… (#3554)

Release v1.24.7+rke2r1

This release updates Kubernetes to v1.24.7, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.6+rke2r1:

  • Upgrade Calico version on Windows (#3396)
  • Bump vsphere csi/cpi charts and images (#3358)
  • Updated Canal chart to fix token renewal from calico-node (#3431)
  • K3s pull-through and backports from master (#3435)
  • Update canal to v3.24.1 (#3447)
  • Bump CCM image tag (#3466)

Release v1.24.6+rke2r1

This release updates Kubernetes to v1.24.6, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.4+rke2r1:

  • Update Cilium version and remove startup-script (#3274)
  • Update channel server stable to 1.24.4 (#3269)
  • Update canal version (#3272)
  • Bump the cilium chart version (#3289)
  • Rework vagrant install tests (#3237)
  • [release-1.24] Bump containerd v1.6.8 / runc v1.1.4
    • The bundled version of runc has been bumped to v1.1.4
    • The embedded containerd version has been bumped to v1.6.8-k3s1 (#3301)
  • [Release 1.24] Update calico to v3.23.3 (#3318)
  • [release-1.24] Fix static pod cleanup when using container-runtime-endpoint (#3332)
  • [Release 1.24] Update calico to v3.24.1 (#3343)
  • Update for 1.24 patches (#3351)
  • Update k8s to 1.24.6 (#3370)

Release v1.24.4+rke2r1

This release updates Kubernetes to v1.24.4, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.3+rke2r1:

  • Updating stable to 1.23.9 (#3179)
  • Updated cilium to v1.12.0 (#3185)
  • Remove refs to migration (#3191)
  • Add health checks to apiserver and kube-proxy (#3146)
  • Fix broken links to landscape.cncf.io in docs (#3149)
  • Don't create force-restart dir in current working dir when restoring (#3197)
  • Improve static pod probing
    • RKE2 static pods now include readiness, liveness, and startup probes with defaults that match those configured by kubeadm.
    • RKE2 static pod probe timings and thresholds can be customized with the --control-plane-probe-configuration flag. (#3204)
  • Bump K3s and remotedialer (#3208)
  • Add ingress network policy (#3220)
  • Adding format for adrs, adding process for updating and revisiting (#3161)
  • Use container-runtime-endpoint flag for criConnection (#3232)
  • Convert codespell from Drone to GH actions (#3246)
  • Upgrade to v1.24.4-rke2r1 (#3243)
  • Work around issue with empty servicelb namespace (#3255)
  • Fix issue setting multiple control-plane config values from config file (#3257)
  • Document ingress in CIS mode issue (#3264)
  • Bump K3s version for v1.24 (#3265)
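
A hedged sketch of customizing the probe timings mentioned above. The `<component>-<probe>-<setting>=<value>` parameter form is an assumption based on RKE2's advanced configuration docs, so verify it against your release; the sketch writes a local file rather than /etc/rancher/rke2/config.yaml:

```shell
# Illustrative config; parameter name format is assumed, not authoritative.
cat > ./probe-demo.yaml <<'EOF'
control-plane-probe-configuration:
  - kube-proxy-startup-initial-delay-seconds=42
EOF
```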

Release v1.24.3+rke2r1

This release updates Kubernetes to v1.24.3, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.2+rke2r1:

  • Update channels (#3135)
  • Bump ingress-nginx to 4.1.004 (#3131)
  • Bump harvester cloud provider 0.1.13 (#3138)
  • Update migration doc to reflect unsupported / experimental (#3143)
  • July release 1.24 r1 (#3152)
  • Add a readme to explain our implementation of ADRs (#3154)
  • Bump CRI timeout to 34 minutes (#3158)
  • Consolidate staticPod timeout to static 30 minutes (#3166)

Release v1.24.2+rke2r1

This release updates Kubernetes to v1.24.2, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.24.1+rke2r2:

  • Adding tolerations for master nodes (#2884)
  • Remove kube-ipvs0 interface when cleaning up (#3018)
  • Update canal docs (#2959)
  • Update dual-stack docs (#3019)
  • E2E: split-server test and groundwork for test-pad tool (#2997)
  • Update default component requests (#2987)
  • Removed dweomer from maintainers (#2941)
  • Update RKE2 hardening guide (#2507)
  • Extend registry mirror configuration (#2819)
  • Add Static Pod Startup Hook + K3s Bump + CoreDNS bump (#3054)
  • Updated cilium chart to support IPv6 only config (#3069)
  • Update Canal issues with IP exhaustion (#3039)
  • Update multus charts to v3.8-build2021110403 (#3043)
  • Use serializable health checks for etcd probes (#3073)
  • Bump K3s and helm-controller versions (#3079)
  • Update channel server for May patch release (#3053)
  • Adding additional line to update rke2 (#2990)
  • June release 1.24 (#3088)
  • Double number of steps for criBackoff
  • Bump K3s version for cluster upgrade egress-selector-mode fix (#3122)
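
The registry mirror configuration extended by one of the changes above lives in registries.yaml. A sketch of its documented shape, with a placeholder mirror endpoint and rewrite rule, written to a local file rather than the real /etc/rancher/rke2/registries.yaml:

```shell
# Illustrative registries.yaml for pulling docker.io images through a mirror.
cat > ./registries-demo.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://mirror.example.com:5000"
    # Optional image-name rewrite applied when pulling through the mirror
    rewrite:
      "^rancher/(.*)": "mirrored-rancher/$1"
EOF
```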

Release v1.24.1+rke2r2

This release is RKE2's first in the v1.24 line. This release updates Kubernetes to v1.24.1.

As this release includes a number of significant changes from previous versions, we will not make v1.24 available via the stable release channel until v1.24.2+rke2r1 or later.

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.23.6+rke2r2:

  • Bump upgrade controller version and improve readability (#2818)
  • Updating the vSphere CPI chart version. (#2868)
  • Update Cilium version (#2865)
  • Update calico version (#2866)
  • Update Multus chart and sriov/multus images (#2854)
  • Update rke2 channel (#2872)
  • Multiple docs updates (#2894)
  • Include subcommands in docs sidebar (#2896)
  • Bump Kubernetes and k3s to v1.24.0 (#2869)
  • Remove failure:ignore instruction in .drone.yml (#2699)
  • Bump containerd for S390x (#2900)
  • Update canal to 3.22.2 (#2903)
  • Move windows agent dependencies check to CI (#2880)
  • Updated calico to 3.23.0 (#2904)
  • Update multus to the latest chart version (#2921)
  • Ipv6 support (#2926)
  • Support rpm creation and installation from individual commits (#2542)
  • Ignore Windows artifacts when creating Linux bundle (#2928)
  • Updated Calico chart to fix update (#2948)
  • Updated Calico to v3.23.1 (#2950)
  • Change windows calico setup to generate a sa token (#2940)
  • Update network_options.md (#2957)
  • Added flannel wireguard interface deletion (#2961)
  • Bump K3s for calico egress-selector fix (#2931)
  • Bump k3s and etcd (#2964)
  • Fixed canal chart for IPv6 only setup (#2966)
  • Bump hardened-etcd version (#2968)
  • Fix calico chart to accept ipPool value (#2979)
  • Update Cilium to 1.11.5 (#2971)
  • Upgrade kubernetes to v1.24.1 (#2996)
  • Bump K3s version to fix issue with kubelet addresses (#3009)
  • Disable egress-selector for calico/multus/none (#3012)
  • Bump K3s version to fix dual-stack node-ip issue (#3015)
  • Bump containerd, runc, k3s (#3022)
  • Remove cni-based egress-selector config and bump k3s (#3025)
  • May release v1.24.1+rke2r2 (#3030)
  • Unconditionally set egress-selector-mode to disabled (#3035)