
v1.25.X

Upgrade Notice

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

| Version | Release date | Kubernetes | Etcd | Containerd | Runc | Metrics-server | CoreDNS | Ingress-Nginx | Helm-controller | Canal (Default) | Calico | Cilium | Multus |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| v1.25.16+rke2r2 | Feb 28 2024 | v1.25.16 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.12 | v0.6.3 | v1.10.1 | nginx-1.9.3-hardened1 | v0.15.4 | Flannel v0.23.0 / Calico v3.26.3 | v3.26.3 | v1.14.4 | v4.0.2 |
| v1.25.16+rke2r1 | Dec 05 2023 | v1.25.16 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | nginx-1.9.3-hardened1 | v0.15.4 | Flannel v0.23.0 / Calico v3.26.3 | v3.26.3 | v1.14.4 | v4.0.2 |
| v1.25.15+rke2r2 | Nov 08 2023 | v1.25.15 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.8.2 | v0.15.4 | Flannel v0.22.1 / Calico v3.26.1 | v3.26.1 | v1.14.2 | v4.0.2 |
| v1.25.15+rke2r1 | Oct 31 2023 | v1.25.15 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.8.2 | v0.15.4 | Flannel v0.22.1 / Calico v3.26.1 | v3.26.1 | v1.14.2 | v4.0.2 |
| v1.25.14+rke2r1 | Sep 18 2023 | v1.25.14 | v3.5.9-k3s1 | v1.7.3-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.4 | Flannel v0.22.1 / Calico v3.26.1 | v3.26.1 | v1.14.1 | v4.0.2 |
| v1.25.13+rke2r1 | Sep 06 2023 | v1.25.13 | v3.5.9-k3s1 | v1.7.3-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.4 | Flannel v0.22.1 / Calico v3.26.1 | v3.26.1 | v1.14.0 | v4.0.2 |
| v1.25.12+rke2r1 | Jul 28 2023 | v1.25.12 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.2 | Flannel v0.22.0 / Calico v3.25.1 | v3.26.1 | v1.13.2 | v4.0.2 |
| v1.25.11+rke2r1 | Jun 27 2023 | v1.25.11 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.15.0 | Flannel v0.22.0 / Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.25.10+rke2r1 | May 30 2023 | v1.25.10 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.14.0 | Flannel v0.21.3 / Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.25.9+rke2r1 | Apr 21 2023 | v1.25.9 | v3.5.7-k3s1 | v1.6.19-k3s1 | v1.1.5 | v0.6.2 | v1.10.1 | 4.5.2 | v0.13.2 | Flannel v0.21.3 / Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.25.8+rke2r1 | Mar 27 2023 | v1.25.8 | v3.5.4-k3s1 | v1.6.19-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.2 | Flannel v0.21.3 / Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.25.7+rke2r1 | Mar 10 2023 | v1.25.7 | v3.5.4-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.2 | Flannel v0.21.1 / Calico v3.25.0 | v3.25.0 | v1.12.5 | v3.9.3 |
| v1.25.6+rke2r1 | Jan 26 2023 | v1.25.6 | v3.5.4-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2 / Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9.3 |
| v1.25.5+rke2r2 | Jan 10 2023 | v1.25.5 | v3.5.4-k3s1 | v1.6.14-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2 / Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |
| v1.25.5+rke2r1 | Dec 20 2022 | v1.25.5 | v3.5.4-k3s1 | v1.6.12-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2 / Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |
| v1.25.4+rke2r1 | Nov 18 2022 | v1.25.4 | v3.5.4-k3s1 | v1.6.8-k3s1 | v1.1.4 | v0.6.1 | v1.9.3 | 4.1.0 | v0.13.0 | Flannel v0.19.1 / Calico v3.24.1 | v3.24.1 | v1.12.3 | v3.8 |
| v1.25.3+rke2r1 | Oct 20 2022 | v1.25.3 | v3.5.4-k3s1 | v1.6.8-k3s1 | v1.1.4 | v0.6.1 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.19.1 / Calico v3.24.1 | v3.24.1 | v1.12.1 | v3.8 |
| v1.25.2+rke2r1 | Sep 27 2022 | v1.25.2 | v3.5.4 | v1.6.8-k3s2 | v1.1.4 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.19.1 / Calico v3.23.3 | v3.24.1 | v1.12.1 | v3.8 |
| v1.25.0+rke2r1 | Sep 15 2022 | v1.25.0 | v3.5.4 | v1.6.8-k3s1 | v1.1.4 | v0.5.0 | v1.9.3 | 4.1.0 | v0.12.3 | Flannel v0.19.1 / Calico v3.24.1 | v3.24.1 | v1.12.1 | v3.8 |

Release v1.25.16+rke2r2

This is a special security release addressing a runc CVE.

Important Notes

Addresses runc CVE-2024-21626 by updating runc to v1.1.12.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
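
If you prefer to pin the token yourself, you can set it before the first start through the config file key mentioned above. A minimal sketch, assuming the standard /etc/rancher/rke2/config.yaml location and using a placeholder secret value:

mkdir -p /etc/rancher/rke2
cat <<'EOF' >> /etc/rancher/rke2/config.yaml
# Placeholder value; use your own secret and keep a copy of it for restores.
token: my-shared-cluster-secret
EOF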

Changes since v1.25.16+rke2r1:

Charts Versions

| Component | Version |
| --- | --- |
| rke2-cilium | 1.14.400 |
| rke2-canal | v3.26.3-build2023110900 |
| rke2-calico | v3.26.300 |
| rke2-calico-crd | v3.26.300 |
| rke2-coredns | 1.24.006 |
| rke2-ingress-nginx | 4.8.200 |
| rke2-metrics-server | 2.11.100-build2023051510 |
| rancher-vsphere-csi | 3.0.1-rancher101 |
| rancher-vsphere-cpi | 1.5.100 |
| harvester-cloud-provider | 0.2.200 |
| harvester-csi-driver | 0.1.1600 |
| rke2-snapshot-controller | 1.7.202 |
| rke2-snapshot-controller-crd | 1.7.202 |
| rke2-snapshot-validation-webhook | 1.7.302 |

Release v1.25.16+rke2r1

This release updates Kubernetes to v1.25.16.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.
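
One way to pass this flag on RKE2 is through a HelmChartConfig override for the packaged ingress chart. A minimal sketch, assuming the rke2-ingress-nginx chart forwards controller.extraArgs to the controller as the upstream ingress-nginx chart does:

# Hedged example: drop a chart override into the server manifests directory.
cat <<'EOF' > /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      extraArgs:
        enable-annotation-validation: "true"
EOF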

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.15+rke2r2:

  • Add chart validation tests (#5012)
  • Update canal to v3.26.3 (#5019)
  • Update calico to v3.26.3 (#5028)
  • Bump cilium chart to 1.14.400 (#5058)
  • Bump K3s version for v1.25 (#5032)
    • Containerd may now be configured to use rdt or blockio configuration by defining rdt_config.yaml or blockio_config.yaml files.
    • Disable helm CRD installation for disable-helm-controller
    • Omit snapshot list configmap entries for snapshots without extra metadata
    • Add jitter to client config retry to avoid hammering servers when they are starting up
  • Bump K3s version for v1.25 (#5075)
    • Don't apply S3 retention if S3 client failed to initialize
    • Don't request metadata when listing S3 snapshots
    • Print key instead of file path in snapshot metadata log message
  • Kubernetes patch release (#5063)
  • Remove s390x steps temporarily since runners are disabled (#5098)

Charts Versions

| Component | Version |
| --- | --- |
| rke2-cilium | 1.14.400 |
| rke2-canal | v3.26.3-build2023110900 |
| rke2-calico | v3.26.300 |
| rke2-calico-crd | v3.26.300 |
| rke2-coredns | 1.24.006 |
| rke2-ingress-nginx | 4.8.200 |
| rke2-metrics-server | 2.11.100-build2023051510 |
| rancher-vsphere-csi | 3.0.1-rancher101 |
| rancher-vsphere-cpi | 1.5.100 |
| harvester-cloud-provider | 0.2.200 |
| harvester-csi-driver | 0.1.1600 |
| rke2-snapshot-controller | 1.7.202 |
| rke2-snapshot-controller-crd | 1.7.202 |
| rke2-snapshot-validation-webhook | 1.7.302 |

Release v1.25.15+rke2r2

This release fixes an issue with identifying additional container runtimes.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.15+rke2r1:

  • Bump k3s, include container runtime fix (#4982)
    • Fixed an issue with identifying additional container runtimes
  • Update hardened kubernetes image (#4985)

Release v1.25.15+rke2r1

This release updates Kubernetes to v1.25.15.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.14+rke2r1:

  • Add a time.Sleep in calico-win to avoid polluting the logs (#4793)
  • Support generic "cis" profile (#4799)
  • Update calico chart to accept felix config values (#4816)
  • Remove unnecessary docker pull (#4821)
  • Mirrored pause backport (#4825)
  • Write pod-manifests as 0600 in cis mode (#4840)
  • K3s bump (#4861)
  • Filter release branches (#4859)
  • Update charts to have ipFamilyPolicy: PreferDualStack as default (#4847)
  • Bump K3s, Cilium, Token Rotation support (#4871)
  • Bump containerd to v1.7.7+k3s1 (#4882)
  • Bump K3s version for v1.25 (#4886)
    • RKE2 now tracks snapshots using custom resource definitions (see the example after this list). This resolves an issue where the configmap previously used to track snapshot metadata could grow excessively large and fail to update when new snapshots were taken.
    • Fixed an issue where static pod startup checks may return false positives in the case of pod restarts.
  • Bump k3s (#4899)
  • Bump K3s version for v1.25 (#4919)
    • Re-enable etcd endpoint auto-sync
    • Manually requeue configmap reconcile when no nodes have reconciled snapshots
  • Update Kubernetes to v1.25.15 (#4920)
  • Remove pod-manifests dir in killall script (#4928)
  • Revert mirrored pause backport (#4937)
  • Bump ingress-nginx to v1.9.3 (#4958)
  • Bump K3s version for v1.25 (#4971)
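
For reference on the new snapshot tracking noted above, a quick way to inspect the recorded snapshots, assuming they are exposed through the ETCDSnapshotFile custom resource (k3s.cattle.io) as in upstream K3s:

# Hedged example: list the etcd snapshot records tracked via CRDs.
kubectl get etcdsnapshotfiles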

Release v1.25.14+rke2r1

This release updates Kubernetes to v1.25.14.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.13+rke2r1:

  • Update cilium to 1.14.1 (#4758)
  • Update Kubernetes to v1.25.14 (#4763)

Release v1.25.13+rke2r1

This release updates Kubernetes to v1.25.13, and fixes a number of issues.

Important Notes
  • ⚠️ This release includes support for remediating CVE-2023-32186, a potential Denial of Service attack vector on RKE2 servers. See https://github.com/rancher/rke2/security/advisories/GHSA-p45j-vfv5-wprq for more information, including mandatory steps necessary to harden clusters against this vulnerability.

  • If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

    You may retrieve the token value from any server already joined to the cluster:

    cat /var/lib/rancher/rke2/server/token

Changes since v1.25.12+rke2r1:

  • Sync Felix and calico-node datastore (#4577)
  • Update Calico and Flannel on Canal (#4565)
  • Update cilium to v1.14.0 (#4588)
  • Update to whereabouts v0.6.2 (#4592)
  • Version bumps and backports for 2023-08 release (#4599)
    • Updated the embedded containerd to v1.7.3+k3s1
    • Updated the embedded runc to v1.1.8
    • Updated the embedded etcd to v3.5.9+k3s1
    • Updated the rke2-snapshot-validation-webhook chart to enable VolumeSnapshotClass validation
    • Security bump to docker/distribution
    • Fix static pod UID generation and cleanup
    • Fix default server address for rotate-ca command
  • Fix wrongly formatted files (#4613)
  • Fix repeating "cannot find file" error (#4619)
  • Bump k3s version to recent 1.25 (#4637)
  • Bump K3s version for v1.25 (#4648)
    • The version of helm used by the bundled helm controller's job image has been updated to v3.12.3
    • Bumped dynamiclistener to address an issue that could cause the supervisor listener on 9345 to stop serving requests on etcd-only nodes.
    • The RKE2 supervisor listener on 9345 now sends a complete certificate chain in the TLS handshake.
  • Install BGP windows packages in Windows image for tests (#4653)
  • Allow OS env variables to be consumed (#4658)
  • Upgrade multus chart to v4.0.2-build2023081100 (#4665)
  • Fix bug. Add VXLAN_VNI env var to Calico-node exec (#4672)
  • Update to v1.25.13 (#4685)
  • Bump K3s version for v1.25 (#4703)
    • Added a new --tls-san-security option. This flag defaults to false, but can be set to true to disable automatically adding SANs to the server's TLS certificate to satisfy any hostname requested by a client (see the config sketch after this list).
  • Add additional static pod cleanup during cluster reset (#4726)
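
A minimal config sketch for the --tls-san-security option noted above, assuming the config file key mirrors the CLI flag name as RKE2 options generally do:

cat <<'EOF' >> /etc/rancher/rke2/config.yaml
# Opt in to the stricter behavior: do not auto-add client-requested SANs.
tls-san-security: true
EOF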

Release v1.25.12+rke2r1

This release updates Kubernetes to v1.25.12, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.11+rke2r1:

  • Update Calico to v3.26.1 (#4425)
  • Update multus version (#4433)
  • Add log files for felix and calico (#4439)
  • Update K3s for 2023-07 releases (#4449)
  • Bump ingress-nginx charts to v1.7.1 (#4455)
  • Add support for cni none on windows and initial windows-bgp backend (#4461)
  • Updated Calico crd on Canal (#4468)
  • Update to 1.25.12 (#4496)

Release v1.25.11+rke2r1

This release updates Kubernetes to v1.25.11, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.10+rke2r1:

  • Update canal chart (#4344)
  • Bump K3s version for v1.25 (#4360)
  • Update rke2 (#4367)
  • Bump harvester cloud provider 0.2.2 (#4375)
  • Preserve mode when extracting runtime data (#4379)
  • Use our own file copy logic instead of continuity (#4390)

Release v1.25.10+rke2r1

This release updates Kubernetes to v1.25.10, and fixes a number of issues.

Important Note

  1. If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

     You may retrieve the token value from any server already joined to the cluster:

     cat /var/lib/rancher/rke2/server/token

  2. Many systems have updated their packages with a newer version of container-selinux (> v2.191.0) that is incompatible with our rke2-selinux policy and requires a policy change. We have updated our policy accordingly; you will notice the rke2-selinux package being upgraded from v0.11.1 to v0.12.0.

Changes since v1.25.9+rke2r1:

  • Fix drone dispatch step (#4149)
  • Update Cilium to v1.13.2 (#4176)
  • Bump golangci-lint for golang 1.20 compat and fix warnings (#4188)
  • Enable with node id 1.25 (#4191)
  • Update Calico image on Canal (#4219)
  • Move Drone dispatch pipeline (#4204)
  • Backport fixes and bump K3s/containerd/runc versions (#4212)
    • The bundled containerd and runc versions have been bumped to v1.7.1-k3s1/v1.1.7
    • Replace github.com/ghodss/yaml with sigs.k8s.io/yaml
    • Fix hardcoded file mount handling for default audit log filename
  • Bump metrics-server to v0.6.3 (#4246)
  • V1.25.10+rke2r1 (#4259)
  • Bump vsphere csi/cpi and csi snapshot charts (#4273)
  • Bump vsphere csi to remove duplicate CSI deployment. (#4297)

Release v1.25.9+rke2r1

This release updates Kubernetes to v1.25.9, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.8+rke2r1:

  • Update whereabouts to v0.6.1 (#4083)
  • Updated Calico chart to add crd missing values (#4048)
  • Bump ingress-nginx to 1.6.4 (#4094)
  • Bump k3s and component versions for 2023-04 release (#4099)
  • Automatically add volume mount for audit-log-path dir if set (#4109)
  • Update Kubernetes to v1.25.9 (#4116)

Release v1.25.8+rke2r1

This release updates Kubernetes to v1.25.8, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.7+rke2r1:

  • Update Flannel version to v0.21.3 on Canal (#3983)
  • Remove Root debug + Remove unmounts (#3988)
  • Bump K3s (#3992)
  • Don't package empty windows folder (#3996)
  • Update cilium to v1.13.0 (#4005)
  • Bump harvester csi driver to v0.1.16 (#4004)
  • Bump k3s and containerd (#4014)
  • Improve uninstallation on RHEL based OS (#4019)
  • Update 1.25 and Go (#4031)

Release v1.25.7+rke2r1

This release updates Kubernetes to v1.25.7, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.6+rke2r1:

  • Don't handle kube-proxy in static pod cleanup (#3835)
  • Bump cilium images (#3827)
  • Update canal chart to v3.25.0-build2023020901 (#3886)
  • Remove pod logs as part of killall (#3867)
  • Bump wharfie and go-containerregistry (#3864)
  • Update Calico to v3.25.0 (#3890)
  • Bump K3s version (#3898)
    • Fixed an issue where leader-elected controllers for managed etcd did not run on etcd-only nodes
    • RKE2 now functions properly when the cluster CA certificates are signed by an existing root or intermediate CA. You can find a sample script for generating such certificates before RKE2 starts in the K3s repo at contrib/util/certs.sh.
    • RKE2 now supports kubeadm style join tokens. rke2 token create now creates join token secrets, optionally with a limited TTL (see the example after this list).
    • RKE2 agents joined with an expired or deleted token stay in the cluster using existing client certificates via the NodeAuthorization admission plugin, unless their Node object is deleted from the cluster.
    • ServiceLB now honors the Service's ExternalTrafficPolicy. When set to Local, the LoadBalancer will only advertise addresses of Nodes with a Pod for the Service, and will not forward traffic to other cluster members. (ServiceLB is still disabled by default)
  • Bump K3s commit (#3906)
  • Add bootstrap token auth handler (#3921)
  • Bump helm-controller/klipper-helm (#3937)
    • The embedded helm-controller job image now correctly handles upgrading charts that contain resource types that no longer exist on the target Kubernetes version. This includes properly handling removal of PodSecurityPolicy resources when upgrading from <= v1.24.
  • Add sig-storage snapshot controller and validation webhook (#3943)
  • Add a quick host-path CSI snapshot to the basic CI test (#3947)
  • Update kubernetes to v1.25.7 (#3952)
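
A short usage sketch for the kubeadm-style join tokens noted above; the --ttl flag is assumed to follow the kubeadm/K3s token interface:

# Hedged example: create a join token that expires after 24 hours.
rke2 token create --ttl 24h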

Release v1.25.6+rke2r1

This release updates Kubernetes to v1.25.6 to backport registry changes and fix two critical issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.5+rke2r2:

  • Update multus to v3.9.3 and whereabouts to v0.6 (#3793)
  • Generate report and upload test results (#3771) (#3794)
  • Bump harvester cloud provider and harvester csi driver (#3785)
  • Bump containerd to v1.6.15-k3s1 (#3778)
  • Bump K3s version for tls-cipher-suites fix (#3799)

Release v1.25.5+rke2r2

This release updates containerd to v1.6.14 to resolve an issue where pods would lose their CNI information when containerd was restarted.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.5+rke2r1:

  • Bump containerd to v1.6.14-k3s1 (#3746)
    • The embedded containerd version has been bumped to v1.6.14-k3s1. This includes a backported fix for containerd/7843 which caused pods to lose their CNI info when containerd was restarted, which in turn caused the kubelet to recreate the pod.
    • Windows agents now use the k3s fork of containerd, which includes support for registry rewrites.

Release v1.25.5+rke2r1

⚠️ WARNING

This release is affected by https://github.com/containerd/containerd/issues/7843, which causes the kubelet to restart all pods whenever RKE2 is restarted. For this reason, we have removed this RKE2 release from the channel server. Please use v1.25.5+rke2r2 instead.

This release updates Kubernetes to v1.25.5, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.4+rke2r1:

  • Don't try to validate Linux CIS profile compliance on Windows (#3568)
  • Update channels.yaml for November (#3575)
  • Utilize Jenkins env vars for required cluster creation variables (#3576)
  • Terminate pods directly via CRI instead of waiting for kubelet cleanup (#3567)
  • Bump K3s and component versions (#3577)
  • Refactor Windows Calico code (#3543)
  • Take nodeIP into account to configure the calico networks (#3530)
  • Switching from GCP gcs to AWS s3 buckets (#3563)
  • Remove old docs (#3584)
  • DualStack e2e test fix and additional netpol test (#3574)
  • Create upgrade test in TF and refactor to allow running packages separately (#3583)
  • Fix aws s3 artifact upload issues (#3601)
  • Add more tests to the windows env (#3594)
  • Update tf variable for AWS to be more clear (#3609)
  • Add rke2 standalone install script for Windows (#3608)
  • Support autodetection interface methods in windows (#3615)
  • Update rke2-multus chart to v3.9-build2022102805 (#3622)
  • Update Canal version (#3625)
  • Update rke2-calico chart to v3.24.501 (#3620)
  • Fix Jenkinsfile typo and clarify support for oracle in TF automation (#3611)
  • Updated cilium version and added new cilium images (#3642)
  • Bump metrics-server tag (#3647)
  • Bump K3s version for v1.25 (#3646)
  • Bump ingress-nginx to 1.4.1 (#3653)
  • Update to version 1.25.5 (#3670)
  • Bump K3s and containerd versions for v1.25 (#3675)
  • [Backport v1.25] Fixed cilium chart when enabled hubble images (#3688)
  • Bump ingress-nginx (#3709)

Release v1.25.4+rke2r1

This release updates Kubernetes to v1.25.4, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.3+rke2r1:

  • Updated cilium chart for private registry (#3483)
  • Fixed dualstack e2e tests (#3472)
  • Fix handling of manifests with multiple resources (#3470)
  • Remove the CNI plugin binaries when uninstalling rke2 (#3500)
  • Sync docs with rke2-docs (#3506)
  • Update Cilium and use portmap as default (#3507)
  • Revert "Unconditionally set egress-selector-mode to disabled" (#3495)
  • Put sensitive variables in Jenkins creds (#3514)
  • Typo in the -Channel option (#3521)
  • Read VXLAN_ADAPTER env and use it to create the external network (#3517)
  • Update Trivy version to 0.31.3 (#3348)
  • Bump K3s version for v1.25 (#3527)
  • Bump vsphere charts (#3537)
  • Use the Cilium chart that fixes the portmap issue with system_default… (#3553)

Release v1.25.3+rke2r1

This release updates Kubernetes to v1.25.3, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.2+rke2r1:

  • Update docs with extra option (#3336)
  • Upgrade Calico version on Windows (#3346)
  • Update docs with iptables requirement on canal and calico (#3367)
  • Add support for Calico interface overrides for Windows (#3375)
  • Update latest in channels.yaml to v1.24.6+rke2r1 (#3389)
  • Bump vsphere csi/cpi charts and images (#3356)
  • The embedded metrics-server version has been bumped to v0.6.1 (#3399)
  • Update docs for multus with cilium (#3326)
  • Bump k3s for servicelb ccm change; add servicelb support (#3404)
  • Add v1.25 channel to the channel server (#3382)
  • Allow CNI none on windows (#3403)
  • Update fips_support.md (#3405)
  • Change static pod uid/hash generation/checking (#3415)
  • Pass through kubelet-args to temporary kubelet (#3418)
  • Initial terraform automation (#3390)
  • Bump vsphere CSI to v2.6.1 (#3420)
  • Updated Canal chart to fix token renewal from calico-node (#3426)
  • E2E: Parallel and Logging Improvements (#3433)
  • Bump K3s version for v1.25 (#3434)
  • Update canal to v3.24.1 (#3444)
  • Update release docs to include content discussed during release retro (#3421)
  • Update documentation with PSP removal (#3360)
  • October RKE2 K8s Update v1.25.3 (#3460)
  • Bump CCM image tag (#3465)
  • Add fapolicyd configuration rules (#3416)
  • Prevent script fail when fapolicyd doesn't exist (#3478)

Release v1.25.2+rke2r1

This release updates Kubernetes to v1.25.2, fixes a number of minor issues, and includes security updates.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.25.0+rke2r1:

  • Update for 1.25 patches (#3352)
  • Add exception for tigera-operator namespace (#3365) (#3366)
  • Update k8s to 1.25.2 (#3374)

Release v1.25.0+rke2r1

This is RKE2's first release in the v1.25 line, and it updates Kubernetes to v1.25.0.

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

Important Notes
  1. If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster, and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as is required when restoring from backup.

    You may retrieve the token value from any server already joined to the cluster:

    cat /var/lib/rancher/rke2/server/token
  2. Kubernetes v1.25 removes the beta PodSecurityPolicy admission plugin. If you are using PSPs, follow the upstream documentation to migrate to the built-in Pod Security Admission (PSA) plugin before upgrading to v1.25.0+rke2r1 (see the sketch after this list).

  3. RKE2 now supports version 1.23 of the CIS Benchmark for Kubernetes. The legacy CIS 1.5 and 1.6 profiles (profile: cis-1.5 and profile: cis-1.6) have been removed as they do not apply to Kubernetes 1.25. Servers using one of the legacy profiles must be updated to specify the cis-1.23 profile when upgrading to RKE2 1.25, or RKE2 will fail to start.
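
A rough sketch of both migration steps above, with a placeholder namespace name and the standard RKE2 config file location assumed:

# Enforce the built-in Pod Security Admission on a namespace (placeholder name).
kubectl label namespace my-app pod-security.kubernetes.io/enforce=restricted

# In /etc/rancher/rke2/config.yaml, replace any legacy cis-1.5/cis-1.6 entry with:
#   profile: cis-1.23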

Changes since v1.24.4+rke2r1:

  • Update Cilium version and remove startup-script (#3274)
  • Update channel server stable to 1.24.4 (#3269)
  • Update canal version (#3272)
  • Bump the cilium chart version (#3289)
  • Rework vagrant install tests (#3237)
  • Add PSA to Kubernetes v1.25 (#3282)
  • Update Kubernetes image to v1.25.0-rke2r1-build20220901 (#3295)
  • Fix static pod cleanup when using container-runtime-endpoint (#3308)
  • Bump containerd v1.6.8 / runc v1.1.4 (#3300)
  • Update calico to v3.23.3 (#3317)
  • Bump K3s version for v1.25 (#3323)
  • Update install script with option to skip reload (#3248)
  • Add exception for cis-operator-system namespace (#3324)
  • Fix config directory permissions (#3338)
  • Update calico to v3.24.1 (#3340)