
v1.26.X

Upgrade Notice

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

| Version | Release date | Kubernetes | Etcd | Containerd | Runc | Metrics-server | CoreDNS | Ingress-Nginx | Helm-controller | Canal (Default) | Calico | Cilium | Multus |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| v1.26.13+rke2r1 | Feb 06 2024 | v1.26.13 | v3.5.9-k3s1 | v1.7.11-k3s2 | v1.1.12 | v0.6.3 | v1.10.1 | nginx-1.9.3-hardened1 | v0.15.8 | Flannel v0.23.0, Calico v3.26.3 | v3.26.3 | v1.14.4 | v4.0.2 |
| v1.26.12+rke2r1 | Dec 26 2023 | v1.26.12 | v3.5.9-k3s1 | v1.7.11-k3s2 | v1.1.10 | v0.6.3 | v1.10.1 | nginx-1.9.3-hardened1 | v0.15.4 | Flannel v0.23.0, Calico v3.26.3 | v3.26.3 | v1.14.4 | v4.0.2 |
| v1.26.11+rke2r1 | Dec 05 2023 | v1.26.11 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | nginx-1.9.3-hardened1 | v0.15.4 | Flannel v0.23.0, Calico v3.26.3 | v3.26.3 | v1.14.4 | v4.0.2 |
| v1.26.10+rke2r2 | Nov 08 2023 | v1.26.10 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.8.2 | v0.15.4 | Flannel v0.22.1, Calico v3.26.1 | v3.26.1 | v1.14.2 | v4.0.2 |
| v1.26.10+rke2r1 | Oct 31 2023 | v1.26.10 | v3.5.9-k3s1 | v1.7.7-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.8.2 | v0.15.4 | Flannel v0.22.1, Calico v3.26.1 | v3.26.1 | v1.14.2 | v4.0.2 |
| v1.26.9+rke2r1 | Sep 18 2023 | v1.26.9 | v3.5.9-k3s1 | v1.7.3-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.4 | Flannel v0.22.1, Calico v3.26.1 | v3.26.1 | v1.14.1 | v4.0.2 |
| v1.26.8+rke2r1 | Sep 06 2023 | v1.26.8 | v3.5.9-k3s1 | v1.7.3-k3s1 | v1.1.8 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.4 | Flannel v0.22.1, Calico v3.26.1 | v3.26.1 | v1.14.0 | v4.0.2 |
| v1.26.7+rke2r1 | Jul 28 2023 | v1.26.7 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.6.1 | v0.15.2 | Flannel v0.22.0, Calico v3.25.1 | v3.26.1 | v1.13.2 | v4.0.2 |
| v1.26.6+rke2r1 | Jun 27 2023 | v1.26.6 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.15.0 | Flannel v0.22.0, Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.26.5+rke2r1 | May 30 2023 | v1.26.5 | v3.5.7-k3s1 | v1.7.1-k3s1 | v1.1.7 | v0.6.3 | v1.10.1 | 4.5.2 | v0.14.0 | Flannel v0.21.3, Calico v3.25.1 | v3.25.0 | v1.13.2 | v3.9.3 |
| v1.26.4+rke2r1 | Apr 21 2023 | v1.26.4 | v3.5.7-k3s1 | v1.6.19-k3s1 | v1.1.5 | v0.6.2 | v1.10.1 | 4.5.2 | v0.13.2 | Flannel v0.21.3, Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.26.3+rke2r1 | Mar 27 2023 | v1.26.3 | v3.5.5-k3s1 | v1.6.19-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.2 | Flannel v0.21.3, Calico v3.25.0 | v3.25.0 | v1.13.0 | v3.9.3 |
| v1.26.2+rke2r1 | Mar 10 2023 | v1.26.2 | v3.5.5-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.2 | Flannel v0.21.1, Calico v3.25.0 | v3.25.0 | v1.12.5 | v3.9.3 |
| v1.26.1+rke2r1 | Jan 26 2023 | v1.26.1 | v3.5.5-k3s1 | v1.6.15-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9.3 |
| v1.26.0+rke2r2 | Jan 10 2023 | v1.26.0 | v3.5.5 | v1.6.14-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |
| v1.26.0+rke2r1 | Dec 15 2022 | v1.26.0 | v3.5.5 | v1.6.12-k3s1 | v1.1.4 | v0.6.2 | v1.9.3 | 4.1.0 | v0.13.1 | Flannel v0.20.2, Calico v3.24.5 | v3.24.5 | v1.12.4 | v3.9 |

Release v1.26.13+rke2r1

This release updates Kubernetes to v1.26.13.

Important Notes

Addresses runc CVE-2024-21626 by updating runc to v1.1.12.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
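
Rather than depending on the generated value, a fixed token can be set before the first server starts. A minimal sketch of /etc/rancher/rke2/config.yaml (the token value below is a placeholder; choose your own secret):

```yaml
# /etc/rancher/rke2/config.yaml on the first server node
# The token is used both for joining nodes and for encrypting
# cluster bootstrap data, so treat it as a secret.
token: my-cluster-secret
```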

Changes since v1.26.12+rke2r1:

  • Use dl.k8s.io for getting kubectl (#5179)
  • Ensure charts directory exists in Windows runtime image (#5185)
  • Bump versions of different components (#5170)
  • Update coredns chart to fix bug (#5202)
  • Update multus chart to add optional dhcp daemonset (#5212)
  • Add e2e test about dnscache (#5228)
  • Update rke2-whereabouts to v0.6.3 and bump rke2-multus parent chart (#5246)
  • Bump sriov image build versions (#5257)
  • Enable arm64 based images for calico, multus and harvester (#5267)
  • Improve kube-proxy and calico logging in Windows (#5286)
  • Bump k3s for v1.26 (#5271)
  • Update to 1.26.13 (#5293)
  • Update base image (#5308)
  • Bump K3s and runc versions for v1.26 (#5352)

Release v1.26.12+rke2r1

This release updates Kubernetes to v1.26.12.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.11+rke2r1:

  • Bump containerd and runc (#5121)
    • Bumped containerd/runc to v1.7.10/v1.1.10
  • Bump containerd to v1.7.11 (#5131)
  • Update to 1.26.12 for December 2023 (#5149)

Charts Versions

| Component | Version |
| --- | --- |
| rke2-cilium | 1.14.400 |
| rke2-canal | v3.26.3-build2023110900 |
| rke2-calico | v3.26.300 |
| rke2-calico-crd | v3.26.300 |
| rke2-coredns | 1.24.006 |
| rke2-ingress-nginx | 4.8.200 |
| rke2-metrics-server | 2.11.100-build2023051510 |
| rancher-vsphere-csi | 3.0.1-rancher101 |
| rancher-vsphere-cpi | 1.5.100 |
| harvester-cloud-provider | 0.2.200 |
| harvester-csi-driver | 0.1.1600 |
| rke2-snapshot-controller | 1.7.202 |
| rke2-snapshot-controller-crd | 1.7.202 |
| rke2-snapshot-validation-webhook | 1.7.302 |

Release v1.26.11+rke2r1

This release updates Kubernetes to v1.26.11.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.
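
One way to pass this flag to the packaged chart is via a HelmChartConfig manifest. A sketch, assuming the rke2-ingress-nginx chart forwards controller.extraArgs to the controller container (verify against your deployed chart version):

```yaml
# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      extraArgs:
        enable-annotation-validation: "true"
```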

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.10+rke2r2:

  • Add chart validation tests (#5003)
  • Update canal to v3.26.3 (#5017)
  • Update calico to v3.26.3 (#5027)
  • Bump cilium chart to 1.14.400 (#5059)
  • Bump K3s version for v1.26 (#5031)
    • Containerd may now be configured to use rdt or blockio configuration by defining rdt_config.yaml or blockio_config.yaml files.
    • Disable helm CRD installation for disable-helm-controller
    • Omit snapshot list configmap entries for snapshots without extra metadata
    • Add jitter to client config retry to avoid hammering servers when they are starting up
  • Bump K3s version for v1.26 (#5074)
    • Don't apply S3 retention if S3 client failed to initialize
    • Don't request metadata when listing S3 snapshots
    • Print key instead of file path in snapshot metadata log message
  • Kubernetes patch release (#5064)
  • Remove s390x steps temporarily since runners are disabled (#5097)

Charts Versions

| Component | Version |
| --- | --- |
| rke2-cilium | 1.14.400 |
| rke2-canal | v3.26.3-build2023110900 |
| rke2-calico | v3.26.300 |
| rke2-calico-crd | v3.26.300 |
| rke2-coredns | 1.24.006 |
| rke2-ingress-nginx | 4.8.200 |
| rke2-metrics-server | 2.11.100-build2023051510 |
| rancher-vsphere-csi | 3.0.1-rancher101 |
| rancher-vsphere-cpi | 1.5.100 |
| harvester-cloud-provider | 0.2.200 |
| harvester-csi-driver | 0.1.1600 |
| rke2-snapshot-controller | 1.7.202 |
| rke2-snapshot-controller-crd | 1.7.202 |
| rke2-snapshot-validation-webhook | 1.7.302 |

Release v1.26.10+rke2r2

This release fixes an issue with identifying additional container runtimes.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.10+rke2r1:

  • Bump k3s, include container runtime fix (#4981)
    • Fixed an issue with identifying additional container runtimes
  • Update hardened kubernetes image (#4986)

Release v1.26.10+rke2r1

This release updates Kubernetes to v1.26.10.

Important Notes

This release includes a version of ingress-nginx affected by CVE-2023-5043 and CVE-2023-5044. Ingress administrators should set the --enable-annotation-validation flag to enforce restrictions on the contents of ingress-nginx annotation fields.

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.9+rke2r1:

  • Add a time.Sleep in calico-win to avoid polluting the logs (#4792)
  • Support generic "cis" profile (#4798)
  • Update calico chart to accept felix config values (#4815)
  • Remove unnecessary docker pull (#4822)
  • Mirrored pause backport (#4827)
  • Write pod-manifests as 0600 in cis mode (#4839)
  • Bumping k3s (#4863)
  • Filter release branches (#4858)
  • Update charts to have ipFamilyPolicy: PreferDualStack as default (#4846)
  • Bump K3s, Cilium, Token Rotation support (#4870)
  • Bump containerd to v1.7.7+k3s1 (#4881)
  • Bump K3s version for v1.26 (#4885)
    • RKE2 now tracks snapshots using custom resource definitions. This resolves an issue where the configmap previously used to track snapshot metadata could grow excessively large and fail to update when new snapshots were taken.
    • Fixed an issue where static pod startup checks may return false positives in the case of pod restarts.
  • K3s Bump (#4898)
  • Bump K3s version for v1.26 (#4918)
    • Re-enable etcd endpoint auto-sync
    • Manually requeue configmap reconcile when no nodes have reconciled snapshots
  • Update Kubernetes to v1.26.10 (#4921)
  • Remove pod-manifests dir in killall script (#4927)
  • Revert mirrored pause backport (#4936)
  • Bump ingress-nginx to v1.9.3 (#4957)
  • Bump ingress-nginx to v1.9.3 (#4959)
  • Bump ingress-nginx to v1.9.3 (#4960)
  • Bump K3s version for v1.26 (#4970)
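
The CRD-based snapshot tracking mentioned above can be inspected directly; a hedged example, with the resource name assumed from the upstream K3s implementation:

```shell
# List etcd snapshots tracked as custom resources
# (resource name/group assumed from the K3s implementation)
kubectl get etcdsnapshotfiles.k3s.cattle.io
```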

Release v1.26.9+rke2r1

This release updates Kubernetes to v1.26.9.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.8+rke2r1:

  • Update cilium to 1.14.1 (#4757)
  • Update Kubernetes to v1.26.9 (#4762)

Release v1.26.8+rke2r1

This release updates Kubernetes to v1.26.8, and fixes a number of issues.

Important Notes
  • ⚠️ This release includes support for remediating CVE-2023-32186, a potential Denial of Service attack vector on RKE2 servers. See https://github.com/rancher/rke2/security/advisories/GHSA-p45j-vfv5-wprq for more information, including mandatory steps necessary to harden clusters against this vulnerability.

  • If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

    You may retrieve the token value from any server already joined to the cluster:

    cat /var/lib/rancher/rke2/server/token

Changes since v1.26.7+rke2r1:

  • Sync Felix and calico-node datastore (#4576)
  • Update Calico and Flannel on Canal (#4564)
  • Update cilium to v1.14.0 (#4586)
  • Update to whereabouts v0.6.2 (#4591)
  • Version bumps and backports for 2023-08 release (#4598)
    • Updated the embedded containerd to v1.7.3+k3s1
    • Updated the embedded runc to v1.1.8
    • Updated the embedded etcd to v3.5.9+k3s1
    • Updated the rke2-snapshot-validation-webhook chart to enable VolumeSnapshotClass validation
    • Security bump to docker/distribution
    • Fix static pod UID generation and cleanup
    • Fix default server address for rotate-ca command
  • Fix wrongly formatted files (#4612)
  • Fix repeating "cannot find file" error (#4618)
  • Bump k3s version to recent 1.26 (#4636)
  • Bump K3s version for v1.26 (#4647)
    • The version of helm used by the bundled helm controller's job image has been updated to v3.12.3
    • Bumped dynamiclistener to address an issue that could cause the supervisor listener on 9345 to stop serving requests on etcd-only nodes.
    • The RKE2 supervisor listener on 9345 now sends a complete certificate chain in the TLS handshake.
  • Install BGP windows packages in Windows image for tests (#4652)
  • Allow OS env variables to be consumed (#4657)
  • Upgrade multus chart to v4.0.2-build2023081100 (#4664)
  • Fix bug. Add VXLAN_VNI env var to Calico-node exec (#4671)
  • Update to v1.26.8 (#4684)
  • Bump K3s version for v1.26 (#4702)
    • Added a new --tls-san-security option. This flag defaults to false, but can be set to true to disable automatically adding SANs to the server's TLS certificate to satisfy any hostname requested by a client.
  • Add additional static pod cleanup during cluster reset (#4725)
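
The new --tls-san-security option can be combined with explicit SAN entries in the config file. A sketch of /etc/rancher/rke2/config.yaml (the hostname is a placeholder; confirm the flag is present in your release):

```yaml
# /etc/rancher/rke2/config.yaml
# With tls-san-security enabled, only the listed and default names are
# added to the server certificate; client-requested hostnames are not.
tls-san:
  - rke2.example.com
tls-san-security: true
```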

Release v1.26.7+rke2r1

This release updates Kubernetes to v1.26.7, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.6+rke2r1:

  • Update Calico to v3.26.1 (#4424)
  • Update multus version (#4432)
  • Add log files for felix and calico (#4438)
  • Update K3s for 2023-07 releases (#4448)
  • Bump ingress-nginx charts to v1.7.1 (#4454)
  • Add support for cni none on windows and initial windows-bgp backend (#4460)
  • Updated Calico crd on Canal (#4467)
  • Update to 1.26.7 (#4493)

Release v1.26.6+rke2r1

This release updates Kubernetes to v1.26.6, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.5+rke2r1:

  • Update canal chart (#4343)
  • Bump K3s version for v1.26 (#4358)
  • Update rke2 (#4368)
  • Bump harvester cloud provider 0.2.2 (#4376)
  • Preserve mode when extracting runtime data (#4378)
  • Use our own file copy logic instead of continuity (#4389)

Release v1.26.5+rke2r1

This release updates Kubernetes to v1.26.5, and fixes a number of issues.

Important Note

  1. If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
  2. Many systems have updated their packages with a newer version of container-selinux (> v2.191.0), which is incompatible with our rke2-selinux policy and requires a policy change. We have updated our policy; you will see the rke2-selinux package upgraded from v0.11.1 to v0.12.0.
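
On RPM-based systems, the installed policy package versions can be checked before and after the upgrade; a simple sketch:

```shell
# Show installed SELinux policy package versions (RPM-based distros)
rpm -q container-selinux rke2-selinux
```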

Changes since v1.26.4+rke2r1:

  • Fix drone dispatch step (#4148)
  • Update Cilium to v1.13.2 (#4175)
  • Bump golangci-lint for golang 1.20 compat and fix warnings (#4186)
  • Enable --with-node-id flag (#4190)
  • Backport fixes and bump K3s/containerd/runc versions (#4211)
    • The bundled containerd and runc versions have been bumped to v1.7.1-k3s1/v1.1.7
    • Replace github.com/ghodss/yaml with sigs.k8s.io/yaml
    • Fix hardcoded file mount handling for default audit log filename
  • Update Calico image on Canal (#4218)
  • Move Drone dispatch pipeline (#4205)
  • Upgrade docker/docker package (#4225) (#4234)
  • Bump metrics-server to v0.6.3 (#4245)
  • V1.26.5+rke2r1 (#4260)
  • Bump vsphere csi/cpi and csi snapshot charts (#4272)
  • Bump vsphere csi to remove duplicate CSI deployment. (#4296)

Release v1.26.4+rke2r1

This release updates Kubernetes to v1.26.4, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.3+rke2r1:

  • Adding decision against rc version removal (#3155)
  • Bump to 1.24.12 (#4064)
  • Add skipfiles step to skip drone runs based on files in PR (#3977)
  • Update whereabouts to v0.6.1 (#4080)
  • Automatically add volume mount for audit-log-path dir if set (#4027)
  • Updated Calico chart to add crd missing values (#4044)
  • Clean up static pods on etcd member removal (#4066)
  • Add ADR for security bumps automation (#3570)
  • Make commands for terraform automation and fix upgrade split role tests (#4056)
  • Bump ingress-nginx to 1.6.4 (#4090)
  • Fix wrong dependency name (#4093)
  • Bump k3s and component versions for 2023-04 release (#4096)
  • Update Kubernetes to v1.26.4 (#4115)

Release v1.26.3+rke2r1

This release updates Kubernetes to v1.26.3, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.2+rke2r1:

  • Remove root --debug flag (#3955)
  • Remove unmounts in killall script (#3954)
  • Update Flannel version to v0.21.3 on Canal (#3980)
  • Remove unnecessary bits from testing dockerfile (#3975)
  • Expand SUC upgrade check to check pods as well as nodes (#3938)
  • Don't package empty Windows folder in Linux tar (#3970)
  • Bump K3s (#3990)
  • Improve uninstallation on RHEL based OS (#3919)
  • Update cilium to v1.13.0 (#4003)
  • Bump harvester csi driver to v0.1.16 (#3999)
  • Update stable channel to v1.24.11+rke2r1 (#4010)
  • Bump k3s and containerd (#4015)
  • Add automation for Restart command for Rke2 (#3962)
  • Update 1.26 and Go (#4033)

Release v1.26.2+rke2r1

This release updates Kubernetes to v1.26.2, and fixes a number of issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.1+rke2r1:

  • Remove pod logs as part of killall (#3821)
  • Update channel server (#3853)
  • Bump cilium images (#3802)
  • Update canal chart to v3.25.0-build2023020901 (#3877)
  • Bump wharfie and go-containerregistry (#3863)
  • Update Calico to v3.25.0 (#3887)
  • Updated RKE2 README's header image to point to the new rke2-docs repo (#3727)
  • Bump K3s version (#3897)
    • Fixed an issue where leader-elected controllers for managed etcd did not run on etcd-only nodes
    • RKE2 now functions properly when the cluster CA certificates are signed by an existing root or intermediate CA. You can find a sample script for generating such certificates before RKE2 starts in the K3s repo at contrib/util/certs.sh.
    • RKE2 now supports kubeadm style join tokens. rke2 token create now creates join token secrets, optionally with a limited TTL.
    • RKE2 agents joined with an expired or deleted token stay in the cluster using existing client certificates via the NodeAuthorization admission plugin, unless their Node object is deleted from the cluster.
    • ServiceLB now honors the Service's ExternalTrafficPolicy. When set to Local, the LoadBalancer will only advertise addresses of Nodes with a Pod for the Service, and will not forward traffic to other cluster members. (ServiceLB is still disabled by default)
  • Bump K3s commit (#3905)
  • Add bootstrap token auth handler (#3920)
  • Add support for legacy kubelet logging flags (#3932)
  • Bump helm-controller/klipper-helm (#3936)
    • The embedded helm-controller job image now correctly handles upgrading charts that contain resource types that no longer exist on the target Kubernetes version. This includes properly handling removal of PodSecurityPolicy resources when upgrading from <= v1.24.
  • Add sig-storage snapshot controller and validation webhook (#3944)
  • Add a quick host-path CSI snapshot to the basic CI test (#3946)
  • Update kubernetes to v1.26.2 (#3953)
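
The kubeadm-style join tokens noted above are created on a server node; a hedged sketch, with the --ttl syntax assumed from the upstream K3s token command:

```shell
# Create a join token that expires after 24 hours (run on a server node)
rke2 token create --ttl 24h
# The printed value can then be used as the `token` config key on joining nodes.
```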

Release v1.26.1+rke2r1

This release updates Kubernetes to v1.26.1 to backport registry changes and fix two critical issues.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This key is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token

Changes since v1.26.0+rke2r2:

  • Don't clean up kube-proxy every time agents start (#3737)
  • Add rke2 e2e test run script and adjustments (#3766)
  • Update channels (#3768)
  • Bump containerd to v1.6.15-k3s1 (#3767)
  • Fix typos (#3741)
  • Generate report and upload test results (#3771)
  • Update multus to v3.9.3 and whereabouts to v0.6 (#3789)
  • Bump harvester cloud provider and harvester csi driver (#3781)
  • Bump K3s version for tls-cipher-suites and etcd snapshot conflict fix (#3772)

Release v1.26.0+rke2r2

This release updates containerd to v1.6.14 to resolve an issue where pods would lose their CNI information when containerd was restarted.

Changes since v1.26.0+rke2r1:

  • Bump containerd to v1.6.14-k3s1 (#3746)
    • The embedded containerd version has been bumped to v1.6.14-k3s1. This includes a backported fix for containerd/7843 which caused pods to lose their CNI info when containerd was restarted, which in turn caused the kubelet to recreate the pod.
    • Windows agents now use the k3s fork of containerd, which includes support for registry rewrites.

Release v1.26.0+rke2r1

⚠️ WARNING

This release is affected by https://github.com/containerd/containerd/issues/7843, which causes the kubelet to restart all pods whenever RKE2 is restarted. For this reason, we have removed this RKE2 release from the channel server. Please use v1.26.0+rke2r2 instead.

This release is RKE2's first in the v1.26 line. This release updates Kubernetes to v1.26.0.

Before upgrading from earlier releases, be sure to read the Kubernetes Urgent Upgrade Notes.

Changes since v1.25.4+rke2r1:

  • Bump ingress-nginx (#3703)
  • Fixed cilium chart when enabled hubble images (#3687)
  • Update kubernetes to v1.26.0 (#3599)
  • Bump ingress-nginx to 1.4.1 (#3653)
  • Bump k3s version for v1.25 (#3646)
  • Bump metrics-server tag (#3647)
  • Updated cilium version and added new cilium images (#3642)
  • Fix jenkinsfile typo and clarify support for oracle in tf automation (#3611)
  • Update rke2-calico chart to v3.24.501 (#3620)
  • Update canal version (#3625)
  • Update rke2-multus chart to v3.9-build2022102805 (#3622)
  • Support autodetection interface methods in windows (#3615)
  • Add rke2 standalone install script for windows (#3608)
  • Update tf variable for aws to be more clear (#3609)
  • Add more tests to the windows env (#3594)
  • Fix aws s3 artifact upload issues (#3601)
  • Create upgrade test in tf and refactor to allow running packages separately (#3583)
  • Dualstack e2e test fix and additional netpol test (#3574)
  • Remove old docs (#3584)
  • Switching from gcp gcs to aws s3 buckets (#3563)
  • Take nodeip into account to configure the calico networks (#3530)
  • Refactor windows calico code (#3543)
  • Bump k3s and component versions (#3577)
  • Terminate pods directly via cri instead of waiting for kubelet cleanup (#3567)
  • Utilize jenkins env vars for required cluster creation variables (#3576)
  • Update channels.yaml for november (#3575)
  • Don't try to validate linux cis profile compliance on windows (#3568)