It’s really exciting to see this on the front page. The project actually started during a SUSE Hackweek by my colleague Hussein. It was initially envisioned as a "Kubernetes version of k3d," but it evolved into something more ambitious and eventually became a real product. We’ve always been big believers in the power of open source. For the current default "shared" mode, we even experimented with Virtual Kubelet, another CNCF project, during our development process.
I’ll be hanging around the thread today, so if you have any questions about the history, the tech stack, or where we're headed next, feel free to ask!
Since I do have context: the original Rancher Labs CTO created k3s, one of the earliest heavily stripped-down Kubernetes distributions, which bundles all of the required executables into a single multi-call binary so that Kubernetes can run on hardware as small as a Raspberry Pi. Along the lines of kind, k3d was later released to run k3s in Docker containers instead of full Linux hosts; the main use case is testing. We used it extensively in the early days of Air Force and IC cloud migrations, which insisted on rehosting all systems in Kubernetes, so that developers could have local targets to work with. Rancher eventually rebuilt its Kubernetes engine when Docker fell out of favor and based rke2 on k3s, but with the Kubernetes components running as static pods instead of embedded multi-call binaries, and with kubelet and containerd extracted from an embedded virtual filesystem to the host the first time rke2 runs.
When KubeVirt came out, Rancher also released an HCI product that uses it, Harvester, running on top of rke2 and Rancher's storage project Longhorn. This runs a full virtual machine manager with virtualized networking and storage, much like ESXi, vSAN, and vSphere, with Multus and the bridge CNI plugin providing the networking (it now has Kube-OVN as well).
Harvester relies on being imported into and managed by Rancher for things like SSO, Rancher's multi-cluster RBAC, and the node provisioners that let Harvester run guest clusters. A whole lot of customers migrating off of VMware since the Broadcom acquisition want all of that, but without necessarily running an external Rancher. Early on, Harvester offered an experimental vCluster addon that created a guest cluster with Rancher installed on it, which automatically managed Harvester.
This had a lot of problems. I'm not going to rehash them because I don't want to come across as bashing vCluster, but it was not a tenable long-term option, and it crashed hard on most who tried to use it. Since Rancher already had k3d, it was a pretty natural step to create their own virtualized Kubernetes that runs in Kubernetes by adapting k3d into k3k, which runs k3s in Kubernetes rather than in Docker. Now you can get a guest cluster to install Rancher onto, with the full suite of Rancher features and a much better experience than the bare Harvester UI, without needing to run full VMs.
Why not just install Rancher directly onto the same rke2 cluster that runs Harvester itself? Because that cluster already has one. That embedded Rancher was considered an implementation detail, used by the developers to bootstrap and to avoid duplicating work that was already done, but it was never meant to be exposed to users. If you try to install a second Rancher to actually use, it will conflict with a whole bunch of resources that already exist, and it won't work.
It's a tangled mess of confusing layers, but that's the world we live in. It's why we still have IPv4, VLAN, VXLAN, virtual terminals, and discretionary access control on Linux. We build on top of what is already there instead of rebuilding from scratch in a saner way. This isn't just how software works. It's why city designs rarely make sense. It's why life itself has vestigial anti-features. Cruft rarely disappears. It just gets buried underneath whatever comes next.
K3k, Kubernetes in Kubernetes, is a tool that empowers you to create and manage isolated K3s clusters within your existing Kubernetes environment. It enables efficient multi-tenancy, streamlined experimentation, and robust resource isolation, minimizing infrastructure costs by allowing you to run multiple lightweight Kubernetes clusters on the same physical host. K3k offers both "shared" mode, optimizing resource utilization, and "virtual" mode, providing complete isolation with dedicated K3s server pods. This allows you to access a full Kubernetes experience without the overhead of managing separate physical resources.
K3k integrates seamlessly with Rancher for simplified management of your embedded clusters.
- **Resource Isolation:** Ensure workload isolation and prevent resource contention between teams or applications. K3k allows you to define resource limits and quotas for each embedded cluster, so that one team's workloads won't impact another's performance.
- **Simplified Multi-Tenancy:** Easily create dedicated Kubernetes environments for different users or projects, simplifying access control and management. Provide each team with its own isolated cluster, complete with its own namespaces, RBAC, and resource quotas, without the complexity of managing multiple physical clusters.
- **Lightweight and Fast:** Leverage the lightweight nature of K3s to spin up and tear down clusters quickly, accelerating development and testing cycles. Spin up a new K3k cluster in seconds, test your application in a clean environment, and tear it down just as quickly, streamlining your CI/CD pipeline.
- **Optimized Resource Utilization (Shared Mode):** Maximize your infrastructure investment by running multiple K3s clusters on the same physical host. K3k's shared mode lets clusters efficiently share the underlying resources, reducing overhead and minimizing costs.
- **Complete Isolation (Virtual Mode):** For enhanced security and isolation, K3k's virtual mode provides dedicated K3s server pods for each embedded cluster. This ensures complete separation of workloads and greatly reduces the risk of cross-tenant resource contention or security issues.
- **Rancher Integration:** Simplify the management of your K3k clusters with Rancher. Leverage Rancher's intuitive UI and powerful features to monitor, manage, and scale your embedded clusters with ease.
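As a sketch of that fast create/test/destroy loop, a CI job might look like the following. This is illustrative only: the manifest path and the readiness wait are placeholders, and the kubeconfig filename assumes the `<name>-kubeconfig.yaml` pattern that `k3kcli` prints at creation time.

```shell
#!/bin/sh
# Illustrative CI sketch: create a throwaway K3k cluster, run tests, tear it down.
# Assumes k3kcli is installed and kubectl currently points at the host cluster.
set -eu

k3kcli cluster create ci-smoke
export KUBECONFIG="$PWD/ci-smoke-kubeconfig.yaml"

# Deploy the application under test (the path is a placeholder).
kubectl apply -f ./manifests/
kubectl wait --for=condition=Ready pods --all --timeout=120s

# ... run the test suite against the embedded cluster here ...

# Clean up so the next pipeline run starts from scratch.
k3kcli cluster delete ci-smoke
```

Because the cluster is disposable, a failed run can simply delete it and retry rather than debugging leftover state.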
This section provides instructions on how to install K3k and the k3kcli.
Note: If you do not have a storage provider, you can configure the cluster to use ephemeral or static storage. Please consult the k3kcli advanced usage documentation for instructions on using these options.
Add the K3k Helm repository:
helm repo add k3k https://rancher.github.io/k3k
helm repo update
Install the K3k controller:
helm install --namespace k3k-system --create-namespace k3k k3k/k3k
We recommend using the latest released version when possible.
The k3kcli provides a quick and easy way to create K3k clusters and automatically exposes them via a kubeconfig.
To install it, simply download the latest available version for your architecture from the GitHub Releases page.
For example, you can download the Linux amd64 version with:
wget -qO k3kcli https://github.com/rancher/k3k/releases/download/v1.0.2/k3kcli-linux-amd64 && \
chmod +x k3kcli && \
sudo mv k3kcli /usr/local/bin
You should now be able to run:
$ k3kcli --version
k3kcli version v1.0.2
This section provides examples of how to use the k3kcli to manage your K3k clusters.
K3k operates within the context of your currently configured kubectl context. This means that K3k respects the standard Kubernetes mechanisms for context configuration, including the --kubeconfig flag, the $KUBECONFIG environment variable, and the default $HOME/.kube/config file. Any K3k clusters you create will reside within the Kubernetes cluster that your kubectl is currently pointing to.
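For example, pointing k3kcli at a specific host cluster works the same way it does for kubectl; the kubeconfig path below is purely illustrative:

```shell
# Select the host cluster for this shell session only; any K3k clusters
# created while this is set will live inside the cluster it points at.
export KUBECONFIG="$HOME/.kube/host-cluster.yaml"   # illustrative path

# Verify you are targeting the intended host cluster before creating anything.
kubectl config current-context
```

Unsetting `$KUBECONFIG` (or switching contexts) returns you to your default cluster.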
To create a new K3k cluster, use the following command:
k3kcli cluster create mycluster
[!NOTE] Creating a K3k Cluster on a Rancher-Managed Host Cluster
If your host Kubernetes cluster is managed by Rancher (e.g., your kubeconfig's server address includes a Rancher URL), use the --kubeconfig-server flag when creating your K3k cluster:

k3kcli cluster create --kubeconfig-server <host_node_IP_or_load_balancer_IP> mycluster

This ensures the generated kubeconfig connects to the correct endpoint.
When the K3s server is ready, k3kcli will generate the necessary kubeconfig file and print instructions on how to use it.
Here's an example of the output:
INFO[0000] Creating a new cluster [mycluster]
INFO[0000] Extracting Kubeconfig for [mycluster] cluster
INFO[0000] waiting for cluster to be available..
INFO[0073] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1738746570: notBefore=2025-02-05 09:09:30 +0000 UTC notAfter=2026-02-05 09:10:42 +0000 UTC
INFO[0073] You can start using the cluster with:
export KUBECONFIG=/my/current/directory/mycluster-kubeconfig.yaml
kubectl cluster-info
After exporting the generated kubeconfig, you should be able to reach your Kubernetes cluster:
export KUBECONFIG=/my/current/directory/mycluster-kubeconfig.yaml
kubectl get nodes
kubectl get pods -A
You can also create a K3k cluster by applying a Cluster resource directly in a namespace:
kubectl apply -f - <<EOF
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
  namespace: k3k-mycluster
EOF
and use the k3kcli to retrieve the kubeconfig:
k3kcli kubeconfig generate --namespace k3k-mycluster --name mycluster
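The Cluster resource also accepts spec fields for options such as the shared and virtual modes described above. The sketch below assumes the CRD exposes `mode`, `servers`, and `agents` fields; treat the field names and defaults as assumptions and check the Advanced Usage documentation for the authoritative schema:

```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
  namespace: k3k-mycluster
spec:
  mode: virtual   # "shared" or "virtual"; assumed field name
  servers: 1      # number of K3s server pods; assumed field name
  agents: 2       # number of agent pods; assumed field name
```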
To delete a K3k cluster, use the following command:
k3kcli cluster delete mycluster
For a detailed explanation of the K3k architecture, please refer to the Architecture documentation.
For more in-depth examples and information on advanced K3k usage, including details on shared vs. virtual modes, resource management, and other configuration options, please see the Advanced Usage documentation.
If you're interested in building K3k from source or contributing to the project, please refer to the Development documentation.
Copyright (c) 2014-2025 SUSE
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
It's Kubernetes in Kubernetes, and the name is a reference to k3s, which is also a project we contribute to heavily at SUSE.
The only trade-off is that K3s currently requires privileged mode to operate. We are actively exploring ways to address this limitation and improve security, such as implementing user namespaces or microVMs.
I understood that from the host cluster perspective you won't see the child cluster's pods. What about nodes?
Can you have a host cluster running on host nodes, where the host cluster controls provisioning separate physical nodes that contain the child cluster (API server) plus its workload pods?
But I don’t fully understand what you meant by content-addressed :)
Maybe one has to ensure in the host cluster that the image pull policy is set to Always, or that all image references are pinned by digest (SHA) rather than by tag.
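To make that concrete, here is what a digest-pinned (content-addressed) image reference looks like in a pod spec; the registry, image name, and digest value are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    # Pinning by digest makes the reference content-addressed: the same
    # digest always resolves to the same image bytes, so host and child
    # clusters cannot silently run different images behind one tag.
    image: registry.example.com/app@sha256:0000000000000000000000000000000000000000000000000000000000000000
    # With a mutable tag instead of a digest, imagePullPolicy: Always would
    # force the kubelet to re-check the registry on every pod start.
    imagePullPolicy: IfNotPresent
```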