There was a discussion on containerd's GitHub about removing the dependency on the pause image, but it was closed as won't-fix: https://github.com/containerd/containerd/issues/10505
Also, if you are using kubeadm to create your cluster, beware that kubeadm may be pre-pulling a different pause image if it does not match your containerd configuration: https://github.com/kubernetes/kubeadm/issues/2020
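You can check what kubeadm is going to pre-pull (output abbreviated; the version shown here is illustrative):

$ kubeadm config images list
...
registry.k8s.io/pause:3.10

If that pause tag doesn't match containerd's sandbox_image, you can end up with two different pause images on the node.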
Instead of just swapping out the registry, try baking it into your machine image.
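Something like this in your image-build step should do it (a rough sketch; YOUR_REGISTRY is a placeholder as in the article, and you'd run this from e.g. a Packer provisioner):

# pre-pull the pause image into containerd's k8s.io namespace at build time
$ sudo ctr --namespace k8s.io images pull YOUR_REGISTRY/pause:3.10

Then the node never has to fetch it at pod-creation time.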
EDIT: I loaded the page from a cloud box, and wow, I'm getting MITMed! Seems to only be for this site, wonder if it's some kind of sensitivity to the .family TLD.
-----BEGIN CERTIFICATE-----
MIIFAjCCA+qgAwIBAgISBZR6PR4jNhx4fBFvqKwzJWx4MA0GCSqGSIb3DQEBCwUA
MDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQwwCgYDVQQD
EwNSMTMwHhcNMjUwOTE4MTM1OTEwWhcNMjUxMjE3MTM1OTA5WjAeMRwwGgYDVQQD
ExNreWxlLmNhc2NhZGUuZmFtaWx5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA55JknkVzyq5QGaRXn2TAzaOGYTHUVxl89lGOFgEEaWEvH5pcZL7xkqfv
Edee7l5MeRKuK1zJ+ISPQQaEjGTk51y1aXXfOKs62NiNy6QQUbzQ+euecqrKsJVN
l3PC3EYlEGibKI1gZ2x/ht8WJU9o4KiswCLqHrY7nC7BeEByv/ehiYyRTTxAXJsr
2X4LgPX6MQ1Iu10S2Bp9jnOlEV7n4RCTPFeWtfQ0CdXH45ykuwL/zrTaD111oNQE
BQPNq7Ig7OihLZcJQo8TMJ3FUgzDI9z6kMy7QHNR1I8uODVUohQCO6E7A29x8nRJ
UBV5DN1as3aHYFJ4FbX9s2tuLwCTiwIDAQABo4ICIzCCAh8wDgYDVR0PAQH/BAQD
AgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAA
MB0GA1UdDgQWBBTXwJ21Mudr9rplbA970jxJk44pEDAfBgNVHSMEGDAWgBTnq58P
LDOgU9NeT3jIsoQOO9aSMzAzBggrBgEFBQcBAQQnMCUwIwYIKwYBBQUHMAKGF2h0
dHA6Ly9yMTMuaS5sZW5jci5vcmcvMB4GA1UdEQQXMBWCE2t5bGUuY2FzY2FkZS5m
YW1pbHkwEwYDVR0gBAwwCjAIBgZngQwBAgEwLwYDVR0fBCgwJjAkoCKgIIYeaHR0
cDovL3IxMy5jLmxlbmNyLm9yZy8xMjEuY3JsMIIBAwYKKwYBBAHWeQIEAgSB9ASB
8QDvAHYApELFBklgYVSPD9TqnPt6LSZFTYepfy/fRVn2J086hFQAAAGZXVTEhwAA
BAMARzBFAiAVfJZ/XSqNq0sdf49o/8Mhs1uG9H/iPAHynYubtxfw4wIhAPiDa5S5
DoawcZlWePa+uKZRiIaZwlVVOigiZEfm+75VAHUAzPsPaoVxCWX+lZtTzumyfCLp
hVwNl422qX5UwP5MDbAAAAGZXVTEmAAABAMARjBEAiAJTtUg1SkZlRsuvXiWbeon
ehJiRiOvQBBjCrDhPk+EmAIgRy7+96Uq7sFF2iQqlDbBJTbfxqVxsLAKKsv/4mUQ
76gwDQYJKoZIhvcNAQELBQADggEBADwJpGkcEI9YQJBcBqJ1k4lkUBI4zdhjYYuv
Z2kbXFRkl041ulyel48qZZW20T9myTL4dI/2kqLP4VSrz+vk4xTzyXtTCJZHDeke
dzoQ7lplxTfZRwDVx19PkJIFPIShHyS/Ia0XTqHC8F81PmwXULRAgMYrBS3sKLXg
aIyf00xq7W6s0uPd0XDn5CsmJgHzEcBZ0F423V42iedwgGNv6GnlgzKP3Q8fkf21
4KdRYBgyYBfi33jQFf5fuMuSTtFak++BYe/ZWVAoehlw0gLh5BBmBXtCFrVFZc+q
uXXe4q5MVQmDRa0A+QtKbwkyZxIiwJ8Xi+eBTKQSscpdINy5bUs=
-----END CERTIFICATE-----
> This should be part of the containerd distribution
containerd is not the only CRI runtime out there.
The Nomad team made this configurable afterwards.
I am way more comfortable managing a system running k3s than one still using tmux sessions that get wiped every reboot.
Well... it's what I would have said until Bitnami pulled the rug and pretty much ruined the entire ecosystem. Now you don't have a way to pull something you know is trusted, with similar configuration and all, from a single repository, which makes deployments a pain in the ass.
However, on the plus side, I've just been creating my own every time I need one with the help of Claude, using Bitnami as a reference, and honestly it doesn't take that much more time. Keeping them up to date is relatively easy as well with CI automations.
Very easy, reliable.
Without k3s I would have used Docker, but k3s really adds important features: easier-to-manage networking, more declarative configuration, bundled Traefik...
So, I'm convinced that quite a few people can happily and efficiently use k8s.
In the past I used another k8s distro (Harvester), which was much more complicated to use and fragile to maintain.
Right, that’s the point. A user of the CRI should not have to care about this implementation detail.
> containerd is not the only CRI runtime out there.
Any CRI that needs a pause executable should come with one.
You can also set up a separate service to "push" images directly to your container runtime; someone even demoed one in a Show HN post some time ago, I think.
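If you just want the effect without a whole service, something like this works (a sketch; assumes SSH access to the node and containerd as the runtime):

$ docker save YOUR_REGISTRY/pause:3.10 | ssh node 'sudo ctr -n k8s.io images import -'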
I knew Bitnami was trouble when I saw their paid-tier prices. Relevant article: https://devoriales.com/post/402/from-free-to-fee-how-broadco...
Oh, and it's owned by Broadcom.
A lot of security is posturing and posing to legally cover your ass by following an almost arbitrary set of regulations. In practice, most end up running the same code as the rest of us anyway. People need to get stuff done.
What about tmux-resurrect[1]? It can even resurrect programs running inside of it, so it feels like it could reduce complexity from something like k3s back to tmux. What are your thoughts on it?
[1]:https://github.com/tmux-plugins/tmux-resurrect?tab=readme-ov...
Anything else, and most companies aren't web-scale enough to set up their full Kubernetes clusters with failover regions from scratch.
And because they are "immutable", I found it significantly more complicated to use with no tangible benefits. I do not want to learn and deal with declarative machine configs, or learn how to create custom images with GPU drivers...
Quite a few things that I get done on Ubuntu / Debian in under 60 seconds take me half an hour to figure out with Talos.
It sounds like an immutable kubernetes distro doesn't solve any problems for you.
I haven't used the tool itself, so I am curious, since I was thinking about a similar workflow some time ago.
Now please answer the questions above. But even if I assume you are right about tmux-resurrect, there are other ways of doing the same thing.
https://www.baeldung.com/linux/process-save-restore
This mentions either CRIU, if you want a process to persist after a shutdown, or the shutdown utility's flags if you want to do it temporarily.
I have played around with CRIU and Docker; Docker can even use CRIU with things like docker checkpoint, and I have played with that as well (I used it to shut down mid-compression of a large file and resume compression exactly from where I left off).
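For anyone curious, it's roughly this (experimental Docker feature: needs CRIU installed and the daemon running in experimental mode; container and checkpoint names are made up):

$ docker checkpoint create mycontainer cp1 # freeze process state to disk
$ docker start --checkpoint cp1 mycontainer # resume exactly where it left off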
What are your thoughts on using CRIU + Docker or CRIU + tmux? I think that by itself might be easier than k3s for your workflow.
Plus, I have seen some people mention VPSes where they have been running processes for 300 days or even more without a single shutdown, IIRC, and I feel like modern VPS providers are insanely good at uptime, sometimes even more so than cloud providers.
My team’s service implements a number of performance and functionality improvements on top of your typical registry to support the company’s needs.
I can’t say much more than that sadly.
Even using tmux-resurrect on my personal machine, I've had it fail to resurrect anything.
Again: lack of documentation and lossy tmux-resurrect state is not what I want to go through when working in unfamiliar environments.
Why are you getting downvoted?
docker compose also has issues, but at least it is well defined. Again, if you are managing 10+ machines, Docker becomes a challenge to maintain, especially when you have 4 to 5 clusters. Once you are familiar with Kubernetes, there's virtually no difference between Docker, tmux, or raw k8s, although I heavily recommend k3s due to its ability to maintain itself.
Publish Date: November 3, 2025
I don’t normally write blog posts that regurgitate information from normal documentation, but this particular subject irks me.
If you are running an internal Kubernetes (k8s) platform, you owe it to yourself to make sure there is nothing external to your platform determining your reliability.
You could ask yourself: How many internet dependencies do you have to start a pod? Should be zero, right???
If you use stock k8s, you might be surprised to know that each of your k8s nodes is actually reaching out to registry.k8s.io on first pod creation to get the pause image:
$ sudo crictl images
IMAGE                   TAG   IMAGE ID        SIZE
registry.k8s.io/pause   3.9   e6f1816883972
If you want to change that, you can update your containerd (1.x) toml:
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "YOUR_REGISTRY/pause:3.10"
And depend on one less thing. The rest of the blog post will go deeper into why this is the case.
The pause image is the container image that backs the k8s “sandbox” of a pod. This pause container is designed to hold the pod’s Linux namespaces. The pause container also used to reap zombie processes from the other containers in a pod, as its duty as PID 1, but that hasn’t been the default behavior since k8s 1.8.
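You can see this for yourself on a node: there is one pause process per pod, and it holds the pod’s namespaces (a quick illustration; exact output varies by distro and runtime):

$ ps -C pause -o pid,cmd
$ sudo lsns --type net | grep pause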
The sandbox of a pod is part of the CRI spec. The CRI spec is a generic way for k8s to talk pods (and sandboxes) that is not specific to any particular container runtime (like containerd). Any container runtime that implements the CRI spec can, in theory, run k8s pods.
This means that the pause image has more to do with CRI than it does with k8s.
When a CRI-enabled container runtime needs to create a sandbox, at least in the case of containerd, it does this by creating a real container.
The image containerd is configured to use (by default) to create that sandbox is the pause image. You can see this in code here.
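Every pod sandbox the CRI reports is backed by one of these real containers, which you can list directly on a containerd node:

$ sudo crictl pods
$ sudo ctr -n k8s.io containers ls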
Per the current docs, you can override the containerd sandbox image with a containerd configuration like this (assuming you have mirrored the image to a local registry):
(containerd 1.x)
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "YOUR_REGISTRY/pause:3.10"
(containerd 2.x)
version = 3
[plugins]
[plugins.'io.containerd.cri.v1.images']
...
[plugins.'io.containerd.cri.v1.images'.pinned_images]
sandbox = 'YOUR_REGISTRY/pause:3.10'
Don’t take my word for it here; this particular setting has changed over time, so check the official docs.
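One quick sanity check after editing the config (the exact key name differs between containerd 1.x and 2.x):

$ sudo systemctl restart containerd
$ sudo crictl info | grep -i sandbox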
If you go to registry.k8s.io you will see:
Please note that there is NO uptime SLA as this is a free, volunteer managed service. We will however do our best to respond to issues and the system is designed to be reliable and low-maintenance. If you need higher uptime guarantees please consider mirroring images to a location you control.
So yea, this is your PSA. Please mirror like they recommend and reconfigure as needed to not depend on the internet.
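Mirroring the pause image itself is a one-liner with a tool like crane (from go-containerregistry; YOUR_REGISTRY as above):

$ crane copy registry.k8s.io/pause:3.10 YOUR_REGISTRY/pause:3.10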