Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.
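If you want to see it for yourself, something like this does the trick (the process name and PID below are just placeholders):

# Containerized processes show up in the host's process list like anything else
ps aux | grep postgres

# The PID's cgroup shows which container (if any) it belongs to; 12345 is just an example
cat /proc/12345/cgroup

# Or browse the whole cgroup tree
systemd-cgls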
Yes, I’ll die on this hill.
But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010’s! These fancy words can’t just mean resource and namespace isolation!
In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
kubernetes
Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.
Those terms do mean something, but they’re a lot simpler than execs claim they are.
…oh shit, the RAM is on fire.
The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.
Burn mothercucker, burn.
(Thanks phone for the spelling mistakes that I’m leaving).
Learning this fact is what got me to finally dockerize my setup
Move over, bud. That’s my hill to die on, too.
Speak English, doctor! But really, is this a fancy way of saying it’s OK to docker all the things?
That I’ve yet to see a containerization engine that actually makes things easier, especially once a service does fail or needs any amount of customization. I’ve got two main services in docker, piped and webodm, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet docker steals more time than maintaining a PKGBUILD would: random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don’t start properly after being updated/restarted by watchtower, and debugging any problem with piped is a chore, as logging in docker is the most random thing imaginable.

With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With docker, it could be a logfile on the host, in the container, or on stdout. Or nothing, because why log at all when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape when using docker. Or rather, in the time you have, you could more easily install it properly(!) on bare metal.)

Also, if you want to use unix sockets to manage permissions more closely and avoid roleplaying a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.
Also, I need to host a python2.7 django 2.x or so webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as it most closely resembles the original environment, and is the largest security risk in my setups, while being a public website. So into qemu it goes.
And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.
Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale or migrate across multiple servers. If you’re only running one server, I definitely see how bare metal is more straightforward.
One main server, with backup servers being very easy to get up and running, either by full-restoring the backup, or installing and restoring specific services. As everything’s backed up to a Hetzner Storage Box, I can always restore it (if I have my USB sticks with the keyfiles).
I don’t really see the need for multiple running hosts, apart from:
- Router
- Workstation, which has a 1070 in it, if I need a GPU for something. My 1U server only has space for a low-profile, single-slot GPU/HPC processor, and one of those would cost way more than the value it would add over my old 1070.
You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and
exec
will get you into the container.

Personally, I have seen the opposite with many services. Take Jitsi Meet for example. Without containers, it’s like 4 different services, with logs and configuration all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with
docker compose logs
, and all config is contained in one directory.

It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.
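For example, something like this (the container name, image name and file paths below are placeholders, not a specific setup):

# Every service's logs in one place
docker compose logs -f

# Get a shell inside a running container to poke around
docker exec -it jitsi-web sh

# For customisation, start a Dockerfile from the image you already use and layer your changes on top
cat > Dockerfile <<'EOF'
FROM some/upstream-image:latest
COPY my-custom.conf /etc/service/custom.conf
EOF
docker build -t my-customised-image .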
especially once a service does fail or needs any amount of customization.
A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - it doesn’t matter how many you run or restart.

As for customisation, if it isn’t exposed via env vars then it can’t be altered.
If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your container build process via a Dockerfile (or equivalent).

It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
It’s using a chisel incorrectly.

Exactly. Therefore, docker is not useful for those purposes for me, as using Arch packages (or similar) is the easier way to fulfill my needs.
My NAS will stay on bare metal forever. Any complications there are something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, and I want them fully automated, and I want that inside any containers. Having Nixos build and launch containers with systemd-nspawn solves some of it. The actual docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. Will probably migrate to small VMs per-service once I get new hardware up and running.
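For context, the imperative version of what that systemd-nspawn setup boils down to is roughly this (the path and name are just examples; NixOS wires the same thing up declaratively):

# Put a minimal Debian rootfs where machinectl/systemd-nspawn expect it
sudo debootstrap stable /var/lib/machines/mysvc

# Shell into it as a throwaway container...
sudo systemd-nspawn -D /var/lib/machines/mysvc

# ...or boot it as a long-running machine
sudo systemd-nspawn -D /var/lib/machines/mysvc --boot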
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
A NAS as bare metal makes sense.
It can then interact correctly with the raw disks.

You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.
I started hosting stuff before containers were common, so I got used to doing it the old fashioned way and making sure everything played nice with each other.
Beyond that, it’s mostly that I’m not very used to containers.
I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
I like to tinker with new things, and with bare metal installs this has a way of adding cruft to servers and slowly getting the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids the problem completely. It also makes OS and hardware upgrades less likely to break stuff.

These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
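As a rough illustration of what such a template ends up looking like (the ports, paths and service name here are placeholders, not the actual template):

# compose.yaml for a dedicated game server, written from a template
cat > compose.yaml <<'EOF'
services:
  gameserver:
    build: .                      # Dockerfile template with the AppId baked in
    ports:
      - "2456-2458:2456-2458/udp" # swap in whatever ports the game needs
    volumes:
      - ./saves:/data             # persistent save data lives outside the container
EOF

docker compose up -d --build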
Yes containers have made everything so easy.
Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?
Considering I have a full backup, all services are Arch packages and all important data is on its own drive, I’m not concerned about anything
I’ve always done things bare metal since starting the selfhosting stuff before containers were common. I’ve recently switched to NixOS on my server, which also solves the dependency hell issue that containers are supposed to solve.
All my services run on bare metal because it’s easy, and the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container… contained and running. Plus a VERY tiny system can run:
- Peertube
- GoToSocial + client
- RSS
- search engine
- A number of custom sites
- backups
- Matrix server/client
- and a whole lot more
Without a single docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and it has been working great. I used to over-complicate everything with docker + docker compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.
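The dd step is nothing fancy, roughly this (device and destination are just examples):

# Image the whole system drive to an external disk
# (ideally from a live USB, or with services paused, so the filesystem isn't changing underneath)
sudo dd if=/dev/sda of=/mnt/backup/server.img bs=4M status=progress conv=fsync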
I use docker, kub, etc… etc… all at work. And it’s great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.
Assuming you run Synapse, which uses more than 1.5GB of RAM just idling, your system has at the very least 16GB of RAM… Hardly what I’d call “very tiny”.
Do you use any tools for management, such as Ansible or similar?
Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so, as it’s caching stuff like it’s supposed to.
Hahaha, that’s funny. I hope you’re not serious.
I use a Raspberry Pi 4 with a 16GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers for every service I want to use.
So, are you running 15 services on the Pi 4 without containers?
I see. Are you the only user?
Databases on SD cards are a nightmare for SD card lifetimes. I would really recommend getting at least a USB SSD stick instead if you want to keep it compact.
Your SD card will die suddenly someday in the near future otherwise.
Thank you for your advice. I do use an external hard drive for my data.
TrueNAS is on bare metal, as I have a dedicated NAS machine that isn’t doing anything else, and it’s also not recommended to virtualize it. Not sure if that counts.
Same for the firewall (OPNsense), since it’s its own machine.
Have you tried running containers on Truenas?
For me it’s lack of understanding, usually. I haven’t sat down and really learned what docker is/does. And when I tried to use it once, I ended up with errors (thankfully they all seemed contained within the docker container), but I just haven’t gotten around to looking into it more than seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.
I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.
I guess I just haven’t been forced to see the upsides yet. But I’m always wanting to learn.
containerisation is to applications as virtual machines are to hardware.
VMs share the same CPU, memory, and storage on the same host.
Containers share the same binaries in an OS.

When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\)
Not so much a fake one; rather, the actual directory is overlaid with the specific files needed for that container.
Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
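The mechanism underneath is an overlay filesystem, which you can play with directly (the directory names here are just examples):

# Merge a read-only lower directory with a writable upper layer
mkdir -p lower upper work merged
sudo mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
# "merged" now shows lower's files with upper's files layered on top,
# which is roughly how container image layers are assembled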
So let’s say I theoretically wanted to move a docker container to another device, or maybe I were re-installing an OS or moving to another distro: could I, in theory, drag my local docker container to an external drive, throw my device in a lake, and pull that container off onto the new device? If so… what then? Do I link the startups, or is there a “docker config” where they can all be linked and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?
For ease of moving containers between hosts, I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using Wordpress as an example, this would be your starting point:
https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml

All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line
volumes: - db_data:/var/lib/mysql
As the compose file will also be in /home/user/Wordpress/, you can drop the common path.
That way, if you wanted to change hosts, you just copy the /home/user/Wordpress folder to the new server, run docker compose up -d, and boom, your server is up. No need to faff about.
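In practice the move can be as short as this (hostname and paths are just examples):

# Copy the project directory (compose file + persistent data) to the new host
rsync -a /home/user/Wordpress/ newserver:/home/user/Wordpress/

# Then, on the new host:
cd /home/user/Wordpress && docker compose up -d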
Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
“Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.”
So that’s really why they should be good for Jellyfin/file servers, as the data doesn’t need to be stored in the container, just the runtime files. I suppose the config files as well.
When I reverse proxy to my network using wireguard (set up on the jellyfin server, I also think I have a rustdesk server on there) on the other hand, is it worth using a container, or is that just the same either way?
I have shoved way too many things on an old laptop, but I never really have to touch it, and the latest update Mint put out actually cured any issues I had. I used to have to reboot once a week or so to get everything back online when it came to my Pihole and shit. Since the latest update I ran on September 4th, I haven’t touched it for anything. The screen just stays closed in a corner of my desk with other shit stacked on top.
@kiol I mean, I use both. If something has a Debian package and is well-maintained, I’ll happily use that. For example, prosody is packaged nicely, so there’s no need for a container there. I also don’t want to upgrade to the latest version all the time. Or Dovecot, which just had a nasty cache bug in the latest version that allows people to view other people’s mailboxes. Since I’m still on Debian 12 on my mail server, I remain unaffected and I can let the bugs be shaken out before I upgrade.
@kiol On the other hand, for doing builds (debian packages and random other stuff), I’ll use podman containers. I’ve got a self-built build environment that I trust (debootstrap’d), and it’s pretty simple to create a new build env container for some package, and wipe it when it gets too messy over time and create a new one. And for building larger packages I’ve got ccache, which doesn’t get wiped by each different build; I’ve got multiple chromium build containers w/ ccache, llvm build env, etc
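A rough sketch of that workflow (the suite, paths and names are illustrative, not my exact setup):

# Build a trusted rootfs with debootstrap and import it as a podman image
sudo debootstrap bookworm ./buildroot
sudo tar -C ./buildroot -c . | podman import - my-build-env

# Run builds in a throwaway container, with ccache persisted in a named volume
podman run --rm -it -v "$PWD:/src" -v ccache:/root/.ccache my-build-env bash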
@kiol And then there’s the stuff that’s not packaged in Debian, like navidrome. I use a container for that for simplicity, and because if it breaks it’s not a big deal - temporary downtime of email is bad, temporary downtime of my streaming flac server means I just re-listen to the stuff that my subsonic clients have cached locally.
@kiol Syncthing? Restic? All packaged nicely in Debian, no need for containers. I do use Ansible (rather than backups) for ensuring if a drive dies, I can reproduce the configuration. That’s still very much a work-in-progress though, as there’s stuff I set up before I started using Ansible…
I’m doing this on a couple of machines. Only running NFS, Plex (looking at a Jellyfin migration soon), Home Assistant, LibreNMS and some really small other stuff. Not using VMs or LXC due to low-end hardware (pi and older tiny pc). Not using containers due to lack of experience with it and a little discomfort with the central daemon model of Docker, running containers built by people I don’t know.
The migration path I’m working on for myself is changing to Podman quadlets for rootless, more isolation between containers, and the benefits of management and updates via Systemd. So far my testing for that migration has been slow due to other projects. I’ll probably get it rolling on Debian 13 soon.
Erm. I’d just say there’s no benefit in adding layers just for the sake of it.
It’s just different needs. Say I have a machine that I run a dedicated database on; I’d install it just like that because, as said, there’s no advantage in making it more complicated.
I use k3s and enjoy benefits like the following over bare metal:
- Configuration as code where my whole setup is version controlled in git
- Containers and avoiding dependency hell
- Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
- Declarative network policies with Calico, mainly to make sure nothing phones home (see the sketch after this list)
- Managing secrets securely in git with Bitnami Sealed Secrets
- Liveness probes that automatically “turn it off and on again” when something goes wrong
These are just some of the benefits just for one server. Add more and the benefits increase.
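On the network-policy point, a default-deny egress policy is only a few lines. This is a minimal sketch using the stock Kubernetes NetworkPolicy API (which Calico enforces); the namespace and name are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: media
spec:
  podSelector: {}   # applies to every pod in the namespace
  policyTypes:
    - Egress        # with no egress rules listed, all outbound traffic is blocked
EOF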
Edit:
Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬