

I have a very similar NAS I built. The Home Assistant usage doesn’t really even move the needle. I’m running around 50 docker containers and chilling at about 10% cpu.
My AMD iGPU works just fine for Jellyfin. LLMs are a little slow, but that's to be expected.
Not quite what you're asking for, but you can self-host Ollama. And based on some recent lawsuits against Meta, I'm pretty sure all the big companies are training their models on as many books as they can get their hands on, so their training sets likely already contain the books you have in Calibre and more.
Try asking Llama 3.3, or whichever model you choose, your questions.
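If you do self-host it, here's a minimal sketch of asking a question over Ollama's local REST API (assuming the default port 11434 and that you've already pulled the model; the model name and prompt are just examples):

```python
import requests

def ask(question, model="llama3.3"):
    # Query a locally hosted Ollama instance on its default port.
    # stream=False returns the whole answer in one JSON payload.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": question, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask("Recommend three books similar to the ones in my library about sailing."))
```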
Some advice: TrueNAS isn't very newbie friendly. Between permissions and their wonky Kubernetes setup that no containers actually leverage, it's not great. It is free, but expect bumps in the road. Unraid and OpenMediaVault are much easier to use. I switched to Unraid, and it's been amazing; I highly recommend it. It's nice that you can install random-sized drives, they don't need to match. You can toss in a few SSDs for cache, and the Docker containers are super easy to set up and maintain. Jellyfin works just fine, for instance. OMV has some great offerings too, but it lacks the Docker/VM hosting side. It's a NAS and nothing else. It's expected that you have Proxmox or something hosted elsewhere that uses OMV as storage.
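Not what Unraid itself does (it gives you a template UI for this), but as a rough sketch of how little a Jellyfin container actually needs, here's the equivalent with the Docker SDK for Python. The image name is the official one; the port mapping and host paths are just illustrative:

```python
import docker

# Rough sketch: start a Jellyfin container. Paths below are examples;
# point them at wherever your media and config actually live.
client = docker.from_env()
client.containers.run(
    "jellyfin/jellyfin",
    name="jellyfin",
    detach=True,
    ports={"8096/tcp": 8096},  # Jellyfin's default web UI port
    volumes={
        "/mnt/user/media": {"bind": "/media", "mode": "ro"},
        "/mnt/user/appdata/jellyfin": {"bind": "/config", "mode": "rw"},
    },
    restart_policy={"Name": "unless-stopped"},
)
```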
Opinion #2: build your own NAS. Especially if you've already built your own gaming PC, it's pretty straightforward. Pick a low-powered CPU, toss in some RAM, a ton of HDDs, and maybe an old graphics card you have lying around for transcoding or hosting local AI for kicks. You'll get a lot more for your money this way.
Cloud-based. If a product stops working when my internet dies, or I can't access my data without an internet connection or a subscription, I won't buy it.
Home Assistant alone doesn't move the needle. The LLMs hit the iGPU hard, though, and my CPU usage spikes to 70-80% when one is thinking.
But the models I'm running are through Ollama and InvokeAI, each with several different models just for fun.