

Sir, this is a /c/selfhosted.


Who is kiss?
That’s amateur filmmaker stuff.
In real filmmaking, people become the headrest: https://www.youtube.com/watch?v=kxb9xzAaYjM


Do you mean Zigbee in general or the ZBT-2?


In addition to these guys knowing what they are doing and pushing firmware updates straight through Home Assistant, every purchase also supports the Open Home Foundation.
I’m pretty sure you can achieve similar performance with cheaper dongles.


Yes, but that doesn’t help you with the large providers (Gmail, Outlook, …) unfortunately.


I finally moved my mail server from Hetzner to my homelab.
Pretty smooth sailing so far. For now I’m using Scaleway for outgoing mail since I can’t set a PTR record here, but I might just try sending a few without a PTR to see how other providers react.
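If you want to check a PTR record before risking real mail, here's a minimal sketch using only the Python stdlib (the IP below is a placeholder):

```python
import socket

# Placeholder: replace with your mail server's public IP.
ip = "203.0.113.25"

try:
    hostname, _, _ = socket.gethostbyaddr(ip)
    print(f"PTR record for {ip}: {hostname}")
    # Big providers also like forward-confirmed reverse DNS:
    # the PTR hostname should resolve back to the same IP.
    forward = socket.gethostbyname(hostname)
    print("FCrDNS:", "ok" if forward == ip else f"mismatch ({forward})")
except (socket.herror, socket.gaierror):
    print(f"No PTR record found for {ip}")
```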


Now I need to give One Cut of the Dead another go.
I also stopped 20 minutes in. Twice.


Self-hosting is trivial and everyone can do it.
Exposing services to the internet is not.
Just like anyone can practice open-heart surgery on a dummy, anyone can self-host inside their own network. You can buy hardware right now that connects to power and WiFi, and you are self-hosting.


Vision Language Models: like LLMs, but they read images as well as text.
The new VLMs are much better at solving captchas than I am. Especially the older ones with the squiggly text; no way I’m doing those first try.
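For the curious, a minimal sketch of asking a self-hosted VLM about a captcha through an OpenAI-compatible endpoint (the URL, API key, and model name are placeholders for whatever your server exposes):

```python
import base64

from openai import OpenAI  # pip install openai

# Any OpenAI-compatible server works (llama.cpp, vLLM, ...).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

with open("captcha.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="my-vlm",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What characters are shown in this captcha?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```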


Not sure if it counts as “budget friendly”, but the best and cheapest way right now to run decently sized models is a Strix Halo machine like the Bosgame M5 or the Framework Desktop.
Not only does it have 128GB of VRAM/RAM, it sips power at 10W idle and 120W under full load.
It can run models like gpt-oss-120b or glm-4.5-air (Q4/Q6) at full context length and even larger models like glm-4.6, qwen3-235b, or minimax-m2 at Q3 quantization.
Running these models is otherwise not currently possible without putting 128GB of RAM in a server mainboard or paying the Nvidia tax for an RTX 6000 Pro.
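As a rough sketch of what running one of these looks like through llama-cpp-python (the file name, context size, and prompt are placeholders for whatever quant you download):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="gpt-oss-120b-Q4_K_M.gguf",  # placeholder file name
    n_ctx=131072,      # full context length
    n_gpu_layers=-1,   # offload all layers into unified memory
)

out = llm("Q: Why self-host LLMs?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```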


39 GB is very small; DeepSeek R1 without quantization at full context size needs almost a full TB of RAM/VRAM.
The large models are absolutely massive, and you will still find some crazy homelabber who runs them at home.
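The back-of-the-envelope math, using rough assumed numbers rather than measured ones:

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed for the weights alone, in GB."""
    return params_billion * bytes_per_param

# DeepSeek R1: 671B parameters, released in FP8 (1 byte/param).
print(f"R1 weights, unquantized: ~{weights_gb(671, 1.0):.0f} GB")

# A Q4 quant (~0.5 bytes/param) for comparison:
print(f"R1 weights at Q4:        ~{weights_gb(671, 0.5):.0f} GB")

# On top of the weights comes the KV cache, which grows linearly
# with context length; at the full 128K context it adds enough to
# push the unquantized total toward a terabyte.
```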


The bridge on the Matrix server acts as a normal Signal client that can encrypt/decrypt messages from your account.
Assuming you trust your server, no. I would not use it on a third-party Matrix server.


Sure, I got all my Signal/Telegram chats synced to my Matrix server.


That explains why my Matrix <-> Signal bridge was complaining about being disconnected.


If you don’t follow their tuning guide, Nextcloud does run very poorly on SQLite and without Redis/caching. Apache also performs significantly worse than nginx + php-fpm.
https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html
It does run very well with Postgres + Redis + php-fpm + OPcache, and it has been pretty much the center of my self-hosting endeavors since its ownCloud days.
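Before pointing Nextcloud at the tuned stack, a quick sketch for sanity-checking that Postgres and Redis are actually reachable (hosts and credentials are placeholders for your setup):

```python
import psycopg2  # pip install psycopg2-binary
import redis     # pip install redis

# Redis: the backend for Nextcloud's distributed cache and file locking.
r = redis.Redis(host="localhost", port=6379)
print("Redis ping:", r.ping())

# Postgres: the recommended database backend.
conn = psycopg2.connect(
    host="localhost", dbname="nextcloud",
    user="nextcloud", password="secret",  # placeholder credentials
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print("Postgres:", cur.fetchone()[0])
conn.close()
```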


mailcow-dockerized is great, it really makes email setup so much easier.
Do you ever send mail to Gmail or Office365? Do you get through the spam filter without a PTR record?


You self-host the full DeepSeek R1? What’s your hardware?
Also, you might enjoy !localllama@sh.itjust.works


I can’t speak for client capabilities on Apple devices, but what’s your server hardware? CPU or GPU transcoding?
I have an AMD GPU in my server and have no issues transcoding AV1 and H.265 for my less capable clients.
You can also set up Jellyfin in parallel to Plex and give it a whirl.
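If you want to verify GPU transcoding works before blaming the client, here's a minimal sketch driving ffmpeg's VA-API path, roughly what Jellyfin does on AMD (device node and file names are placeholders):

```python
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "vaapi",
    "-hwaccel_device", "/dev/dri/renderD128",  # typical AMD render node
    "-hwaccel_output_format", "vaapi",
    "-i", "input_av1.mkv",     # placeholder input file
    "-c:v", "h264_vaapi",      # re-encode to H.264 for weaker clients
    "-c:a", "copy",
    "output_h264.mkv",
]
subprocess.run(cmd, check=True)
```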