

How much money are you willing to spend? Resiliency is expensive.


Self-hosting is trivial and everyone can do it.
So is open heart surgery. Unless you want it to end successfully.


Have you forgotten that you too started at 0?
Not at all. In fact I remember the day my server was hacked because I’d left a service running that had a vulnerability in it. I remember changing passwords, calling my bank to ensure there had been no fraudulent charges, etc. I remember “war driving” to find vulnerable WiFi networks. I remember changing default passwords on a service set up by a client of mine.
As I said - it’s not gate-keeping, it’s experience.
Yes, it sometimes can be difficult and frustrating, but so long as someone, anyone, is willing to try and learn and fail and retry, they can get my help.
Teaching is “gate-keeping” apparently. You can’t tell somebody that they need to learn something! You just need to give them a URL and say “run this thing as root and your stuff will work - totally not a scam tho”.


“Has anyone noticed that medical doctors gate-keep people doing open heart surgery?”
Why do you assume self-hosting is, or even can be, trivial? It is NOT for everybody. You should have some base level of technical knowledge. You should expect to need to learn some things. It’s not a badge of honor, it’s experience.
My project focuses on building a tool that makes self-hosting more accessible without sacrificing data ownership.
Good luck with that. Don’t get your users pwned in the process. You’re now responsible for the security of people who think “opening a command line” is too difficult.


You are waaaay over-thinking this…


I’m happy you’re discovering the Linux CLI, but this is pretty ridiculous. mpv, vlc, mplayer, etc. all serve very different purposes than jellyfin.


I don’t.


Clearly you don’t know.


If I wanted to run updates frequently I would run arch lmao. Even if I did apt update every day, debian stable doesn’t get that many updates.
You’re not updating for features; you’re updating for bug and security fixes. That’s why Debian stable doesn’t have many updates. But the ones they do ship are typically important.


That’s… Not how it works… Debian is “stable”, not “secure”. You use Debian so that it’s easier to run updates frequently, since they’re unlikely to break things.


All systems, daily via a single ansible script. That’s apt update, upgrade and reboot if needed (some systems set to only reboot with a separate script so I can handle them separately).
Rarely have any sort of problems.
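
For anyone wanting to set up something similar, here’s a minimal sketch of what such a playbook might look like - not the poster’s actual script. The `allow_reboot` variable is a hypothetical inventory toggle for the hosts you want to reboot separately:

```yaml
# Sketch: daily apt maintenance for all hosts in the inventory.
# "allow_reboot" is an assumed per-host variable, not a built-in.
- hosts: all
  become: true
  tasks:
    - name: Update apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if needed (and allowed for this host)
      ansible.builtin.reboot:
      when: reboot_required.stat.exists and (allow_reboot | default(true))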


Sounds like you bookmarked the whole flippin’ Internet.


Then how do you know that “most streaming services don’t work on Linux?”


Something that can make troubleshooting DNS issues a real pain is that there can be a lot of caching at multiple levels. Each DNS server can do caching, the OS will do caching (nscd), the browsers do caching, etc. Flushing all those caches can be a real nightmare. I recently had nscd causing problems kinda like what you’re seeing. You may or may not have it installed, but if it is, purging it may help.


It’s not resolving; play around with dig a bit to troubleshoot: https://phoenixnap.com/kb/linux-dig-command-examples
I’d start with “dig @your.providers.dns.server your.domain.name” to query the provider servers directly and see if the provider actually responds for your entry.
If so, then it may be that you haven’t properly configured the provider to be authoritative for your domain. Query @8.8.8.8 or one of the root servers. If they don’t resolve it, then they don’t know where to send your query.
If they do, the problem is probably closer to home: either your local network or your Internet provider.
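
The sequence would look something like this - substitute your real domain and your provider’s DNS server for the placeholders:

```shell
# Replace the placeholders with your actual names before running.
dig @your.providers.dns.server your.domain.name A   # does the provider itself answer?
dig @8.8.8.8 your.domain.name A                     # does a public resolver find it?
dig +trace your.domain.name A                       # follow the delegation down from the roots
```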


This is an awful analogy…
squeezing every last drop of resource from tired old hardware
This is such a myth. 99% of the time your hardware is sitting there doing nothing. Even when running “bloated” services.
Nextcloud, for example, uses practically zero CPU and a few tens of MB of RAM when sitting around, yet people avoid it for “bloat”.


I’m not sure how the *arr stuff works, but hard links don’t let you “edit a file while preserving the original” - they let you have multiple paths to the same file.
$ echo "hello" > file1
$ ln file1 file2
$ echo "world" > file2
$ cat file1 file2
world
world
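
To see why both names print “world”: they’re two directory entries pointing at one inode. A quick demo in a temp dir, with an inode check added:

```shell
# Hard links: two names, one inode - writing through either name
# changes the single shared copy of the data.
cd "$(mktemp -d)"
echo "hello" > file1
ln file1 file2            # second name for the same inode
ls -li file1 file2        # the inode number (first column) is identical
echo "world" > file2      # > truncates and rewrites the shared inode's data
cat file1 file2           # both names now read "world"
```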
Does *arr have some sort of copy-on-write behavior? Some modern filesystems have de-duplication and copy-on-write built in, which may save you some space.
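
For example, on filesystems that support reflinks (btrfs, or XFS created with reflink support - an assumption about your setup), GNU `cp --reflink=auto` gives you a copy-on-write copy, and silently falls back to a regular copy everywhere else:

```shell
# Copy-on-write copy where the filesystem supports reflinks;
# --reflink=auto falls back to an ordinary copy otherwise.
cd "$(mktemp -d)"
echo "original data" > movie.mkv      # stand-in for a big media file
cp --reflink=auto movie.mkv copy.mkv  # shares blocks until one side is modified
echo "edited" > copy.mkv              # unlike a hard link, this does NOT touch the original
cat movie.mkv                         # still "original data"
```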
But the point of topic 1 was to simplify. You can keep doing your hardlink stuff but standardize it and simplify setup/configuration. If you always do things in the same way it’s less complicated to keep track of and fix.
You’ve understood the difference between terraform and ansible, and yeah, terraform is probably not going to be as helpful. Ansible would be much more likely to help. It can seem burdensome to have to write configuration files for things at first, but it forces you to do things in a way that is standardized and repeatable.
enough, a lot, more demanding.
You need to give some sort of guidance here.