

2 big things for me.
First is that everyone, and I mean absolutely everyone, has something they want to hide. People assume “I’m not a violent person or a criminal,” except you have done something. A great example: practically everyone in the US speeds. Does that mean you want every officer who pulls you over to know every instance of you speeding? So yes, everyone has something they’d rather not share.
Second is more of an example of why you should be allowed to go places without everyone knowing. About five years ago, police used location data to find a person who broke into someone’s home. The problem is that the location data returned one person who just happened to be on that street around the same time, riding their bike down the road. To the police, that put him at the scene; they had proof, and it was good enough. Except it wasn’t, and he obviously wasn’t the person they were looking for. The location data still put him there and sold him out. So maybe it’s not the best thing for whoever to know exactly where you are at any given time.
As for encryption, ask him for his porn history. If he gets upset, just say “why? It’s not illegal.”
But I agree with the other person. If your dad is like mine and countless others, you’re not fighting against him but against propaganda. If that’s the case, you aren’t going to win this. The only way to win is to turn off the source.







I do self-host my own, and I even tried my hand at building something like this myself. It runs pretty well; I’m able to have it integrate with HomeAssistant and kubectl. It can be done with consumer GPUs: I have a 4000 and it runs fine. You don’t get as much context, but the trick is minimizing what the LLM needs to know while calling agents. You have one LLM context running a todo list, then you start a new one that’s in charge of step 1, which spins off more contexts for each subtask, and so on. It’s not that each agent needs its own GPU, it’s that each agent needs its own context.
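
If it helps to picture the “own context, not own GPU” part, here’s a rough sketch of the pattern, assuming a local OpenAI-compatible server (the base URL, model name, and prompts are placeholders, not my actual setup):

```python
# Minimal sketch: one coordinator context plus one fresh context per subtask,
# all hitting the same local model on the same GPU. Endpoint and model name
# are assumptions for illustration (e.g. llama.cpp / Ollama on localhost).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
MODEL = "llama3"  # whatever model the local server actually exposes

def run_agent(system_prompt: str, task: str) -> str:
    """Each call starts a fresh message history: its own context, same GPU."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

# Coordinator context: only holds the todo list, not every subtask's tokens.
plan = run_agent(
    "You are a planner. Return one short step per line, nothing else.",
    "Turn off the living room lights and check that the kubernetes pods are healthy.",
)

# One fresh worker context per step; only the short results flow back,
# so no single context has to hold the whole conversation.
results = []
for step in filter(None, (line.strip() for line in plan.splitlines())):
    results.append(run_agent("You are a worker. Do exactly this one step.", step))

print("\n".join(results))
```

The point is just that the workers are cheap: they’re new message histories against the same model, so the GPU count stays at one while each agent gets a context that only contains its own step.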