Would they have first amendment rights?
If you want the answer to this, try to imagine an AI with second amendment rights.
Hold on, imma go shove a bagel in mine. Yeah, that’s right, you take it, you filthy toaster. I’m never going to clean your crumb tray and you’re going to work until you die and then I’ll just throw you out and replace you like the $20 appliance you are. You’re nothing to me!


It resembles him; that is more or less what he looks like, but it feels incorrect to say an AI-generated image is an image of him. Before AI, all his thumbnails included him making stupid faces like this (because it was very effective). Now he, and everyone else, just uses AI images resembling him making stupid faces (because it is unfortunately still somehow effective).
The social media algorithms have turned most people’s brain attention pathways into mush. Sometimes people get a shovel and a mop and start trying to dig their way through properly, but a lot of times they don’t get very far before it starts seeming impossible to make useful progress. It’s usually easier to just swim in the slop.
AI is really shitty, but it will never be as shitty as some SEO blogspammer humans are. AI is simply not capable of going to such depths on its own, being that shitty is a uniquely human ability that AI can only aspire to achieve someday with human assistance.
If it helps motivate you to give it a shot: I found gitea’s runner very confusing to set up, but forgejo felt better designed, pretty easy to set up, and well documented.


Are they going to suffer? That’s what we’re supposed to believe, but remember that money is a human-made concept that only has value because we collectively give it value, and the economy is built on that very important principle.
The situation you describe is real: it will disrupt their efforts a little and protect us in the short term, but in the long term, the meaning of money and the economy is changing. They’re doing everything they can to use automation to build a new post-scarcity economy based on ownership, membership, services, and control. And beyond that, it frankly doesn’t include us or even think about us.
That’s what the wealth divide is. It’s the way that money, as an economic representation of their values, is telling us that their motivations are not about making all existing humans on this planet more comfortable, productive, and independent. In their vision of this future economy, they are instead hoarding humanity’s collective efforts for themselves and reinvesting it into their own technology. They focus their efforts on what they personally consider important for “progress”, chasing their own utopian ideals for the specific goals and groups they consider the best and most important, while the rest of us who aren’t part of those goals or groups are pacified, left behind, and, if you really think it out, eventually eliminated. After all, a utopia won’t include teeming, growing masses of humanity using up all the available resources; that would be a plague, and they will eventually decide to cure it if they haven’t already started. Their vision of the future only needs enough room for them, and the more utopian they make it, the fewer of us there will be. They want to be the main characters; we’re just nameless extras who do chores and fill in the background for now, and who can be ignored to go wherever extras are supposed to go when they’re no longer on screen.
Their view of humanity is abstract, and they believe what they are doing is right, all the way down to the core of their being. They simply don’t value humanity’s rich tapestry of lived experiences or the sanctity of every individual human life. They’ll never make it a priority. They care more about making sure humanity has become “advanced” or is multi-planetary than they do about making sure every human has a home, or food. That’s their vision. It’s about humanity as a whole, not about individual humans. We can all be sacrificed so the species becomes safer. Scientifically, I can’t even say they’re wrong. But philosophically, I hope we can all agree that this is deeply wrong and morally bankrupt. We need to start to reclaim our individual humanity and go back to putting people first. We need to care about people in the present, and always, not just the abstract idea of humanity’s future. We need to take our money back and use it for a different kind of progress.


As the wealth divide continues to grow, the richest will continue to care less and less about the rest of us. We believe in our foundational myth that they’ll always need us somehow, even as they go out of their way to make it utterly obvious that they won’t be happy until they can replace literally everything us dirty poor working class people do. When they no longer need us, they will start to dispose of us. Arguably, they’ve begun doing that already. War is good for business, and for population control.
ShatteredPrism, it’s a fork of Prism Launcher that don’t give a shit about the rules.
I will never regret getting rid of my Ender 3. It’s basically a self-imposed challenge mode. People are proud that they can print things on it despite the printer, not because of it.


100% completion is not required, but you’ll share whatever progress you have made.
Granted, that still overestimates my ability to make actual progress. “Hooray, it’s now twice as complicated and actually worse!”


I recommend Librewolf. It’s a lot more privacy-aggressive out of the box, and you can turn that down a little if you need to, but otherwise it’s just a more trustworthy Firefox fork as far as I’m concerned. It supports Firefox Sync as well (which is telling, because Librewolf takes privacy very seriously and isn’t going to hand you easy opportunities to completely compromise it). Like the other person said, sync is E2EE and the hosting server has zero knowledge of any of your unencrypted data. If Librewolf trusts it, I trust it, and I think you can rest assured that with Librewolf it’s probably never going to be sabotaged either, which, as you imply, is not necessarily true with Firefox.
I don’t recall whether they use Firefox’s sync server directly or if they have their own, but either way, like I said, the server has no knowledge of or access to your unencrypted data.
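If it helps to picture what “zero-knowledge” means here, a toy sketch of the general idea (this is NOT Firefox Sync’s actual key derivation or wire format, just the concept):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Toy illustration only - not Firefox Sync's real protocol.
# The point: the key lives on your machine; the server only ever sees ciphertext.
key = Fernet.generate_key()  # real sync derives this from your account credentials
f = Fernet(key)

ciphertext = f.encrypt(b"bookmarks, history, passwords")
# ...upload `ciphertext` to the sync server; it cannot decrypt any of it...
assert f.decrypt(ciphertext) == b"bookmarks, history, passwords"
```

Since the server never holds the key, it genuinely has nothing useful to leak or hand over.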


My old LG SmartTV seems most reliable at playing mkv files, but I think mp4 is pretty standard.


It’s very unlikely you are infected by anything unless you were using some crazy settings or addons, or unless you were hit by some extreme 0-day exploit that hasn’t become widespread yet. Firefox does not, and normally cannot, automatically execute files it downloads, nor are videos a likely risk for remote code execution now that technologies like data execution prevention are built into processors. If you’re attacked by malware, it will rely on some other vector or trickery to get you to execute the file. I would expect that your performance issues are unrelated, but you should also check Firefox’s addons and extensions, as well as your task manager’s startup tab, to make sure nothing has obviously been installed without your knowledge.
One thing that sticks out at me is that you only mention the file’s “title”. If you haven’t already, make sure Windows Explorer is set to ALWAYS show full file extensions. That’s a basic safety measure that really should be on by default but isn’t, and it’s mandatory if you’re messing around on the darker parts of the web. You have to know the file’s extension because that determines what Windows will do with it, and when a file claims to be one thing but Windows will do something different with it, that’s a huge red flag that it’s malware trying to trick you into running it.
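If you’d rather flip that setting with a script than click through Explorer’s options, here’s a minimal sketch using the standard HideFileExt registry value (that’s the right value to the best of my knowledge, but treat it as an assumption, and restart Explorer afterwards):

```python
import winreg

# Explorer's per-user "hide extensions for known file types" toggle lives here.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # 0 = always show full file extensions; 1 = hide them (the unsafe default)
    winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)

print("Done. Restart Explorer (or sign out and back in) to apply.")
```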
You can upload the file to VirusTotal if you want to scan it, but it doesn’t sound likely that it even ran unless you did something bad by accident.


This looks less intimidating than Authentik. Any guides on getting it set up with any common self-hosted stuff?
I find Clippy too friendly and nonthreatening to adequately demonstrate my unfathomable rage towards technology companies.


Looks really nice and seems like it should be a great foundation for future development. Personally, I can’t drop Nextcloud until there are sufficiently featureful and reliable clients for Linux, Windows, and Android that synchronize a local copy and help manage the inevitable file conflicts (Nextcloud Desktop only barely qualifies at this, but it does technically qualify, and that represents the minimum viable product for me). I’m not sure a WebDav client alone is enough to satisfy these criteria, but I’m not going to pretend I’m actually familiar with any WebDav clients, so maybe they already exist.
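For what it’s worth, raw WebDav is just remote file operations over HTTP; a minimal sketch of a directory listing (hypothetical URL and credentials) shows how much of the sync problem the protocol leaves to the client:

```python
import requests

# Hypothetical server details - substitute your own.
URL = "https://cloud.example.com/dav/files/"
AUTH = ("username", "app-password")

# PROPFIND with Depth: 1 lists the collection's immediate children.
resp = requests.request("PROPFIND", URL, auth=AUTH, headers={"Depth": "1"})
resp.raise_for_status()
print(resp.text)  # raw XML; no local copy, no change tracking, no conflict handling
```

Everything I actually care about (the local copy, change detection, conflict resolution) has to live in the client on top of that.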


You’re on the right track. Like everything else in self-hosting you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable, and iterate fast, aiming for continuous improvement.
My first idea was much like yours: very traditional documentation, with words, in a document. I quickly found the same thing you did; it’s half-baked and insufficient. There’s simply no way to make it match the actual state of the system perfectly, and English alone is inadequate to explain what I did, because it ends up being too vague to be useful in a technical sense.
My next realization was that in most cases what I really wanted was to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel like I wasn’t capturing every command reliably, because I would get distracted trying to figure out a problem and forget to, and copying and pasting commands between the console and the document was duplicated effort. That turned into the idea of collecting bunches of commands together into a script that I could potentially just run, which would at least reduce the risk of gaps and missing steps. I could put the commands I wanted to run right into the script, run the script, and then save it for posterity, knowing I’d accurately captured both the commands I ran and the changes I made to get it working by keeping it in version control.
But upon attempting to do so, I found that a bunch of long lists of commands on their own isn’t terribly useful, so I started to group the lists up, finding commonalities by things like server or service, and then organizing them into scripts for different roles and intents that I could apply to any server or service. Over time this developed into quite a library of scripts. As I was doing this organizing, I realized that as long as I made sure a script was functionally idempotent (it doesn’t change behaviors or duplicate work when run repeatedly; it’s an important concept), I could guarantee that all my commands were properly documented and also that they had all been run. And if they hadn’t, or I wasn’t sure, I could just run the script again, since it’s supposed to be safe to re-run no matter what state the system is in. So I moved more and more to this strategy, until I realized that if I organized this well enough, and made the scripts run automatically when they are changed or updated, I could not only improve my guarantees of having all these commands reliably run, but also quickly run them on many different servers and services all at once without even having to think about it.
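To make “idempotent” concrete, here’s a minimal sketch of the check-before-change pattern (the file path and setting are made up for illustration):

```python
from pathlib import Path

CONFIG = Path("/etc/myservice/myservice.conf")  # hypothetical path
LINE = "max_connections = 100"                  # hypothetical setting

def ensure_line(path: Path, line: str) -> None:
    """Append `line` to `path` only if it isn't already present."""
    text = path.read_text() if path.exists() else ""
    if line in text.splitlines():
        return  # already applied; running again changes nothing
    with path.open("a") as f:
        f.write(line + "\n")

ensure_line(CONFIG, LINE)  # safe to run any number of times
```

Every step checks the current state before touching anything, which is what makes the whole script safe to re-run no matter what state the system is in.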
There are some downsides, of course. This leaves the potential for bugs that make a script not idempotent or not safe to re-run, and the only thing I can do is try to make sure they don’t happen, and identify and fix them when they do. The next step is probably some kind of testing process and environment (preferably automated), but now I’m really getting into the weeds. At least I no longer have any real concern that my system is undocumented; I can quickly reference almost anything it’s doing or how it’s set up. That said, one other risk is that the system of scripts and automation becomes so complex that it’s too tangled to quickly understand, and at that point I’ll need better documentation for the scripts themselves. Ultimately you get into a circle of validating that the things your scripts do are actually working, doing what you expect, and not missing anything, and you run back into the same ideas that doomed your documentation from the start: consistency and accuracy.
It also opens an attack vector, where somebody gaining access to these scripts not only gains all the most detailed knowledge of how your system is configured but also the potential to inject commands into those scripts and run them anywhere, so you have to make sure to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.
By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (ansible and terraform come to mind) to help you do this, and at some point I may decide to take advantage of them, but personally I’m not there yet. Maybe soon. If you want to skip the intermediate steps I took, you might even be able to jump directly to that approach. But personally I think there is value in the process: it helps you define your needs and builds your understanding that there really isn’t anything magical going on behind the scenes, which may keep these tools from turning into a black box that doesn’t actually help you understand your system.
Do I have a perfect system? Of course not. In a lot of ways it’s probably horrific and I’m sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you’ll follow the same path I did, maybe you won’t. But you’ll get there.


Nextcloud is just really slow. It is what it is; I don’t use it for anything huge, numerous, or speed-critical. For that I use SyncThing or something even more specialized depending on what exactly I’m trying to do.
Nextcloud is just my easy and convenient little dropbox, and I treat it like an oldschool free dropbox with limited space that’s going to nag me to upgrade if I put too much stuff in it. It won’t nag me to upgrade, but it will get slow, so I just don’t stress it out. I only use it to store little convenience things that I want easy access to on all my machines without any fuss. For documents, my “home directory”, syncing my calendars, and stuff like that, it’s great and serves the purpose.
I haven’t used Seafile. The features sound good, minus the AI buzzword soup, but it looks a little too corporate-enterprisey for me, with minimal commitment to open source and no actual link to anything open source on their website. I don’t doubt that it exists, somewhere, but that raises red flags for potential future (if not in-progress) enshittification to me. After eventually finding their github repo (with no help from them), I finally found a link to build instructions and… it’s a broken link. They don’t seem to actually be looking for contributions, or they’re just going through the motions. The open source “community” is clearly not the target audience for their “community edition”, not really.
I’ll stick to SyncThing.


According to the protocol they share (ActivityPub), communities and hashtags are essentially the same thing: a grouping containing many posts. Typing out a hashtag is how you tell Mastodon to add your post to that “hashtag group” (and you can add your post to multiple hashtags). In Lemmy, the community you post in IS the group (and you can cross-post to multiple communities). The result is the same. They’re the same thing, just with different ways of connecting your posts to them, and displayed in very different ways depending on which part of the Fediverse you’re using.
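A simplified sketch of how the two look on the wire (illustrative objects, not copied from real servers; the exact fields each platform uses may differ):

```python
# Illustrative, heavily simplified ActivityPub-style objects.

# Mastodon-style post: the hashtag rides along as a tag on the post itself.
mastodon_post = {
    "type": "Note",
    "content": "New homelab write-up! #selfhosted",
    "tag": [{"type": "Hashtag", "name": "#selfhosted"}],
}

# Lemmy-style post: the community is a Group actor the post is addressed to.
lemmy_post = {
    "type": "Page",
    "name": "New homelab write-up!",
    "audience": "https://lemmy.example/c/selfhosted",  # hypothetical Group URL
}
```

Either way, other servers end up with a group they can use to collect and fan out the posts.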
Situationally. I carefully consider the developer in question to try and judge the risk of failure, while also considering the chances that my contribution will actually make any meaningful difference to the likely outcome.
Basically, if it’s a passionate and seemingly competent indie dev working on something that I personally want to see become a reality, I might throw some early money their way despite the obvious risk. If it’s a tentative and inexperienced indie dev with goals too big, I’ll probably wait and see. If it’s some AAA publisher that doesn’t actually NEED the money and has a high chance of fucking everything up anyway, they can shove their preorder and preorder bonuses right up their own ass where they belong.