

I would imagine a less-than-honest friend with bolt cutters would at least suffice to bring your hand usage back; from there you can try to find something else later on.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.
I've been trying to lower my social presence on services as of late, so I may go inactive randomly as a result.




Yeah, I get what you mean. I think I also know which case you're talking about; that was the one where they never even noticed he did it until he said something, right? I saw a YouTube video on it a year or two ago.


I agree, but that's sort of the point. The first alternative is a lot of money that takes a long time to set up, just for the city to very cheaply and quickly reverse it. They had already /tried/ that approach and the city said no, so doing it themselves was just a bad plan to begin with.
The city, at the moment, is out maybe 20 minutes to take the sign down, and can then go back to sticking its head in the sand.
A damaged road can take weeks to months to fix, requires a dedicated crew and equipment, forces vehicles to slow down the entire time, and can be done with tools that are likely just lying around the garage. Don't get me wrong, both methods are super illegal, but one is morally bad, cheap, and hard to fix, while the other is morally good, expensive, time-consuming, and easily reversed.
Our local playground has no traffic signs (aside from a playground sign) and a very faded crosswalk, but everyone knows to slow way down before reaching it, because if they don't, the potholes (winter kills the roads) will make them regret it.
The town "fixes" it every few years or so.


imo if you are going to start changing the road, either block it or damage it to force a speed bump or a hole. It's a lot cheaper than spending $1,000 on a sign they can easily take down, a lot faster, and less likely to get you caught in the act.


It would need to be so much cheaper to be worth it. Current GP Ultimate is already a scam and a half unless you're essentially buying a AAA title every 2 months; unless this subscription is at least a quarter of Ultimate's price, it wouldn't be worth it imo.
This is an absolutely toxic take on the issue. I read OP's statement less as "I won't read the manual" and more as "I struggle to get what I need out of manuals."
There are many times I have read the manual and then had to look the issue up further anyway, because I either missed a poorly written section or misunderstood what it was saying.
If you want a prime example of that, go look at ffmpeg and try to figure out how to select a specific subtitle language on a video without looking it up online. It's via -map, an advanced option, which is described as a parameter for extracting specific streams (which also means you'd need to map the video and audio streams too, since including any -map disables the automatic stream selection). But -map's documentation doesn't tell you that subtitle tracks are index:s. It does tell you to look at stream specifiers for valid search options, which do include s as a type, and it lets you know you can use m for metadata matching; but you'd need to make the connection that the type is s, that the metadata search flag is m:language:langcode, and that the whole string has to be concatenated, so it's index:s:m:language:langcode.
For someone who is learning ffmpeg and video transcoding, that is not a very good setup. The stream specifiers section gives a few examples of what's possible, but the place that lists the stream types is in a different area from the one that lists the metadata keys. At that point, just asking online or searching is way easier.
Note: this is just an issue I have seen people come across, because ffmpeg is one of the more complicated programs (its man page is over 2,300 lines).
Is it in the manual? Yes. Is someone who doesn't know how to use ffmpeg and is trying to learn it going to find it? That's debatable.
If I was in that situation, my next step would be googling it, and if I couldn’t find it via searching, I would be reaching out to communities. At that point “RTFM” is useless to me.
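To make the ffmpeg example above concrete, here is roughly what the final command ends up looking like once you've pieced all of that together (file names are placeholders, "eng" is the language tag being matched):

```shell
# Keep all video and audio streams from the first input, plus only the
# subtitle streams whose "language" metadata tag is "eng".
# Note: using any -map disables ffmpeg's automatic stream selection,
# which is why video (0:v) and audio (0:a) must be mapped explicitly.
ffmpeg -i input.mkv \
  -map 0:v -map 0:a \
  -map 0:s:m:language:eng \
  -c copy output.mkv
```

Nothing in that line is guessable from the -map section alone; the `s` type and the `m:key:value` metadata match live in the separate "stream specifiers" section, which is exactly the problem being described.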
There's a standardized man format? News to me. I thought developers just threw everything in in random order.



It depends on the type of zip encryption; the default doesn't encrypt metadata.
Edit: upon looking into it further, the other commenter is right. The zip format itself doesn't support encrypting metadata at all; you would need to use a different format, such as 7z, to get that.
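A quick way to see the difference for yourself, assuming the `zip`/`unzip` and `7z` CLIs are installed (file names here are placeholders):

```shell
# -e password-protects the file *contents* only.
zip -e secret.zip notes.txt

# Listing works without the password: file names, sizes, and dates
# (the metadata) are stored in the clear in the zip format.
unzip -l secret.zip

# 7z with -mhe=on encrypts the archive headers too, so even
# listing the file names requires the password.
7z a -p -mhe=on secret.7z notes.txt
```

The `-mhe=on` (header encryption) switch is the 7z feature the zip format has no equivalent of.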


No, I disagree.
It is not one person’s doing. That is the deflection.
I will not downplay this by pretending they are the only one involved. Every maintainer so far who has locked discussion or approved the changes they made is equally at fault here. In fact, one of those linked articles even stated that the primary reason they locked it was that they didn't like the amount of coverage it got. This is a failure of the community as a whole, not the individual.
Edit for clarification: by "failure," I'm talking more about the projects that are humoring it and actually going through with it without considering the potential side effects of blanket-applying it.
Granted, a handful of these are locked or posted as "we don't know if we're going to do this yet," so I haven't quite put them in that same category, but it's rapidly approaching it.


I foresee Microsoft in the future implementing a core Electron service: one Electron instance that is persistent and just runs, but sandboxes every app that calls it. So instead of an Electron instance per app, it's just one Electron instance with sandboxed pages that are only manageable by the parent process.
It would still be pretty bloated, but since the core structure would only be present in the parent process, hopefully less so.


I treat post-game content the same as I treat New Game+:
it's super unlikely I will look into it unless the game rocked my world.
Many games treat post-game content as a grindfest for people who really liked the game and didn't want it to end; I usually have no interest in it.
The last games I looked into both NG+ and post-game content for were Final Fantasy XV and God of War (the PS4 one), and I didn't finish it in either of them.


In the US, companies (based on where the company is located, last I knew) are legally mandated to report specific things, such as CSAM, if they come across them.
The issue shouldn't be that they are reporting it; the issue should be that they have the capability to see it in the first place.
This isn't me defending CSAM or anything like that, but in a decent storage system, Google shouldn't even be able to see what files you have, let alone what the images actually are.
What pisses me off about that statement is that it won't even fix the bots. It's public knowledge that most of the bots on the platform are intentional, there to maintain the image that the site is still super popular, which means those accounts would just get manually verified and skip the process.


I like that you used the term "significantly" here, because usually the question is whether usage will disappear entirely, which it never will. That being said, open usage of image generation isn't going away, and the same is likely true of casual AI chatbots. I do think that eventually, when the bubble pops and investors realize they are blindly tossing money into what is essentially a paper shredder, most commercial usage of it will nosedive.
That effect tends to be rather rapid: once one major company decides to drop it, it usually starts to snowball. That said, with fewer commercial avenues, non-commercial projects that rely on cloud-based services will likely see their prices increase to make up the difference, so you may see /some/ non-commercial projects go down if the models aren't self-hosted, but I don't think it's going anywhere.
You mentioned it already seems to be decreasing as well? I might agree with that. It's reached the point where the everyday consumer is saying "well, this is cool and makes things easier, but I don't know how much I can trust it," and some places (such as the US) are starting to put up restrictions that make it harder to copyright the outputs, which lowers a lot of its value in the commercial sector. But at the same time, big companies are still going all-in on it, so this concerns me.
Mine is this way as well. It counts inputs as setting the clock, and if you put in an invalid time, it has a hissy fit and isn't clear about what it's asking of you. The number of times my grandfather has tried to use it after losing power and gotten frustrated, because he was trying to cook something and the clock wanted to set the time to something stupid like 30:22, is annoying.
Mine has a way to change the clock to 24h and the ability to calibrate the hot keys, but sadly doesn't seem to have a sound control.


I'm not OP, but one benefit of using a central server for Syncthing is an always-on backup that doesn't require another client device to be on; it also allows for easier creation of new shares.
For example, with Syncthing you can set the "server" client to auto-approve/accept any shares from trusted devices. Then, when you get a new device, instead of needing to add it to every device on your Syncthing network, you only need to add it to the server, and your other clients can connect to the server's share instead of device-to-device. It's easier. You can also configure the shares on the server to use encryption by default, since you never really need to see the files on the server; it's basically an install-and-forget client.
As an example of what I mean:
I have 10 different devices running Syncthing: 9 clients and a "server" client. These clients are not always on at the same time, so when I change a file, the copies can become desynced and cause conflicts. With a centralized server, as long as the server is on (it always is) and the client itself is online, it's always going to sync. I don't need to worry about file conflicts between my clients, because the server should always have the newest file.
Then say, for example, my phone died. Instead of needing to re-add every separate client the phone shares with to the new device, I only need to add the phone as a trusted device on the "server" client via the web UI, click "share to that device" on every share the phone needs, and then remap the shares to the proper directories on the phone. That's versus having to add every device to the phone and the phone to every device it needs access to, on top of reconfiguring all the shares. It's simpler, but fair warning: it does create a single point of failure if the server goes offline.
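For reference, the auto-accept behavior described above is a per-device setting in Syncthing; a sketch of what it looks like in the server's config.xml (the device ID and name here are placeholders, and the same toggle is exposed in the web UI under the device's Advanced settings):

```xml
<!-- On the "server" client: trust this device and automatically
     accept any folders it offers to share. -->
<device id="PHONE-DEVICE-ID" name="phone" autoAcceptFolders="true">
    <address>dynamic</address>
</device>
```

For the "server never sees your files" part, Syncthing's "Receive Encrypted" folder type is the relevant feature: the server stores and relays the data but cannot read the contents or file names.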


Yeah, I have the machine backed up in case this happens. I have noticed that it's a mess UI-wise. But ipfire doesn't seem to be stable; every few months it'll randomly kill itself, which takes everything on the network down until I manually restart the machine and then force it to use a new DNS server. It's something I've never managed to resolve on that machine, and I don't seem to have that issue on my test network with OPNsense.


I'm in the process of switching from ipfire to OPNsense myself.
I hate how bloated OPNsense is at first glance, but it has so much more control, so once I've copied my current config over I'll be leaving ipfire in the dust.
So I had in my mind what I thought a handcuff key looked like, and when I went to look it up, the results I saw did not match what I expected at all.