

Don’t get me wrong, I would love an alternative as well! I just think that the personality of the fediverse as a whole goes against what companies are actually looking for in partners.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposite side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.
I’ve been trying to lower my social presence on services as of late, so I may go inactive randomly as a result.


Fully agree. If they actually go through with banning it, piracy will thrive. People aren’t going to simply stop playing games because they’ve lost access to them. Smaller launchers will rise, people will download from other sources. A method of obtainment will be found.


An LI alternative wouldn’t be super helpful on its own; you would need mainstream adoption and companies wanting to use it, and any open alternative would fail to meet that goal. The wants of the employee and the wants of the employer don’t mix; that’s why LinkedIn looks so bad to the employee. It isn’t meant to be for the employee, it’s meant to be for the employer. If it were the other way around, the employers wouldn’t use it.


I don’t think that’s an unfair ask. One local representative in each country seems perfectly fair to me.
That being said, the user-information part should be strictly locked to their own country’s content: if the user account is registered in that country, they have access. Providers could 100% do that with most operational databases out there. It’s already a requirement for stores in order to handle payment information; Steam and Epic already do this as it is.
Whether they should be able to access that information in the first place is a different discussion, one that needs to be had in the corresponding country. But if a country has already decided it needs access as a condition of continuing to operate there, there’s still no reason it should have access to all user data. The only thing they really have a claim to is their own country’s data.
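To illustrate how simple that scoping is on the provider side, here’s a minimal sketch; the record shape and function name are invented for the example, not any real store’s schema. It amounts to one filter on the registration country:

```javascript
// Illustrative sketch only: user records and field names are made up.
// A request on behalf of a country's authorities only ever sees
// accounts registered in that same country.
function recordsVisibleTo(countryCode, users) {
  return users.filter(u => u.registeredCountry === countryCode);
}

const users = [
  { id: 1, registeredCountry: "DE" },
  { id: 2, registeredCountry: "US" },
  { id: 3, registeredCountry: "DE" },
];

// A DE request sees only the two DE-registered accounts.
const visible = recordsVisibleTo("DE", users);
```

In a real operational database this would be the same idea as a WHERE clause on the registration field, which is why it’s not a heavy lift technically.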


My issue with what would happen if this ruling solidifies is the precedent it sets.
I could not care less about reaction videos; they’re really low-effort videos, and I don’t understand why they’re so popular.
My issue is entirely that if the plaintiff wins this case, it effectively says any kind of video downloaded from YouTube counts as circumventing DRM, which would open an avenue, aside from fair-use violations, for studios to go after content creators.
Look at Let’s Plays, for example. Those operate almost entirely on fair-use clauses. I fear that if we start ruling that recording or downloading video your computer is already able to decode counts as circumvention (that is all the YouTube downloader is doing: instead of sending the stream to the client, it sends it to a file), then by the same principle, recording a video game that contains DRM would also be considered circumventing DRM. That would outlaw Let’s Plays.
This is a very bad precedent regardless of whether it only hits low-quality trash reaction videos or not.


I should ask them at some point how it is now that it’s been deployed for a bit. I wouldn’t expect so either, based on how I’ve seen open-source projects use stuff like that, but they also haven’t been complaining about it screwing up at all.


That was my general thought process prior to them telling me how the system worked as well. I had seen Claude workflows that do something similar, but nothing to that level. It was an eye-opener.


Yea, I’m not sure; maybe something to do with the term SAS, but there’s not enough unredacted stuff to really know.


Thank you for expanding on it. That was a pretty interesting read; gotta love indecisiveness in your standards.


Can you elaborate on type="datetime-local" not existing? It’s been supported in almost every mainstream browser since basically 2012. The last mainstream browser to adopt it was Safari, in 2021. There’s an argument that Firefox didn’t have proper support until 2021 as well, but that’s because it was lacking the "time" part of the element, so for a while they made it behave like the type="date" element. That has since been resolved.
That being said, I do agree with you on a lot of those. It would be nice to have some form of UI validation; that’s one of its flaws that could be expanded on. A disabled-dates or invalid-days attribute on the input would be a lot easier (say, an allowedDays list of day names or numbers, similar to how the time standard works), but it would also add a lot of complexity for something that should be validated via scripts on both the server and client side anyway. Not all browsers have the clear button either, which is a problem because it’s an extra step when you do make a mistake. They do offer min and max attributes to mark out a valid range of dates, but that mechanism is so primitive that for a scheduler it can’t really be used unless it’s on a week-by-week basis.
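Since no allowedDays attribute actually exists on the input, that check has to live in script. A minimal client-side sketch, where the function name and the allowedDays list are made up for illustration (the same check would be repeated server side):

```javascript
// Hypothetical allowed-days check for a type="datetime-local" value.
// allowedDays uses JS Date day numbers: 0 = Sunday ... 6 = Saturday.
function isAllowedDay(value, allowedDays) {
  // datetime-local values look like "2024-05-13T09:30"; Date parses
  // them as local time, which matches what the input represents.
  const d = new Date(value);
  if (Number.isNaN(d.getTime())) return false; // reject unparseable input
  return allowedDays.includes(d.getDay());
}

// Example: only accept weekdays (Mon-Fri).
const weekdaysOnly = [1, 2, 3, 4, 5];
isAllowedDay("2024-05-13T09:30", weekdaysOnly); // 2024-05-13 is a Monday
```

Hooked up to the input’s change event, this could set a custom validity message; the point is only that the per-day rule lives entirely outside the element.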


I do agree that LLM-generated code is often inaccurate, which is why they have to have the “throw it back in” stage and a human eye looking at it.
They told me their main concern is that they aren’t sure they’ll understand the code the AI is spitting out well enough to properly audit it (which is fair), and of course any issue with the code will fall on them, since it’s their job to give the final “yes, this is good.”


The scary part is how it already somewhat is.
My friend is currently job hunting (or at least considering it) because they added AI to their flow and it does everything past the initial issue report.
The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn’t work -> AI lints the final product once it’s working -> AI submits the patch as a pull request.
Their job has been downscaled from organizing, assigning, and working on code to being an over-glorified code auditor who looks at pull requests and says “yes, this is good” or “no, send this back in.”
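That flow is essentially a retry loop around the patch stage. A rough sketch of its shape, where every stage function is a hypothetical placeholder rather than any real tool’s API:

```javascript
// Sketch of the described flow; all stage functions are hypothetical
// placeholders supplied by the caller, not a real AI tool's API.
function runPipeline(issue, stages, maxAttempts = 3) {
  const formatted = stages.format(issue);        // AI formats and tags the issue
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const patch = stages.makePatch(formatted);   // AI drafts a patch
    if (!stages.test(patch)) continue;           // failing patches get thrown back
    const linted = stages.lint(patch);           // AI lints the working patch
    return stages.submitPull(linted);            // AI opens the pull request
  }
  return null; // nothing passed: this is where a human would step in
}
```

The human’s remaining role in the story above is what happens to the return value: approving the submitted pull, or dealing with the null case.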


I know exactly what date UI you are talking about, and it’s a firm agree. Whoever decided that a date UI should make it impossible to select a year without hitting back three times, and then on top of that decided to undo your month and day selection when you did so, did the world a massive disservice with that design.
What is wrong with the simple type="datetime-local" or type="date" UIs that every mainstream browser has natively? It’s three clicks: you can specify the year at the top, and then the month and day in the main body. Why even introduce layers to it? Have everything on the same layer.


More than partially, actually: 85% of Mozilla’s income comes from search engine deals. When you look at the revenue reports for the year, it’s stated that:
Approximately 85% and 81% of Mozilla’s revenues from customers with contracts were derived from one customer for the years ended December 31, 2023 and 2022, respectively. Receivables from that one customer represented 70% and 64% of the December 31, 2023 and 2022 outstanding receivables, respectively.
I’m no accountant, and while Google is not named, the signs sure point to Google being 85% of the project’s income.


IMO any type of touch control in a car shouldn’t be a thing. Drivers rely on tactile feedback from controls; when you replace them with touch buttons, it takes more concentration and therefore decreases the driver’s awareness of their surroundings.
Granted, the argument is that you shouldn’t be adjusting it while driving, but my response is: why have it in the first place?


That’s hilarious. I’m guessing it’s the result of an auto-redactor set to redact URLs or something, since the original would be:
--enable-largefile
    Enable support for large files (http://www.sas.com/standards/large_file/x_open.20Mar96.html)
    if the operating system requires special compiler options to build programs
    which can access large files. This is enabled by default, if the operating
    system provides large file support.


Yea, I was about to say: the only difference between this article and the US is that in the US it would be death in the office or at home, not in a hospital bed.


They are very nice. They share kernelspace, so I can understand wanting stronger isolation, but the ability to just throw a base Debian container on, assign it a resource pool and a resource allocation, and install a service directly to it, all while having it isolated from everything else, is great. You avoid Docker’s ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) and you avoid a full VM.
And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By straight on the container I mean on the container itself: my CTs don’t run Docker, period, aside from the one that runs the primary Docker stack, so I don’t have that layer to worry about on most CTs.
As for the memory thing, I was just pointing out that Docker behaves the same way containers do if you don’t have enough RAM for what’s been provisioned. The way I had read the original post was that assigning 2 gigs of RAM and then exhausting the system’s RAM would cause corruption and crash the system; that’s true, but Docker runs into the same issue if the host exhausts its RAM. That’s all I meant by it. Also, cgroups sound cool; I have to say I haven’t messed with them a whole lot. I wish Proxmox had a better resource-share system where you could designate a specific group as having some maximum amount of resources and then have the CTs or VMs draw from those pools.


Yea, I plan to try out the new Proxmox version at some point; thank you again.
Personally, it seems like it’s trustworthy again. The previous owner of the repo did eventually admit that they authorized the transfer, but the entire transfer process was extremely sketchy and had no chain of custody or trust. The repository just got deleted, and then a few days later it showed up again in a completely blank state under a user with no profile and no contribution history. It was just a “trust me bro, I knew the original maintainer, look, I have the keys to prove it.”
The maintainer of the Google Play build seems to trust them though, and they are established in the community; plus they archived their Syncthing builds again in favor of just using the one repo, so it’s likely fine.
For future people wondering about this as well: it doesn’t help that the new maintainer of the app has deleted every issue that had to do with the migration, so you can no longer research the issue for yourself. The only information available is the discussion chain on the community forums, but any issues it links to were deleted.
Personally, though, I plan on keeping my current version pinned to a build from before the transfer until either I’m forced to update due to bugs or I feel comfortable with the current maintainer again. I’m not sure how long that will be.
For an app that contains very sensitive information, I was not impressed with how the transfer process went.