Just your normal everyday casual software dev. Nothing to see here.

People can share differing opinions without immediately being on opposing sides. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.

Been trying to lower my social presence on services as of late, so I may go inactive randomly as a result.

  • 0 Posts
  • 974 Comments
Joined 2 years ago
Cake day: August 15th, 2023

  • Personally, it seems like it's trustworthy again. The previous owner of the repo did eventually admit that they authorized the transfer, but the entire transfer process was extremely sketchy and had no chain of custody or trust. The repository just got deleted, and then a few days later it showed up again in a completely blank state under a user with no profile and no contribution history. It was just a "trust me bro, I knew the original maintainer, look, I have the keys to prove it."

    The maintainer of the Google Play build seems to trust them though, and they are established in the community. Plus they archived their Syncthing builds again in favor of just using the one repo, so it's likely fine.

    For future people wondering about this as well, it doesn't help that the new maintainer of the app has deleted every issue that had to do with the migration, so you can no longer research the issue for yourself. The only information available is the discussion chain on the community forums, but any issues linked from there have been deleted.

    Personally though, I plan on keeping my current version pinned to pre-transfer until either I'm forced to update due to bugs or I feel comfortable with the current maintainer again. I'm not sure how long that will be.

    For an app that contains very sensitive information, I was not impressed with how the transfer process was handled.




  • An LI alternative wouldn't be super helpful. You would need mainstream adoption and companies wanting to use it, and any open alternative would fail to meet that goal. The wants of the employee and the wants of the employer don't mix; that's why LinkedIn looks so bad to the employee. It's not meant to be for the employee, it's meant to be for the employer. If it were the other way around, the employers wouldn't use it.


  • I don't think that's an unfair ask. One local representative in each country seems perfectly fair to me.

    That being said, the user-information part should be strictly locked to that country's own content: if the user account is registered in that country, they have access to it. Providers could 100% do that with most operational databases out there; it's already a requirement for stores handling payment information, and Steam and Epic already do this as it is.

    Whether they should be able to access that information in the first place is a different discussion, one that needs to be had in the corresponding country. But if a country has already decided it needs access as a condition of continuing to operate there, there's no reason it should get access to all user data. The only thing it really has a claim to is its own country's data.
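    As a rough sketch of what that scoping could look like (table and column names here are made up, not from any real provider's schema), the access layer would just filter on the country the account was registered in, so nothing outside that country ever leaves the query:

    ```ts
    // Hypothetical schema: users(id, email, created_at, registered_country).
    // The country code would come from whatever legal-request process exists.
    import { Pool } from "pg"; // node-postgres, used only for illustration

    const db = new Pool();

    // Returns only accounts registered in the requesting country --
    // data for every other country is simply never selected.
    async function exportUsersForCountry(countryCode: string) {
      const { rows } = await db.query(
        "SELECT id, email, created_at FROM users WHERE registered_country = $1",
        [countryCode],
      );
      return rows;
    }
    ```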


  • My issue with what would happen if this ruling solidifies is the precedent it sets.

    I could not care less about reaction videos; they are really low-effort videos, and I don't understand why they are so popular.

    My issue is entirely that if the plaintiff wins this case, it's effectively saying any downloaded YouTube video would classify as circumventing DRM, which would open an avenue, aside from a fair-use violation, for studios to go after content creators.

    Look at let's plays, for example. Those operate almost entirely on fair-use clauses. I fear that if we start ruling that recording or downloading video your computer is already able to decode counts as circumvention (that's all the YouTube downloader is doing; instead of the stream going to the client, it's sent to a file), then by the same principle, recording a video game that contains DRM would also be considered circumventing DRM. Which would outlaw let's plays.

    This is a very bad precedent regardless of whether it's just low-quality trash reaction videos or not.
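    To be clear about what I mean by "sent to a file": a downloader receives the exact same bytes a player does, it just writes them to disk instead of handing them to the decoder. A rough Node sketch (the URL is a placeholder, and real YouTube streams involve far more plumbing than this):

    ```ts
    import { createWriteStream } from "node:fs";
    import { Readable } from "node:stream";
    import { pipeline } from "node:stream/promises";

    const mediaUrl = "https://example.com/some-video-stream"; // placeholder URL

    const res = await fetch(mediaUrl);
    if (!res.ok || res.body === null) throw new Error(`fetch failed: ${res.status}`);

    // A player would feed these bytes to a decoder on the fly; the "downloader"
    // only changes the destination to a file. The bytes themselves are identical.
    await pipeline(Readable.fromWeb(res.body as any), createWriteStream("video.out"));
    ```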






  • Can you elaborate on type=“datetime-local” not existing? It's been supported in almost every mainstream browser since basically 2012; the last mainstream browser to adopt it was Safari in 2021. There's an argument that Firefox didn't have proper support until 2021 as well, but that's because it was lacking the “time” part of the element, so for a while they modified it to work like the type=“date” element. That has since been resolved.

    That being said, I do agree with you on a lot of those; it would be nice to have some form of UI validation. That is one of its flaws that could be expanded on. A disabled-dates or invalid-days attribute on the input would be a lot easier (like an allowedDays value that's a comma-separated list of day names or numbers, the way the time standard works), but it would also add a lot of complexity for something that should be validated via scripts on both the server and the client side. Not all browsers have the clear button either, which is a problem because it's an extra step when you do make a mistake. They do offer a valid range for dates (via min/max), but it's so primitive that for a scheduler it can't really be used unless it's on a week-by-week basis.
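    For anyone curious, here's a minimal sketch of what I mean, assuming an <input type=“datetime-local” id=“appointment”> already on the page (the id and the dates are just examples): min/max cover the primitive range part, and anything like "weekdays only" still has to live in script.

    ```ts
    // Grab the native picker; min/max give the (primitive) valid-range support.
    const input = document.querySelector<HTMLInputElement>("#appointment")!;
    input.min = "2025-01-06T09:00";
    input.max = "2025-12-19T17:00";

    input.addEventListener("change", () => {
      const day = new Date(input.value).getDay(); // 0 = Sunday, 6 = Saturday
      // The element can't express "no weekends", so that rule is enforced here
      // (and should be re-checked on the server as well).
      input.setCustomValidity(day === 0 || day === 6 ? "Pick a weekday" : "");
      input.reportValidity();
    });
    ```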



  • The scary part is how it already somewhat is.

    My friend is currently job hunting (or at least considering it) because their company added AI to their flow and it does everything past the initial issue report.

    The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn't work -> AI lints the final product once working -> AI submits the patch as a pull request.

    Their job has been downscaled from being the one to organize, assign, and work on code to being an over-glorified code auditor who looks at pull requests and says “yes this is good” or “no, send this back in”.
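    Roughly, the shape of it looks like this (all of the function names are made up to mirror the stages; this isn't their actual tooling):

    ```ts
    type Issue = { id: number; body: string };
    type Patch = { issueId: number; diff: string };

    // Stubs standing in for whatever AI tooling the company actually wired up.
    declare function aiFormatAndTag(issue: Issue): Promise<Issue>;
    declare function aiWritePatch(issue: Issue): Promise<Patch>;
    declare function aiTestPatch(patch: Patch): Promise<boolean>;
    declare function aiLint(patch: Patch): Promise<Patch>;
    declare function openPullRequest(patch: Patch): Promise<void>;

    async function handleIssue(issue: Issue): Promise<void> {
      const triaged = await aiFormatAndTag(issue); // AI formats and tags the issue
      let patch = await aiWritePatch(triaged);     // AI makes the patch
      while (!(await aiTestPatch(patch))) {        // thrown back until it passes
        patch = await aiWritePatch(triaged);
      }
      const finished = await aiLint(patch);        // AI lints the final product
      await openPullRequest(finished);             // AI submits it as a pull request
      // The only human step left: review the PR and approve it or send it back.
    }
    ```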


  • I know exactly what date UI you are talking about, and it's a firm agree. Whoever decided that a date UI should make it impossible to select a year without hitting back 3 times, and then on top of that decided it should undo your month and day selection when you did so, did the world a massive disservice in designing it.

    What is wrong with the simple type=“datetime-local” or type=“date” UIs that every mainstream browser has natively? It's 3 clicks: you can specify the year at the top, then the month and date in the main body. Why even introduce layers to it? Have everything on the same layer.


  • More than partially, actually: 85% of Mozilla's income comes from search engine deals. Then when you look at the revenue report for the year, it states:

    Approximately 85% and 81% of Mozilla’s revenues from customers with contracts were derived from one customer for the years ended December 31, 2023 and 2022, respectively. Receivables from that one customer represented 70% and 64% of the December 31, 2023 and 2022 outstanding receivables, respectively.

    I'm no accountant, and while Google is not specified, the signs are pointing at Google being 85% of the project's income.





  • They are very nice. They share kernel space, so I can understand wanting stronger isolation, but the ability to just throw a base Debian container on, assign it a resource pool and allocation, and install a service directly to it, while keeping it isolated from everything, is nice, without having to use Docker's ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM.

    And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By straight on the container I mean on the container itself: my CTs don't run Docker, period, aside from the one that hosts the primary Docker stack, so I don't have that layer to worry about on most CTs.

    As for the memory thing, I was just mentioning that Docker does the same thing that containers do if you don’t have enough RAM for what’s been provisioned. The way I had taken that original post is that specifying 2 gigs of RAM to the point the system exhausts it’s ram would cause corruption and the system crashes, which is true but docker falls for the same issue if the system exhausts it’s ram. That’s all I meant by it. Also cgroups sound cool, I gotta say I haven’t messed with them a whole lot. I wish proxmox had a better resource share system to designate a specific group as having X amount of max resources, and then have the CT or vm’s be using those pools.