• 0 Posts
  • 217 Comments
Joined 3 years ago
Cake day: August 15th, 2023




  • So any passionate actor is a bad actor? A bad actor is, after all, necessary for wrong information to count as disinformation under your definition.

    I will argue that a disinformation campaign could find agents who are able to remain calm and engage in ‘polite’ debate (via training, scripts, and other forms of support; perhaps AI can help write some posts/articles, etc.). Meanwhile, ordinary users are more likely to lose their cool when presented with propaganda, even if it is well presented.

    I also want to address your suggestion that I believe most information is “arbitrarily” subjective. I don’t. The issue is that we simply cannot apply the scientific method in a lot of cases, including news and politics.

    For example, in the most recent case either the US attacked first or Iran did. How would one apply the scientific method to find out which? In a lot of cases there is simply not enough data accessible to the public.

    Even in science, both the physical sciences and especially the social ones, we have this issue. We don’t really run experiments on whole countries.

    I think you are handwaving the issue away. I’m sure you know who the bad actors are and what counts as disinformation.

  • Why would I need luck?

    safety_checker is indeed an AI image content checker that ships with ComfyUI. There are two instances of it under the ComfyUI directory, and nowhere else in my $HOME.

    fsdpy_utils is also part of ComfyUI; it’s a module that helps run models (not a model itself; those are huge). ComfyUI is a web UI for running AI stuff, so it’s expected to be found there. It’s not found anywhere else in my $HOME either.
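
    The kind of check described above is easy to reproduce yourself. A minimal Python sketch (the file names in the comment below are just illustrative placeholders, not claims about any specific install):

```python
from pathlib import Path

def find_instances(root: str, filename: str) -> list[Path]:
    """Recursively list every copy of `filename` under `root`."""
    return sorted(Path(root).expanduser().rglob(filename))

# Hypothetical usage: confirm a file only ships with one application.
# find_instances("~", "func.py") would list every func.py under $HOME.
```

    If the only hits are inside the application's own directory, the file is part of that application, not something scattered across your system.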

    cachetools/func.py is a module that provides caching decorators, not a tty. It exists in other Python software that I’ve installed, and there are no random instances of it all over my $HOME either.

    You are probably asking an AI chatbot to produce drivel with technical terms you’ve seen online in order to troll this community. It’s probably convincing to people who are unfamiliar with Python, AI, or Linux, but it does not pass inspection by anyone familiar with these things.

  • One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

    The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

    I don’t see any of the people celebrating this decision discussing this. Perhaps it’s a misrepresentation by the author, since I can’t find the actual decision text.

    This is going to harm small non-corporate websites, not just social media, far more than it harms Facebook or TikTok. ‘Harmful content’ is also going to include things like LGBTQ content, especially anything trans-related, and ‘antisemitism’ (but probably not actual antisemitism).