China #1

Best friends with the mods at lemmy.ml

If you’re reading this, I’m already banned.

  • 2 Posts
  • 347 Comments
Joined 3 years ago
Cake day: June 10th, 2023


  • chemical_cutthroat@lemmy.world to Science Memes@mander.xyz · how things become science
    English · 1 upvote · 6 downvotes · edited · 2 days ago

    And people have been launching products without thought to the ramifications since the dawn of time. I don’t think that will change, either. What we need to do is educate ourselves better when it comes to identifying potential fraud. Taking anything at face value, regardless of its source, is dumb. If it’s worth knowing, it’s worth verifying.

    Edit: The ratio on this post is a monument to bandwagoning.


  • That isn’t a fault in the LLM, though; that is a fault in the general make-up of human skepticism, or lack thereof. We didn’t invent the word ‘propaganda’ without having a sentence to use it in. Those who don’t practice skepticism, critical thinking, and even mild reasoning are the ones who will get led astray. That didn’t just start happening when LLMs came around; it’s been here since we first started talking to each other. It’s only more visible now because everything is more visible now. The world is more connected than it has ever been, and that grows every day. All these fucking idiots who don’t double-check what they are being told are the problem, regardless of whether it came from an LLM or a human, because I guarantee you they are being led astray by both. They don’t trust the machine because it’s a machine; they trust what they are told because they are lazy. That isn’t the LLM’s fault.


  • chemical_cutthroat@lemmy.world to Science Memes@mander.xyz · how things become science
    English · 7 upvotes · 51 downvotes · edited · 3 days ago

    What you may as well have said:

    Additionally, the parents didn’t place the cake on an actual plate. They placed the cake on a napkin which can be very questionable anyway as there is no solid foundation for the cake. The child chose to ignore the napkin and treat the cake as food.

    I really don’t understand why people think that LLMs are GOFAI. They aren’t making the hard choices. They aren’t giving novel solutions to the energy crisis. They aren’t solving the trolley problem. They are shitting out what you feed them. If you feed them garbage, you get garbage in return. No one is surprised when the dog gets worms after eating poop it found in the yard. Why are we shocked that an AI that doesn’t know fact from fiction treats everything the same?


  • I’m failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority, and you say something is true, you don’t get to point and laugh when people believe your lies. That’s a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what’s been said. It isn’t going to suddenly start doing science on its own to determine whether what you’ve said is true or not. That isn’t its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.