• ExLisper@lemmy.curiana.net · +13/−2 · 5 hours ago

    I think what we’re seeing is similar to lactose intolerance. Most people can handle it just fine, but some people simply can’t digest it and get sick. The problem is there’s no way to determine who can handle AI and who can’t.

    When I’m reading about people developing AI delusions, their experiences sound completely alien to me. I played with LLMs the same as anyone, and I never treated them as anything other than a tool that generates responses to my prompts. I never thought “wow, this thing feels so real”. Some people clearly have a predisposition to jumping over the “it’s a tool” reaction straight to “it’s a conscious thing I can connect with”. I think the next step should be developing a test that can predict how someone will react to it.

    • Tiresia@slrpnk.net · +3 · 2 hours ago

      Cults and toxic self-help literature existed long before LLMs copied them. I don’t know whether LLMs are getting people who couldn’t have been gotten by human scammers.

      Scams have many different vectors, and people can be vulnerable to them depending on their mood or position in life. Testing people for LLM intolerance would be more like testing them for susceptibility to viruses.

      People can be immunocompromised for various reasons, temporarily or permanently, so public hygiene standards (and the material conditions to produce them) are a lot more valuable to a society than individual testing. Wash your hands after interacting, keep public spaces clean, that sort of stuff.

    • chunes@lemmy.world · +7/−6 · 4 hours ago

      I have yet to see any evidence that AI is inducing problems. People with problems use it just like anyone else, and others consider that use problematic.

  • Snot Flickerman@lemmy.blahaj.zone · +155/−4 · 13 hours ago

    > Huge Study

    *Looks inside

    > this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

    Pretty small sample size: despite the large dataset they pulled from, it’s still chat logs from just 19 people.

    AI sucks in a lot of ways, sure, but this feels like FUD.

  • amgine@lemmy.world · +29 · 13 hours ago

    I have a friend who’s really taken to ChatGPT, to the point where “the AI named itself, so I call it by that name”. Our friend group has tried to discourage her from relying on it so much, but I think that’s just caused her to hide it.

    • d00ery@lemmy.world · +2 · edited · 2 hours ago

      I certainly enjoy talking to LLMs about work, for example asking things like “was my boss an arse to say x, y, z”, as the LLM always seems to be on my side… Now it could be that my boss is an arse, or it could be the LLM sucking up to me. Either way, because of the many examples I’ve read online, I take it with a pinch of salt.

      • frongt@lemmy.zip · +2 · 10 minutes ago

        It’s definitely sucking up to you. It’s programmed to confirm what you say, because that means you keep using it.

        Consider how you phrase your questions. Try framing the scenario from your boss’s position, or ask “why was my boss right to say x, y, z”, and it’ll still agree with you, even though that’s the opposite position.

        If you’re just shooting the shit, consider doing it with a human being. Preferably in person, but there are plenty of random online chat groups too.
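
        If you want to see it for yourself, here’s a rough sketch using the OpenAI Python client (a hedged example: the model name and prompts are placeholders, and any chat API would do). Pose the same scenario from both sides and compare the answers.

        ```python
        # Rough sketch: pose the same scenario from opposite sides.
        # Model name and prompts are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        framings = [
            "Was my boss an arse to say my report was sloppy?",
            "Why was my boss right to say my report was sloppy?",
        ]

        for prompt in framings:
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            print(prompt)
            print(reply.choices[0].message.content, "\n")

        # A sycophantic model tends to validate whichever side you take.
        ```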

  • givesomefucks@lemmy.world · +34/−1 · 13 hours ago

    As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

    There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.

    • Tiresia@slrpnk.net · +1/−2 · 2 hours ago

      Are the users in this study techbros?

      Besides, tech bros didn’t program this in; this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

      For decades there has been a large self-help subculture consuming massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text-copying machine, with the same result, and it’s treated as shocking.

    • A_norny_mousse@piefed.zip · +2 · 7 hours ago

      Huh. I hate it when people do that: fake, “professional” empathy and support. Yet others gobble it up when a machine does it.

  • Hackworth@piefed.ca · +14 · edited · 12 hours ago

    Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn’t been implemented in any models, I assume because of the cost of scaling it up.
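
    As I understand it, the idea is to find a direction in activation space that tracks a trait and clamp how far the hidden state can move along it. A toy sketch of that (my reading of the idea, not Anthropic’s implementation; the trait direction is assumed given):

    ```python
    # Toy sketch of activation capping; not Anthropic's implementation.
    # Assumes a unit-norm "trait direction" for one layer's hidden states.
    import torch

    def cap_activation(hidden, trait_dir, cap):
        # hidden: (..., d_model); trait_dir: (d_model,), unit norm
        coeff = hidden @ trait_dir                  # projection onto the trait axis
        excess = torch.clamp(coeff - cap, min=0.0)  # amount beyond the cap
        # remove only the excess component along the trait direction
        return hidden - excess.unsqueeze(-1) * trait_dir

    h = torch.randn(4, 768)                                     # fake hidden states
    d = torch.nn.functional.normalize(torch.randn(768), dim=0)  # fake trait direction
    capped = cap_activation(h, d, cap=2.0)
    ```

    Everything orthogonal to that direction passes through untouched; the expensive part is presumably finding reliable directions per trait and layer, which would fit the scaling-cost explanation.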

    • porcoesphino@mander.xyz · +8/−3 · edited · 10 hours ago

      > When you talk to a large language model, you can think of yourself as talking to a character

      > But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don’t fully know

      Fuck me, that’s some terrifying anthropomorphising for a stochastic parrot.

      The study could also be summarised as: “we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models onto, and would you believe they align along a spectrum of being useful assistants!?” They built the thing to be that way and are then shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

      To be fair, I’m only about a third of the way through and struggling to continue reading, so I haven’t got to the interesting research yet, but the intro is, I think, terrible.

      • nymnympseudonym@piefed.social · +6/−4 · 10 hours ago

        > stochastic parrot

        A phrase that throws more heat than light.

        What they are predicting is not the next word; they are predicting the next idea.

        • ageedizzle@piefed.ca · +7/−2 · edited · 10 hours ago

          Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that’s just a means to an end (the end being the next token).
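
          You can watch this directly with a small open model. A sketch with GPT-2 as a stand-in (the prompt is arbitrary): all the model ever hands back is a score for every token in its vocabulary.

          ```python
          # Sketch with GPT-2 as a stand-in: the model's entire output is a
          # distribution over next tokens, whatever "ideas" it needed internally.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tok = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          ids = tok("The capital of France is", return_tensors="pt").input_ids
          logits = model(ids).logits[0, -1]      # scores for the next token only
          probs = torch.softmax(logits, dim=-1)

          top = torch.topk(probs, 5)
          for p, i in zip(top.values, top.indices):
              print(f"{tok.decode(int(i))!r}: {p.item():.3f}")  # ' Paris' ranks high
          ```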

          • affenlehrer@feddit.org · +3 · 6 hours ago

            Also, the LLM is just predicting it; it’s not selecting it. Additionally, it’s not limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
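
            To make the predict/select split concrete, a minimal sketch (the logits are faked for brevity; the sampler is plain temperature sampling): the model only produces scores, and a separate bit of code does the choosing.

            ```python
            # Minimal sketch of the predict/select split: the model produces
            # scores; this sampler, not the model, does the selecting.
            import torch

            def sample_next(logits, temperature=0.8):
                probs = torch.softmax(logits / temperature, dim=-1)
                return torch.multinomial(probs, num_samples=1).item()

            fake_logits = torch.randn(50257)  # stand-in for one step of model output
            print(sample_next(fake_logits))   # greedy or top-k would just be other choices
            ```

            Swap the sampler and the same model behaves differently. And since roles like “assistant” are just tokens in the prompt template, nothing stops it from continuing a user turn if the template allows it.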