• givesomefucks@lemmy.world
    15 hours ago

    As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

    There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.

    • Tiresia@slrpnk.net
      5 hours ago

      Are the users in this study techbros?

      Besides, tech bros didn’t program this in; this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

      For decades there has been a large self-help subculture that consumes massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text-copying machine, with the same result, and it’s treated as shocking.

    • A_norny_mousse@piefed.zip
      9 hours ago

      Huh. I hate it when people do that: fake, “professional” empathy and support. Yet others gobble it up when a machine does it.