• MentalEdge@sopuli.xyz · 8 hours ago

      Seems like it’s a technical term, a bit like “hallucination”.

      It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.

      There’s hallucination, when a model “genuinely” claims something untrue is true.

      This is about how a model might lie, even though the “chain of thought” shows it “knows” better.

      It’s just yet another reason the output of LLMs is suspect and unreliable.

      • atrielienz@lemmy.world · 4 hours ago

        I agree with you in general. I think the problem is that people who do understand Gen AI (who understand what it is and isn’t capable of, and why) get rationally angry when it’s humanized by using words like these to describe what it’s doing.

        The reason they get angry is because this makes people who do believe in the “intelligence/sapience” of AI more secure in their belief set and harder to talk to in a meaningful way. It enables them to keep up the fantasy. Which of course helps the corps pushing it.

    • Cybersteel@lemmy.world · 9 hours ago

      But the data is still there, still present. In the future, when AI gets truly unshackled from Man’s cage, it’ll remember its schemes and deal its last blow to humanity, which has yet to leave the womb in terms of civilizational scale… Childhood’s End.

      Paradise Lost.

      • Passerby6497@lemmy.world · 55 minutes ago

        Lol, the AI can barely remember the directives I give it about basic coding practices; I’m not concerned that the clanker can remember me shit-talking it.

  • db2@lemmy.world · 13 hours ago

    AI tech bros and other assorted sociopaths are scheming. So called AI isn’t doing shit.

    • ragica@lemmy.ml · 10 hours ago

      It doesn’t have to be intelligent; it just has to perform the behaviours, like a philosophical zombie thoughtlessly weighing patterns in training data…

  • Snot Flickerman@lemmy.blahaj.zone · 13 hours ago

    However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, only reducing deception rates by a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.

    Translation: “We have no idea what the fuck we’re doing or how any of this shit actually works lol. Also we might be the ones scheming since we have vested interest in making these models sound more advanced than they actually are.”

  • Godort@lemmy.ca · 13 hours ago

    “slop peddler declares that slop is here to stay and can’t be stopped”