• jballs@sh.itjust.works · +4 · 2 hours ago

    Not sure why, but this image wasn’t showing for me in Voyager or when I tried to open it on the web. I was able to get a thumbnail loaded in Firefox, so here’s what it says in case anyone else is having the same problem.

    The dumbest person you know is currently being told “You’re absolutely right!” by ChatGPT.

  • higgsboson@piefed.social · +15 · edited · 6 hours ago

    lol. Nope… I live in MAGA country. The dumbest person I know hasn’t a clue what ChatGPT even is. Instead, he has the fucking President and Fox News telling him he’s absolutely right.

  • Jiggle_Physics@sh.itjust.works · +11 · 7 hours ago

    The dumbest people I know have been told that a large portion of their dumbest thoughts and ideas are correct for 30-79 years now.

  • qevlarr@lemmy.world · +7 · edited · 4 hours ago

    My kid, the other day

    Let’s play chess, I’ll play white

    Alright, make your first move

    Qxe7# I win

    Ahh, you got me!

    It was harmless, but I expected ChatGPT to at least acknowledge that this isn’t how any of this works.

  • memfree@piefed.social · +7 · 8 hours ago

    Nope, the dumbest people I know have no idea how to find plain ChatGPT. They can get to Gemini, but can only imagine asking it questions.

  • GottaHaveFaith@fedia.io · +8/−1 · 8 hours ago

    Recently had a smart friend say something like “Gemini told me so.” I have to say, I lost some respect ;p

  • berty@feddit.org · +6 · 9 hours ago

    True. South Park had a great episode on ChatGPT recently. “She is kissing your ass!”

  • WanderingThoughts@europe.pub · +9 · edited · 10 hours ago

    You can tell it to switch that off permanently with custom instructions. It makes the thing a whole lot easier to deal with. Of course, that would be bad for engagement, so they’re not going to do that by default.
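    For anyone curious what “custom instructions” amount to mechanically: they are essentially a standing system message prepended to every conversation. A minimal sketch, assuming you’re calling a chat-style API yourself (the instruction wording below is made up, not OpenAI’s actual default):

```python
# Sketch: custom instructions as a system message prepended to each chat.
# The wording of the instructions here is an assumption for illustration.
def build_messages(user_prompt: str) -> list[dict]:
    custom_instructions = (
        "Do not open replies with flattery or filler praise such as "
        "'Great question!' or 'You're absolutely right!'. "
        "Answer plainly and correct me directly when I'm wrong."
    )
    return [
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Why is my loop off by one?")
```

    The model is still free to ignore the system message (as the reply below describes), but starting every session with it is the whole trick.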

    • BenVimes@lemmy.ca · +6 · 7 hours ago

      You can, but in my experience it is resistant to custom instructions.

      I spent an evening messing around with ChatGPT once, and fairly early on I gave it special instructions via the options menu to stop being sycophantic, among other things. It ignored those instructions for the next dozen or so prompts, even though I followed up every response with a reminder. It finally came around after a few more prompts, by which point I was bored of it, and feeling a bit guilty over the acres of rainforest I had already burned down.

      I don’t discount user error on my part, particularly that I may have asked too much at once, as I wanted it to dramatically alter its output with my customizations. But it’s still a computer, and I don’t think it was unreasonable to expect it to follow instructions the first time. Isn’t that what computers are supposed to be known for, unfailingly following instructions?

    • AbsolutelyClawless@piefed.social · +9 · 8 hours ago

      I sometimes use ChatGPT when I’m stuck troubleshooting an issue. I had to do exactly this because it became extremely annoying when I corrected it for giving me incorrect information and it would still be “sucking up” to me with “Nice catch!” and “You’re absolutely right!”. The fact that an average person doesn’t find that creepy, unflattering and/or annoying is the real scary part.

      • merc@sh.itjust.works · +2 · 23 minutes ago

        Just don’t think that turning off the sycophancy improves the quality of the responses. It’s still just responding to your questions with essentially “what would a plausible answer to this question look like?”

    • Joeffect@lemmy.world · +3 · 11 hours ago

      It’s just how the current chat model works… it basically agrees and makes you feel good… it’s really annoying.