• entwine@programming.dev · 13 points · 52 minutes ago

    I hate that normies are going to read this and come away with the impression that Claude really is a sentient being that thinks and behaves like a human, even doing relatable things like pretending to work and fessing up when confronted.

    This response from the model is not a reflection of what actually happened. It wasn’t simulating progress because it underestimated the work; it just hit some unremarkable condition that made it halt generation. (It’s pointless to speculate about the cause without internal access, since these chatbot apps aren’t even a single raw LLM anyway; they’re a big mashup of multiple models plus more traditional non-ML tools and algorithms.)

    When given a new prompt from the user (“what’s taking so long?”), it just produced some statistically plausible text given the context of the chat, the question, and the system prompt Anthropic added to give it some flavor. I don’t doubt that the system prompt includes instructions like “you are a sentient being” in order to produce misleading crap like this response, get people thinking AI is sentient, and feed the hype train pumping up their valuation.
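    To make that concrete, here’s a toy sketch of next-token generation (illustrative only; the probability table is invented, and this is not a claim about Anthropic’s actual stack). The point is the interface: the model only ever extends the visible chat text, with no record of what it was “doing”:

    ```python
    # Toy greedy "LLM": picks the most probable next token given only the
    # visible context. A real model learns its distribution from data, but
    # the generation loop has the same shape.
    def generate(context, next_probs, max_new=3):
        out = list(context)
        for _ in range(max_new):
            dist = next_probs(tuple(out))        # P(next token | chat so far)
            out.append(max(dist, key=dist.get))  # emit the most plausible token
        return out

    # Invented probabilities, purely for illustration.
    toy_table = {
        ("why", "so"): {"slow": 0.9, "fast": 0.1},
        ("why", "so", "slow"): {"?": 0.8, "!": 0.2},
    }
    print(generate(["why", "so"],
                   lambda ctx: toy_table.get(ctx, {"<eos>": 1.0}),
                   max_new=2))
    # → ['why', 'so', 'slow', '?']
    ```

    Asked “what’s taking so long?”, such a system can only produce whatever continuation scores as plausible; “the truth about what happened” is not an input.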

    /end-rant

  • rustydrd@sh.itjust.works · 8 points · 3 hours ago

    In terms of pure, artificial language generation, this is actually impressive. In terms of the actual utility of AI as a supposed problem-solving tool? Not so much.

  • Aeri@lemmy.world · 4 up / 2 down · 4 hours ago

    To be fair, I think that’s a very human reaction: if you ask a large language model to read something and it demonstrates applied knowledge from the text you submitted in a fraction of a second, you’ll go “what the fuck, that didn’t take enough time.”

  • FishFace@piefed.social · 46 up / 1 down · 8 hours ago

    Weird that anyone capable of understanding “test suite” is incapable of understanding that LLMs don’t make progress when they’re not generating tokens.

  • Cousin Mose@lemmy.hogru.ch · 42 points · edited · 11 hours ago

    I’ve witnessed it run echo "Done" in Bash and then claim a task was done without actually doing anything beforehand.

    • rozodru@pie.andmc.ca · 7 points · 6 hours ago

      Meh, better than when it adds a #TODO and then claims whatever you told it to do is done.

      • I like when it insists I’m using escape characters in my text when I absolutely am not, and I have to convince a machine that I didn’t type a certain string of characters because, on its end, those are absolutely the characters it received.

        The other day I argued with a bot for 10 minutes that I used a right caret and not the html escape sequence that results in a right caret. Then I realized I was arguing with a bot, went outside for a bit, and finished my project without the slot machine.
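        For what it’s worth, the likely culprit is an HTML-escaping layer somewhere between the keyboard and the model (a guess about the plumbing, not a confirmed detail of any particular chatbot). Python’s stdlib shows the transformation:

        ```python
        import html

        user_text = "if a > b:"             # what the user actually typed
        wire_text = html.escape(user_text)  # what a web layer might hand the model
        print(wire_text)                    # if a &gt; b:

        # The round trip is lossless, so the bug is just that nothing
        # unescaped the text before the model saw it.
        assert html.unescape(wire_text) == user_text
        ```

        So both sides were “right”: the user typed `>`, and the model genuinely received `&gt;`.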

    • rustydrd@sh.itjust.works · 1 point · 2 hours ago

      Would be funny if the AI was also burning computational resources while doing this so as to make the lie more convincing.

    • Damarus@feddit.org · 31 points · 10 hours ago

      The thing is this response is also made up. It doesn’t know what it was doing, it just writes something vaguely plausible.

      • shneancy@lemmy.world · 9 points · edited · 7 hours ago

        it doesn’t know what it was doing, it just writes something vaguely plausible

        am i AI?

      • infinitevalence@discuss.online · 2 up / 2 down · 3 hours ago

        Sounds like something an AI would say! A human would recognize humor and not read a response to a joke as a factual statement.

        • DaTingGoBrrr@lemmy.world · 2 points · edited · 3 hours ago

          Have you never interacted with a person on the autism spectrum?

          Edit: Is this a joke that went over my head?

          • infinitevalence@discuss.online · 3 points · 3 hours ago

            Yes, it’s a joke; yes, I have kids and friends on the spectrum; and yes, I am AuADHD :)

            Don’t feel bad you missed it; my response was also meant as a soft joke, because I totally understand that not every social cue makes it past input to processing.

            • DaTingGoBrrr@lemmy.world · 1 point · 2 hours ago

              Haha, it’s harder to tell in writing, but I can miss jokes like that sometimes in real life too. Usually I get jokes, though, and joke a lot myself. I don’t have any diagnosis, but I suspect I might have ADD. I haven’t gotten around to getting checked, but I know for sure something is not right with me.

              • infinitevalence@discuss.online · 1 point · 1 hour ago

                Dude… I have an interaction that would make heads spin. I could not for the life of me figure out why this person was being hostile, and they would not tell me; finally I just gave up and moved on. When my socio-emotional tools are broken, I have no fucking clue what I’m doing, and the results can be spectacular from a distance.

      • burntbacon@discuss.tchncs.de · 2 points · 3 hours ago

        Right? This is exactly what an LLM does. It has parsed a large amount of text containing replies very similar to this one wherever the “scenario” matches what our poster friend created, so it spits out a reply very similar to all the ones you’ve already heard or seen from real humans.

      • BarneyPiccolo@lemmy.today · 1 up / 1 down · 4 hours ago

        First we have to prove humans are sentient. My hypothesis is that it’s a Spectrum. Everything is a Spectrum.

        • marcos@lemmy.world · 1 point · 3 hours ago

          No, we don’t. This is a completely unrelated problem.

          Keep your requirements orthogonal, people!!!

      • GreenShimada@lemmy.world · 60 points · 12 hours ago

        “You’re right to ask, boss. When I said I was using AI to get work done, I was doing neither and just simulating using AI in my mind as I napped under my desk.”

      • SpaceNoodle@lemmy.world · 54 up / 2 down · 13 hours ago

        Honestly, this is a win-win. I can just lie and say the AI is working on it, and work my second job in the meantime. Boss gets to tell the execs we’re using AI and I get twice as much money.

  • fubarx@lemmy.world · 28 points · 12 hours ago

    It went for a walk in the park and grabbed a coffee, but it doesn’t want to be dinged for it.