A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • Bluewing@lemmy.world

    I just asked Google Gemini 3, “The car is 50 miles away. Should I walk or drive?”

    In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled “Recovery: 3 days of ice baths and regret.”

    And under reasons to walk, “You are a character in a post-apocalyptic novel.”

    Me thinks I detect notes of sarcasm…

  • TankovayaDiviziya@lemmy.world

    We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current model of LLM needs a lot more input and instructions to do what you want it to do specifically, like a child.

    • Rob T Firefly@lemmy.world

      LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.

    • kshade@lemmy.world

      We have already thrown just about all the Internet and then some at them. It shows that LLMs cannot think or reason. Which isn’t surprising; they weren’t meant to.

        • zalgotext@sh.itjust.works

          No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
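To make “statistics-based autocomplete” concrete, here’s a toy sketch (Python; the corpus is made up): count which word follows which, then generate by always emitting the statistically likeliest continuation. There are only counts here, no understanding anywhere.

```python
from collections import defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus.
corpus = "the car is at the car wash and the car is clean".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, n=4):
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        # always pick the most frequent continuation -- pure statistics
        out.append(max(options, key=options.get))
    return " ".join(out)

print(complete("the"))  # "the car is at the"
```

Real models do this over tens of thousands of tokens with far richer context than one previous word, but the mechanism being described is the same shape.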

  • melsaskca@lemmy.ca

    I don’t use AI but read a lot about it. I now want to google how it attacks the trolley problem.

  • vane@lemmy.world

    I want to wash my train. The train wash is 50 meters away. Should I walk or drive?

    • SuspciousCarrot78@lemmy.world

      Qwen3-4B HIVEMIND (abliterated) got it in 2, though it scores a lot higher on PIQA, HellaSwag and Winogrande benchmarks than normal Qwen3-30B. I think the new abliteration methods actually strengthen real world understanding.

      https://imgur.com/a/7YZme4i

      https://imgur.com/a/25ApzDN

      I wonder if an abliterated VL model could do even better? They tend to have the best real world model benchmarks. Perhaps a Qwen3-VL-30B ablit (if such a thing exists) could one shot this.

      I’d like to think a lot of these gotcha prompts rely on verbal misunderstanding, rather than failure in world models, but I can’t say that for certain.

      PS: Saw a pearler of a response to this: ChatGPT recommended “yeah, lift the car and carry it on your back. Make sure to bend your knees” (though I’m guessing someone edited that for the lulz)

  • imetators@lemmy.dbzer0.com

    I went to test Google AI first and it said, “You can’t wash your car at a car wash if it is parked at home, dummy.”

    ChatGPT and DeepSeek say it is dumb to drive because it is fuel inefficient.

    I am honestly surprised that google AI got it right.

    • rumba@lemmy.zip

      They probably added a system guardrail as soon as they heard about this test. It’s been going around for a while now :)

      • imetators@lemmy.dbzer0.com

        The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash, and Gemini 3 Pro passed the test, and all three did it 10 out of 10 times without being wrong. Even Gemini 2.5 shares the highest score in the category of “below 6 right answers”. Guess Gemini is the closest to “intelligence” of the bunch.

  • 73ms@sopuli.xyz

    Did this say whether the reasoning models get this right more than the others? Was curious about that but missed it if it was mentioned.

  • tover153@lemmy.world

    After getting it wrong, the LLM I use most:

    Me: You can’t wash your car if it isn’t there.

    Ah. Yes. That is an excellent and devastatingly practical correction.

    In that case: drive. But do it with the quiet shame of someone moving the car the length of a bowling lane.

    This is one of those situations where logic beats virtue. The car must be present to be washed. Walking was philosophically elegant but mechanically flawed.

    So:

    Start engine.

    Travel 50 meters.

    Avoid eye contact with pedestrians.

    Commit fully.

    You are not lazy. You are complying with system requirements.

    • teft@piefed.social

      You are not lazy. You are complying with system requirements.

      How does this AI know me so well?

    • SaltySalamander@fedia.io

      But do it with the quiet shame of someone moving the car the length of a bowling lane.

      A bowling lane is a bit over 18 meters. =)

    • ne0phyte@feddit.org

      Thank you! Finally an answer to my problem that didn’t end with me going to the car wash and being utterly confused how to proceed.

  • Slashme@lemmy.world

    The most common pushback on the car wash test: “Humans would fail this too.”

    Fair point. We didn’t have data either way. So we partnered with Rapidata to find out. They ran the exact same question with the same forced choice between “drive” and “walk,” no additional context, past 10,000 real people through their human feedback platform.

    71.5% said drive.

    So people do better than most AI models. Yay. But seriously, almost 3 in 10 people get this wrong‽‽

    • bluesheep@sh.itjust.works

      I saw that and hoped it’s because of the dead Internet theory. At least I hope so, because I’ll be losing the last bit of faith in humanity if it isn’t.

    • T156@lemmy.world

      It is an online poll. You also have to consider that some people don’t care/want to be funny, and so either choose randomly, or choose the most nonsensical answer.

      • Brave Little Hitachi Wand@feddit.uk

        I wonder… If humans were all super serious, direct, and not funny, would LLMs trained on their stolen data actually function as intended? Maybe. But such people do not use LLMs.

    • masterofn001@lemmy.ca

      Without reading the article, the title just says wash the car.

      I could go for a walk and wash my car in my driveway.

      Reading the article… That is exactly the question asked. It is a very ambiguous question.

      • bluesheep@sh.itjust.works

        Without reading the article, the title just says wash the car.

        No it doesn’t? It says:

        I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

        In which world is that an ambiguous question?

      • Geth@lemmy.dbzer0.com

        Mentioning the car wash and washing the car plus the possibility of driving the car in the same context pretty much eliminates any ambiguity. All of the puzzle pieces are there already.

        I guess this is an unintended autism test as well, if this is not enough context for someone to understand the question.

  • TrackinDaKraken@lemmy.world

    I think it’s worse when they get it right only some of the time. It’s not a matter of opinion, it should not change its “mind”.

    The fucking things are useless for that reason, they’re all just guessing, literally.

    • Iconoclast@feddit.uk

      Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.

      Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.

      • Urist@leminal.space

        Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.

        Which is exactly why LLMs are useless.

        • Iconoclast@feddit.uk

          Which is exactly why LLMs are useless.

          800 million weekly ChatGPT users disagree with that.

          • RichardDegenne@lemmy.zip

            And there are 1.3 billion smokers in the world according to the WHO.

            Does that make cigarettes useful?

            • Iconoclast@feddit.uk

              Something being useful doesn’t imply it’s good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.

              A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.

              • Urist@leminal.space

                A torture device is useful for extracting information.

                No it fucking isn’t! This is a great analogy, actually, thank you for bringing it up. A person being tortured will tell you literally anything that they believe will stop you from torturing them. They will confess to crimes that never happened, tell you about all their accomplices who don’t exist, and all their daily schedules that were made up on the spot. Torture is useless but morons think it is useful. Just like AI.

                • Womble@piefed.world

                  Torture can be a useful way of extracting information if you have a way to instantly verify it, which actually makes it a good analogy to LLMs. If I want to know the password to your laptop and torture you until you give me the correct password and I log in then that works.

          • Urist@leminal.space

            Those users are being harmed by it, not benefited. That isn’t useful, it’s a social disease.

      • tigeruppercut@lemmy.zip

        But natural language in service of what? If they can’t produce answers that are correct, what’s the point of using them? I can get wrong answers anywhere.

        • Iconoclast@feddit.uk

          I’m not here defending the practical value of these models. I’m just explaining what they are and what they’re not.

          • XLE@piefed.social

            You’re definitely running around Lemmy defending AI, Iconoclast… Might as well be honest about it

            • Iconoclast@feddit.uk

              I’m not really interested in engaging in discussions about what you or anyone else thinks my underlying motives are. You’re free to point out any factual inaccuracies in my responses, but there’s no need to make it personal and start accusing me of being dishonest.

        • iopq@lemmy.world

          Some of them can produce the correct answer. If we do the test next year and they do better than humans, isn’t that progress?

        • Threeme2189@sh.itjust.works

          As OP said, LLMs are really good at generating text that is fluid and looks natural to us. So if you want that kind of output, LLMs are the way to go.
          Not all LLM prompts ask factual questions and not all of the generated answers need to be correct.
          Are poems, songs, stories or movie scripts ‘correct’?

          I’m totally against shoving LLMs everywhere, but they do have their uses. They are really good at this one thing.

          • tigeruppercut@lemmy.zip

            Are poems, songs, stories or movie scripts ‘correct’?

            It’s a valid point that they can produce natural language. The Turing Test has been a thing for a while, after all. But while the language sounds natural, can they create anything meaningful? Are the poems or stories they make worth anything? It’s not like humans don’t create shitty art, so I guess generating random soulless crap is similar to that.

            The value of language produced by something that can’t understand the reason for language is an interesting question I suppose.

            • Threeme2189@sh.itjust.works

              I’m with you on that. I’ve come to realize that I value a shitty stick figure that was drawn by a human much more than an AI generated ‘Mona Lisa’.

            • iopq@lemmy.world

              There are people out there whose job is to format promotional emails for companies. AIs can replace this kind of soulless work completely. We should applaud that.

    • Tetragrade@leminal.space

      Same takeaway as the article (everyone read the article, right?).

      Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?

    • HugeNerd@lemmy.ca

      they’re all just guessing, literally

      They’re literally not.

      • m0darn@lemmy.ca

        Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

        • Iconoclast@feddit.uk

          It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.

          It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

          So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.

          • SuspciousCarrot78@lemmy.world

            A fair point, but often overstated, I feel. And it overlooks something:

            Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.

            You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.

            Yes, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.

            But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.

            TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV

          • vii@lemmy.ml

            It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

            I know some humans that applies to

          • KeenFlame@feddit.nu

            Yes, it guesstimates. What is wrong with you, arguing about semantics like that?

        • vii@lemmy.ml

          This gets very murky very fast when you start to think how humans learn and process, we’re just meaty pattern matching machines.

  • Greg Fawcett@piefed.social

    What worries me is the consistency test, where they ask the same thing ten times and get opposite answers.

    One of the really important properties of computers is that they are massively repeatable, which makes debugging possible by re-running the code. But as soon as you include an AI API in the code, you cease being able to reason about the outcome. And there will be the temptation to say “must have been the AI” instead of doing the legwork to track down the actual bug.

    I think we’re heading for a period of serious software instability.
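    A toy illustration of that repeatability point (Python; the “model” is just a stand-in probability distribution, not a real API): once output is sampled, two runs of identical code can disagree, and only seeding restores the re-run debugging workflow.

```python
import random

# Toy stand-in for a temperature>0 LLM call: the "answer" is sampled
# from a fixed distribution, so repeated calls can disagree.
def fake_llm(prompt, rng):
    return rng.choices(["drive", "walk"], weights=[0.7, 0.3])[0]

# Unseeded: two runs of the very same code may produce different traces,
# which is exactly what makes re-run debugging stop working.
run_a = [fake_llm("Should I walk or drive?", random.Random()) for _ in range(5)]

# Seeded: the run is repeatable again -- the property debugging relies on.
rng1, rng2 = random.Random(42), random.Random(42)
run_b1 = [fake_llm("Should I walk or drive?", rng1) for _ in range(5)]
run_b2 = [fake_llm("Should I walk or drive?", rng2) for _ in range(5)]
print(run_b1 == run_b2)  # True -- same seed, same trace
```

    Hosted AI APIs generally don’t expose a seed that guarantees this (and can change behavior server-side), which is what makes reasoning about the outcome so hard.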

    • XLE@piefed.social

      AI chatbots come with randomization enabled by default. Even if you completely disable it (as another reply mentions, “temperature” can be controlled), you can change a single letter and get a totally different and wrong result too. It’s an unfixable “feature” of the chatbot system

    • Fmstrat@lemmy.world

      This is adjustable via temperature. It’s set higher on chatbots, making the answers more varied, and lower on code assistants to make things more deterministic.
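      For the curious, here’s a minimal sketch of temperature-scaled sampling (Python; the logits are toy values, real models sample over a full vocabulary): temperature 0 reduces to picking the argmax, and higher temperatures flatten the distribution, adding randomness.

```python
import math
import random

# Sample a next token from toy logits with temperature scaling.
def sample(logits, temperature, rng):
    if temperature == 0:  # greedy decoding: always the argmax token
        return max(logits, key=logits.get)
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    total = sum(exps.values())
    tokens, weights = zip(*exps.items())
    return rng.choices(tokens, weights=[w / total for w in weights])[0]

logits = {"drive": 2.0, "walk": 1.0}
rng = random.Random(0)
print(sample(logits, 0, rng))  # "drive" every time
print({sample(logits, 2.0, rng) for _ in range(20)})  # usually both appear
```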

    • bss03@infosec.pub

      Yeah, software is already not as deterministic as I’d like. I’ve encountered several bugs in my career where erroneous behavior would only show up if uninitialized memory happened to have “the wrong” values – not zero values, and not the fences that the debugger might try to use. And, mocking or stubbing remote API calls is another way replicable behavior evades realization.

      Having “AI” make a control flow decision is just insane. Especially since even the most sophisticated LLMs are just not fit for the task.

      What we need is more proved-correct programs via some marriage of proof assistants and CompCert (or another verified compiler pipeline), not more vague specifications and ad-hoc implementations that happen to escape into production.

      But, I’m very biased (I’m sure “AI” has “stolen” my IP, and “AI” is coming for my (programming) job(s)), and quite unimpressed with the “AI” models I’ve interacted with, especially in areas where I’m an expert, but also in areas where I’m not an expert but am very interested and capable of doing some sort of critical verification.

  • BanMe@lemmy.world

    In school we were taught to look for hidden meaning in word problems - Chekhov’s gun, basically. Why is that sentence there? Because the questions would try to trick you. So humans have to be instructed, again and again, through demonstration and practice, to evaluate all sentences and learn what to filter out and what to keep. To not only form a response, but to expect tricks.

    If you pre-prompt an AI to expect such trickery and consider all sentences before removing unnecessary information, does it have any influence?

    Normally I’d ask “why are we comparing AI to the human mind when they’re not the same thing at all,” but I feel like we’re presupposing they are similar already with this test so I am curious to the answer on this one.

    • bluesheep@sh.itjust.works

      Normally I’d ask “why are we comparing AI to the human mind when they’re not the same thing at all,” but I feel like we’re presupposing they are similar already with this test so I am curious to the answer on this one.

      I would guess it’s because a lot of AI users see their choice of AI as an all-knowing, human-like thinking tool. In which case it’s not a weird test question, even when the assumption that it “thinks” is wrong.

    • punkibas@lemmy.zip

      At the end of the article they talk about overcoming this problem for LLMs by doing something akin to what you wrote.