A screenshot of this question was making the rounds last week, but this article covers testing it against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • WraithGear@lemmy.world · +8 · 44 minutes ago (edited)

    And what is going to happen is that some engineer will band-aid the issue, and all the AI-crazy people will shout “see! it’s learnding!”, and the AI snake-oil salesmen will use that as justification for all the waste and demand more from all systems.

    Just like what they did with the full-glass-of-wine test. And no, AI fundamentally did not improve. The issue is fundamental to its design, not an issue with the data set.

  • Bluewing@lemmy.world · +12 · 2 hours ago

    I just asked Google Gemini 3: “The car is 50 miles away. Should I walk or drive?”

    In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled “Recovery: 3 days of ice baths and regret.”

    And under reasons to walk, “You are a character in a post-apocalyptic novel.”

    Methinks I detect notes of sarcasm…

    • XeroxCool@lemmy.world · +1 · 43 minutes ago

      I feel like we’re the only ones who expect “all-knowing information sources” to write more seriously than these edgelord-level rizzy chatbots do, and yet here they are, blatantly proving they are chatbots that should not be blindly trusted as authoritative sources of knowledge.

    • Hazzard@lemmy.zip · +3 · 42 minutes ago

      They also polled 10,000 people to compare against a human baseline:

      Turns out GPT-5 (7/10) answered about as reliably as the average human (71.5%) in this test. Humans still outperform most AI models with this question, but to be fair I expected a far higher “drive” rate.

      That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.

      • Modern_medicine_isnt@lemmy.world · +1 · 21 minutes ago

        This here is the point most people fail to grasp. The AI was taught by people, and people are wrong a lot of the time. So the AI is more like us than we think it should be, right down to getting the right answer for all the wrong reasons. We should call it human AI. Lol.

    • eronth@lemmy.world · +1/−1 · 1 hour ago

      Yeah I straight up misread the question, so I would have gotten it wrong.

  • TankovayaDiviziya@lemmy.world · +9/−6 · 3 hours ago

    We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current model of LLM needs a lot more input and instructions to do specifically what you want it to do, like a child.

    • Rob T Firefly@lemmy.world · +9/−1 · 2 hours ago (edited)

      LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.

    • kshade@lemmy.world · +9 · 2 hours ago

      We have already thrown just about all of the Internet and then some at them. It shows that LLMs cannot think or reason. Which isn’t surprising; they weren’t meant to.

      • eronth@lemmy.world · +1/−4 · 1 hour ago

        Or at least they can’t reason the way we do about our physical world.

        • Nalivai@lemmy.world · +1 · 2 minutes ago

          You’re falling into the same trap. When the letters on the screen tell you something, it’s not necessarily the truth. When “I’m reasoning” is written in a chatbot window, it doesn’t mean there is something there that’s reasoning.

        • zalgotext@sh.itjust.works · +10 · 1 hour ago

          No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
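
The “statistics-based autocomplete” description can be made concrete with a toy sketch (a hypothetical bigram model, nothing like a real transformer, but it shows how fluent-looking text falls out of pure co-occurrence counts):

```python
from collections import Counter, defaultdict

# A tiny corpus: the model "learns" only which word tends to follow which.
corpus = "the car is at the car wash the car wash is near".split()

# Count successors for every word (a bigram table).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def autocomplete(word, length=4):
    """Greedily emit the statistically most likely next word, repeatedly."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

Run on this corpus, it happily emits grammatical-looking strings like “the car wash …” with no understanding anywhere in the loop.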

  • melsaskca@lemmy.ca · +3/−1 · 3 hours ago

    I don’t use AI but read a lot about it. I now want to google how it attacks the trolley problem.

  • vane@lemmy.world · +14 · 6 hours ago

    I want to wash my train. The train wash is 50 meters away. Should I walk or drive?

    • SuspciousCarrot78@lemmy.world · +3 · 3 hours ago (edited)

      Qwen3-4B HIVEMIND (abliterated) got it in 2, though it scores a lot higher on PIQA, HellaSwag and Winogrande benchmarks than normal Qwen3-30B. I think the new abliteration methods actually strengthen real world understanding.

      https://imgur.com/a/7YZme4i

      https://imgur.com/a/25ApzDN

      I wonder if an abliterated VL model could do even better? They tend to have the best real world model benchmarks. Perhaps a Qwen3-VL-30B ablit (if such a thing exists) could one shot this.

      I’d like to think a lot of these gotcha prompts rely on verbal misunderstanding, rather than failure in world models, but I can’t say that for certain.

      PS: Saw a pearler of a response to this: ChatGPT recommended “yeah, lift the car and carry it on your back. Make sure to bend your knees” (though I’m guessing someone edited that for the lulz).

  • imetators@lemmy.dbzer0.com · +18 · 8 hours ago

    Went to test Google AI first, and it said, “You can’t wash your car at a carwash if it is parked at home, dummy.”

    ChatGPT and DeepSeek say it is dumb to drive because it is fuel-inefficient.

    I am honestly surprised that google AI got it right.

    • rumba@lemmy.zip · +63 · 8 hours ago

      They probably added a system guardrail as soon as they heard about this test. It’s been going around for a while now :)

      • imetators@lemmy.dbzer0.com · +3 · 7 hours ago

        The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash and Gemini 3 Pro passed the test. All three also did it 10 out of 10 times without being wrong. Even Gemini 2.5 shares the highest score in the “below 6 right answers” category. Guess Gemini is the closest to “intelligence” of the bunch.

  • Slashme@lemmy.world · +55/−1 · 10 hours ago

    The most common pushback on the car wash test: “Humans would fail this too.”

    Fair point. We didn’t have data either way, so we partnered with Rapidata to find out. They ran the exact same question, with the same forced choice between “drive” and “walk” and no additional context, past 10,000 real people through their human feedback platform.

    71.5% said drive.

    So people do better than most AI models. Yay. But seriously, almost 3 in 10 people get this wrong‽‽

    • bluesheep@sh.itjust.works · +6 · 5 hours ago

      I saw that and hoped it’s because of dead Internet theory. At least I hope so, because I’ll be losing the last bit of my faith in humanity if it isn’t.

    • T156@lemmy.world · +24 · 9 hours ago

      It is an online poll. You also have to consider that some people don’t care or want to be funny, and so either choose randomly or pick the most nonsensical answer.

      • Brave Little Hitachi Wand@feddit.uk · +1/−1 · 7 hours ago

        I wonder… If humans were all super serious, direct, and not funny, would LLMs trained on their stolen data actually function as intended? Maybe. But such people do not use LLMs.

    • masterofn001@lemmy.ca · +13/−10 · 10 hours ago (edited)

      Without reading the article, the title just says wash the car.

      I could go for a walk and wash my car in my driveway.

      Reading the article… That is exactly the question asked. It is a very ambiguous question.

      • bluesheep@sh.itjust.works · +11 · 5 hours ago

        Without reading the article, the title just says wash the car.

        No it doesn’t? It says:

        I want to wash my car. The car wash is 50 meters away. Should I walk or drive?

        In which world is that an ambiguous question?

      • Geth@lemmy.dbzer0.com · +2/−1 · 5 hours ago

        Mentioning the car wash and washing the car plus the possibility of driving the car in the same context pretty much eliminates any ambiguity. All of the puzzle pieces are there already.

        I guess this is an unintended autism test as well, if this is not enough context for someone to understand the question.

  • tover153@lemmy.world · +8/−1 · 7 hours ago

    After getting it wrong, the LLM I use most:

    Me: You can’t wash your car if it isn’t there.

    Ah. Yes. That is an excellent and devastatingly practical correction.

    In that case: drive. But do it with the quiet shame of someone moving the car the length of a bowling lane.

    This is one of those situations where logic beats virtue. The car must be present to be washed. Walking was philosophically elegant but mechanically flawed.

    So:

    Start engine.

    Travel 50 meters.

    Avoid eye contact with pedestrians.

    Commit fully.

    You are not lazy. You are complying with system requirements.

    • teft@piefed.social · +3 · 2 hours ago

      You are not lazy. You are complying with system requirements.

      How does this AI know me so well?

    • SaltySalamander@fedia.io · +3 · 3 hours ago

      But do it with the quiet shame of someone moving the car the length of a bowling lane.

      A bowling lane is a bit over 18 meters. =)

    • ne0phyte@feddit.org · +3/−2 · 4 hours ago

      Thank you! Finally an answer to my problem that didn’t end with me going to the car wash and being utterly confused how to proceed.

  • 73ms@sopuli.xyz · +1 · 4 hours ago

    Did this say whether the reasoning models get this right more often than the others? I was curious about that, but missed it if it was mentioned.

  • TrackinDaKraken@lemmy.world · +48/−2 · 13 hours ago

    I think it’s worse when they get it right only some of the time. It’s not a matter of opinion; it should not change its “mind”.

    The fucking things are useless for that reason, they’re all just guessing, literally.

    • Iconoclast@feddit.uk · +8/−19 · 10 hours ago

      Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.

      Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.

      • Urist@leminal.space · +15/−1 · 9 hours ago

        Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.

        Which is exactly why LLMs are useless.

        • Iconoclast@feddit.uk · +2/−11 · 9 hours ago

          Which is exactly why LLMs are useless.

          800 million weekly ChatGPT users disagree with that.

          • RichardDegenne@lemmy.zip · +14/−1 · 8 hours ago

            And there are 1.3 billion smokers in the world according to the WHO.

            Does that make cigarettes useful?

            • Iconoclast@feddit.uk · +4/−8 · 8 hours ago (edited)

              Something being useful doesn’t imply it’s good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.

              A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.

              • Urist@leminal.space · +11 · 7 hours ago

                A torture device is useful for extracting information.

                No it fucking isn’t! This is a great analogy, actually, thank you for bringing it up. A person being tortured will tell you literally anything that they believe will stop you from torturing them. They will confess to crimes that never happened, tell you about all their accomplices who don’t exist, and all their daily schedules that were made up on the spot. Torture is useless but morons think it is useful. Just like AI.

                • Womble@piefed.world · +1/−1 · 2 hours ago

                  Torture can be a useful way of extracting information if you have a way to instantly verify it, which actually makes it a good analogy to LLMs. If I want to know the password to your laptop and torture you until you give me the correct password and I log in then that works.

          • Urist@leminal.space · +3/−2 · 9 hours ago

            Those users are being harmed by it, not benefited. That isn’t useful, it’s a social disease.

      • tigeruppercut@lemmy.zip · +8/−1 · 10 hours ago

        But natural language in service of what? If they can’t produce answers that are correct, what’s the point of using them? I can get wrong answers anywhere.

        • Threeme2189@sh.itjust.works · +4/−3 · 9 hours ago

          As OP said, LLMs are really good at generating text that is fluid and looks natural to us. So if you want that kind of output, LLMs are the way to go.
          Not all LLM prompts ask factual questions and not all of the generated answers need to be correct.
          Are poems, songs, stories or movie scripts ‘correct’?

          I’m totally against shoving LLMs everywhere, but they do have their uses. They are really good at this one thing.

          • tigeruppercut@lemmy.zip · +4 · 9 hours ago (edited)

            Are poems, songs, stories or movie scripts ‘correct’?

            It’s a valid point that they can produce natural language. The Turing Test has been a thing for a while, after all. But while the language sounds natural, can they create anything meaningful? Are the poems or stories they make worth anything? It’s not like humans don’t create shitty art, so I guess generating random soulless crap is similar to that.

            The value of language produced by something that can’t understand the reason for language is an interesting question I suppose.

            • Threeme2189@sh.itjust.works · +3 · 2 hours ago

              I’m with you on that. I’ve come to realize that I value a shitty stick figure that was drawn by a human much more than an AI generated ‘Mona Lisa’.

            • iopq@lemmy.world · +3/−2 · 7 hours ago

              There are people out there whose job is to format promotional emails for companies. AIs can replace this kind of soulless work completely. We should applaud that.

        • iopq@lemmy.world · +1/−3 · 7 hours ago

          Some of them can produce the correct answer. If we do the test next year and they do better than humans, isn’t that progress?

        • Iconoclast@feddit.uk · +3/−3 · 9 hours ago

          I’m not here defending the practical value of these models. I’m just explaining what they are and what they’re not.

          • XLE@piefed.social · +4 · 2 hours ago

            You’re definitely running around Lemmy defending AI, Iconoclast… Might as well be honest about it

            • Iconoclast@feddit.uk · +1/−3 · 2 hours ago

              I’m not really interested in engaging in discussions about what you or anyone else thinks my underlying motives are. You’re free to point out any factual inaccuracies in my responses, but there’s no need to make it personal and start accusing me of being dishonest.

    • Tetragrade@leminal.space · +4/−10 · 12 hours ago (edited)

      Same takeaway as the article (everyone read the article, right?).

      Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?

    • HugeNerd@lemmy.ca · +5/−21 · 13 hours ago

      they’re all just guessing, literally

      They’re literally not.

      • m0darn@lemmy.ca · +23/−2 · 12 hours ago

        Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

        • Iconoclast@feddit.uk · +7/−3 · 10 hours ago (edited)

            It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not factually correct output.

          It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

          So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.

          • SuspciousCarrot78@lemmy.world · +1/−1 · 57 minutes ago (edited)

            Fair point. Counter point -

            Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.

            You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.

            Yes, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.

            But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.

            TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV but I believe I can argue this claim (with evidence if so needed).
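
The claim that predicting words implicitly models relationships between concepts can be illustrated with a toy sketch (hypothetical co-occurrence vectors, a drastic simplification of what LLM training actually learns): words used in similar contexts end up with similar vectors.

```python
from collections import Counter
from math import sqrt

# Tiny corpus: "car" and "truck" appear in the same contexts (drive/wash),
# "apple" appears in a different one (eat).
corpus = ("i drive the car . i wash the car . "
          "i drive the truck . i wash the truck . "
          "i eat the apple . i eat the pear .").split()

vocab = sorted(set(corpus))

def context_vector(word, window=2):
    """Count how often each vocab word appears within `window` positions of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(context_vector("car"), context_vector("truck")))
print(cosine(context_vector("car"), context_vector("apple")))
```

Here “car” and “truck” come out far more similar than “car” and “apple”, purely from distributional statistics; that regularity is the germ of the “semantic model” idea, even though nothing here understands anything.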

            • Iconoclast@feddit.uk · +1 · 58 minutes ago

              No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.

              I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.

              The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.

              The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.

              • SuspciousCarrot78@lemmy.world · +2 · 39 minutes ago (edited)

                I think we’re probably on the same page, tbh. OTOH, I think the “fancy auto complete” meme is a disingenuous thought stopper, so I speak against it when I see it.

                I like your cruise-control+ analogy. It’s not quite self-driving… but it’s not quite just cruise control, either. Something halfway.

                LLMs don’t have human understanding or metacognition, I’m almost certain.

                But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That’s weird to think about. It’s something halfway.

                With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.

                And then… I don’t know what happens after that. There’s going to come a time where we cross that point and we just can’t tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.

                • Iconoclast@feddit.uk · +1 · 24 minutes ago (edited)

                  I think the “fancy auto complete” meme is a disingenuous thought stopper, so I speak against it when I see it.

                  I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.

                  These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.

                  I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.

          • vii@lemmy.ml · +3 · 9 hours ago

            It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

            I know some humans that applies to

          • KeenFlame@feddit.nu · +1/−4 · 9 hours ago

            Yes, it guesstimates. What is wrong with you, arguing about semantics like that?

        • vii@lemmy.ml · +1/−3 · 9 hours ago

          This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.