• skisnow@lemmy.ca · 1 day ago

    Also, if you look at the technology of his time, there was no reason to think there’d be an explosion of information huge enough that you could just stitch a hundred million books and petabytes of online forums’ worth of text together into a statistical next-token predictor.
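
    Just to be concrete about what “statistical next-token predictor” means here, a toy bigram-counting sketch (purely illustrative, nothing like a real transformer):

    ```python
    # Toy "statistical next-token predictor": count which word follows which,
    # then turn the counts into probabilities. Purely illustrative.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Tally how often each word follows each other word.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def next_token_probabilities(word):
        """Assign a probability to each candidate continuation of `word`."""
        counts = following[word]
        total = sum(counts.values())
        return {candidate: n / total for candidate, n in counts.items()}

    print(next_token_probabilities("the"))  # {'cat': 0.5, 'mat': 0.5}
    ```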

    In his time there were maybe at most 4 million publications in all of existence (extrapolating from https://www.clrn.org/how-many-books-have-been-published-in-history/ ), which by a finger-in-the-air estimate would be roughly 2 TB of text; a tiny fraction of what’s on the internet now. Even if he’d anticipated the invention of the transistor and microchip technology, the brain he was imagining still would have had to reason in the traditional way; an LLM trained entirely on Project Gutenberg would not come close to passing the Turing Test no matter how many parameters you built it with. To Turing, passing as human would have meant possessing a capacity for reason beyond assigning probability values to a list of potential autocompletes based on what’s in all the other texts.
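
    (For anyone checking the arithmetic: the ~2 TB figure assumes something like 500 KB of plain text per publication, which is my own rough guess.)

    ```python
    # Finger-in-the-air corpus-size check; 500 KB per publication is an assumption.
    publications = 4_000_000          # rough upper bound for Turing's era
    bytes_per_publication = 500_000   # ~500 KB of plain text, assumed average

    total_bytes = publications * bytes_per_publication
    print(f"{total_bytes / 1e12:.1f} TB")  # -> 2.0 TB
    ```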

    • FiniteBanjo@feddit.online · 1 day ago

      Fr, one of the best examples of this: take a classic riddle, make some minor change to it, and LLMs suddenly find it impossible to solve.

      • SippyCup@lemmy.world · 1 day ago

        My favorite is making up a nonsense idiom for an LLM to tell me the meaning of.

        “What does it mean when someone says ‘he’s not your grandma but she can fix a canoe?’”