Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

  • CileTheSane@lemmy.ca · 23 hours ago (edited)

    I agree anyone using an LLM is a bad craftsman, because they’re using a hammer to drive in a screw.

    • vacuumflower@lemmy.sdf.org · 22 hours ago

      So every use of an LLM is using a tool for the wrong task, in your opinion? Then, in the composite object of “LLM”, what is the tool and what is the task?

      • CileTheSane@lemmy.ca · 20 hours ago

        So in the composite object of “LLM” what is the tool and what is the task?

        The tool is “Language Learning Model” and the task is “Learn language and mimic human speech.”

        The task is not “provide accurate information” or “write code” or “provide legal advice” or “diagnose these symptoms” or “provide customer service” or “manage a database”.

        • Not_mikey@lemmy.dbzer0.com · 13 hours ago

          And a human’s task, like that of any other lifeform, is to survive and reproduce. In pursuit of that goal we have learned many different complex strategies and methods, and the same goes for an LLM.

          People’s tasks are also not to provide accurate information, write code, provide legal advice, etc. If a person can earn a living, attract a mate, and raise children by lying, writing bad code, or giving shitty legal advice, they will. It takes external discipline to make sure agents don’t adopt those behaviors. For humans that discipline is provided by education, socialization, legal systems, etc. For LLMs it is provided by fine-tuning: the lying models get down-rated while the more truthful models get boosted.
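
          For what it’s worth, the “down-rated / boosted” step has a standard shape: a pairwise preference loss over graded responses. A minimal sketch in Python, with toy reward scores standing in for a real grader and a real model:

          ```python
          # Sketch of pairwise preference training (Bradley-Terry style):
          # a response judged truthful gets boosted, a lying one down-rated.
          # The scores and numbers are illustrative, not any lab's pipeline.
          import math

          def preference_loss(score_preferred: float, score_rejected: float) -> float:
              """-log sigmoid(r_w - r_l): near zero when the preferred answer
              already outscores the rejected one, large when it doesn't."""
              margin = score_preferred - score_rejected
              return -math.log(1.0 / (1.0 + math.exp(-margin)))

          # A grader preferred answer A (truthful) over answer B (a confident lie).
          loss_good = preference_loss(score_preferred=2.0, score_rejected=-1.0)  # ~0.049
          loss_bad = preference_loss(score_preferred=-1.0, score_rejected=2.0)   # ~3.049
          print(f"model already truthful: loss={loss_good:.3f}")
          print(f"model prefers the lie:  loss={loss_bad:.3f}")
          ```

          Minimising that loss is the “external discipline”: it pushes the preferred behavior up and the rejected behavior down.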

          • CileTheSane@lemmy.ca · 11 hours ago

            They all “lie” because they don’t actually know a damn thing. Everything an LLM outputs is just a guess at what a human might say.
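
            Concretely, “just a guess” has a precise form: the model scores every token in its vocabulary and the output is sampled from those probabilities. A minimal sketch, with an invented vocabulary and made-up scores:

            ```python
            # An LLM's "answer" is a sample from a probability distribution
            # over next tokens, not a lookup of facts. Vocabulary and logits
            # here are made up for illustration.
            import math
            import random

            vocab = ["Paris", "Lyon", "Berlin", "banana"]
            logits = [4.0, 1.5, 0.5, -2.0]  # raw model scores for the next token

            # softmax: turn raw scores into probabilities
            exps = [math.exp(x) for x in logits]
            probs = [e / sum(exps) for e in exps]

            # the output is literally a weighted guess
            guess = random.choices(vocab, weights=probs)[0]
            for tok, p in zip(vocab, probs):
                print(f"{tok:>7}: {p:.3f}")
            print("sampled:", guess)
            ```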

        • vacuumflower@lemmy.sdf.org · 14 hours ago

          It’s “Large Language Model”, and the point is in the “Large”: trained on really large datasets with a well-chosen set of attention dimensions, it’s good at extrapolating language that describes the real world, and thus at extrapolating how real-world events will be described. So the task is more that of an oracle.

          I agree that providing anything accurate is not the task. It’s the opposite of the task, actually: all the usefulness of LLMs is in areas where you don’t have a good enough model of the world but need to make some assumptions anyway.

          Except for “diagnose these symptoms”: with a proper framework around it (using it only to flag things for review, not to actually make decisions, as has been discussed thousands of times), that’s a valid task for them.
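
          To be concrete, something with this shape. A minimal sketch of a flag-only pipeline; llm_score_symptoms is a hypothetical stand-in for a model call, not a real API, and the threshold is arbitrary:

          ```python
          # "Flag, don't decide": the model's output can only ever put a case
          # in a human's review queue; a clinician makes every actual call.
          from dataclasses import dataclass
          from typing import Optional

          @dataclass
          class Flag:
              patient_id: str
              note: str

          def llm_score_symptoms(symptoms: list) -> float:
              """Hypothetical model call: returns a 0-1 'worth a second look' score."""
              return 0.9 if "chest pain" in symptoms else 0.2

          def triage(patient_id: str, symptoms: list, threshold: float = 0.7) -> Optional[Flag]:
              score = llm_score_symptoms(symptoms)
              if score >= threshold:
                  # the model's only power: raise a flag for human review
                  return Flag(patient_id, f"model score {score:.2f}; clinician review required")
              return None  # no flag; the normal, human-driven care path continues

          print(triage("p1", ["chest pain", "nausea"]))
          print(triage("p2", ["mild headache"]))
          ```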

          • CileTheSane@lemmy.ca · 11 hours ago

            Except for “diagnose these symptoms”: with a proper framework around it (using it only to flag things for review, not to actually make decisions, as has been discussed thousands of times), that’s a valid task for them.

            This sounds like someone who knows nothing about construction saying “building a house” is a valid task, because they don’t understand why using a hammer to drive in a screw would be incorrect or why it’s even a problem. “The results are good enough, right?”