Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

  • Not_mikey@lemmy.dbzer0.com · 13 hours ago

    And a human’s task, like that of any other lifeform, is to survive and reproduce. In pursuit of that goal we have learned many complex strategies and methods, and the same goes for an LLM.

    People’s tasks are also not to provide accurate information, write code, provide legal advice, etc. If a person can earn a living, attract a mate and raise children by lying, writing bad code, or giving shitty legal advice, they will. It takes external discipline to make sure agents don’t follow those behaviors. For humans that discipline is provided by education, socialization, legal systems, etc. For LLMs it is provided by fine-tuning, i.e. the lying models get down-rated while the more truthful models get boosted.
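That down-rating/boosting step can be pictured as a toy preference-ranking loop. This is a minimal illustrative sketch only: the `reward` function and the fact set are invented for the example, whereas real pipelines train a separate reward model on human comparisons and update the LLM with methods such as PPO or DPO.

```python
# Toy sketch of preference-based fine-tuning (RLHF-style ranking).
# The reward function and fact set are illustrative, not a real pipeline.

def reward(response: str, facts: set[str]) -> int:
    """Score a response: +1 per supported claim, -1 per unsupported one."""
    claims = response.split(". ")
    return sum(1 if c in facts else -1 for c in claims if c)

facts = {"water boils at 100 C", "the Earth orbits the Sun"}

truthful = "water boils at 100 C. the Earth orbits the Sun"
lying = "water boils at 50 C. the Sun orbits the Earth"

# The "discipline": the higher-reward response is preferred, and that
# preference signal is what fine-tuning pushes the model toward.
preferred = max([truthful, lying], key=lambda r: reward(r, facts))
```

In a real setup the preference comes from human (or model-graded) comparisons rather than a hand-written scorer, but the shape is the same: rank candidate outputs, then nudge the model toward the winners.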

    • CileTheSane@lemmy.ca · 12 hours ago

      They all “lie” because they don’t actually know a damn thing. Everything an LLM outputs is just a guess of what a human might do.