• Crackhappy@lemmy.world · 21 hours ago

      I can’t keep up. Did you know that ostriches bury their heads in the sand to avoid vipers? Vipers can’t see prey whose heads are obscured.

  • supamanc@lemmy.world · 1 day ago

    Not technically lies: to lie, there has to be an intent to deceive, and LLMs don’t have any intentions.

      • Leon@pawb.social · 1 day ago

        I don’t know if I’d call that an intention of the machine so much as of the creator. I hate to be that kind of person, but it’s similar to the whole “guns don’t kill people, people do” argument.

        LLMs aren’t people. They’re not self-aware and don’t have any inner complexity the way, say, a dog or a sheep does. There’s no drive or motivation. It’s just maths.

        If you tie someone to a train track and a train comes along and kills them, it’s not as if the train or the track intended to kill the person. The intent was yours: you “programmed” the scenario.

        Similar to guns, strict control is what will be needed to fix these kinds of things. Megalomaniacal billionaires who see people as nothing but numbers, running amok with narcissistic manipulation systems: that isn’t a recipe for anything good.

        • HairyHarry@lemmy.world · 1 day ago

          OK, technically you are correct. Still, they are lies, or call it disinformation or propaganda, whether the output is controlled by the machine itself having a mind (which of course is sci-fi) or by those who control the machine.

          • WhatAmLemmy@lemmy.world · 1 day ago (edited)

            What you’re calling lies are false positives. To lie you have to know the truth. AIs are ignorant. They don’t know what anything is; all they “know” is mathematical patterns in 1s and 0s.

            They would only be lies if Google engineers explicitly overrode the model to output the false information. What most implementations of LLMs amount to is weaponized incompetence, for profit. Capitalists know they output false information, and they don’t care, because their only goal is profit and power.

            • hesh@quokk.au · 1 day ago

              If Google knows it outputs falsehoods and lets it continue, that becomes purposeful. That makes them lies in my book.

              • supamanc@lemmy.world · 16 hours ago

                If a newspaper prints lies, you don’t say the physical piece of pulped-up tree you’re holding is lying to you; you say the author is.

                • hesh@quokk.au · 16 hours ago

                  If it’s shown to the newspaper that they are lies and they keep on printing them, then yes, I do call them liars as well. Whatever you want to call it, you must admit they are culpable for spreading disinformation.

        • Specter@piefed.social · 1 day ago

          It doesn’t really matter whether it’s the machine or the creator.

          The point is, AIs can be programmed to lie, much as Grok has been. And if they can be programmed to lie, then they are not reliable for anything at all. We are going through a decent period where AI can be used for a few things reliably, but even those uses will surely be enshittified.

          • deliriousdreams@fedia.io · 20 hours ago

            It matters because every time we anthropomorphize generative-AI LLMs, we reinforce people’s belief in their ability to tell lies or truths.

            People’s belief is what leads to trust in them, and to things like AI psychosis.

            An interesting way to look at it: AI also can’t tell the truth.

            What it does is generate the next likely word or words from the strongest statistical pattern in its training data. So it doesn’t know anything. It doesn’t tell the truth. It doesn’t tell lies. It isn’t an entity. The people behind it are letting it present information as factual, and we have no reason to trust them.
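            As a rough illustration, here’s a toy, hypothetical sketch of that idea (hand-made scores standing in for the billions of learned weights in a real model): generation is just repeatedly picking the highest-scoring next word, and no step anywhere checks whether the result is true.

            ```python
            import math

            # Toy "language model": hand-made scores (logits) for what may follow
            # each word. A real LLM learns these weights; the loop is the same idea.
            LOGITS = {
                "ostriches": {"bury": 2.0, "fly": 0.1, "sing": -1.0},
                "bury": {"their": 2.5, "the": 0.5},
                "their": {"heads": 2.2, "eggs": 1.0},
            }

            def softmax(scores):
                # Turn raw scores into probabilities. Note that no notion of
                # true/false appears anywhere in this computation.
                exps = {w: math.exp(s) for w, s in scores.items()}
                total = sum(exps.values())
                return {w: e / total for w, e in exps.items()}

            def next_word(word):
                probs = softmax(LOGITS[word])
                return max(probs, key=probs.get)  # greedy: most likely word wins

            sentence = ["ostriches"]
            while sentence[-1] in LOGITS:
                sentence.append(next_word(sentence[-1]))
            print(" ".join(sentence))  # -> "ostriches bury their heads"
            ```

            It happily completes the ostrich myth from upthread because that pattern scores highest, not because it’s true.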

          • supamanc@lemmy.world · 1 day ago

            Oooh, philosophy! I disagree. I think that if a person programs an LLM to give disinformation, that’s all it is. A lie means giving out information you know to be false, intending to deceive. The LLM doesn’t know what’s true or false. It doesn’t intend anything, because it is not a conscious entity. The person who programmed it can be lying by disseminating false information; the LLM cannot, any more than a broken clock or thermometer is ‘lying’ about the time or temperature.

            • Specter@piefed.social · 1 day ago

              I am trying to get away from the philosophy, actually 😅 In the end, what matters is how these tools are being used, not so much their inherent characteristics.

              Can you envision a world where AI chatbots are used to lead you toward certain political beliefs (e.g. capitalism good, socialism bad), where product recommendations are made based on how much brands are willing to pay for ad placements, and where your psychological state is measured and molded to serve the interests of the AI’s owner? I can. It’s also already happening.

    • 8oow3291d@feddit.dk · 21 hours ago

      LLMs don’t have any intentions.

      Eh. The output from LLMs is usually pretty goal-oriented, so arguably an LLM does have intentions.

      An LLM is not designed to deceive, though, so in that sense it is correct that its output is not lies.

      • supamanc@lemmy.world · 16 hours ago

        An LLM is a statistical modeling tool. It doesn’t have goals. It can’t have intentions. It just outputs according to an algorithm.
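        To make that concrete, here’s a minimal, purely illustrative sketch (made-up probabilities, not any real model’s code) of what “outputs according to an algorithm” means: the word that comes out is fully determined by the weights, the sampling rule, and the random seed, and nothing in the procedure represents a goal.

        ```python
        import random

        random.seed(0)  # same model weights + same seed => same output, every time

        def sample(probs):
            # Draw one continuation according to its probability: pure
            # arithmetic, with no variable representing a goal or a belief.
            r, acc = random.random(), 0.0
            for word, p in probs.items():
                acc += p
                if r <= acc:
                    return word
            return word

        # Picks either continuation, weighted only by probability;
        # the false one can come out purely by chance.
        print(sample({"the sky is blue": 0.7, "the sky is green": 0.3}))
        ```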

      • deliriousdreams@fedia.io · 20 hours ago

        The people who program, run, and maintain the LLM have intentions. The LLM is not a sapient or sentient entity.