• sun_is_ra@sh.itjust.works · 17 hours ago

    Maybe he meant the code quality was so good it's like a human wrote it.

    After all, if the code is good and follows all the best practices of the project, why reject it just because an AI wrote it? That's racism against machines.

      • Samsy@lemmy.ml · 16 hours ago

        That was rude against my wife-chatbot. Apologize to her, here: https://…

      • lIlIlIlIlIlIl@lemmy.world · 16 hours ago

        It taps into the same human quality called "hate," the one that underpins racism. It's the same ugly human behavior. You can call it whatever you want; it's still ugly.

        • zarkanian@sh.itjust.works · 9 hours ago

          Humans have been hating software since the dawn of computing. Do you get upset when people say bad things about Windows? And if not, why is it different with LLMs?

        • insufferableninja@sh.itjust.works · 11 hours ago

          We have a word for the concept you’re thinking of. It’s called bigotry. Racism is race-based bigotry. Anti-AI bigotry is reasonable and awesome, and is just called bigotry.

          • zarkanian@sh.itjust.works · 9 hours ago (edited)

            No, you can’t have bigotry against software. At least, not currently.

            Maybe in the future somebody will figure out how to make a sapient AI, like you see in science fiction, and then you can say that somebody is bigoted against it. We don’t have sapient AI, though, so this is simply prejudice.

        • BremboTheFourth@piefed.ca · 15 hours ago (edited)

          LLMs will never be people. Computers might be, one day in the very distant future. But literally every piece of the current AI hype train is just hype. LLMs could, maybe, at best, be a single piece of a much larger puzzle for bringing consciousness into being. But the "Just Add More Compute, Bro!" mantra is just tech bros doing their usual market-hype thing. It has as much chance of giving rise to consciousness as my PC does whenever I add another hard drive.

          • obelisk_complex@piefed.ca · 3 hours ago (edited)

            LLMs will never be people

            Boy oh boy, you’re not gonna like this one bit: https://www.npr.org/2014/07/28/335288388/when-did-companies-become-people-excavating-the-legal-evolution

            (To be clear, I understand you think you covered this with "computers might be," but my point is different: the law is often dumb, and you would be amazed at what politicians who don't understand tech - or get paid not to understand it - will pull off.)

            Edit: Downvotes from people who missed the point. You can’t say “LLMs will never be people” because you simply can’t guarantee your/our lawmakers won’t be that stupid.

    • Mark with a Z@suppo.fi · 15 hours ago

      One big reason people outright reject AI-generated code is that it shifts the work from the author to the reviewer. AI makes it easier to make low-effort commits that look good on the surface but are deeply flawed. So far, LLMs don't match the wisdom of an experienced software dev.

      • bamboo@lemmy.blahaj.zone · 13 hours ago

        This is what happened with FFmpeg when Google tried the same thing to promote their models. If the code is good and doesn't put an unnecessary burden on the reviewer, that's great. But when the patches are sloppy or the review load is overwhelming, it doesn't help the project; it hinders it.

        • Serinus@lemmy.world · 13 hours ago

          It's almost like there should be a human in the loop to guide and review what the AI is doing.

          The thing works a lot better when I give it smaller chunks of work that I know are possible. It works best when I already know how to implement something myself and it just saves me from looking up all the syntax.

      • sun_is_ra@sh.itjust.works · 14 hours ago

        Totally agree. The same problem exists with published scientific papers.

        I just assume that since this code submission was done by Anthropic itself - probably to demonstrate how good their AI has become (I don't know the actual background of this story) - the FFmpeg team gave it more consideration than it would a random amateur's.

    • lath@lemmy.world · 17 hours ago

      If it’s racism, it’s also slavery. Can’t have one without the other here.