• ell1e@leminal.space · +30 / -1 · 2 days ago

    If the accountability cannot be practically fulfilled, the reasonable policy becomes a ban.

    What good is it to say “oh yeah you can submit LLM code, if you agree to be sued for it later instead of us”? I’m not a lawyer and this isn’t legal advice, but sometimes I feel like that’s what the Linux Foundation policy says.

    • ViatorOmnium@piefed.social · +54 / -1 · 2 days ago

      But this was already the case. When someone submitted code to Linux, they always had to assume responsibility for the legality of the submitted code; that’s one of the points of the mandatory Signed-off-by.
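
      For anyone unfamiliar with the mechanism: the Signed-off-by trailer is how a contributor certifies the kernel’s Developer Certificate of Origin, i.e. that they have the right to submit the code under the stated license. It is just a line at the end of the commit message, added automatically by git commit -s, roughly like this (the subject line, name, and address are placeholders):

          Fix null pointer dereference in foo_driver probe path

          Signed-off-by: Jane Developer <jane@example.org>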

      • badgermurphy@lemmy.world · +5 / -20 · 2 days ago

        But now, even the person submitting the license-breaching content may be unaware that they are doing so, so the problem is surely worse now that contributors can easily and unwittingly end up on the wrong side of the law.

        • Traister101@lemmy.today · +48 / -1 · 2 days ago

          That’s their problem. If they are using an LLM and cannot verify the output, they shouldn’t be using an LLM.

          • jj4211@lemmy.world · +6 · 2 days ago

            The problem is that, broadly, most GenAI users don’t take that risk seriously. So far no one can point to a court case where a rights holder successfully sued someone over LLM infringement.

            The best chance so far was Getty and their case, with very blatantly obvious infringement. They lost in the UK, so that’s not a good sign.

          • hperrin@lemmy.ca · +2 · 1 day ago

            Nobody can verify that the output of an LLM isn’t from its training data except those with access to its training data.

          • badgermurphy@lemmy.world · +4 / -13 · 2 days ago

            It is their problem until the second they submit it; then it is the project’s problem. You can lay the blame for the bad actions wherever you want, but the reality is that the work of verifying the legality and validity of these submissions is being abdicated, crippling projects under the increased workload of going through ever more submissions that amount to junk.

            What is the solution for that? The fact that it is the fault of the lazy submitter doesn’t clean up the mess they left.

            • Traister101@lemmy.today · +13 / -1 · 2 days ago

              Frankly I expect the kernel dudes to be pretty good about this; their style guides alone are quite strict, and any funny business in a PR that isn’t marked correctly will, I think, likely mean a ban from making PRs at all. How it worked beforehand, as already stated by others, is that the author says “I promise this follows the rules” and that’s basically the end of it. Giving an official avenue for generated code is a great way to reduce the negatives of something that will happen anyway. We know this from decades of real-life experience trying to ban things like alcohol or drugs: time after time, providing a legal avenue with some rules makes things safer. Why wouldn’t we see a similar effect here?

              • badgermurphy@lemmy.world · +2 · 1 day ago

                I do think that some projects will fare better than others, particularly ones like the one you mentioned, where the team is robust and capable of handling the filtering of increased submissions from these new sources.

                I believe we are going to end up needing some new mechanism for project submissions to deal with the growing imbalance between submission volume and the work hours available for review, much as became necessary when viruses, malware, and spam first appeared. It has quickly become incredibly easy for anyone to make a PR, but not at all easier to review them, so something is going to have to give in the FOSS world.