• jsomae@lemmy.ml · 14 hours ago

    I’d just like to point out that, from the perspective of somebody who has watched AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time (Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao). But being able to do 30% of tasks successfully is already useful.

    • MangoCats@feddit.it · 5 minutes ago

      being able to do 30% of tasks successfully is already useful.

      If you have a good testing program, it can be.

      If you use AI to write the test cases…? I wouldn’t fly on that airplane.
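
      To make “a good testing program” concrete: one workable gate is to accept AI-written code only if it passes a test suite that humans wrote and reviewed. A minimal sketch in Python; pytest as the runner and the tests/ path are assumptions, not anything from a specific project:

      ```python
      import subprocess

      def ai_code_passes_tests(test_dir: str) -> bool:
          """Gate AI-written code behind a human-written test suite.

          The tests are authored and reviewed by people, so the AI is
          never grading its own homework. pytest exits with code 0
          only when every test passes.
          """
          result = subprocess.run(["pytest", test_dir], capture_output=True)
          return result.returncode == 0

      if __name__ == "__main__":
          # only consider merging the AI's change if the human tests pass
          print("accept" if ai_code_passes_tests("tests/") else "reject")
      ```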

    • Shayeta@feddit.org · 12 hours ago

      That doesn’t matter much if you still need a human to review. AI has no way of distinguishing between success and failure, so a human will have to review 100% of those tasks either way.

      • MangoCats@feddit.it · 4 minutes ago

        I have been using AI to write (little, near-trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
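
        The loop I mean is easy to sketch. A minimal version, where generate_code() is a hypothetical stand-in for whatever model API is in use:

        ```python
        import subprocess

        def generate_code(prompt: str) -> str:
            """Hypothetical LLM call; placeholder for the real model API."""
            raise NotImplementedError

        def generate_until_it_compiles(prompt: str, max_attempts: int = 5) -> str | None:
            """Compile-check the model's output and feed the errors back,
            instead of handing broken code straight to the user."""
            for _ in range(max_attempts):
                code = generate_code(prompt)
                with open("candidate.c", "w") as f:
                    f.write(code)
                # syntax/type check only; no binary is produced
                check = subprocess.run(["gcc", "-fsyntax-only", "candidate.c"],
                                       capture_output=True, text=True)
                if check.returncode == 0:
                    return code  # compiles cleanly; safe to show the user
                prompt += "\n\nFix these compiler errors:\n" + check.stderr
            return None  # still broken after several rounds; fail loudly
        ```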

      • Outbound7404@lemmy.ml · 2 hours ago

        A human can review something that’s close to correct a lot faster than they can do the whole task from scratch.

        • MangoCats@feddit.it · 2 minutes ago

          In university I knew a lot of students who knew all the material but “just didn’t know where to start.” If I gave them a little direction about where to start, they could run it to the finish all on their own.

          • loonsun@sh.itjust.works · 27 seconds ago

            Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully-human processes, such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.
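
            One common shape for such a protocol: humans re-code only a random sample of the model’s labels, and the run is accepted only if agreement clears a preset bar. A minimal sketch; the 80% threshold and the names are illustrative assumptions (real protocols typically use chance-corrected agreement such as Cohen’s kappa):

            ```python
            import random

            def validate_nlp_labels(model_labels: dict, human_code,
                                    n: int = 100, threshold: float = 0.8) -> bool:
                """Spot-check NLP-assigned labels (e.g., themes in a
                systematic review) against a human coder on a random
                sample, instead of reviewing 100% of the corpus."""
                sample = random.sample(list(model_labels), min(n, len(model_labels)))
                agree = sum(model_labels[item] == human_code(item) for item in sample)
                return agree / len(sample) >= threshold
            ```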

      • jsomae@lemmy.ml · 12 hours ago

        Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to produce one, or where a conventional program can verify the AI’s output.
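
        Factoring is the classic toy example of that asymmetry: proposing factors is hard, checking them is one multiplication. A sketch, with llm_propose_factors() as a hypothetical model call:

        ```python
        def llm_propose_factors(n: int) -> list[int]:
            """Hypothetical model call: ask for a nontrivial factorization of n."""
            raise NotImplementedError

        def factor_with_unreliable_model(n: int, tries: int = 10) -> list[int] | None:
            """An unreliable proposer plus a trivial verifier is still useful:
            any returned answer has been checked, however flaky the model."""
            for _ in range(tries):
                factors = llm_propose_factors(n)
                product = 1
                for f in factors:
                    product *= f
                if product == n and all(1 < f < n for f in factors):
                    return factors  # verified correct
            return None
        ```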

      • jsomae@lemmy.ml · 13 hours ago

        I’m not claiming that the use of AI is ethical. If you want to fight back, though, you have to take it seriously.

        • outhouseperilous@lemmy.dbzer0.com · 13 hours ago

          It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit, you know those numbers have been more massaged than any human in history has ever been.

          • jsomae@lemmy.ml · 13 hours ago

            I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”

              • jsomae@lemmy.ml · 12 hours ago

                Yes, that’s generally useless, and it should not be shoved down people’s throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.

                • Knock_Knock_Lemmy_In@lemmy.world · 17 minutes ago

                  Run something with a 70% failure rate 10x and you get to a cumulative ~97% pass rate (1 - 0.7^10 ≈ 0.97), assuming independent attempts. LLMs don’t get tired and they can be run in parallel.
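
                  The arithmetic, for anyone checking it (note that a real LLM’s failures are likely correlated across attempts, so treat independence as an optimistic assumption):

                  ```python
                  # P(at least one success in k independent tries) = 1 - P(all k fail)
                  failure_rate = 0.7
                  for k in (1, 5, 10, 20):
                      print(k, round(1 - failure_rate ** k, 4))
                  # 1  0.3
                  # 5  0.8319
                  # 10 0.9718  <- about 97% after ten tries
                  # 20 0.9992
                  ```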

                    • jsomae@lemmy.ml · 12 hours ago

                    Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
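
                    Spelled out, the pattern is just retry-until-verified. A minimal sketch, where attempt() and verify() are hypothetical helpers:

                    ```python
                    def solve_with_retries(task, attempt, verify, max_tries=10):
                        """attempt() is right only ~30% of the time; verify() is a
                        cheap, conventional check. Resample until a candidate
                        verifies; any returned answer is correct by construction."""
                        for _ in range(max_tries):
                            candidate = attempt(task)
                            if verify(task, candidate):
                                return candidate
                        return None  # ~0.7**10, about a 3% chance of getting here
                    ```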