• outhouseperilous@lemmy.dbzer0.com · +1 · 8 minutes ago

        Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.

    • jsomae@lemmy.ml · +5 · 12 hours ago

      Yes, that’s generally useless, and it should not be shoved down people’s throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.

      • Knock_Knock_Lemmy_In@lemmy.world · +2 · 39 minutes ago

        Run something with a 70% failure rate 10× and you get a cumulative pass rate of about 97% (1 − 0.7¹⁰ ≈ 0.972). LLMs don’t get tired and they can be run in parallel.
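
        A minimal sketch of that arithmetic, assuming each attempt is independent (which real LLM runs may not be, since the same prompt can fail the same way every time):

```python
# Probability that at least one of n independent attempts succeeds,
# given a fixed per-attempt failure rate.
def cumulative_pass_rate(failure_rate: float, attempts: int) -> float:
    return 1.0 - failure_rate ** attempts

# 70% per-attempt failure rate, 10 attempts:
print(round(cumulative_pass_rate(0.7, 10), 3))  # prints 0.972
```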

        • MangoCats@feddit.it · +1 · 12 minutes ago

          I have actually been doing this lately: iteratively prompting the AI to write software and fix its own errors until something useful comes out. It’s a lot like machine translation. I speak fluent C++ but I don’t speak Rust, yet I can hammer away at the AI (with English-language prompts) until it produces passable Rust for something I could write myself in C++ in half the time and effort.

          I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.

          Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.
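
          The iterate-until-it-works loop described above can be sketched as a plain retry loop. This is only an illustration: `generate` and `verify` are hypothetical stand-ins for the model call and the test harness (e.g. `cargo test` on the generated Rust), not any real API:

```python
# Hypothetical sketch of "prompt, verify, feed the error back, retry".
def generate_until_verified(prompt, generate, verify, max_attempts=10):
    """Call `generate` (e.g. an LLM) until `verify` accepts its output.

    `verify` returns (ok, error_message). Returns the first verified
    result, or None if every attempt fails.
    """
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(prompt + feedback)
        ok, error = verify(candidate)
        if ok:
            return candidate
        # Append the failure details so the next attempt can fix them.
        feedback = f"\nPrevious attempt failed: {error}. Please fix it."
    return None
```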

        • jsomae@lemmy.ml · +6 −1 · 12 hours ago

          Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?

          • outhouseperilous@lemmy.dbzer0.com · +1 −4 · edited · 12 hours ago

            It’s not a magical 30%; factors apply. It’s not even a mind that thinks but just isn’t very good.

            This isn’t like a magical die that gives you truth on a 5 or a 6, and lies on 1, 2, 3, 7, and 4.

            This is a (very complicated, very large) language or other data graph that programmatically identifies an average. Right 30% of the time, according to one Potemkin-ass demonstration. Which means the more feasible that verification is, the easier it is to just use a simpler, cheaper tool that will give you a better, more reliable answer much faster.

            And 20 tons of human shit has uses! If you know its provenance, there’s all sorts of population-level public health surveillance you can do to get ahead of disease trends! It’s also got some good agricultural stuff in it, phosphorus and such, if you can extract it.

            Stop. Just please fucking stop glazing these NERVE-ass fascist shit-goblins.

            • jsomae@lemmy.ml · +4 · 11 hours ago

              I think everyone in the universe is aware of how LLMs work by now; you don’t need to explain it to someone just because they think LLMs are more useful than you do.

              IDK what you mean by glazing, but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.